PPO Implementation in PyTorch

In this blog post, we will explore the Proximal Policy Optimization (PPO) algorithm. We’ll compare it to other deep reinforcement learning algorithms like Double Deep Q-learning and TRPO. Additionally, we’ll learn how to implement PPO using PyTorch.

Double Deep Q-Network

In double DQNs, we use a separate target network, rather than the prediction network, to estimate the target values. The target network has the same structure as the prediction network, and its weights are frozen for T episodes (T is a hyperparameter we can tune), which means they are only updated after every T episodes. The update is simply done by […]
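The target-network sync described above can be sketched as follows. This is a minimal illustration, not the full post's code: the network architecture, the sync interval T, and the function names are illustrative choices.

```python
import copy
import torch
import torch.nn as nn

# Prediction network; the target network starts as an identical copy.
prediction_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = copy.deepcopy(prediction_net)   # same structure, same initial weights
for p in target_net.parameters():
    p.requires_grad = False                  # target weights stay frozen

T = 10  # sync interval in episodes (hyperparameter)

def maybe_sync(episode):
    # Copy prediction weights into the target network every T episodes.
    if episode % T == 0:
        target_net.load_state_dict(prediction_net.state_dict())

def double_dqn_target(reward, next_state, gamma=0.99):
    # Double DQN: the prediction net selects the next action,
    # the target net evaluates it.
    with torch.no_grad():
        best_action = prediction_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * next_q
```

Keeping the target fixed between syncs is what stabilizes the bootstrapped targets; using the prediction network to pick the action decouples selection from evaluation and reduces overestimation.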

Climbing the Mountain with Neural Network

Function Approximation: For problems with a very large number of states, it is not feasible for our agent to use a table to record the value of every action in each state and derive its policy from it. With function approximation, the agent instead learns a function that approximately gives it the best action for a particular state. In this example we will use […]
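The idea above can be sketched in a few lines: instead of a table with one entry per (state, action) pair, a small network maps a state vector to a value for every action. The sizes here are illustrative (a two-dimensional state as in MountainCar), not the post's actual setup.

```python
import torch
import torch.nn as nn

state_dim, n_actions = 2, 3   # e.g. (position, velocity) and three actions

# The network replaces the Q-table: it generalizes across states it has
# never seen, which a table cannot do.
q_net = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)

state = torch.tensor([[-0.5, 0.0]])      # one state as a 1 x state_dim batch
q_values = q_net(state)                  # approximate value of each action
greedy_action = q_values.argmax(dim=1)   # act greedily w.r.t. the approximation
```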

SARSA in the Wind

We will use the SARSA algorithm to find the optimal policy so that our agent can navigate the windy world. State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy, used in the reinforcement learning area of machine learning. SARSA focuses on state-action values. It updates the Q-function based on the following equation: Q(s,a) = Q(s,a) + α (r + γ Q(s',a') − Q(s,a)). Here s' […]
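The update equation above translates directly into code. This is a minimal sketch with a tabular Q-function; the grid size and hyperparameters are illustrative, not taken from the post.

```python
import numpy as np

n_states, n_actions = 70, 4      # e.g. a 7x10 windy gridworld, four moves
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 1.0          # learning rate and discount factor

def sarsa_update(s, a, r, s_next, a_next):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
    # a' is the action actually taken in s' (on-policy), unlike
    # Q-learning, which would use max_a Q(s', a).
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

sarsa_update(s=0, a=1, r=-1.0, s_next=10, a_next=2)
```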

Balancing pole with Policy Gradient

The policy gradient algorithm trains an agent by taking small steps and updating the weights at the end of an episode based on the rewards associated with those steps. The technique of having the agent run through an entire episode and then updating the policy based on the rewards obtained is called Monte Carlo policy gradient. The action is selected […]
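The Monte Carlo policy gradient (REINFORCE) loop described above can be sketched as follows. The architecture and hyperparameters are illustrative assumptions (a CartPole-sized policy), not the post's exact code.

```python
import torch
import torch.nn as nn

# Policy network: state -> action probabilities.
policy = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2), nn.Softmax(dim=-1)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

def select_action(state):
    # Sample an action from the probabilities produced by the policy.
    probs = policy(state)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()
    return action.item(), dist.log_prob(action)

def update_at_episode_end(log_probs, rewards, gamma=0.99):
    # Discounted return G_t for every step, computed backwards.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    # Gradient ascent on sum_t log pi(a_t|s_t) * G_t,
    # written as gradient descent on its negative.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that the update only happens once the episode has finished, since the full return of every step must be known first.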

Logistic Regression in PyTorch

The post walks through: loading the data, converting each column to a NumPy array, converting the NumPy arrays to PyTorch tensors, plotting the data, using the GPU, defining the neural network, putting the neural network on the GPU, the loss function, setting Adam as the optimizer, defining accuracy, the main loop, plotting loss, plotting accuracy after one epoch, and plots showing the neural network's performance over epochs.
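The pipeline outlined above can be condensed into a short sketch: tensors, a sigmoid model, BCE loss, Adam, a training loop, and accuracy. The toy data here is an illustrative stand-in for the post's dataset.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(100, 2)                      # 100 samples, 2 features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # toy, linearly separable labels

# Use the GPU if available, as the post does.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
X, y = X.to(device), y.to(device)

# Logistic regression = one linear layer followed by a sigmoid.
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid()).to(device)
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(200):                     # the main loop
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

# Accuracy: threshold the predicted probabilities at 0.5.
accuracy = ((model(X) > 0.5).float() == y).float().mean().item()
```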

Basic Neural Network in PyTorch

Making a simple neural network with a single hidden layer of four neurons. The post covers: the schema of our neural network, importing the necessary libraries, inputs and outputs, converting basic Python arrays to PyTorch tensors, code to use the GPU if available, putting our variables on the GPU, defining the neural network, instantiating the neural network, the weights of the different layers, the loss function, only one forward […]
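The network described above can be sketched in a few lines: one hidden layer with four neurons, moved to the GPU when available, with a single forward pass. The XOR-style inputs and the layer sizes other than the hidden width are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy inputs and outputs as PyTorch tensors.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# Use the GPU if available, and put our variables on it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
X, y = X.to(device), y.to(device)

# Single hidden layer with four neurons.
model = nn.Sequential(
    nn.Linear(2, 4),      # hidden layer: four neurons
    nn.Sigmoid(),
    nn.Linear(4, 1),
    nn.Sigmoid(),
).to(device)

hidden_weights = model[0].weight   # weights of the hidden layer: shape 4 x 2
out = model(X)                     # one forward pass
```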