This is a series of projects where I solve OpenAI Gym environments by building RL algorithms from scratch in Python, using PyTorch and TensorFlow.
This project uses the Q-Learning algorithm to solve the MountainCar-v0 environment by discretizing its continuous state space.
A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum. The reward is greater if you spend less energy to reach the goal.
The agent (a car) is started at the bottom of a valley. For any given state the agent may choose to accelerate to the left, right or cease any acceleration.
| Num | Observation  | Min   | Max  |
|-----|--------------|-------|------|
| 0   | Car Position | -1.2  | 0.6  |
| 1   | Car Velocity | -0.07 | 0.07 |
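Since tabular Q-Learning needs a finite set of states, the continuous (position, velocity) observation above has to be bucketed. A minimal sketch of one way to do this with NumPy, assuming 20 bins per dimension (the bin count is a tunable choice, not part of the environment):

```python
import numpy as np

# Bounds taken from the observation table above (position, velocity).
OBS_LOW = np.array([-1.2, -0.07])
OBS_HIGH = np.array([0.6, 0.07])
N_BINS = 20  # buckets per dimension; a hyperparameter, not fixed by the env

# Pre-compute the interior bin edges for each observation dimension.
bin_edges = [np.linspace(lo, hi, N_BINS + 1)[1:-1]
             for lo, hi in zip(OBS_LOW, OBS_HIGH)]

def discretize(obs):
    """Map a continuous observation to a tuple of integer bin indices."""
    return tuple(int(np.digitize(x, edges)) for x, edges in zip(obs, bin_edges))

# e.g. a state near the bottom of the valley
print(discretize([-0.5, 0.001]))
```

The resulting tuple can be used directly as a key into a Q-table, so each (position, velocity) pair collapses onto one of 20 × 20 discrete states.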
| Num | Action                | Min  | Max |
|-----|-----------------------|------|-----|
| 0   | The power coefficient | -1.0 | 1.0 |
A reward of 100 is awarded if the agent reaches the flag (position = 0.45) on top of the mountain. The reward is decreased based on the amount of energy consumed at each step.
The position of the car is assigned a uniform random value in [-0.6, -0.4]. The starting velocity of the car is always 0.
The episode ends when the car position exceeds 0.45, or when the episode length is greater than 200.