OpenAI Gym Breakout Environment

In this project we experimented with several deep reinforcement learning algorithms, developed over the years, on environments provided by OpenAI Gym. We compare the performance of these algorithms in each environment to better understand how the choice of algorithm affects the agent's behaviour.

The following environments were used in the project:

  1. Custom Grid World Environment
  2. CartPole-v1
  3. Acrobot-v1
  4. Atari Breakout
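
The snippet below is a minimal sketch (not taken from this repository) of how the standard Gym environments listed above can be instantiated and driven by a random policy. It assumes the classic Gym API (`reset()` returns an observation; `step()` returns observation, reward, done, info) and that the Atari extras are installed for Breakout (e.g. `pip install gym[atari]`); the Custom Grid World Environment is defined inside this repository instead.

```python
import gym

# Run one episode with a random policy in each standard environment.
for env_id in ["CartPole-v1", "Acrobot-v1", "Breakout-v0"]:
    env = gym.make(env_id)
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()  # random agent as a stand-in for the trained agents
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print(f"{env_id}: episode reward = {total_reward}")
    env.close()
```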

The report (Report.pdf) contains detailed information on our experiments and the results obtained for each algorithm on each environment. It also lists the technical difficulties we faced while developing the project.

Note: When working with the Jupyter notebooks, the absolute paths to the saved weights may no longer work, because the files were reorganized into different folders after the project was finished. Before running the code, please make sure the paths are correct.
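
For example, a minimal sketch of building the weights path relative to the notebook's working directory and checking it before loading (the folder and file names below are placeholders, not the repository's actual layout):

```python
from pathlib import Path

# Sketch only: resolve the weights file relative to the notebook's working directory
# instead of an absolute path. "weights" and the filename are placeholders; point
# them at the actual locations after the folder reorganization.
weights_path = Path.cwd() / "weights" / "breakout_dqn.h5"

if not weights_path.exists():
    raise FileNotFoundError(f"Update the weights path: {weights_path} was not found")
```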

Collaborators:

  • Anuja Katkar
  • Sagar Thacker
