naive-dqn-maze

Maze Solver from scratch (under 260 lines) using Naive Reinforcement Learning with Q-Table construction

Full detailed article: https://towardsdatascience.com/maze-rl-d035f9ccdc63

Sample video: https://www.youtube.com/watch?v=kOIVUGqz7gI


This is a from-scratch implementation of the Q-Learning algorithm in Reinforcement Learning, written in Python with numpy for the AI and opencv for visualization. Everything, including the game world, visualization, and AI, lives in a single Python file, dqn_grid_world.py, with the following dependencies:

  • numpy
  • opencv
  • moviepy

The AI itself is implemented purely in numpy; opencv and moviepy are used only for rendering the game world and writing out the visualization. The core of the tabular Q-Learning approach is sketched below.
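
For reference, tabular Q-Learning maintains a Q-table indexed by (state, action) and repeatedly nudges Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a'). The sketch below shows this in plain numpy; the grid size, action set, and hyperparameter values are illustrative assumptions, not the exact configuration used in dqn_grid_world.py.

```python
import numpy as np

# Hypothetical maze setup for illustration: a small square grid with 4
# actions (up, down, left, right). These constants are assumptions, not
# the values used in dqn_grid_world.py.
GRID_SIZE = 8
NUM_ACTIONS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: one row per grid cell (state), one column per action.
q_table = np.zeros((GRID_SIZE * GRID_SIZE, NUM_ACTIONS))

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if np.random.rand() < EPSILON:
        return np.random.randint(NUM_ACTIONS)
    return int(np.argmax(q_table[state]))

def q_update(state, action, reward, next_state):
    """One tabular Q-Learning step: move Q(s, a) toward the bootstrapped target."""
    target = reward + GAMMA * np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (target - q_table[state, action])
```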

To train the agent, simply run python dqn_grid_world.py. A progress bar tracks training, and a video of the training process is written out to the same directory (a rough sketch of how such a video can be assembled follows below).
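
Since the repository lists opencv and moviepy as dependencies, one plausible way the training video gets produced is to render each frame with opencv and then stitch the frames together with moviepy. The sketch below is an assumption about that workflow, not the actual rendering code from dqn_grid_world.py; the frame contents and output filename are placeholders.

```python
import cv2
import numpy as np
from moviepy.editor import ImageSequenceClip  # moviepy 1.x-style import

# Collect RGB frames during "training", then write them out as a video
# in the current directory. The frame drawing here is a stand-in for the
# maze renderer in dqn_grid_world.py.
frames = []
for step in range(100):
    frame_bgr = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder frame
    cv2.putText(frame_bgr, str(step), (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    # opencv uses BGR ordering; moviepy expects RGB.
    frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))

clip = ImageSequenceClip(frames, fps=30)
clip.write_videofile("training_progress.mp4")
```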

Feel free to use the code if you find it useful! :)
