# smart_dino

This is a PyTorch implementation of the DQN algorithm, used to play Chrome's dino game. The dino environment is captured through Selenium.
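The heart of DQN is a Q-network over stacked frames plus epsilon-greedy action selection. As a rough sketch of the idea (the layer sizes, 4-frame stack, 84×84 input, and two-action space below are illustrative assumptions, not the actual values in `DQN.py`):

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Hypothetical Q-network; not the repo's createNetwork."""
    def __init__(self, n_actions=2):  # e.g. jump / do nothing (assumed)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers the flattened size on the first forward pass.
        self.head = nn.Sequential(
            nn.LazyLinear(256), nn.ReLU(), nn.Linear(256, n_actions)
        )

    def forward(self, x):
        return self.head(self.conv(x))

def select_action(net, state, epsilon, n_actions=2):
    # Epsilon-greedy: explore with probability epsilon, else act greedily.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(net(state.unsqueeze(0)).argmax(dim=1))
```

During training, the chosen action is sent to the browser, and the resulting transition is stored for Q-learning updates against the Bellman target.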

## files/folders structure

```
.
├── .gitignore
├── DQN.py (implementation of the DeepQLearning and createNetwork classes)
├── README.md
├── dino_env.yml (requirements file for creating the conda env)
├── env.py (WebDino class, which contains all the necessary functions and attributes for creating the environment)
├── main.py (where all the classes come together and the model is trained)
├── misc
│   └── dino_model.gif
├── results
│   └── DQ_6.png
├── models (contains the pretrained model)
│   └── ckpt_1800.pt
└── vis.py
```
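The environment in `env.py` wraps the browser behind a reset/step interface. The control loop it supports can be sketched with a stub in place of the browser (`DummyDino` and its reward values are hypothetical stand-ins; the real `WebDino` drives Chrome through Selenium):

```python
class DummyDino:
    """Stands in for WebDino: same reset/step shape, but no browser."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation (a processed screenshot in the real env)

    def step(self, action):
        self.t += 1
        done = self.t >= 5           # pretend the dino crashes after 5 steps
        reward = -1.0 if done else 0.1
        return 0.0, reward, done     # (observation, reward, done)

def run_episode(env, policy):
    # Generic episode loop: works with any env exposing reset()/step().
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total
```

The same loop shape drives training in `main.py`, with the agent's epsilon-greedy policy choosing actions and transitions stored for learning.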

## overview architecture

(figure: overview architecture diagram)

## dependencies

- pytorch for building and training the model.
- selenium for interacting with the browser.
- pandas for creating and storing the rewards graphs.
- pillow for taking screenshots of a specified region of the display.
- opencv for image processing before feeding frames to the model.

## rewards/episodes

(figure: rewards per episode plot)

## install

> [!IMPORTANT]
> To install conda or miniconda, follow this link.

If you already have conda installed, create a separate environment containing all the necessary libraries by running the commands below.

```shell
git clone https://github.com/davnish/smart_dino  # clone the repo
cd smart_dino                                    # move inside the repo
conda env create --name dino --file=dino_env.yml # install the libraries
```

Then activate the environment:

```shell
conda activate dino
```
