Official repository for the 2024/2025 Reinforcement Learning Laboratory at the University of Verona


RL-Laboratory🤖

Code for the Reinforcement Learning lab of the Reinforcement Learning and Advanced Programming for AI course, MSc degree in Artificial Intelligence 2024/2025, at the University of Verona.

First Set-Up (Conda)

  1. Download Miniconda for your system.

  2. Install Miniconda:

    • On Linux/Mac:
      • Run ./Miniconda3-latest-Linux-{version}.sh to install.
      • git may also be required: sudo apt-get install git.
    • On Windows:
      • Double-click the installer to launch it.
      • NB: be sure to install "Anaconda Prompt" and use it for the remaining steps.
  3. Set up the conda environment:
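The exact set-up commands are not listed here; a typical conda set-up for this lab might look like the following sketch (the environment name rl-lab and the Python version are assumptions, not taken from the repository — the package list comes from the venv section below):

```shell
# Create and activate a dedicated environment
# (environment name and Python version are assumptions)
conda create -n rl-lab python=3.9
conda activate rl-lab

# Install the packages used throughout the lab
pip install scipy numpy gym jupyter matplotlib tqdm tensorflow keras
```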

First Set-Up (Python Virtual Environments)

Users of Python virtual environments (venv) can skip the Miniconda installation. The following packages should be installed:

  • scipy, numpy, gym
  • jupyter, matplotlib, tqdm
  • tensorflow, keras
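
The steps above can be sketched as follows (the environment directory name .venv is an assumption):

```shell
# Create and activate a virtual environment
# (the directory name .venv is an assumption)
python3 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install the required packages
pip install scipy numpy gym jupyter matplotlib tqdm tensorflow keras
```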

Spinning Up Set-Up (Conda)

  1. Ensure you already have Miniconda installed from the previous lessons.

  2. Set up a new, separate conda environment for Spinning Up:

    • conda create -n spinningup python=3.6
    • conda activate spinningup
    • sudo apt-get update && sudo apt-get install libopenmpi-dev
  3. Finally, install the Spinning Up dependencies:

    • Navigate to RL-lab/spinningup/spinningup.
    • pip install opencv-python==4.1.2.30
    • pip install -e .

Spinning Up Usage

  1. Remember to activate your Miniconda environment: conda activate spinningup

  2. To train an RL agent, run the train.py script located inside the spinningup folder with the following arguments:

    • env: the environment to train the RL agent on (required)
    • algo: the RL algorithm to be used during training (required)
    • exp_name: the name of the experiment, necessary to save the results and the agent weights (required)
    • hid: a list representing the neural network hidden sizes (default is [32, 32])
    • epochs: the number of training epochs (default is 50)
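
The interface described above can be sketched with argparse; the flag names mirror the list, but the exact types and defaults used by the real train.py are assumptions:

```python
import argparse

def build_parser():
    """Sketch of a CLI matching the documented train.py flags (types/defaults assumed)."""
    parser = argparse.ArgumentParser(description="Train an RL agent (sketch).")
    parser.add_argument("--env", required=True, help="environment to train on")
    parser.add_argument("--algo", required=True, help="RL algorithm to use")
    parser.add_argument("--exp_name", required=True,
                        help="experiment name, used to save results and weights")
    parser.add_argument("--hid", type=int, nargs="+", default=[32, 32],
                        help="hidden layer sizes of the neural network")
    parser.add_argument("--epochs", type=int, default=50,
                        help="number of training epochs")
    return parser

# Parse the example invocation from the usage note below
args = build_parser().parse_args(
    ["--env", "CartPole-v1", "--algo", "vpg", "--exp_name", "first_experiment"]
)
print(args.env, args.algo, args.hid, args.epochs)
```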

For example, to train VPG on the CartPole environment: python train.py --env CartPole-v1 --algo vpg --exp_name first_experiment. Once training is complete, a graph showing the performance will be displayed.

  3. To test an RL agent, run the test.py script located inside the spinningup folder with the following argument:

    • exp_name: the name of a past training experiment to be tested (required)
  4. The available RL algorithms are: vpg, ddpg, ppo, sac (note that ddpg, ppo, and sac are to be completed as part of the lessons!)

  5. The available environments are: CartPole-v1, LunarLander-v2, BipedalWalker-v3, Pendulum-v0, Acrobot-v1, MountainCar-v0, MountainCarContinuous-v0, FrozenLake-v0

For example, to test a previous training experiment: python test.py --exp_name first_experiment.

Assignments

Follow the links below for the code snippets of each lesson:

First Semester

Second Semester

Extra exercise

Tutorials

This repo includes a set of introductory tutorials to help accomplish the exercises. In detail, we provide the following Jupyter notebooks, which contain the basic instructions for the lab:

  • Tutorial 1 - Gym Environment: Here!
  • Tutorial 2 - TensorFlow (Keras): Here!
  • Tutorial 3 - PyTorch: Here!

Contact information
