Cogment Verse is an SDK that helps researchers and developers in the fields of human-in-the-loop learning (HILL) and multi-agent reinforcement learning (MARL) train and validate their agents at scale. Cogment Verse instantiates the open-source Cogment platform for environments following the OpenAI Gym mold, making it easy to get started.
Simply clone the repo and start training.
- Getting started
- Tutorials
- Develop
- Deploy
- Experimental results 🚧
- Changelog
- Contributors guide
- Community code of conduct
The following will show you how to set up Cogment Verse locally. It is also possible to use a Docker-based setup instead; instructions for this can be found here.
1. Clone this repository.
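   For example (assuming the public GitHub repository; adjust the URL if you are working from a fork):

   ```
   $ git clone https://github.com/cogment/cogment-verse.git
   $ cd cogment-verse
   ```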
2. Install Python 3.9.
3. Depending on your specific machine, you might also need the following dependencies (a combined install command is sketched below this list):

   - `swig`, which is required for the Box2D Gym environments; it can be installed using `apt-get install swig` on Ubuntu or `brew install swig` on macOS.
   - `python3-opencv`, which is required on Ubuntu systems; it can be installed using `apt-get install python3-opencv`.
   - `libosmesa6-dev` and `patchelf`, which are required to run the environment libraries using `mujoco`; they can be installed using `apt-get install libosmesa6-dev patchelf`.
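   On Ubuntu, the packages above can also be installed in one go (a convenience one-liner combining the commands listed above; install only what your chosen environments actually need):

   ```
   $ sudo apt-get install swig python3-opencv libosmesa6-dev patchelf
   ```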
4. Create and activate a virtual environment:

   ```
   $ python -m venv .venv
   $ source .venv/bin/activate
   ```
5. Install the Python dependencies:

   ```
   $ pip install -r requirements.txt
   ```
6. Depending on the environment you want to use, you might need to take additional steps.
7. In another terminal, launch an mlflow server on port 3000:

   ```
   $ source .venv/bin/activate
   $ python -m simple_mlflow
   ```
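   To confirm the server is reachable before moving on (optional; this assumes mlflow is listening on port 3000 as above), you can request its web UI from another terminal:

   ```
   $ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
   ```

   A `200` response means the server is up.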
8. Start the default Cogment Verse run using:

   ```
   $ python -m main
   ```
9. Open Chrome (other web browsers might work but haven't been tested) and navigate to http://localhost:8080/.
10. Play the game!
That's the basic setup for Cogment Verse; you are now ready to train AI agents.
Cogment Verse relies on hydra for configuration. This enables easy definition and composition of configurations directly from YAML files and the command line. The configuration files are located in the `config` directory, with defaults defined in `config/config.yaml`.
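Because the entry point is a hydra application, you can print the fully composed configuration without launching a run, which is useful for checking what a given set of overrides resolves to. This is a sketch relying on hydra's standard `--cfg` flag, assuming `main` does not intercept it:

```
$ python -m main +experiment=simple_a2c/cartpole --cfg job
```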
Here are a few examples:
- Launch a Simple Behavior Cloning run with the Mountain Car Gym environment (which is the default environment):

  ```
  $ python -m main +experiment=simple_bc/mountain_car
  ```
- Launch a Simple Behavior Cloning run with the Lunar Lander Gym environment:

  ```
  $ python -m main +experiment=simple_bc/mountain_car services/environment=lunar_lander
  ```
- Launch and play a single trial of the Lunar Lander Gym environment with continuous controls:

  ```
  $ python -m main services/environment=lunar_lander_continuous
  ```
- Launch an A2C training run with the Cartpole Gym environment:

  ```
  $ python -m main +experiment=simple_a2c/cartpole
  ```

  This one is completely headless (training doesn't involve interaction with a human player). It will take a little while to run; you can monitor the progress using mlflow at http://localhost:3000.
- Launch a DQN self-training run with the Connect Four PettingZoo environment:

  ```
  $ python -m main +experiment=simple_dqn/connect_four
  ```

  The same experiment can be launched with a ratio of human-in-the-loop training trials (playable in the web client):

  ```
  $ python -m main +experiment=simple_dqn/connect_four +run.hill_training_trials_ratio=0.05
  ```
- PettingZoo's Atari Pong environment

  Example #1: Play against an RL agent

  ```
  $ python -m main +experiment=ppo_atari_pz/play_pong_pz
  ```

  Example #2: Observe RL agents playing against each other

  ```
  $ python -m main +experiment=ppo_atari_pz/observe_play_pong_pz
  ```

  Example #3: Training with human demonstrations

  ```
  $ python -m main +experiment=ppo_atari_pz/hill_pong_pz
  ```

  Example #4: Training with human feedback

  ```
  $ python -m main +experiment=ppo_atari_pz/hfb_pong_pz
  ```

  Example #5: Self-training

  ```
  $ python -m main +experiment=ppo_atari_pz/pong_pz
  ```

  NOTE: Examples #3 and #4 require users to open Chrome and navigate to http://localhost:8080 in order to provide demonstrations or feedback.
- Analyzing and Overcoming Degradation in Warm-Start Off-Policy Reinforcement Learning (code)
- Multi-Teacher Curriculum Design for Sparse Reward Environments (code)
(please open a pull request to add missing entries)