- Features
- Installation
- Host installation
- Usage
- Environment
- Troubleshooting and known issues
- References
- OpenAI Gym-style API (a minimal usage sketch follows this list)
- Supports RGB vision and the standard scr_server sensors
- TORCS runs in Docker, for simpler installation and configuration
- No need for xautomation, as the menu is skipped automatically
- Extensive configuration through .yaml files (e.g. the car and track can be changed through a parameter). More info here
- Latest TORCS version (1.3.7)
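As a rough sketch of what a Gym-style interaction loop looks like with this kind of environment (the environment object and its exact reset/step signatures here are assumptions, not necessarily the names exposed by this project):

```python
# Minimal sketch of a Gym-style episode loop. The environment object and
# the exact return values of reset()/step() are assumptions here; check
# the environment documentation for the real interface.
def run_random_episode(env, max_steps=1000):
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.action_space.sample()           # random driving action
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:                                     # crash, timeout or race end
            break
    return total_reward
```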
This project is designed to run on a Linux system, ideally with an Nvidia GPU.
The Docker image allows for easier porting to multiple systems.
1. Install Docker -> guide
Verify that Docker works:
sudo docker run hello-world
2. Docker post-install -> guide
Additionally, you can set up Docker to run without sudo:
sudo groupadd docker
sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
Optionally, configure Docker to start on boot:
sudo systemctl enable docker
3. Install nvidia-docker -> guide.
The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers. nvidia-docker essentially exposes the GPU to the containers: https://github.com/NVIDIA/nvidia-docker
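A quick way to verify that containers can actually see the GPU is to run nvidia-smi inside a CUDA base image. The snippet below is only a sketch: the image tag is an example, and `--gpus all` assumes a recent Docker with the NVIDIA Container Toolkit installed.

```python
# Sketch: verify that Docker containers can access the GPU by running
# nvidia-smi inside a CUDA base image. The image tag is only an example;
# any available nvidia/cuda base image works.
import subprocess

def check_gpu_in_docker(image="nvidia/cuda:12.2.0-base-ubuntu22.04"):
    cmd = ["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        print(result.stdout)          # should list your GPU(s)
    else:
        print("GPU not visible inside Docker:", result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    check_gpu_in_docker()
```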
4a. Pull the TORCS image
docker pull gerkone/torcs
4b. Or build the TORCS image yourself
docker build -t <your image name> torcs/
5. Install the Python requirements
pip install -r requirements.txt
(Optional) Install Konsole
Konsole is already shipped with every KDE installation.
On Ubuntu:
sudo apt-get install konsole
On Arch/Manjaro:
sudo pacman -S konsole
It is possible to install TORCS on the host without using Docker. To do so, follow this guide. Pay attention to the configuration section.
To run the example you can use the script pytorcs.py.
python pytorcs.py
This will start the TORCS container, open a new window with the game and start running the agent.
You can change some settings and options by editing the simulation.yaml file.
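The settings can also be tweaked programmatically before launching. The key names in this sketch (`track`, `car`) and their values are assumptions, not the actual schema of simulation.yaml; check the file for the real keys.

```python
# Sketch: load simulation.yaml, override a couple of settings and write it
# back. The keys used here ("track", "car") are assumptions -- look at
# simulation.yaml for the real configuration schema.
import yaml  # pip install pyyaml

with open("simulation.yaml") as f:
    config = yaml.safe_load(f)

config["track"] = "g-track-1"   # hypothetical track name
config["car"] = "car1-trb1"     # hypothetical car name

with open("simulation.yaml", "w") as f:
    yaml.safe_dump(config, f)
```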
For more details on how the action and state space work check this.
For more details on the parameters and on how to use your test code and custom algorithm check this.
You can run pytorcs in a detached shell with your terminal emulator of choice:
python pytorcs.py --console <konsole|terminator|xterm|gnome-terminal...>
If you want to run the TORCS container manually you can use
nvidia-docker run --ipc=host -v /tmp/.X11-unix:/tmp/.X11-unix:ro -e DISPLAY=unix$DISPLAY -p 3001:3001/udp -it --rm -d gerkone/torcs
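Once the container is running, you can check that scr_server answers on UDP port 3001. The init string below is modeled on snakeoil-style clients and is an assumption, so treat this purely as a connectivity probe.

```python
# Sketch: probe the scr_server UDP port exposed by the container.
# The init message format is modeled on snakeoil-style clients and is an
# assumption; the goal is only to see whether the server answers at all.
import socket

def probe_scr_server(host="localhost", port=3001, timeout=3.0):
    # 19 range-finder angles, as sent by snakeoil-style clients
    angles = " ".join(str(a) for a in range(-90, 91, 10))
    init_msg = f"SCR(init {angles})"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(init_msg.encode(), (host, port))
        reply, _ = sock.recvfrom(1024)
        print("Server replied:", reply.decode(errors="replace"))
        return True
    except socket.timeout:
        print("No reply from scr_server (is the container running?)")
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    probe_scr_server()
```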
More info on the environment and its usage can be found here.
- The TORCS window does not pop up (or the error "freeglut (/usr/local/lib/torcs/torcs-bin): failed to open display ':0'" appears).
Allow access to your X display server with
xhost local:root
- Failed to initialize NVML: Unknown Error
Docker can't access the GPU, usually due to permissions. Run pytorcs with --privileged. Common when using the nvidia-docker AUR package. For more info check this discussion.
This TORCS is a modified version of 1.3.7, taken from here.
I made the following changes to the source:
- The main menu is completely skipped and the race can be configured by using an .xml file. This was done to allow a faster restart and most importantly to avoid using xautomation.
- The countdown at the beginning of each race was removed, to save 3 seconds each time.
- Vision through shared memory works out of the box, but I made some changes to keep it simple and readable in pure Python.
- Deleted most of the cars and tracks to lighten the build.
The TORCS server used is scr_server, by Daniele Loiacono et al.
The Python-side client is an extended version of snakeoil3.
An adapted version of the tf2rl library by keiohta is included with the agents.