
ROS inside Docker

Author: Tobit Flatscher (2021 - 2023)

1. ROS/ROS 2 with Docker

I personally think that Docker is a good choice for developing robotic applications as it allows you to quickly switch between different projects, but less so for deploying software. Refer to this Canonical article for more details about the drawbacks of deploying software with Docker, in particular regarding security. For deployment I would personally rather turn to Debian or Snap packages (see e.g. Bloom). That being said, there are quite a few companies that I am aware of that use Docker for deployment as well, first and foremost Boston Dynamics.

The ROS Wiki offers tutorials on ROS and Docker here; there is also a very useful tutorial paper out there, and another interesting article can be found here. Furthermore, a list of important commands can be found here. After installing Docker one simply pulls an image of ROS, specifying the version tag:

$ sudo docker pull ros

gives you the latest ROS 2 version whereas

$ sudo docker pull ros:noetic-robot

will pull the latest version of ROS 1.

Finally you can run it with

$ sudo docker run -it ros

(where ros should be replaced by the precise version tag e.g. ros:noetic-robot).

The OSRF ROS Docker images provide a readily available entrypoint script, ros_entrypoint.sh, that automatically sources /opt/ros/<distro>/setup.bash.
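For reference, the entrypoint boils down to sourcing the ROS environment before executing the requested command. A minimal sketch (see the official images for the exact script) looks roughly as follows:

```bash
#!/bin/bash
set -e

# Source the ROS environment for the given distribution, then run the requested command
source "/opt/ros/${ROS_DISTRO}/setup.bash"
exec "$@"
```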

1.1 Folder structure

I generally structure Dockers for ROS in the following way:

```
ros_ws
├─ docker
|  ├─ Dockerfile
|  └─ docker-compose.yml # Context is chosen as ..
└─ src # Only this folder is mounted into the Docker
   ├─ .repos # Configuration file for VCS tool
   └─ packages_as_submodules
```

Each ROS package or set of ROS packages is bundled together in a Git repository. These are then included as submodules in the src folder or, even better, managed with the VCS tool (see the sketch below). Inside the docker-compose.yml file one then mounts only the src folder as a volume so that it can also be accessed from within the container. This way the build and devel folders remain inside the Docker container and you can compile the code inside the Docker as well as outside (e.g. having two versions of Ubuntu and ROS for testing with different distributions).
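As a sketch of the VCS tool workflow (assuming vcstool is installed and the .repos file lists the desired repositories):

```bash
# Clone all repositories listed in the .repos configuration file into src/
$ cd ros_ws/src
$ vcs import < .repos

# Later on, pull the latest changes for all of them at once
$ vcs pull
```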

Generally I have more than a single docker-compose.yml, as discussed in Gui.md, and I will add configuration folders for Visual Studio Code and a configuration for the Docker itself, as well as dedicated tasks. I usually work inside the container and install new software there first. I then keep track of the installed software manually and add it to the Dockerfile as soon as it has proven useful.

1.2 Docker configurations

You will find quite a few Docker configurations for ROS online; in particular this one for ROS 2 is quite well made. Another one for ROS comes with this repository.

1.3 Larger software stacks

When working with larger software stacks, tools like Docker-Compose and Docker Swarm might be useful for orchestration, in particular when deploying with Docker. This is discussed in several separate repositories, such as this one by Tomoya Fujita, member of the Technical Steering Committee of ROS 2.

2. ROS

This section will go into how selected ROS 1 configuration settings can be simplified with Docker, in particular the network set-up.

2.1 External ROS master

Sometimes you want to run nodes inside the Docker and communicate with a ROS master outside of it. This can be done by adding the following environment variables to the docker-compose.yml file:

```yaml
environment:
  - ROS_MASTER_URI=http://localhost:11311
  - ROS_HOSTNAME=localhost
```

where in this case localhost stands for your local machine (the loop-back device 127.0.0.1).

2.1.1 Multiple machines

In case you want the Docker to communicate with another device on the network, be sure to activate the option

```yaml
network_mode: host
extra_hosts:
  - "my_device:192.168.100.1"
```

as well, where my_device corresponds to the host name of the device followed by its IP. The extra_hosts option basically adds another entry to the /etc/hosts file inside the container, similar to what you would normally do manually following the ROS network setup guide. This way the network set-up inside the Docker does not pollute the host system.
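You can verify the entry from inside the running container (the container name ros_docker is an assumption here):

```bash
# The extra_hosts option above should result in an additional line in /etc/hosts
$ docker exec -it ros_docker cat /etc/hosts   # should contain: 192.168.100.1  my_device
```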

Now make sure that pinging works in both directions. In case there is an issue with a firewall (or a VPN), pinging might only work in one direction. If you continued with your set-up regardless, you might be able to receive information (e.g. visualize the robot and its sensors) but not send any (e.g. command the robot). Furthermore, ROS relies on the correct host name being set: If it does not correspond to the name of the remote computer, the communication might also only work in one direction. For time synchronization of multiple machines it should be possible to run chrony from inside a container without any issues. For how this can be done please refer to cturra/ntp or alternatively to this guide. After setting it up, use $ chronyc sources as well as $ chronyc tracking to verify the correct set-up.

You can test the communication between the two machines by launching a roscore on either the local or the remote computer, then starting the Docker container, sourcing the environment inside it and checking whether any topics show up with $ rostopic list. Then you can start publishing a topic with $ rostopic pub /testing std_msgs/String "Testing..." -r 10 on one side (either Docker or host) and check if you receive the messages on the other side with $ rostopic echo /testing. If that works fine in both directions you should be ready to go. If it only works in one direction, check your host configuration as well as your ROS_MASTER_URI and ROS_HOSTNAME environment variables.
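Spelled out, such a round-trip test might look as follows (which side runs which command does not matter, as long as you test both directions):

```bash
# Side A (e.g. inside the Docker container): publish a test topic at 10 Hz
$ rostopic pub /testing std_msgs/String "data: 'Testing...'" -r 10

# Side B (e.g. the host or the remote machine): subscribe and print the messages
$ rostopic echo /testing
```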

As a best practice I normally use a .env file that I place in the same folder as the Dockerfile and the docker-compose.yml, containing the IPs:

```bash
CATKIN_WORKSPACE_DIR="/catkin_ws"
YOUR_IP="192.168.100.2"
ROBOT_IP="192.168.100.1"
ROBOT_HOSTNAME="some_host"
```

The IPs inside this file can then be modified and are used inside the Docker-Compose file to set up the container: the /etc/hosts file (via extra_hosts) as well as the ROS_MASTER_URI and the ROS_IP:

```yaml
version: "3.9"
services:
  ros_docker:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    environment:
      - ROS_MASTER_URI=http://${ROBOT_IP}:11311
      - ROS_IP=${YOUR_IP}
    network_mode: "host"
    extra_hosts:
      - "${ROBOT_HOSTNAME}:${ROBOT_IP}"
    tty: true
    volumes:
      - ../src:${CATKIN_WORKSPACE_DIR}/src
```

In order to update the IPs with this approach, though, you will have to recreate the container. As long as you did not make any modifications to the Dockerfile, Docker will use the cached layers and this should be very quick. Be aware, however, that any progress inside the container will be lost when switching IPs!

An example configuration can be found here.

2.1.2 Combining different package and ROS versions

Combining different ROS 1 versions is not officially supported but largely works as long as the message definitions have not changed. This is problematic with constantly evolving packages such as MoveIt. The interface between the containers has to be chosen wisely in this case, such that the involved messages do not change between the involved distributions. You can use rosmsg md5 <message_type> to quickly verify whether a message definition has changed: If the two MD5 hashes are the same, the two distributions should be able to communicate via this message. Even if the hashes differ, you can go ahead and compile the message, as well as the packages depending on it, from source (do not forget to uninstall the versions installed through Debian packages). This way both distributions will again have the same message definitions.
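Checking this for a concrete message type (geometry_msgs/Twist is just an arbitrary example here) might look as follows:

```bash
# Run inside both containers and compare the output: identical hashes mean
# that the two distributions can exchange this message type
$ rosmsg md5 geometry_msgs/Twist
```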

2.2 Healthcheck

Docker gives you the possibility to add a custom healthcheck to your container. This test should tell Docker whether your container is working correctly or not. Such a healthcheck can be defined from inside the Dockerfile or from a Docker-Compose file.

In a Dockerfile it might look as follows:

HEALTHCHECK [OPTIONS] CMD command

such as

HEALTHCHECK CMD /ros_entrypoint.sh rostopic list || exit 1

or anything similar.

For Docker-Compose you might add something like:

```yaml
healthcheck:
  test: /ros_entrypoint.sh rostopic list || exit 1
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 1m
```
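Once the container is running you can query the outcome of the healthcheck (the container name ros_docker is again an assumption):

```bash
# Prints one of: starting, healthy or unhealthy
$ docker inspect --format '{{.State.Health.Status}}' ros_docker
```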

3. ROS 2

ROS 2 was an important update to ROS that makes it much more suitable for industrial applications. It broke backwards compatibility to fix some of the limitations of ROS 1 and for this purpose followed a more thorough and structured code design that is largely documented here. One of the important changes was moving away from a tightly integrated custom communication middleware, as was the case with ROS's TCP/UDP-based communication, towards an abstraction of the communication layer (see here) and the introduction of DDS as the primary communication layer in ROS 2. ROS 2 is primarily intended to be used with DDS but also allows other middlewares to be used, in particular Zenoh starting with ROS 2 Iron. Another example of such a custom middleware is rmw_email (see here and here). For wrapping a custom middleware one has to provide an rmw_* wrapper for it that respects the API and set the environment variable RMW_IMPLEMENTATION to the corresponding implementation.
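Switching the middleware then boils down to setting this variable before launching any nodes, e.g. (assuming the corresponding RMW package is installed):

```bash
# Select CycloneDDS instead of the default RMW implementation for this shell
$ export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
$ ros2 doctor --report   # the report lists the middleware that is actually in use
```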

3.1 DDS middleware configuration

When using DDS as the middleware in ROS 2 the ROS_DOMAIN_ID replaces the IP-based set-up. For a container this means one would create a ROS_DOMAIN_ID environment variable that again might be controlled by an .env file:

```yaml
environment:
  - ROS_DOMAIN_ID=1 # Any number in the range of 0 to 101; 0 by default
```

Choosing a safe range for the domain ID largely depends on the operating system and is described in more detail in the corresponding article. There might be additional settings for a DDS client, such as telling it which interfaces to use. For this purpose it might make sense to mount the corresponding DDS configuration file into the Docker.

When working on a network with several participants that use ROS (e.g. at a company or research institution), you will have to make sure that people are using different ROS_DOMAIN_IDs. When only using ROS 2 locally, e.g. in simulation, set the variable ROS_LOCALHOST_ONLY=1. This restricts the network traffic to your local PC.
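In a shell this might look as follows (the domain ID 42 is an arbitrary example):

```bash
# Isolate this machine's ROS 2 traffic from the rest of the network
$ export ROS_LOCALHOST_ONLY=1
# Separate your nodes from those of other users sharing the network
$ export ROS_DOMAIN_ID=42
```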

Another thing you might want to configure is the DDS middleware to be used. In ROS 2 one might choose between different (free or commercial) middleware implementations such as FastDDS and CycloneDDS. This will be outlined in more detail in the next section.

What I generally do is define the corresponding environment variables, such as RMW_IMPLEMENTATION and CYCLONEDDS_URI in the case of CycloneDDS, and mount the DDS configuration as a volume inside the container:

```yaml
environment:
  - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
  - CYCLONEDDS_URI=${AMENT_WORKSPACE_DIR}/dds/cyclone.xml
network_mode: "host"
volumes:
  - ../dds:${AMENT_WORKSPACE_DIR}/dds
```

For an example of what this configuration might look like have a look at this folder.

3.1.1 Intra-process communication over shared memory

ROS 2 introduced some design changes aiming at drastically improving the communication speed between nodes on the same computational unit. One such optimization is that intra-process communication is left to the underlying middleware. This means it is down to the chosen middleware to use mechanisms like shared-memory communication for nodes on the same computer. E.g. FastDDS uses shared-memory communication by default in case the environment variable ROS_LOCALHOST_ONLY is set to 1, and CycloneDDS lets you configure it manually as described here. On Linux, /dev/shm is used for shared-memory communication. Therefore, when communicating between containers that set ROS_LOCALHOST_ONLY (or use shared memory explicitly), one might have to mount /dev/shm into the containers. For more information about shared memory and Docker refer to this post.
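A minimal sketch of how this could look with plain docker run (image tag and flags chosen for illustration):

```bash
# Share the host's shared-memory segment with the container so that DDS
# shared-memory transport works across the container boundary
$ docker run -it --network host -v /dev/shm:/dev/shm ros:humble
# Alternatively, --ipc host shares the host's entire IPC namespace (including /dev/shm)
```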

3.2 Zenoh middleware configuration

A new alternative to DDS-based communication in ROS 2 Iron is Zenoh, implemented in rmw_zenoh. Similar to the roscore in ROS 1, it relies on at least a single router that establishes the connection between nodes running on different computers (it is also possible to do without one, but this is not recommended). A good introduction to this can be found in this video. I will add this configuration once rmw_zenoh becomes installable from Debian packages.

4. Bridging ROS 1 and ROS 2

Setting the DDS middleware as described above is particularly important for cross-distro communication, as different ROS distros ship with different default DDS implementations. Neither communication between different DDS implementations nor between different ROS 2 distributions is currently officially supported (generally it works, but there can be problems such as lower frequencies; for more details see here). But similarly to ROS 1, if the messages have not changed (or you compile the messages as well as the packages using them from source) and you are using the same DDS vendor across all involved distros, communication between the different distros can generally be achieved. This can also be useful for bridging ROS 1 to ROS 2. The last Ubuntu version to support both ROS 1 (Noetic) and ROS 2 (Galactic) is Ubuntu 20.04. You can use the corresponding Galactic ROS 1 bridge Docker. In case message definitions have changed from Galactic to the distro that you are using (the ROS 2 API is not stable yet!), you might have to compile the corresponding messages from source. The main advantage over other solutions for having the two run alongside each other (here or here) is that you will have none or only very few repositories that have to be compiled from source and cannot be installed from a Debian package.
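Launching such a bridge might look roughly as follows (a sketch assuming a ROS 1 master is already running on the host and the official Galactic ros1-bridge image tag):

```bash
# Run the bridge container with host networking so it sees both sides
$ docker run -it --network host ros:galactic-ros1-bridge
# Inside the container: point it to the ROS 1 master and start the dynamic bridge
$ export ROS_MASTER_URI=http://localhost:11311
$ ros2 run ros1_bridge dynamic_bridge
```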

5. CUDA and ROS

When running CUDA inside a container, make sure you are setting runtime: nvidia as well as the environment variables NVIDIA_VISIBLE_DEVICES=all and NVIDIA_DRIVER_CAPABILITIES=all inside your Docker Compose file.
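As a sketch inside a Docker-Compose file (the service name is chosen for illustration):

```yaml
services:
  ros_cuda:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
```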

For the image to use, you might find some online, e.g. here, or for the Nvidia Jetson edge computing platforms here with their Dockerfiles here. If you are not able to find an image that contains what you want, it is generally easier to start from an existing nvidia/cuda image on Docker Hub that uses the right version of Ubuntu (e.g. 20.04 for ROS Noetic or 22.04 for ROS 2 Humble) and install ROS on top of it by copying the instructions from the official ROS Docker images. The other way around, starting from a ROS image and installing CUDA on top of it, is generally much trickier! For the available Nvidia CUDA images browse the tags here. By combining the two we can generate a custom image containing ROS and Nvidia as follows:

```Dockerfile
ARG UBUNTU_VERSION=20.04
ARG NVIDIA_CUDA_VERSION=11.8.0

##############
# Base image #
##############
FROM nvidia/cuda:${NVIDIA_CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION} AS base

ARG ROS_DISTRO=noetic

ENV DEBIAN_FRONTEND=noninteractive

ENV ROS_DISTRO=${ROS_DISTRO}
# Add the ROS apt repository and install ROS on top of the CUDA base image
# (gnupg is added here on the assumption that apt-key needs it in this base image)
RUN apt-get update \
 && apt-get install -y \
    curl \
    gnupg \
    lsb-release \
 && sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list' \
 && curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | apt-key add - \
 && apt-get update \
 && apt-get install -y --no-install-recommends \
    ros-${ROS_DISTRO}-ros-base \
 && rm -rf /var/lib/apt/lists/*
```