+# (Optional) Create and activate python virtual environment
+virtualenv venv -p python3
+source venv/bin/activate
+# Install dependencies and start serving
+cd docs
+pip install -r requirements.txt
+mkdocs serve
+# Go to http://127.0.0.1:8000 to view the site.
+
+Linking the README
+rm docs/index.md
+ln "${PWD}/README.md" docs/index.md
Acknowledgement
The code is mainly contributed by Johnson, Yu-Zhong Chen, Assume Zhan, Lam Chon Hang, and others. For a full list of contributors, please refer to the contribution list.
diff --git a/search/search_index.json b/search/search_index.json
index 59294b15..c1e8f2f4 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"ROS 2 Essentials","text":"A repo containing essential ROS2 Humble features for controlling Autonomous Mobile Robots (AMRs). Please set up an Ubuntu environment before using this repo.
Please note that this repo is under rapid development. The code is not guaranteed to be stable, and breaking changes may occur.
The documentation is hosted on https://j3soon.github.io/ros2-essentials/.
"},{"location":"#pre-built-workspaces","title":"Pre-built Workspaces","text":"Pre-built Docker images for each workspace can be pulled by running docker compose pull
in the corresponding workspace directory.
Pulling the pre-built Docker images can bypass the time-consuming building process (for both docker compose & devcontainers).
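As a concrete sketch of the pull-then-run flow (the workspace directory below is an example; substitute whichever workspace you are using):

```shell
# Pull the pre-built image instead of building it locally,
# then start the containers in the background.
cd template_ws/docker   # example workspace directory
docker compose pull
docker compose up -d
```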
Workspace amd64 arm64 Notes Maintainer Template \u2714\ufe0f \u2714\ufe0f Yu-Zhong Chen ORB-SLAM3 \u2714\ufe0f \u274c Assume Zhan RTAB-Map \u2714\ufe0f \u274c Assume Zhan ROS1 Bridge \u2714\ufe0f \u2714\ufe0f Skip linting Yu-Zhong Chen Cartographer \u2714\ufe0f \u2714\ufe0f Assume Zhan Clearpath Husky \u2714\ufe0f \u2714\ufe0f Real-world support Yu-Zhong Chen Yujin Robot Kobuki \u2714\ufe0f \u2714\ufe0f Real-world support Yu-Zhong Chen Velodyne VLP-16 \u2714\ufe0f \u2714\ufe0f Real-world support Assume Zhan Gazebo World \u2714\ufe0f \u274c\ufe0f Yu-Zhong Chen ALOHA \u2714\ufe0f \u2714\ufe0f Simulation only"},{"location":"#building-documentation","title":"Building Documentation","text":"virtualenv venv -p python3\nsource venv/bin/activate\ncd docs\npip install -r requirements.txt\nmkdocs serve\n# Go to http://127.0.0.1:8000 to view the site.\n
"},{"location":"#acknowledgement","title":"Acknowledgement","text":"The code is mainly contributed by Johnson, Yu-Zhong Chen, Assume Zhan, Lam Chon Hang, and others. For a full list of contributors, please refer to the contribution list.
We extend our gratitude to ElsaLab and NVIDIA AI Technology Center (NVAITC) for their support in making this project possible.
Disclaimer: this is not an official NVIDIA product.
"},{"location":"#license","title":"License","text":"All modifications are licensed under Apache License 2.0.
However, this repository includes many dependencies released under different licenses. For information on these licenses, please check the commit history. Make sure to review the license of each dependency before using this repository.
The licenses for dependencies will be clearly documented in the workspace README in the future.
"},{"location":"#supplementary","title":"Supplementary","text":""},{"location":"#installing-docker","title":"Installing Docker","text":"Follow this post for the installation instructions.
"},{"location":"#installing-nvidia-container-toolkit","title":"Installing NVIDIA Container Toolkit","text":"Follow this post for the installation instructions.
"},{"location":"aloha-ws/","title":"ALOHA","text":""},{"location":"aloha-ws/#start-container","title":"Start Container","text":"cd ~/ros2-essentials/aloha_ws/docker\nxhost +local:docker\ndocker compose up\n# The first build will take a while (~10 mins), please wait patiently.\n
The commands in the following sections assume that you are inside the Docker container:
# in a new terminal\ndocker exec -it ros2-aloha-ws bash\n
"},{"location":"aloha-ws/#view-robot-model-in-rviz","title":"View Robot Model in RViz","text":"ros2 launch interbotix_xsarm_descriptions xsarm_description.launch.py robot_model:=vx300s use_joint_pub_gui:=true\n
It is worth noting that the aloha_vx300s.urdf.xacro
and vx300s.urdf.xacro
files are identical. We opt to use vx300s
since aloha_vx300s
seems to lack corresponding configs, such as those for MoveIt 2.
"},{"location":"aloha-ws/#view-robot-model-in-gazebo","title":"View Robot Model in Gazebo","text":"ros2 launch interbotix_xsarm_sim xsarm_gz_classic.launch.py robot_model:=vx300s\n
"},{"location":"aloha-ws/#ros-2-control","title":"ROS 2 Control","text":"ros2 launch interbotix_xsarm_control xsarm_control.launch.py robot_model:=vx300s use_sim:=true\n# and then use the `Interbotix Control Panel`.\n
"},{"location":"aloha-ws/#moveit-2-with-rviz","title":"MoveIt 2 with RViz","text":"ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=fake\n
"},{"location":"aloha-ws/#moveit-2-with-gazebo","title":"MoveIt 2 with Gazebo","text":"ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=gz_classic\n
"},{"location":"aloha-ws/#moveit-2-with-isaac-sim","title":"MoveIt 2 with Isaac Sim","text":"Prepare USD files:
cd /home/ros2-essentials/aloha_ws/isaacsim/scripts\n./create_urdf_from_xacro.sh\npython3 create_vx300s_from_urdf.py\npython3 create_vx300s_with_omnigraph.py\n
and run:
ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=isaac\n# and then move the target and use the `MotionPlanning` panel.\n
"},{"location":"aloha-ws/#debugging-with-isaac-sim","title":"Debugging with Isaac Sim","text":"The Isaac Sim app can be launched with:
isaacsim omni.isaac.sim\n
Keep in mind that the standalone scripts can be easily debugged in Isaac Sim's Script Editor
. Simply copy the code, omitting anything related to SimulationApp (remove the beginning and end), and paste it into the Script Editor
and run it.
To open a pre-configured USD file with OmniGraph:
View the current joint states:
# in a new terminal\ndocker exec -it ros2-aloha-ws bash\nros2 topic echo /vx300s/joint_states\n
A specific world can also be directly launched and played with:
isaacsim omni.isaac.sim --exec '/home/ros2-essentials/aloha_ws/isaacsim/scripts/open_isaacsim_stage.py --path /home/ros2-essentials/aloha_ws/isaacsim/assets/vx300s_og.usd'\n
To access Nucleus from Isaac Sim, you should install Nucleus with default username/password admin:admin
on your host machine or connect to an external Nucleus server.
"},{"location":"aloha-ws/#references","title":"References","text":" - Interbotix X-Series Arms | Trossen Robotics Documentation
- ROS 2 Interface
- ROS 2 Standard Software Setup
- ROS 2 Open Source Packages
- Stationary ALOHA Software Setup | Trossen Robotics Documentation
"},{"location":"cartographer-ws/","title":"Cartographer","text":""},{"location":"cartographer-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/cartographer_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"cartographer-ws/#simple-test-with-turtlebot3","title":"Simple Test With Turtlebot3","text":" -
Attach to the container
docker attach ros2-cartographer-ws\ncd /home/ros2-essentials/cartographer_ws\n
- Open the turtlebot simulation in tmux
export TURTLEBOT3_MODEL=burger\nros2 launch turtlebot3_gazebo turtlebot3_world.launch.py\n
- Run the SLAM node in a new tmux window
ros2 launch turtlebot3_cartographer cartographer.launch.py is_sim:=True\n
- Run the control tool in a new tmux window
rqt_robot_steering\n
"},{"location":"cartographer-ws/#building-packages","title":"Building Packages","text":"docker attach ros2-cartographer-ws\ncd /home/ros2-essentials/cartographer_ws\nrosdep update\nrosdep install --from-paths src --ignore-src --rosdistro humble -y\ncolcon build\n
After the build process, make sure to source the install/setup.bash
file. Otherwise, ROS2 will not locate the executable files. You can open a new terminal to accomplish this.
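As a sketch of that sourcing step (using the workspace path from this section):

```shell
# After colcon build completes, overlay the newly built workspace
# so ROS 2 can resolve the executables:
cd /home/ros2-essentials/cartographer_ws
source install/setup.bash
```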
"},{"location":"cartographer-ws/#multi-lidar-single-robot-slam-test","title":"Multi LiDAR - Single Robot SLAM test","text":""},{"location":"cartographer-ws/#simulation","title":"Simulation","text":""},{"location":"cartographer-ws/#run-the-slam-node","title":"Run the SLAM node","text":" - Run the control tool in a new tmux window
rqt_robot_steering\n
"},{"location":"cartographer-ws/#references","title":"References","text":""},{"location":"gazebo-world-ws/","title":"Gazebo World","text":"This repository contains several Gazebo worlds, which are valuable for testing robots or agents in both indoor and outdoor environments.
"},{"location":"gazebo-world-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"ros2-essentials\n\u251c\u2500\u2500 gazebo_world_ws\n| \u251c\u2500\u2500 .devcontainer\n| \u251c\u2500\u2500 docker\n| \u251c\u2500\u2500 figure\n| \u251c\u2500\u2500 src\n| | \u251c\u2500\u2500 aws-robomaker-hospital-world\n| | \u251c\u2500\u2500 aws-robomaker-small-house-world\n| | \u251c\u2500\u2500 aws-robomaker-small-warehouse-world\n| | \u251c\u2500\u2500 citysim\n| | \u251c\u2500\u2500 clearpath_playpen\n| | \u251c\u2500\u2500 gazebo_launch\n| | \u2514\u2500\u2500 turtlebot3_gazebo\n| \u251c\u2500\u2500 .gitignore\n| \u2514\u2500\u2500 README.md\n\u2514\u2500\u2500 ...\n
"},{"location":"gazebo-world-ws/#how-to-use","title":"\ud83d\udea9 How to use \ud83d\udea9","text":"Available target worlds: - aws_hospital - aws_small_house - aws_warehouse - citysim - clearpath_playpen - turtlebot3
The turtlebot3 offers multiple worlds to choose from. For more information, you can refer to the launch file located at turtlebot3.launch.py
in the gazebo_launch
package.
"},{"location":"gazebo-world-ws/#use-in-gazebo_world_ws-container","title":"Use in gazebo_world_ws
container","text":"Normally, you wouldn\u2019t want to use it inside the gazebo_world_ws
container, since this workspace doesn\u2019t include any robots by default. However, we still provide the Dockerfile for this workspace; you can use it if you have specific requirements.
# Build the workspace\ncd /home/ros2-essentials/gazebo_world_ws\ncolcon build --symlink-install\nsource /home/ros2-essentials/gazebo_world_ws/install/setup.bash\n\n# Launch the world\n# Replace <target world> with the name of the world you wish to launch.\nros2 launch gazebo_launch <target world>.launch.py\n# or launch turtlebot3 worlds, such as:\nros2 launch gazebo_launch turtlebot3.launch.py gazebo_world:=turtlebot3_dqn_stage3.world\n
"},{"location":"gazebo-world-ws/#use-in-other-containerworkspace","title":"Use in other container/workspace","text":""},{"location":"gazebo-world-ws/#1-compile-packages","title":"1. Compile packages","text":"To use it in other containers, remember to compile gazebo_world_ws
first. Generally, you can compile it using other containers directly, as the required dependencies for these packages should already be installed in all workspaces. You should only use the Docker environment provided by gazebo_world_ws
if you encounter issues with compilation or path settings.
# Build the workspace\ncd /home/ros2-essentials/gazebo_world_ws\ncolcon build --symlink-install\n
"},{"location":"gazebo-world-ws/#2-source-the-local_setupbash","title":"2. Source the local_setup.bash
","text":"Add the following lines into .bashrc
file.
# Source gazebo_world_ws environment\nGAZEBO_WORLD_WS_DIR=\"${ROS2_WS}/../gazebo_world_ws\"\nif [ ! -d \"${GAZEBO_WORLD_WS_DIR}/install\" ]; then\n echo \"gazebo_world_ws has not been built yet. Building workspace...\"\n cd ${GAZEBO_WORLD_WS_DIR}\n colcon build --symlink-install\n cd -\n echo \"gazebo_world_ws built successfully!\"\nfi\nsource ${GAZEBO_WORLD_WS_DIR}/install/local_setup.bash\n
"},{"location":"gazebo-world-ws/#3-launch-gazebo-in-the-launch-file","title":"3. Launch gazebo in the launch file","text":"Add the code into your launch file.
Remember to replace the <target world>
with the one you want.
from launch import LaunchDescription\nfrom launch.actions import IncludeLaunchDescription, DeclareLaunchArgument\nfrom launch.substitutions import PathJoinSubstitution, LaunchConfiguration\nfrom launch_ros.substitutions import FindPackageShare\n\nARGUMENTS = [\n DeclareLaunchArgument(\n \"launch_gzclient\",\n default_value=\"True\",\n description=\"Launch gzclient, by default is True, which shows the gazebo GUI\",\n ),\n]\n\n\ndef generate_launch_description():\n\n ...\n\n # Launch Gazebo\n launch_gazebo = IncludeLaunchDescription(\n PathJoinSubstitution(\n [\n FindPackageShare(\"gazebo_launch\"), \n \"launch\",\n \"<target world>.launch.py\",\n ],\n ),\n launch_arguments={\n \"launch_gzclient\": LaunchConfiguration(\"launch_gzclient\"),\n }.items(),\n )\n\n ...\n\n ld = LaunchDescription(ARGUMENTS)\n ld.add_action(launch_gazebo)\n\n ...\n\n return ld\n
"},{"location":"gazebo-world-ws/#snapshot","title":"\u2728 Snapshot \u2728","text":"World Snapshot aws_hospital aws_small_house aws_warehouse citysim turtlebot3_stage3 turtlebot3_house turtlebot3_world clearpath_playpen"},{"location":"gazebo-world-ws/#troubleshooting","title":"\ud83d\udd0d Troubleshooting \ud83d\udd0d","text":""},{"location":"gazebo-world-ws/#getting-stuck-when-launching-gazebo","title":"Getting stuck when launching Gazebo","text":"The first time you launch a Gazebo world might take longer because Gazebo needs to download models from the cloud to your local machine. Please be patient while it downloads. If it takes too long, like more than an hour, you can check the gzserver
logs in ~/.gazebo
to see where it\u2019s getting stuck. The most common issue is using a duplicate port, which prevents Gazebo from starting. You can use lsof -i:11345
to identify which process is using the port and then use kill -9
to terminate it.
"},{"location":"gazebo-world-ws/#unable-to-find-gazebo_launch","title":"Unable to find gazebo_launch
","text":"Please make sure you have sourced the local_setup.bash
and compiled gazebo_world_ws
. If you encounter a path issue, try removing the install
, build
, and log
folders in gazebo_world_ws
and compile the workspace in your container again.
"},{"location":"husky-ws/","title":"Clearpath Husky","text":"This repository will help you configure the environment for Husky quickly.
"},{"location":"husky-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this workspace:
husky_ws\n\u251c\u2500\u2500 .devcontainer\n\u251c\u2500\u2500 docker\n\u251c\u2500\u2500 figure\n\u251c\u2500\u2500 install\n\u251c\u2500\u2500 build\n\u251c\u2500\u2500 log\n\u251c\u2500\u2500 script\n| \u251c\u2500\u2500 husky-bringup.sh\n| \u251c\u2500\u2500 husky-generate.sh\n| \u2514\u2500\u2500 husky-teleop.sh\n\u251c\u2500\u2500 src\n| \u251c\u2500\u2500 husky\n| | \u251c\u2500\u2500 husky_base\n| | \u251c\u2500\u2500 husky_bringup\n| | \u251c\u2500\u2500 husky_control\n| | \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 udev_rules\n| \u251c\u2500\u2500 41-clearpath.rules\n| \u2514\u2500\u2500 install_udev_rules.sh\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 README.md\n
build
/ install
/ log
folders will appear once you've built the packages.
"},{"location":"husky-ws/#introduction","title":"\u2728 Introduction \u2728","text":"This repository has been derived from the Clearpath Husky's repository. Here is the original repository. However, the original repository was designed for ROS1, and it is in the process of being upgraded to ROS2.
Below are the main packages for Husky:
- husky_base : Base configuration
- husky_control : Control configuration
- husky_description : Robot description (URDF)
- husky_navigation : Navigation configuration
- husky_gazebo : Simulate environment
- husky_viz : Visualize data
"},{"location":"husky-ws/#testing","title":"\ud83d\udea9 Testing \ud83d\udea9","text":""},{"location":"husky-ws/#building-packages","title":"Building packages","text":"Before attempting any examples, please remember to build the packages first. If you encounter any dependency errors, please use rosdep to resolve them.
cd /home/ros2-essentials/husky_ws\nrosdep update\nrosdep install --from-paths src --ignore-src --rosdistro humble -y\ncolcon build\n
After the build process, make sure to source the install/setup.bash
file. Otherwise, ROS2 will not locate the executable files. You can open a new terminal to accomplish this.
"},{"location":"husky-ws/#view-the-model","title":"View the model","text":"ros2 launch husky_viz view_model_launch.py\n
"},{"location":"husky-ws/#demonstration-of-slam","title":"Demonstration of SLAM.","text":"ros2 launch husky_navigation slam_launch.py\n
Rendering the model may take some time, so please be patient !
"},{"location":"husky-ws/#control-real-robot","title":"Control real robot","text":"Before you proceed, please ensure that you've plugged the USB adapter of the Husky into the computer and mounted it into the container. (plugging in the USB adapter before creating the container is preferred but not required)
```bash=
"},{"location":"husky-ws/#move-to-the-workspace-source-bashrc-and-bringup-husky","title":"Move to the workspace, source .bashrc, and bringup husky.","text":"cd /home/ros2-essentials/husky_ws source ~/.bashrc ./script/husky-bringup.sh
"},{"location":"husky-ws/#optional-open-a-new-terminal-control-the-robot-via-keyboard-teleoperation","title":"(Optional) Open a new terminal & control the robot via keyboard teleoperation.","text":"./script/husky-teleop.sh ```
"},{"location":"husky-ws/#license","title":"License","text":"To maintain reproducibility, we have frozen the following packages at specific commits. The licenses of these packages are listed below:
- husky/husky (at commit 1e0b1d1,
humble-devel
branch) is released under the BSD-3-Clause License. - clearpathrobotics/LMS1xx (at commit 90001ac,
humble-devel
branch) is released under the LGPL License. - osrf/citysim (at commit 3928b08) is released under the Apache-2.0 License.
- clearpathrobotics/clearpath_computer_installer (at commit 7e7f415) is released under the BSD-3-Clause License.
- clearpathrobotics/clearpath_robot/clearpath_robot/debian/udev (at commit 17d55f1) is released under the BSD 3-Clause License.
Further changes based on the packages above are release under the Apache-2.0 License, as stated in the repository.
"},{"location":"kobuki-ws/","title":"Yujin Robot Kobuki","text":"This repository facilitates the quick configuration of the simulation environment and real robot driver for Kobuki.
"},{"location":"kobuki-ws/#introduction","title":"\u25fb\ufe0f Introduction \u25fb\ufe0f","text":"This repository is primarily based on the kobuki-base. Below are the main packages for Kobuki:
Package Introduction kobuki_control The localization configuration of Kobuki kobuki_core Base configuration for Kobuki kobuki_gazebo Simulating Kobuki in Gazebo kobuki_launch Launch Kobuki in Gazebo or with a real robot kobuki_navigation SLAM setup for Kobuki kobuki_description Robot description (URDF) kobuki_node Kobuki Driver kobuki_rviz Visualizing data in RViz2 velodyne_simulator Simulating VLP-16 in Gazebo"},{"location":"kobuki-ws/#testing","title":"\ud83d\udea9 Testing \ud83d\udea9","text":""},{"location":"kobuki-ws/#building-packages","title":"Building packages","text":"If you only need to bring up the real Kobuki robot, you don't need to compile the workspace. The Kobuki driver is already included in the Docker image. After the Docker image is built, you can directly bring up the robot.
cd /home/ros2-essentials/kobuki_ws\n\n# For x86_64 architecture\ncolcon build --symlink-install\n# For arm64 architecture\ncolcon build --symlink-install --packages-ignore velodyne_gazebo_plugins\n
- The
--symlink-install
flag is optional, adding this flag may provide more convenience. See this post for more info. - The
--packages-ignore
flag is used to ignore the velodyne_gazebo_plugins
package. This package will use the gazebo_ros_pkgs
package, which is not supported in the arm64 architecture. - Please note that the building process for the embedded system may take a long time and could run out of memory. You can use the flag
--parallel-workers 1
to reduce the number of parallel workers, or you can build the packages on a more powerful machine and then transfer the executable files to the embedded system. For guidance on building the packages on an x86_64 architecture and then transferring the files to an arm64 architecture, refer to the cross-compilation
section at the end of this README.
After the build process, make sure to source the install/setup.bash
file. Otherwise, ROS2 will not locate the executable files. You can open a new terminal to accomplish this.
"},{"location":"kobuki-ws/#visualize-the-model-in-rviz","title":"Visualize the model in RViz","text":"ros2 launch kobuki_rviz view_model_launch.py\n
You can view the published states under TF > Frames > wheel_left_link
and TF > Frames > wheel_right_link
in RViz.
"},{"location":"kobuki-ws/#launch-the-robot-in-gazebo","title":"Launch the robot in Gazebo","text":"ros2 launch kobuki_launch kobuki.launch.py is_sim:=true\n
"},{"location":"kobuki-ws/#launch-the-robot-in-the-real-world","title":"Launch the robot in the real world","text":"# Inside the container\ncd /home/ros2-essentials/kobuki_ws\n./script/kobuki-bringup.sh\n\n# or Outside the container\ncd /path/to/kobuki_ws/docker\ndocker compose run kobuki-ws /home/ros2-essentials/kobuki_ws/script/kobuki-bringup.sh\n
If you have successfully connected to the Kobuki, you should hear a sound from it. Otherwise, there may be errors. You can try re-plugging the USB cable, restarting the Kobuki, or even restarting the container.
If you encounter an error message like the one below or a similar error in the terminal, please ensure that your USB cable is properly connected to the Kobuki and that the connection is stable. Additionally, if the Kobuki's battery is low, communication failures are more likely to occur. Please charge the Kobuki fully before trying again.
To control the Kobuki with a keyboard, you can use the teleop_twist_keyboard
package.
# Recommend speed: \n# - Linear 0.1\n# - Angular 0.3\n\n# Inside the container\ncd /home/ros2-essentials/kobuki_ws\n./script/kobuki-teleop.sh\n\n# or Outside the container\ncd /path/to/kobuki_ws/docker\ndocker compose run kobuki-ws /home/ros2-essentials/kobuki_ws/script/kobuki-teleop.sh\n
"},{"location":"kobuki-ws/#launch-the-demo-of-slam","title":"Launch the demo of SLAM","text":"This demo is based on the slam_toolbox package and only supports the simulation environment.
ros2 launch kobuki_navigation slam.launch.py\n
"},{"location":"kobuki-ws/#cross-compilation","title":"Cross-compilation","text":"Since the embedded system may not have enough memory to build the packages, or it may take a long time to build them, you can build the packages on a x86_64 machine and then copy the executable files to the arm64 machine.
First, you need to set up the Docker cross-compilation environment. See this repo and this website for more info.
# Install the QEMU packages\nsudo apt-get install qemu binfmt-support qemu-user-static\n# This step will execute the registering scripts\ndocker run --rm --privileged multiarch/qemu-user-static --reset -p yes --credential yes\n# Testing the emulation environment\ndocker run --rm -t arm64v8/ros:humble uname -m\n
Secondly, modify the target platform for the Docker image in docker/compose.yaml
by uncommenting the two lines below.
platforms:\n - \"linux/arm64\"\n
Next, navigate to the docker
folder and use the command docker compose build
to build the Docker image.
Note that the arm64 architecture is emulated by the QEMU, so it may consume a lot of CPU and memory resources. You should not use the devcontainer
when building the packages for the arm64 architecture on an x86_64 architecture. Otherwise, you may encounter some problems such as running out of memory. If you really want to use devcontainer
, remove all vscode extensions in the .devcontainer/devcontainer.json
file first.
When the building process ends, use docker compose up -d
and attach to the container by running docker attach ros2-kobuki-ws
. After that, we can start building the ROS packages. If you have built the packages for the x86_64 architecture before, remember to delete the build
, install
, and log
folders.
cd /home/ros2-essentials/kobuki_ws\ncolcon build --symlink-install --packages-ignore velodyne_gazebo_plugins\n
Once everything is built, we can start copying the files to the target machine (arm64 architecture). Please modify the command below to match your settings, such as the IP address.
# (On Host) \ncd /path/to/kobuki_ws\n# Save the Docker image to the file.\ndocker save j3soon/ros2-kobuki-ws | gzip > kobuki_image.tar.gz\n# Copy the file to the target machine.\nscp -C kobuki_image.tar ubuntu@192.168.50.100:~/\n# Copy the kobuki_ws to the target machine. \n# If the workspace exists on the target machine, you can simply copy the `build` and `install` folders to it.\nrsync -aP kobuki_ws ubuntu@192.168.50.100:~/\n
# (On Remote)\ncd ~/\n# Decompress the file.\ngunzip kobuki_image.tar.gz\n# Load the Docker image from the file.\ndocker load < kobuki_image.tar\n# Verify that the Docker image has loaded successfully.\ndocker images | grep ros2-kobuki-ws\n
If you have completed all the commands above without encountering any errors, you can proceed to launch the robot. Refer to the steps above for more information.
"},{"location":"orbslam3-ws/","title":"ORB-SLAM3","text":""},{"location":"orbslam3-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/orbslam3_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"orbslam3-ws/#simple-test-with-dataset","title":"Simple Test With Dataset","text":" - Attach to the container
docker attach ros2-orbslam3-ws\ncd /home/ros2-essentials/orbslam3_ws\n
- Prepare data, only need to be done once
- Play the bag file in
tmux
ros2 bag play V1_02_medium/V1_02_medium.db3 --remap /cam0/image_raw:=/camera\n
- Run the ORB-SLAM3 in a new
tmux
window source ~/test_ws/install/local_setup.bash\nros2 run orbslam3 mono ~/ORB_SLAM3/Vocabulary/ORBvoc.txt ~/ORB_SLAM3/Examples_old/Monocular/EuRoC.yaml false\n
2 windows will pop up, showing the results.
"},{"location":"orbslam3-ws/#reference-repo-or-issues","title":"Reference repo or issues","text":" - Solve build failure
- ORB-SLAM3
- SLAM2 and Foxy docker
- Error when using humble
"},{"location":"ros1-bridge-ws/","title":"ROS1 Bridge","text":"This workspace is utilized to create a bridge between ROS1 and ROS2-humble.
"},{"location":"ros1-bridge-ws/#introduction","title":"\u25fb\ufe0f Introduction \u25fb\ufe0f","text":"ros1_bridge
provides a network bridge that enables the exchange of messages between ROS 1 and ROS 2. You can locate the original repository here.
Within this workspace, you'll find a Dockerfile specifically crafted to build both ros-humble and ros1_bridge from their source code. This necessity arises due to a version conflict between the catkin-pkg-modules
available in the Ubuntu repository and the one in the ROS.
The official explanation
Assuming you are already familiar with the ROS network architecture. If not, I recommend reading the tutorial provided below first.
- https://wiki.ros.org/Master
- https://docs.ros.org/en/humble/Concepts/Basic/About-Discovery.html
"},{"location":"ros1-bridge-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this repo:
ros1_bridge_ws\n\u251c\u2500\u2500 .devcontainer\n| \u2514\u2500\u2500 devcontainer.json\n\u251c\u2500\u2500 docker\n| \u251c\u2500\u2500 .dockerignore\n| \u251c\u2500\u2500 .env\n| \u251c\u2500\u2500 compose.yaml\n| \u251c\u2500\u2500 compose.debug.yaml\n| \u251c\u2500\u2500 Dockerfile\n| \u2514\u2500\u2500 start-bridge.sh\n\u2514\u2500\u2500 README.md\n
"},{"location":"ros1-bridge-ws/#how-to-use","title":"\ud83d\udea9 How to use \ud83d\udea9","text":"The docker compose includes two services: ros1-bridge
and ros1-bridge-build
. The ros1-bridge
service is typically sufficient for regular use, while ros1-bridge-build
is intended for debugging, as it retains all the necessary build tools in the docker image.
If you are not debugging ros1-bridge
, it is recommended to use the terminal rather than VScode-devcontainer. By default, the VScode-devcontainer uses the ros1-bridge-build
service.
"},{"location":"ros1-bridge-ws/#1-build-the-docker-image","title":"1. Build the docker image","text":"While building the image directly from the Dockerfile is possible, it may not be the most efficient choice. To save time, you can pull the image from Dockerhub instead of compiling it from the source.
If you still prefer to build the image yourself, please follow the instructions below:
- VScode user
- Open the workspace in VScode, press
F1
, and enter > Dev Containers: Rebuild Container
.
- Terminal user
- Open a terminal, change the directory to the docker folder, and type
docker compose build
.
Please note that the build process may take approximately 1 hour to complete, with potential delays depending on your computer's performance and network conditions.
"},{"location":"ros1-bridge-ws/#2-adjust-the-parameters-in-the-env-file","title":"2. Adjust the parameters in the .env
file","text":"We've placed all ROS-related parameters in the .env
file. Please adjust these parameters according to your environment. By default, we set the ROS1
master at 127.0.0.1
, and the ROS2
domain ID to 0
. These settings should work for most scenarios.
Please note that if these parameters are not configured correctly, the ros1_bridge
will not function properly!
"},{"location":"ros1-bridge-ws/#3-start-the-container","title":"3. Start the container","text":" - VScode user
- If you build the image through the devcontainer, it will automatically start the container. After you get into the container, type
./start-bridge.sh
in the terminal.
- Terminal user
- Open a terminal, change the directory to the docker folder, and type
docker compose up
.
"},{"location":"ros1-bridge-ws/#4-launch-rosmaster-in-ros1","title":"4. Launch rosmaster in ROS1","text":"This step is automatically executed when you run docker compose up
.
As mentioned in https://github.com/ros2/ros1_bridge/issues/391, you should avoid using roscore
in ROS1 to prevent the issue of not bridging /rosout
. Instead, use rosmaster --core
as an alternative.
"},{"location":"ros1-bridge-ws/#5-begin-communication","title":"5. Begin communication","text":"You have successfully executed all the instructions. Now, you can proceed to initiate communication between ROS1 and ROS2-humble.
Please keep in mind that the bridge will be established only when there are matching publisher-subscriber pairs active for a topic on either side of the bridge.
"},{"location":"ros1-bridge-ws/#example","title":"\u2728 Example \u2728","text":""},{"location":"ros1-bridge-ws/#run-the-bridge-and-the-example-talker-and-listener","title":"Run the bridge and the example talker and listener","text":"Before beginning the example, ensure you have four containers ready:
ros-core
ros2-ros1-bridge-ws
ros1
ros2
When using ros1-bridge
in your application scenarios, you only need the ros-core
and ros2-ros1-bridge-ws
containers. Please replace the ros1
and ros2
containers with your application containers, as those are only included for demonstration purposes and are not required for using ros1-bridge
.
Furthermore, ensure that you mount /dev/shm
into both the ros2-ros1-bridge-ws
and ros2
containers, and that all containers share the host network.
"},{"location":"ros1-bridge-ws/#1-start-the-ros1_bridge-and-other-container","title":"1. Start the ros1_bridge
and other container","text":"# In docker folder\ndocker compose up\n
This command will start the four containers mentioned above.
"},{"location":"ros1-bridge-ws/#2-run-the-talker-and-listener-node","title":"2. Run the talker and listener node","text":"We run the listener node in the ros1
container and the talker node in the ros2
container. You can run the talker node in ros1
and the listener node in ros2
if you'd like. To achieve this, modify the command provided below.
"},{"location":"ros1-bridge-ws/#ros1","title":"ROS1","text":"docker exec -it ros1 /ros_entrypoint.sh bash\n# Inside ros1 container\nrosrun roscpp_tutorials listener\n# or\n# rosrun roscpp_tutorials talker\n
"},{"location":"ros1-bridge-ws/#ros2","title":"ROS2","text":"docker exec -it ros2 /ros_entrypoint.sh bash\n# Inside ros2 container\n# Use the same UID as ros1_bridge to prevent Fast-DDS shared memory permission issues.\n# Ref: https://github.com/j3soon/ros2-essentials/pull/9#issuecomment-1795743063\nuseradd -ms /bin/bash user\nsu user\nsource /opt/ros/humble/setup.bash\nros2 run demo_nodes_cpp talker\n# or\n# ros2 run demo_nodes_cpp listener\n
You can also try the example provided by ros1_bridge
. However, there's no need to source the setup script within the ros2-ros1-bridge-ws
container; simply starting the container will suffice.
"},{"location":"ros1-bridge-ws/#troubleshooting","title":"\ud83d\udd0d Troubleshooting \ud83d\udd0d","text":"If you are trying to debug ros1_bridge
, it is recommended to use the ros1-bridge-build
service in docker compose. It contains all the necessary build tools, which should make debugging easier.
"},{"location":"ros1-bridge-ws/#failed-to-contact-ros-master","title":"Failed to contact ros master","text":"Before launching ros-core
, make sure to adjust the ROS_MASTER_URI
correctly. For more information, please check the .env
file and this section.
You can replace 127.0.0.1
with the actual IP address or hostname of your ROS master. This configuration ensures that your ROS nodes know where to find the ROS master for communication. Remember, in addition to modifying the parameters for ros1_bridge
, you also need to adjust the parameters for your own container!
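As a hedged sketch of that configuration (the IP address below is a placeholder, not a value from this repo):

```shell
# Point the ROS1 side at the machine running rosmaster; 11311 is the
# conventional ROS1 master port. The IP below is a placeholder.
export ROS_MASTER_URI=http://192.168.0.10:11311
echo "$ROS_MASTER_URI"
```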
"},{"location":"ros1-bridge-ws/#ros2-cant-receive-the-topic","title":"ROS2 can't receive the topic","text":"The latest releases of Fast-DDS come with the shared memory transport enabled by default. Therefore, you need to mount shared memory, also known as /dev/shm
, into every container you intend to communicate with when using Fast-DDS. This ensures proper communication between containers. Ensure that you use the same UID as ros2-ros1-bridge-ws
to avoid Fast-DDS shared memory permission issues.
Reference: https://github.com/eProsima/Fast-DDS/issues/1698#issuecomment-778039676
"},{"location":"rtabmap-ws/","title":"RTAB-Map","text":""},{"location":"rtabmap-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/rtabmap_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"rtabmap-ws/#lidar-test-with-gazebo","title":"LiDAR test with gazebo","text":""},{"location":"rtabmap-ws/#rgbd-test-with-gazebo","title":"RGBD test with gazebo","text":""},{"location":"rtabmap-ws/#dual-sensor-test-with-gazebo","title":"Dual sensor test with gazebo","text":""},{"location":"rtabmap-ws/#run-with-rqt","title":"Run with rqt","text":" - Running in a new
tmux
window rqt_robot_steering\n
"},{"location":"rtabmap-ws/#result","title":"Result","text":" - After you've run the demo, you could find the following result directly.
- LiDAR test
- RGBD test
- Dual sensor test
"},{"location":"rtabmap-ws/#reference","title":"Reference","text":""},{"location":"rtabmap-ws/#existing-issues","title":"Existing issues","text":""},{"location":"template-ws/","title":"Template","text":"This template will help you set up a ROS-Humble environment quickly.
"},{"location":"template-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this template:
ros2-essentials\n\u251c\u2500\u2500 scripts\n| \u2514\u2500\u2500 create_workspace.sh\n\u251c\u2500\u2500 template_ws\n| \u251c\u2500\u2500 .devcontainer\n| | \u2514\u2500\u2500 devcontainer.json\n| \u251c\u2500\u2500 docker\n| | \u251c\u2500\u2500 .bashrc\n| | \u251c\u2500\u2500 .dockerignore\n| | \u251c\u2500\u2500 compose.yaml\n| | \u2514\u2500\u2500 Dockerfile\n| \u251c\u2500\u2500 install\n| \u251c\u2500\u2500 build\n| \u251c\u2500\u2500 log\n| \u251c\u2500\u2500 src\n| | \u251c\u2500\u2500 minimal_pkg\n| | | \u251c\u2500\u2500 include\n| | | \u251c\u2500\u2500 minimal_pkg\n| | | \u251c\u2500\u2500 scripts\n| | | \u2514\u2500\u2500 ...\n| | \u2514\u2500\u2500 ...\n| \u2514\u2500\u2500 README.md\n
build
/ install
/ log
folders will appear once you've built the packages.
"},{"location":"template-ws/#how-to-use-this-template","title":"\ud83d\udea9 How to use this template \ud83d\udea9","text":""},{"location":"template-ws/#1-use-the-script-to-copy-the-template-workspace","title":"1. Use the script to copy the template workspace.","text":"We have provided a script to create a new workspace. Please use it to avoid potential issues.
# Open a terminal, and change the directory to ros2-essentials.\n./scripts/create_workspace.sh <new_workspace_name>\n
To unify the naming style, we will modify the string <new_workspace_name>
in some files.
"},{"location":"template-ws/#2-configure-settings","title":"2. Configure settings.","text":"To help you easily find where changes are needed, we have marked most of the areas you need to adjust with # TODO:
. Usually, you only need to modify the configurations without removing anything. If you really need to remove something, make sure you clearly understand your goal and proceed with caution.
docker/Dockerfile
- Add the packages you need according to the comments inside. docker/compose.yaml
- By default, the Docker image is built according to your current computer's architecture. If you need to cross-compile, please modify the platforms
parameter to your desired architecture and set up the basic environment. - If you want to access the GPU in the container, please uncomment the lines accordingly. - If you want to add any environment variables in the container, you can include them in the environment
section, or you can use export VARIABLE=/the/value
in docker/.bashrc
. docker/.bashrc
- We will automatically compile the workspace in .bashrc. If you don't want this to happen, feel free to remove it. If you\u2019re okay with it, remember to adjust the compilation commands according to your packages. src
- Add the ros packages you need here. - minimal_pkg
is the ROS2 package used to create a publisher and subscriber in both Python and C++. You can remove it if you don't need it.
"},{"location":"template-ws/#3-open-the-workspace-folder-using-visual-studio-code","title":"3. Open the workspace folder using Visual Studio Code.","text":"Haven't set up the devcontainer yet ?
Please refer to the tutorial provided by Visual Studio Code first. You can find it here: https://code.visualstudio.com/docs/devcontainers/containers
We recommend using VScode
+ devcontainer
for development. This plugin can significantly reduce development costs and can be used on local computers, remote servers, and even embedded systems. If you don't want to use devcontainer
, you can still use Docker commands for development, such as docker compose up
and docker exec
.
Open the workspace folder using Visual Studio Code, spotting the workspace folder within your Explorer indicates that you've selected the wrong folder. You should only observe the .devcontainer
, docker
and src
folders there.
"},{"location":"template-ws/#4-build-the-container","title":"4. Build the container.","text":"We have pre-built some Docker images on Docker Hub. If the building time is too long, you might consider downloading them from Docker Hub instead. For more information, please refer to the README.md
on the repository's main page.
Press F1
and enter > Dev Containers: Rebuild Container
. Building the images and container will take some time. Please be patient.
You should see the output below.
Done. Press any key to close the terminal.\n
For non-devcontainer users, please navigate to the docker
folder and use docker compose build
to build the container. We have moved all commands that need to be executed into the .bashrc
file. No further action is needed after creating the Docker container.
"},{"location":"template-ws/#5-start-to-develop-with-ros","title":"5. Start to develop with ROS.","text":"You've successfully completed all the instructions. Wishing you a productive and successful journey in your ROS development !
"},{"location":"template-ws/#warning","title":"\u26a0\ufe0f Warning \u26a0\ufe0f","text":" - Do not place your files in any folder named
build
, install
, or log
. These folders will not be tracked by Git. - If you encounter an error when opening Gazebo, consider closing the container and reopen it. Alternatively, you can check the log output in
~/.gazebo
, which may contain relevant error messages. The most common issue is using a duplicate port, which prevents Gazebo from starting. You can use lsof -i:11345
to identify which process is using the port and then use kill -9
to terminate it. xhost +local:docker
is required if the container is not in privileged mode.
"},{"location":"vlp-ws/","title":"Velodyne VLP-16","text":""},{"location":"vlp-ws/#simulation-setup","title":"Simulation Setup","text":""},{"location":"vlp-ws/#add-description-in-defined-robot","title":"Add description in defined robot","text":" -
Declared necessary argument and your own robot_gazebo.urdf.xacro
<xacro:arg name=\"gpu\" default=\"false\"/>\n<xacro:arg name=\"organize_cloud\" default=\"false\"/>\n<xacro:property name=\"gpu\" value=\"$(arg gpu)\" />\n<xacro:property name=\"organize_cloud\" value=\"$(arg organize_cloud)\" />\n
-
Include the velodyne_description
package
<xacro:include filename=\"$(find velodyne_description)/urdf/VLP16.urdf.xacro\" />\n
-
Add LiDAR in the robot
<xacro:VLP-16 parent=\"base_footprint\" name=\"velodyne\" topic=\"/velodyne_points\" organize_cloud=\"${organize_cloud}\" hz=\"10\" samples=\"440\" gpu=\"${gpu}\">\n <origin xyz=\"0 0 0.1\" rpy=\"0 0 0\" />\n</xacro:VLP-16>\n
- You could refer to more information from
veloyne_simulator/velodyne_description/urdf/template.urdf.xacro
"},{"location":"vlp-ws/#launch-lidar-driver-with-simulated-lidar","title":"Launch LiDAR driver with simulated LiDAR","text":" "},{"location":"vlp-ws/#sample-robot","title":"Sample Robot","text":""},{"location":"vlp-ws/#lidar-setup","title":"LiDAR setup","text":""},{"location":"vlp-ws/#hardware-setup","title":"Hardware Setup","text":" - Connect the LiDAR to power.
- Connect the LiDAR to the computer or router using the provided ethernet cable.
"},{"location":"vlp-ws/#directly-using-computer","title":"Directly Using Computer","text":" - Connect the LiDAR to the computer using the ethernet cable.
- Open the computer settings and navigate to Network > Wired.
- Set the IPv4 configuration to 'manual' and configure the settings as shown in the image below:
"},{"location":"vlp-ws/#launch-the-driver","title":"Launch the Driver","text":""},{"location":"vlp-ws/#pipeline","title":"Pipeline","text":" - Data process as following: raw data -> pointcloud -> laser scan -> slam method
- Velodyne driver:
velodyne_driver
get the raw data from LiDAR. - Transform the raw data to pointcloud:
velodyne_pointcloud
- Transform the pointcloud to laser scan:
velodyne_laserscan
"},{"location":"vlp-ws/#operating-in-a-single-launch-file","title":"Operating in a single launch file","text":"ros2 launch vlp_cartographer vlp_driver.launch.py\n
- By the above command, the driver, pointcloud and laserscan will be launched.
"},{"location":"vlp-ws/#published-topics","title":"Published topics","text":"Topic Type Description /velodyne_packets
velodyne_msgs/VelodyneScan
raw data /velodyne_points
sensor_msgs/PointCloud2
Point cloud message /scan
sensor_msgs/LaserScan
laser scan message"},{"location":"vlp-ws/#test-with-cartographer","title":"Test with cartographer","text":""},{"location":"vlp-ws/#bringup","title":"Bringup","text":" "},{"location":"vlp-ws/#reference","title":"Reference","text":" - Velodyne_driver with ROS2 Humble
- Cartographer ROS
- Turtlebot3 with Cartographer
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"ROS 2 Essentials","text":"A repo containing essential ROS2 Humble features for controlling Autonomous Mobile Robots (AMRs). Please setup an Ubuntu environment before using this repo.
Please note that this repo is under rapid development. The code is not guaranteed to be stable, and breaking changes may occur.
The documentation is hosted on https://j3soon.github.io/ros2-essentials/.
"},{"location":"#cloning-the-repository","title":"Cloning the Repository","text":"git clone https://github.com/j3soon/ros2-essentials.git\ncd ros2-essentials\n
"},{"location":"#pre-built-workspaces","title":"Pre-built Workspaces","text":"Pre-built Docker images for each workspace can be pulled by running docker compose pull
in the corresponding workspace directory. Pulling these images bypasses the time-consuming build process (for both Docker Compose and Dev Containers).
Click on the following workspaces to navigate to their respective documentation.
Workspace amd64 arm64 Notes Maintainer Template \u2714\ufe0f \u2714\ufe0f Yu-Zhong Chen ORB-SLAM3 \u2714\ufe0f \u274c Assume Zhan RTAB-Map \u2714\ufe0f \u274c Assume Zhan ROS1 Bridge \u2714\ufe0f \u2714\ufe0f Skip linting Yu-Zhong Chen Cartographer \u2714\ufe0f \u2714\ufe0f Assume Zhan Clearpath Husky \u2714\ufe0f \u2714\ufe0f Real-world support Yu-Zhong Chen Yujin Robot Kobuki \u2714\ufe0f \u2714\ufe0f Real-world support Yu-Zhong Chen Velodyne VLP-16 \u2714\ufe0f \u2714\ufe0f Real-world support Assume Zhan Gazebo World \u2714\ufe0f \u274c\ufe0f Yu-Zhong Chen ALOHA \u2714\ufe0f \u2714\ufe0f Simulation only"},{"location":"#building-documentation","title":"Building Documentation","text":"# (Optional) Create and activate python virtual environment\nvirtualenv venv -p python3\nsource venv/bin/activate\n# Install dependencies and start serving\ncd docs\npip install -r requirements.txt\nmkdocs serve\n# Go to https://127.0.0.1:8000 to view the site.\n
"},{"location":"#linking-the-readme","title":"Linking the README","text":"rm docs/index.md\nln \"${PWD}/README.md\" docs/index.md\n
"},{"location":"#acknowledgement","title":"Acknowledgement","text":"The code is mainly contributed by Johnson, Yu-Zhong Chen, Assume Zhan, Lam Chon Hang, and others. For a full list of contributors, please refer to the contribution list.
We extend our gratitude to ElsaLab and NVIDIA AI Technology Center (NVAITC) for their support in making this project possible.
Disclaimer: this is not an official NVIDIA product.
"},{"location":"#license","title":"License","text":"All modifications are licensed under Apache License 2.0.
However, this repository includes many dependencies released under different licenses. For information on these licenses, please check the commit history. Make sure to review the license of each dependency before using this repository.
The licenses for dependencies will be clearly documented in the workspace README in the future.
"},{"location":"#supplementary","title":"Supplementary","text":""},{"location":"#installing-docker","title":"Installing Docker","text":"Follow this post for the installation instructions.
"},{"location":"#installing-nvidia-container-toolkit","title":"Installing NVIDIA Container Toolkit","text":"Follow this post for the installation instructions.
"},{"location":"aloha-ws/","title":"ALOHA","text":""},{"location":"aloha-ws/#start-container","title":"Start Container","text":"cd ~/ros2-essentials/aloha_ws/docker\nxhost +local:docker\ndocker compose up\n# The first build will take a while (~10 mins), please wait patiently.\n
The commands in the following sections assume that you are inside the Docker container:
# in a new terminal\ndocker exec -it ros2-aloha-ws bash\n
"},{"location":"aloha-ws/#view-robot-model-in-rviz","title":"View Robot Model in RViz","text":"ros2 launch interbotix_xsarm_descriptions xsarm_description.launch.py robot_model:=vx300s use_joint_pub_gui:=true\n
It is worth noting that aloha_vx300s.urdf.xacro
and vx300s.urdf.xacro
files are identical. We opt to use vx300s
since aloha_vx300s
seems to lack corresponding configs, such as those for MoveIt 2.
"},{"location":"aloha-ws/#view-robot-model-in-gazebo","title":"View Robot Model in Gazebo","text":"ros2 launch interbotix_xsarm_sim xsarm_gz_classic.launch.py robot_model:=vx300s\n
"},{"location":"aloha-ws/#ros-2-control","title":"ROS 2 Control","text":"ros2 launch interbotix_xsarm_control xsarm_control.launch.py robot_model:=vx300s use_sim:=true\n# and then use the `Interbotix Control Panel`.\n
"},{"location":"aloha-ws/#moveit-2-with-rviz","title":"MoveIt 2 with RViz","text":"ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=fake\n
"},{"location":"aloha-ws/#moveit-2-with-gazebo","title":"MoveIt 2 with Gazebo","text":"ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=gz_classic\n
"},{"location":"aloha-ws/#moveit-2-with-isaac-sim","title":"MoveIt 2 with Isaac Sim","text":"Prepare USD files:
cd /home/ros2-essentials/aloha_ws/isaacsim/scripts\n./create_urdf_from_xacro.sh\npython3 create_vx300s_from_urdf.py\npython3 create_vx300s_with_omnigraph.py\n
and run:
ros2 launch interbotix_xsarm_moveit xsarm_moveit.launch.py robot_model:=vx300s hardware_type:=isaac\n# and then move the target and use the `MotionPlanning` panel.\n
"},{"location":"aloha-ws/#debugging-with-isaac-sim","title":"Debugging with Isaac Sim","text":"The Isaac Sim app can be launched with:
isaacsim omni.isaac.sim\n
Keep in mind that the standalone scripts can be easily debugged in Isaac Sim's Script Editor
. Simply copy the code, omitting anything related to SimulationApp (remove the beginning and end), and paste to the Script Editor
and run it.
To open pre-configured USD file with OmniGraph:
View the current joint states:
# in a new terminal\ndocker exec -it ros2-aloha-ws bash\nros2 topic echo /vx300s/joint_states\n
A specific world can also be directly launched and played with:
isaacsim omni.isaac.sim --exec '/home/ros2-essentials/aloha_ws/isaacsim/scripts/open_isaacsim_stage.py --path /home/ros2-essentials/aloha_ws/isaacsim/assets/vx300s_og.usd'\n
To access Nucleus from Isaac Sim, you should install Nucleus with default username/password admin:admin
on your host machine or connect to an external Nucleus server.
"},{"location":"aloha-ws/#references","title":"References","text":" - Interbotix X-Series Arms | Trossen Robotics Documentation
- ROS 2 Interface
- ROS 2 Standard Software Setup
- ROS 2 Open Source Packages
- Stationary ALOHA Software Setup | Trossen Robotics Documentation
"},{"location":"cartographer-ws/","title":"Cartographer","text":""},{"location":"cartographer-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/cartographer_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"cartographer-ws/#simple-test-with-turtlebot3","title":"Simple Test With Turtlebot3","text":" -
Attach to the container
docker attach ros2-cartographer-ws\ncd /home/ros2-essentials/cartographer_ws\n
- Open the turtlebot simulation in tmux
export TURTLEBOT3_MODEL=burger\nros2 launch turtlebot3_gazebo turtlebot3_world.launch.py\n
- Run the SLAM node in new window of tmux
ros2 launch turtlebot3_cartographer cartographer.launch.py is_sim:=True\n
- Run the control tool in new window of tmux
rqt_robot_steering\n
"},{"location":"cartographer-ws/#building-packages","title":"Building Packages","text":"docker attach ros2-cartographer-ws\ncd /home/ros2-essentials/cartographer_ws\nrosdep update\nrosdep install --from-paths src --ignore-src --rosdistro humble -y\ncolcon build\n
After the build process, make sure to source the install/setup.bash
file. Otherwise, ROS2 will not locate the executable files. You can open a new terminal to accomplish this.
"},{"location":"cartographer-ws/#multi-lidar-single-robot-slam-test","title":"Multi LiDAR - Single Robot SLAM test","text":""},{"location":"cartographer-ws/#simulation","title":"Simulation","text":""},{"location":"cartographer-ws/#run-the-slam-node","title":"Run the SLAM node","text":" - Run the control tool in new window of
tmux
rqt_robot_steering\n
"},{"location":"cartographer-ws/#references","title":"References","text":""},{"location":"gazebo-world-ws/","title":"Gazebo World","text":"This repository contains several Gazebo worlds, which are valuable for testing robots or agents in both indoor and outdoor environments.
"},{"location":"gazebo-world-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"ros2-essentials\n\u251c\u2500\u2500 gazebo_world_ws\n| \u251c\u2500\u2500 .devcontainer\n| \u251c\u2500\u2500 docker\n| \u251c\u2500\u2500 figure\n| \u251c\u2500\u2500 src\n| | \u251c\u2500\u2500 aws-robomaker-hospital-world\n| | \u251c\u2500\u2500 aws-robomaker-small-house-world\n| | \u251c\u2500\u2500 aws-robomaker-small-warehouse-world\n| | \u251c\u2500\u2500 citysim\n| | \u251c\u2500\u2500 clearpath_playpen\n| | \u251c\u2500\u2500 gazebo_launch\n| | \u2514\u2500\u2500 turtlebot3_gazebo\n| \u251c\u2500\u2500 .gitignore\n| \u2514\u2500\u2500 README.md\n\u2514\u2500\u2500 ...\n
"},{"location":"gazebo-world-ws/#how-to-use","title":"\ud83d\udea9 How to use \ud83d\udea9","text":"Available target worlds: - aws_hospital - aws_small_house - aws_warehouse - citysim - clearpath_playpen - turtlebot3
The turtlebot3 offers multiple worlds to choose from. For more information, you can refer to the launch file located at turtlebot3.launch.py
in the gazebo_launch
package.
"},{"location":"gazebo-world-ws/#use-in-gazebo_world_ws-container","title":"Use in gazebo_world_ws
container","text":"Normally, you wouldn\u2019t want to use it inside the gazebo_world_ws
container, since this workspace doesn\u2019t include any robots by default. However, we still provide the Dockerfile for this workspace, you can use it if you have specific requirements.
# Build the workspace\ncd /home/ros2-essentials/gazebo_world_ws\ncolcon build --symlink-install\nsource /home/ros2-essentials/gazebo_world_ws/install/setup.bash\n\n# Launch the world\n# Replace <target world> with the name of the world you wish to launch.\nros2 launch gazebo_launch <target world>.launch.py\n# or launch turtlebot3 worlds, such as:\nros2 launch gazebo_launch turtlebot3.launch.py gazebo_world:=turtlebot3_dqn_stage3.world\n
"},{"location":"gazebo-world-ws/#use-in-other-containerworkspace","title":"Use in other container/workspace","text":""},{"location":"gazebo-world-ws/#1-compile-packages","title":"1. Compile packages","text":"To use it in other containers, remember to compile gazebo_world_ws
first. Generally, you can compile it using other containers directly, as the required dependencies for these packages should already be installed in all workspaces. You should only use the Docker environment provided by gazebo_world_ws
if you encounter issues with compilation or path settings.
# Build the workspace\ncd /home/ros2-essentials/gazebo_world_ws\ncolcon build --symlink-install\n
"},{"location":"gazebo-world-ws/#2-source-the-local_setupbash","title":"2. Source the local_setup.bash
","text":"Add the following lines into .bashrc
file.
# Source gazebo_world_ws environment\nGAZEBO_WORLD_WS_DIR=\"${ROS2_WS}/../gazebo_world_ws\"\nif [ ! -d \"${GAZEBO_WORLD_WS_DIR}/install\" ]; then\n echo \"gazebo_world_ws has not been built yet. Building workspace...\"\n cd ${GAZEBO_WORLD_WS_DIR}\n colcon build --symlink-install\n cd -\n echo \"gazebo_world_ws built successfully!\"\nfi\nsource ${GAZEBO_WORLD_WS_DIR}/install/local_setup.bash\n
"},{"location":"gazebo-world-ws/#3-launch-gazebo-in-the-launch-file","title":"3. Launch gazebo in the launch file","text":"Add the code into your launch file.
Remember to replace the <target world>
with the one you want.
from launch import LaunchDescription\nfrom launch.actions import IncludeLaunchDescription, DeclareLaunchArgument\nfrom launch.substitutions import PathJoinSubstitution, LaunchConfiguration\nfrom launch_ros.substitutions import FindPackageShare\n\nARGUMENTS = [\n DeclareLaunchArgument(\n \"launch_gzclient\",\n default_value=\"True\",\n description=\"Launch gzclient, by default is True, which shows the gazebo GUI\",\n ),\n]\n\n\ndef generate_launch_description():\n\n ...\n\n # Launch Gazebo\n launch_gazebo = IncludeLaunchDescription(\n PathJoinSubstitution(\n [\n FindPackageShare(\"gazebo_launch\"), \n \"launch\",\n \"<target world>.launch.py\",\n ],\n ),\n launch_arguments={\n \"launch_gzclient\": LaunchConfiguration(\"launch_gzclient\"),\n }.items(),\n )\n\n ...\n\n ld = LaunchDescription(ARGUMENTS)\n ld.add_action(launch_gazebo)\n\n ...\n\n return ld\n
"},{"location":"gazebo-world-ws/#snapshot","title":"\u2728 Snapshot \u2728","text":"World Snapshot aws_hospital aws_small_house aws_warehouse citysim turtlebot3_stage3 turtlebot3_house turtlebot3_world clearpath_playpen"},{"location":"gazebo-world-ws/#troubleshooting","title":"\ud83d\udd0d Troubleshooting \ud83d\udd0d","text":""},{"location":"gazebo-world-ws/#getting-stuck-when-launching-gazebo","title":"Getting stuck when launching Gazebo","text":"The first time you launch a Gazebo world might take longer because Gazebo needs to download models from the cloud to your local machine. Please be patient while it downloads. If it takes too long, like more than an hour, you can check the gzserver
logs in ~/.gazebo
to see where it\u2019s getting stuck. The most common issue is using a duplicate port, which prevents Gazebo from starting. You can use lsof -i:11345
to identify which process is using the port and then use kill -9
to terminate it.
"},{"location":"gazebo-world-ws/#unable-to-find-gazebo_launch","title":"Unable to find gazebo_launch
","text":"Please make sure you have sourced the local_setup.bash
and compiled gazebo_world_ws
. If you encounter a path issue, try removing the install
, build
, and log
folders in gazebo_world_ws
and compile the workspace in your container again.
"},{"location":"husky-ws/","title":"Clearpath Husky","text":"This repository will help you configure the environment for Husky quickly.
"},{"location":"husky-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this workspace:
husky_ws\n\u251c\u2500\u2500 .devcontainer\n\u251c\u2500\u2500 docker\n\u251c\u2500\u2500 figure\n\u251c\u2500\u2500 install\n\u251c\u2500\u2500 build\n\u251c\u2500\u2500 log\n\u251c\u2500\u2500 script\n| \u251c\u2500\u2500 husky-bringup.sh\n| \u251c\u2500\u2500 husky-generate.sh\n| \u2514\u2500\u2500 husky-teleop.sh\n\u251c\u2500\u2500 src\n| \u251c\u2500\u2500 husky\n| | \u251c\u2500\u2500 husky_base\n| | \u251c\u2500\u2500 husky_bringup\n| | \u251c\u2500\u2500 husky_control\n| | \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 udev_rules\n| \u251c\u2500\u2500 41-clearpath.rules\n| \u2514\u2500\u2500 install_udev_rules.sh\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 README.md\n
build
/ install
/ log
folders will appear once you've built the packages.
"},{"location":"husky-ws/#introduction","title":"\u2728 Introduction \u2728","text":"This repository has been derived from the Clearpath Husky's repository. Here is the original repository. However, the original repository was designed for ROS1, and it is in the process of being upgraded to ROS2.
Below are the main packages for Husky:
- husky_base : Base configuration
- husky_control : Control configuration
- husky_description : Robot description (URDF)
- husky_navigation : Navigation configuration
- husky_gazebo : Simulate environment
- husky_viz : Visualize data
"},{"location":"husky-ws/#testing","title":"\ud83d\udea9 Testing \ud83d\udea9","text":""},{"location":"husky-ws/#building-packages","title":"Building packages","text":"Before attempting any examples, please remember to build the packages first. If you encounter any dependency errors, please use rosdep to resolve them.
cd /home/ros2-essentials/husky_ws\nrosdep update\nrosdep install --from-paths src --ignore-src --rosdistro humble -y\ncolcon build\n
After the build process, make sure to source the install/setup.bash
file. Otherwise, ROS2 will not locate the executable files. You can open a new terminal to accomplish this.
"},{"location":"husky-ws/#view-the-model","title":"View the model","text":"ros2 launch husky_viz view_model_launch.py\n
"},{"location":"husky-ws/#demonstration-of-slam","title":"Demonstration of SLAM.","text":"ros2 launch husky_navigation slam_launch.py\n
Rendering the model may take some time, so please be patient !
"},{"location":"husky-ws/#control-real-robot","title":"Control real robot","text":"Before you proceed, please ensure that you've plugged the USB adapter of the Husky into the computer and mounted it into the container. (plugging in the USB adapter before creating the container is preferred but not required)
```bash=
"},{"location":"husky-ws/#move-to-the-workspace-source-bashrc-and-bringup-husky","title":"Move to the workspace, source .bashrc, and bringup husky.","text":"cd /home/ros2-essentials/husky_ws source ~/.bashrc ./script/husky-bringup.sh
"},{"location":"husky-ws/#optional-open-a-new-terminal-control-the-robot-via-keyboard-teleoperation","title":"(Optional) Open a new terminal & control the robot via keyboard teleoperation.","text":"./script/husky-teleop.sh ```
"},{"location":"husky-ws/#license","title":"License","text":"To maintain reproducibility, we have frozen the following packages at specific commits. The licenses of these packages are listed below:
- husky/husky (at commit 1e0b1d1,
humble-devel
branch) is released under the BSD-3-Clause License. - clearpathrobotics/LMS1xx (at commit 90001ac,
humble-devel
branch) is released under the LGPL License. - osrf/citysim (at commit 3928b08) is released under the Apache-2.0 License.
- clearpathrobotics/clearpath_computer_installer (at commit 7e7f415) is released under the BSD-3-Clause License.
- clearpathrobotics/clearpath_robot/clearpath_robot/debian/udev (at commit 17d55f1) is released under the BSD 3-Clause License.
Further changes based on the packages above are release under the Apache-2.0 License, as stated in the repository.
"},{"location":"kobuki-ws/","title":"Yujin Robot Kobuki","text":"This repository facilitates the quick configuration of the simulation environment and real robot driver for Kobuki.
"},{"location":"kobuki-ws/#introduction","title":"\u25fb\ufe0f Introduction \u25fb\ufe0f","text":"This repository is primarily based on the kobuki-base. Below are the main packages for Kobuki:
Package Introduction kobuki_control The localization configuration of Kobuki kobuki_core Base configuration for Kobuki kobuki_gazebo Simulating Kobuki in Gazebo kobuki_launch Launch Kobuki in Gazebo or with a real robot kobuki_navigation SLAM setup for Kobuki kobuki_description Robot description (URDF) kobuki_node Kobuki Driver kobuki_rviz Visualizing data in RViz2 velodyne_simulator Simulating VLP-16 in Gazebo"},{"location":"kobuki-ws/#testing","title":"\ud83d\udea9 Testing \ud83d\udea9","text":""},{"location":"kobuki-ws/#building-packages","title":"Building packages","text":"If you only need to bring up the real Kobuki robot, you don't need to compile the workspace. The Kobuki driver is already included in the Docker image. After the Docker image is built, you can directly bring up the robot.
cd /home/ros2-essentials/kobuki_ws\n\n# For x86_64 architecture\ncolcon build --symlink-install\n# For arm64 architecture\ncolcon build --symlink-install --packages-ignore velodyne_gazebo_plugins\n
- The
--symlink-install
flag is optional, adding this flag may provide more convenience. See this post for more info. - The
--packages-ignore
flag is used to ignore the velodyne_gazebo_plugins
package. This package will use the gazebo_ros_pkgs
package, which is not supported in the arm64 architecture. - Please note that the building process for the embedded system may take a long time and could run out of memory. You can use the flag
--parallel-workers 1
 to reduce the number of parallel workers, or you can build the packages on a more powerful machine and then transfer the executable files to the embedded system. For guidance on building the packages on an x86_64 architecture and then transferring the files to an arm64 architecture, refer to the cross-compilation
section at the end of this README.
After the build process, make sure to source the install/setup.bash
 file. Otherwise, ROS2 will not be able to locate the executable files. You can open a new terminal to accomplish this.
"},{"location":"kobuki-ws/#visualize-the-model-in-rviz","title":"Visualize the model in RViz","text":"ros2 launch kobuki_rviz view_model_launch.py\n
You can view the published states under TF > Frames > wheel_left_link
and TF > Frames > wheel_right_link
in RViz.
"},{"location":"kobuki-ws/#launch-the-robot-in-gazebo","title":"Launch the robot in Gazebo","text":"ros2 launch kobuki_launch kobuki.launch.py is_sim:=true\n
"},{"location":"kobuki-ws/#launch-the-robot-in-the-real-world","title":"Launch the robot in the real world","text":"# Inside the container\ncd /home/ros2-essentials/kobuki_ws\n./script/kobuki-bringup.sh\n\n# or Outside the container\ncd /path/to/kobuki_ws/docker\ndocker compose run kobuki-ws /home/ros2-essentials/kobuki_ws/script/kobuki-bringup.sh\n
If you have successfully connected to the Kobuki, you should hear a sound from it. Otherwise, there may be errors. You can try re-plugging the USB cable, restarting the Kobuki, or even restarting the container.
If you encounter an error message like the one below or a similar error in the terminal, please ensure that your USB cable is properly connected to the Kobuki and that the connection is stable. Additionally, if the Kobuki's battery is low, communication failures are more likely to occur. Please charge the Kobuki fully before trying again.
To control the Kobuki with a keyboard, you can use the teleop_twist_keyboard
package.
# Recommended speeds:\n# - Linear 0.1\n# - Angular 0.3\n\n# Inside the container\ncd /home/ros2-essentials/kobuki_ws\n./script/kobuki-teleop.sh\n\n# or Outside the container\ncd /path/to/kobuki_ws/docker\ndocker compose run kobuki-ws /home/ros2-essentials/kobuki_ws/script/kobuki-teleop.sh\n
"},{"location":"kobuki-ws/#launch-the-demo-of-slam","title":"Launch the demo of SLAM","text":"This demo is based on the slam_toolbox package and only supports the simulation environment.
ros2 launch kobuki_navigation slam.launch.py\n
"},{"location":"kobuki-ws/#cross-compilation","title":"Cross-compilation","text":"Since the embedded system may not have enough memory to build the packages, or it may take a long time to build them, you can build the packages on an x86_64 machine and then copy the executable files to the arm64 machine.
First, you need to set up the Docker cross-compilation environment. See this repo and this website for more info.
# Install the QEMU packages\nsudo apt-get install qemu binfmt-support qemu-user-static\n# This step will execute the registering scripts\ndocker run --rm --privileged multiarch/qemu-user-static --reset -p yes --credential yes\n# Testing the emulation environment\ndocker run --rm -t arm64v8/ros:humble uname -m\n
Secondly, modify the target platform for the Docker image in docker/compose.yaml
by uncommenting the two lines below.
platforms:\n - \"linux/arm64\"\n
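For context, after uncommenting, the relevant part of docker/compose.yaml would look roughly like the sketch below. Only the platforms lines come from this repo; the surrounding keys are illustrative (the image and service names are taken from the commands used elsewhere in this README):

```yaml
services:
  kobuki-ws:                    # service name as used by `docker compose run` above
    image: j3soon/ros2-kobuki-ws
    build:
      context: .
      platforms:                # uncommented to cross-build for arm64
        - "linux/arm64"
```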
Next, navigate to the docker
folder and use the command docker compose build
to build the Docker image.
Note that the arm64 architecture is emulated by QEMU, so the build may consume a lot of CPU and memory resources. You should not use the devcontainer
 when building the packages for the arm64 architecture on an x86_64 machine; otherwise, you may encounter problems such as running out of memory. If you really want to use the devcontainer
, remove all VScode extensions in the .devcontainer/devcontainer.json
 file first.
When the building process ends, use docker compose up -d
and attach to the container by running docker attach ros2-kobuki-ws
. After that, we can start building the ROS packages. If you have built the packages for the x86_64 architecture before, remember to delete the build
, install
, and log
folders.
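A minimal sketch of that cleanup, assuming you are in the workspace root:

```shell
# Remove colcon artifacts left over from a previous (x86_64) build.
# Run from the workspace root, e.g. /home/ros2-essentials/kobuki_ws.
rm -rf build/ install/ log/
```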
cd /home/ros2-essentials/kobuki_ws\ncolcon build --symlink-install --packages-ignore velodyne_gazebo_plugins\n
Once everything is built, we can start copying the files to the target machine (arm64 architecture). Please modify the command below to match your settings, such as the IP address.
# (On Host)\ncd /path/to/kobuki_ws\n# Save the Docker image to the file.\ndocker save j3soon/ros2-kobuki-ws | gzip > kobuki_image.tar.gz\n# Copy the file to the target machine.\nscp -C kobuki_image.tar.gz ubuntu@192.168.50.100:~/\n# Copy the kobuki_ws to the target machine.\n# If the workspace exists on the target machine, you can simply copy the `build` and `install` folders to it.\nrsync -aP kobuki_ws ubuntu@192.168.50.100:~/\n
# (On Remote)\ncd ~/\n# Decompress the file.\ngunzip kobuki_image.tar.gz\n# Load the Docker image from the file.\ndocker load < kobuki_image.tar\n# Verify that the Docker image has loaded successfully.\ndocker images | grep ros2-kobuki-ws\n
If you have completed all the commands above without encountering any errors, you can proceed to launch the robot. Refer to the steps above for more information.
"},{"location":"orbslam3-ws/","title":"ORB-SLAM3","text":""},{"location":"orbslam3-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/orbslam3_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"orbslam3-ws/#simple-test-with-dataset","title":"Simple Test With Dataset","text":" - Attach to the container
docker attach ros2-orbslam3-ws\ncd /home/ros2-essentials/orbslam3_ws\n
- Prepare the data (this only needs to be done once)
- Play the bag file in
tmux
ros2 bag play V1_02_medium/V1_02_medium.db3 --remap /cam0/image_raw:=/camera\n
- Run the ORB-SLAM3 in a new
tmux
window source ~/test_ws/install/local_setup.bash\nros2 run orbslam3 mono ~/ORB_SLAM3/Vocabulary/ORBvoc.txt ~/ORB_SLAM3/Examples_old/Monocular/EuRoC.yaml false\n
Two windows will pop up, showing the results.
"},{"location":"orbslam3-ws/#reference-repo-or-issues","title":"Reference repo or issues","text":" - Solve build failure
- ORB-SLAM3
- SLAM2 and Foxy docker
- Error when using humble
"},{"location":"ros1-bridge-ws/","title":"ROS1 Bridge","text":"This workspace is utilized to create a bridge between ROS1 and ROS2-humble.
"},{"location":"ros1-bridge-ws/#introduction","title":"\u25fb\ufe0f Introduction \u25fb\ufe0f","text":"ros1_bridge
provides a network bridge that enables the exchange of messages between ROS 1 and ROS 2. You can locate the original repository here.
Within this workspace, you'll find a Dockerfile specifically crafted to build both ros-humble and ros1_bridge from their source code. This necessity arises due to a version conflict between the catkin-pkg-modules
 available in the Ubuntu repository and the one provided by ROS.
The official explanation
We assume you are already familiar with the ROS network architecture. If not, we recommend reading the tutorials provided below first.
- https://wiki.ros.org/Master
- https://docs.ros.org/en/humble/Concepts/Basic/About-Discovery.html
"},{"location":"ros1-bridge-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this repo:
ros1_bridge_ws\n\u251c\u2500\u2500 .devcontainer\n| \u2514\u2500\u2500 devcontainer.json\n\u251c\u2500\u2500 docker\n| \u251c\u2500\u2500 .dockerignore\n| \u251c\u2500\u2500 .env\n| \u251c\u2500\u2500 compose.yaml\n| \u251c\u2500\u2500 compose.debug.yaml\n| \u251c\u2500\u2500 Dockerfile\n| \u2514\u2500\u2500 start-bridge.sh\n\u2514\u2500\u2500 README.md\n
"},{"location":"ros1-bridge-ws/#how-to-use","title":"\ud83d\udea9 How to use \ud83d\udea9","text":"The docker compose includes two services: ros1-bridge
and ros1-bridge-build
. The ros1-bridge
service is typically sufficient for regular use, while ros1-bridge-build
is intended for debugging, as it retains all the necessary build tools in the docker image.
If you are not debugging ros1-bridge
, it is recommended to use the terminal rather than VScode-devcontainer. By default, the VScode-devcontainer uses the ros1-bridge-build
service.
"},{"location":"ros1-bridge-ws/#1-build-the-docker-image","title":"1. Build the docker image","text":"While building the image directly from the Dockerfile is possible, it may not be the most efficient choice. To save time, you can pull the image from Docker Hub instead of compiling it from source.
If you still prefer to build the image yourself, please follow the instructions below:
- VScode user
- Open the workspace in VScode, press
F1
, and enter > Dev Containers: Rebuild Container
.
- Terminal user
- Open a terminal, change the directory to the docker folder, and type
docker compose build
.
Please note that the build process may take approximately 1 hour to complete, with potential delays depending on your computer's performance and network conditions.
"},{"location":"ros1-bridge-ws/#2-adjust-the-parameters-in-the-env-file","title":"2. Adjust the parameters in the .env
file","text":"We've placed all ROS-related parameters in the .env
file. Please adjust these parameters according to your environment. By default, we set the ROS1
master at 127.0.0.1
, and the ROS2
domain ID to 0
. These settings should work for most scenarios.
Please note that if these parameters are not configured correctly, the ros1_bridge
will not function properly!
"},{"location":"ros1-bridge-ws/#3-start-the-container","title":"3. Start the container","text":" - VScode user
- If you build the image through the devcontainer, it will automatically start the container. After you get into the container, type
./start-bridge.sh
in the terminal.
- Terminal user
- Open a terminal, change the directory to the docker folder, and type
docker compose up
.
"},{"location":"ros1-bridge-ws/#4-launch-rosmaster-in-ros1","title":"4. Launch rosmaster in ROS1","text":"This step is automatically executed when you run docker compose up
.
As mentioned in https://github.com/ros2/ros1_bridge/issues/391, you should avoid using roscore
in ROS1 to prevent the issue of not bridging /rosout
. Instead, use rosmaster --core
as an alternative.
"},{"location":"ros1-bridge-ws/#5-begin-communication","title":"5. Begin communication","text":"You have successfully executed all the instructions. Now, you can proceed to initiate communication between ROS1 and ROS2-humble.
Please keep in mind that the bridge will be established only when there are matching publisher-subscriber pairs active for a topic on either side of the bridge.
"},{"location":"ros1-bridge-ws/#example","title":"\u2728 Example \u2728","text":""},{"location":"ros1-bridge-ws/#run-the-bridge-and-the-example-talker-and-listener","title":"Run the bridge and the example talker and listener","text":"Before beginning the example, ensure you have four containers ready:
ros-core
ros2-ros1-bridge-ws
ros1
ros2
When using ros1-bridge
in your application scenarios, you only need the ros-core
and ros2-ros1-bridge-ws
containers. Please replace the ros1
and ros2
containers with your application containers, as those are only included for demonstration purposes and are not required for using ros1-bridge
.
Furthermore, ensure that you mount /dev/shm
into both the ros2-ros1-bridge-ws
and ros2
containers, and that all containers share the host network.
"},{"location":"ros1-bridge-ws/#1-start-the-ros1_bridge-and-other-container","title":"1. Start the ros1_bridge
and other container","text":"# In docker folder\ndocker compose up\n
This command will start the four containers mentioned above.
"},{"location":"ros1-bridge-ws/#2-run-the-talker-and-listener-node","title":"2. Run the talker and listener node","text":"We run the listener node in the ros1
container and the talker node in the ros2
container. You can run the talker node in ros1
and the listener node in ros2
if you'd like. To achieve this, modify the command provided below.
"},{"location":"ros1-bridge-ws/#ros1","title":"ROS1","text":"docker exec -it ros1 /ros_entrypoint.sh bash\n# Inside ros1 container\nrosrun roscpp_tutorials listener\n# or\n# rosrun roscpp_tutorials talker\n
"},{"location":"ros1-bridge-ws/#ros2","title":"ROS2","text":"docker exec -it ros2 /ros_entrypoint.sh bash\n# Inside ros2 container\n# Use the same UID as ros1_bridge to prevent Fast-DDS shared memory permission issues.\n# Ref: https://github.com/j3soon/ros2-essentials/pull/9#issuecomment-1795743063\nuseradd -ms /bin/bash user\nsu user\nsource /opt/ros/humble/setup.bash\nros2 run demo_nodes_cpp talker\n# or\n# ros2 run demo_nodes_cpp listener\n
Certainly, you can try the example provided by ros1_bridge
. However, there's no need to source the setup script within the ros2-ros1-bridge-ws
 container; simply starting the container will suffice.
"},{"location":"ros1-bridge-ws/#troubleshooting","title":"\ud83d\udd0d Troubleshooting \ud83d\udd0d","text":"If you are trying to debug ros1_bridge
, it is recommended to use the ros1-bridge-build
service in docker compose. It contains all the necessary build tools, which should be helpful for you.
"},{"location":"ros1-bridge-ws/#failed-to-contact-ros-master","title":"Failed to contact ros master","text":"Before launching ros-core
, make sure to adjust the ROS_MASTER_URI
correctly. For more information, please check the .env
file and this section.
You can replace 127.0.0.1
with the actual IP address or hostname of your ros master. This configuration ensures that your ros nodes know where to find the ros master for communication. Remember, in addition to modifying the parameters for ros1_bridge
, you also need to adjust the parameters for your own container!
"},{"location":"ros1-bridge-ws/#ros2-cant-receive-the-topic","title":"ROS2 can't receive the topic","text":"The latest releases of Fast-DDS come with the shared memory transport enabled by default. Therefore, you need to mount shared memory, also known as /dev/shm
, into every container you intend to communicate with when using Fast-DDS. This ensures proper communication between containers. Ensure that you use the same UID as ros2-ros1-bridge-ws
to avoid Fast-DDS shared memory permission issues.
Reference: https://github.com/eProsima/Fast-DDS/issues/1698#issuecomment-778039676
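Putting the two requirements together, the compose entry for your own application container might look roughly like this (the service and image names are illustrative, not taken from this repo):

```yaml
services:
  my-ros2-app:                 # illustrative name; replace with your container
    image: ros:humble          # illustrative image
    network_mode: host         # share the host network with the bridge
    volumes:
      - /dev/shm:/dev/shm      # required for the Fast-DDS shared-memory transport
```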
"},{"location":"rtabmap-ws/","title":"RTAB-Map","text":""},{"location":"rtabmap-ws/#run-with-docker","title":"Run with docker","text":"git clone https://github.com/j3soon/ros2-essentials.git\n
cd ros2-essentials/rtabmap_ws/docker\ndocker compose pull\ndocker compose up -d --build\n
"},{"location":"rtabmap-ws/#lidar-test-with-gazebo","title":"LiDAR test with gazebo","text":""},{"location":"rtabmap-ws/#rgbd-test-with-gazebo","title":"RGBD test with gazebo","text":""},{"location":"rtabmap-ws/#dual-sensor-test-with-gazebo","title":"Dual sensor test with gazebo","text":""},{"location":"rtabmap-ws/#run-with-rqt","title":"Run with rqt","text":" - Running in a new
tmux
window rqt_robot_steering\n
"},{"location":"rtabmap-ws/#result","title":"Result","text":" - After you've run the demo, you could find the following result directly.
- LiDAR test
- RGBD test
- Dual sensor test
"},{"location":"rtabmap-ws/#reference","title":"Reference","text":""},{"location":"rtabmap-ws/#existing-issues","title":"Existing issues","text":""},{"location":"template-ws/","title":"Template","text":"This template will help you set up a ROS-Humble environment quickly.
"},{"location":"template-ws/#structure","title":"\ud83c\udf31 Structure \ud83c\udf31","text":"Here is the structure of this template:
ros2-essentials\n\u251c\u2500\u2500 scripts\n| \u2514\u2500\u2500 create_workspace.sh\n\u251c\u2500\u2500 template_ws\n| \u251c\u2500\u2500 .devcontainer\n| | \u2514\u2500\u2500 devcontainer.json\n| \u251c\u2500\u2500 docker\n| | \u251c\u2500\u2500 .bashrc\n| | \u251c\u2500\u2500 .dockerignore\n| | \u251c\u2500\u2500 compose.yaml\n| | \u2514\u2500\u2500 Dockerfile\n| \u251c\u2500\u2500 install\n| \u251c\u2500\u2500 build\n| \u251c\u2500\u2500 log\n| \u251c\u2500\u2500 src\n| | \u251c\u2500\u2500 minimal_pkg\n| | | \u251c\u2500\u2500 include\n| | | \u251c\u2500\u2500 minimal_pkg\n| | | \u251c\u2500\u2500 scripts\n| | | \u2514\u2500\u2500 ...\n| | \u2514\u2500\u2500 ...\n| \u2514\u2500\u2500 README.md\n
build
/ install
/ log
folders will appear once you've built the packages.
"},{"location":"template-ws/#how-to-use-this-template","title":"\ud83d\udea9 How to use this template \ud83d\udea9","text":""},{"location":"template-ws/#1-use-the-script-to-copy-the-template-workspace","title":"1. Use the script to copy the template workspace.","text":"We have provided a script to create a new workspace. Please use it to avoid potential issues.
# Open a terminal, and change the directory to ros2-essentials.\n./scripts/create_workspace.sh <new_workspace_name>\n
To unify the naming style, we will modify the string <new_workspace_name>
in some files.
"},{"location":"template-ws/#2-configure-settings","title":"2. Configure settings.","text":"To help you easily find where changes are needed, we have marked most of the areas you need to adjust with # TODO:
. Usually, you only need to modify the configurations without removing anything. If you really need to remove something, make sure you clearly understand your goal and proceed with caution.
docker/Dockerfile
- Add the packages you need according to the comments inside. docker/compose.yaml
- By default, the Docker image is built according to your current computer's architecture. If you need to cross-compile, please modify the platforms
parameter to your desired architecture and set up the basic environment. - If you want to access the GPU in the container, please uncomment the lines accordingly. - If you want to add any environment variables in the container, you can include them in the environment
section, or you can use export VARIABLE=/the/value
in docker/.bashrc
. docker/.bashrc
- We will automatically compile the workspace in .bashrc. If you don't want this to happen, feel free to remove it. If you\u2019re okay with it, remember to adjust the compilation commands according to your packages. src
- Add the ros packages you need here. - minimal_pkg
is the ROS2 package used to create a publisher and subscriber in both Python and C++. You can remove it if you don't need it.
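For example, a custom environment variable appended to docker/.bashrc could look like this (the variable name and value are purely illustrative):

```shell
# docker/.bashrc — example of a custom environment variable
export MY_ROBOT_NAME=kobuki_demo
```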
"},{"location":"template-ws/#3-open-the-workspace-folder-using-visual-studio-code","title":"3. Open the workspace folder using Visual Studio Code.","text":"Haven't set up the devcontainer yet ?
Please refer to the tutorial provided by Visual Studio Code first. You can find it here: https://code.visualstudio.com/docs/devcontainers/containers
We recommend using VScode
+ devcontainer
for development. This plugin can significantly reduce development costs and can be used on local computers, remote servers, and even embedded systems. If you don't want to use devcontainer
, you can still use Docker commands for development, such as docker compose up
and docker exec
.
Open the workspace folder using Visual Studio Code, spotting the workspace folder within your Explorer indicates that you've selected the wrong folder. You should only observe the .devcontainer
, docker
and src
folders there.
"},{"location":"template-ws/#4-build-the-container","title":"4. Build the container.","text":"We have pre-built some Docker images on Docker Hub. If the building time is too long, you might consider downloading them from Docker Hub instead. For more information, please refer to the README.md
on the repository's main page.
Press F1
and enter > Dev Containers: Rebuild Container
. Building the images and container will take some time. Please be patient.
You should see the output below.
Done. Press any key to close the terminal.\n
For non-devcontainer users, please navigate to the docker
folder and use docker compose build
to build the container. We have moved all commands that need to be executed into the .bashrc
file. No further action is needed after creating the Docker container.
"},{"location":"template-ws/#5-start-to-develop-with-ros","title":"5. Start to develop with ROS.","text":"You've successfully completed all the instructions. Wishing you a productive and successful journey in your ROS development !
"},{"location":"template-ws/#warning","title":"\u26a0\ufe0f Warning \u26a0\ufe0f","text":" - Do not place your files in any folder named
build
, install
, or log
. These folders will not be tracked by Git. - If you encounter an error when opening Gazebo, consider closing the container and reopen it. Alternatively, you can check the log output in
~/.gazebo
, which may contain relevant error messages. The most common issue is using a duplicate port, which prevents Gazebo from starting. You can use lsof -i:11345
to identify which process is using the port and then use kill -9
to terminate it. xhost +local:docker
is required if the container is not in privileged mode.
"},{"location":"vlp-ws/","title":"Velodyne VLP-16","text":""},{"location":"vlp-ws/#simulation-setup","title":"Simulation Setup","text":""},{"location":"vlp-ws/#add-description-in-defined-robot","title":"Add description in defined robot","text":" -
Declared necessary argument and your own robot_gazebo.urdf.xacro
<xacro:arg name=\"gpu\" default=\"false\"/>\n<xacro:arg name=\"organize_cloud\" default=\"false\"/>\n<xacro:property name=\"gpu\" value=\"$(arg gpu)\" />\n<xacro:property name=\"organize_cloud\" value=\"$(arg organize_cloud)\" />\n
-
Include the velodyne_description
package
<xacro:include filename=\"$(find velodyne_description)/urdf/VLP16.urdf.xacro\" />\n
-
Add LiDAR in the robot
<xacro:VLP-16 parent=\"base_footprint\" name=\"velodyne\" topic=\"/velodyne_points\" organize_cloud=\"${organize_cloud}\" hz=\"10\" samples=\"440\" gpu=\"${gpu}\">\n <origin xyz=\"0 0 0.1\" rpy=\"0 0 0\" />\n</xacro:VLP-16>\n
- You could refer to more information from
veloyne_simulator/velodyne_description/urdf/template.urdf.xacro
"},{"location":"vlp-ws/#launch-lidar-driver-with-simulated-lidar","title":"Launch LiDAR driver with simulated LiDAR","text":" "},{"location":"vlp-ws/#sample-robot","title":"Sample Robot","text":""},{"location":"vlp-ws/#lidar-setup","title":"LiDAR setup","text":""},{"location":"vlp-ws/#hardware-setup","title":"Hardware Setup","text":" - Connect the LiDAR to power.
- Connect the LiDAR to the computer or router using the provided ethernet cable.
"},{"location":"vlp-ws/#directly-using-computer","title":"Directly Using Computer","text":" - Connect the LiDAR to the computer using the ethernet cable.
- Open the computer settings and navigate to Network > Wired.
- Set the IPv4 configuration to 'manual' and configure the settings as shown in the image below:
"},{"location":"vlp-ws/#launch-the-driver","title":"Launch the Driver","text":""},{"location":"vlp-ws/#pipeline","title":"Pipeline","text":" - Data process as following: raw data -> pointcloud -> laser scan -> slam method
- Velodyne driver:
velodyne_driver
get the raw data from LiDAR. - Transform the raw data to pointcloud:
velodyne_pointcloud
- Transform the pointcloud to laser scan:
velodyne_laserscan
"},{"location":"vlp-ws/#operating-in-a-single-launch-file","title":"Operating in a single launch file","text":"ros2 launch vlp_cartographer vlp_driver.launch.py\n
- By the above command, the driver, pointcloud and laserscan will be launched.
"},{"location":"vlp-ws/#published-topics","title":"Published topics","text":"Topic Type Description /velodyne_packets
velodyne_msgs/VelodyneScan
raw data /velodyne_points
sensor_msgs/PointCloud2
Point cloud message /scan
sensor_msgs/LaserScan
laser scan message"},{"location":"vlp-ws/#test-with-cartographer","title":"Test with cartographer","text":""},{"location":"vlp-ws/#bringup","title":"Bringup","text":" "},{"location":"vlp-ws/#reference","title":"Reference","text":" - Velodyne_driver with ROS2 Humble
- Cartographer ROS
- Turtlebot3 with Cartographer
"}]}