This project is a slightly modified version of the NVIDIA DOPE neural network, used as the final project for the Robot Control course.
Besides the original readme, I will add some practical guidelines useful to set the system up, from 3D object reconstruction all the way to inference.
The system consists of three main blocks and, for a convenient implementation, needs both Ubuntu and Windows. Namely, the blocks are:
- Texture mapping (Windows)
- Synthetic Dataset Creation (Windows)
- Network Training (Ubuntu)
Thanks to the simple geometries of the objects analysed, we simply mapped the undistorted texture photos onto 3D models created in Blender. More general stereo-reconstruction methods were analysed and deployed, but they resulted in poorer performance for our task.
In order to create the dataset you'll need to download Unreal Engine and set up the NDDS plugin. Unreal Engine is available for both Linux and Windows, but we found its installation on Windows a lot more convenient.
First install Visual Studio 2017 version 15.9 from here and perform a CUSTOM INSTALL. Make sure the following are selected:
- Desktop development with C++
- Latest Windows 10 SDK
To install Unreal Engine you'll first need the Epic Games Launcher and an Epic Games account.
Once installed, start the Launcher, log in with your account, and go to the Unreal Engine tab in the upper left corner of the Launcher. From here, make sure Library is highlighted, then click the "+" icon as shown below.
With the engine installed, start the Editor by pressing the Launch button. You need to start the Editor at least once so that it sets up file associations; creating a blank default project is good enough.
To set up NDDS you first need Git Large File Storage from here; then you can download NDDS using:

```
git lfs clone https://github.com/NVIDIA/Dataset_Synthesizer.git
```

Launch the plugin by opening the file `$NDDS/source/NDDS.uproject` and select "Yes" when it prompts you to rebuild binaries. The compilation occurs behind the scenes, and the project opens when it completes.
Guidelines for NDDS usage can be found in `$NDDS/Documentation/NDDS.pdf`.
Network training is computationally demanding due to the network's size, and you'll need an NVIDIA GPU in order to train DOPE locally (8 GB+ of GPU memory recommended). The fastest way to set it up locally is by using the Docker image.
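If you take the Docker route, the sketch below shows how such a container is typically started; the image tag `nvidia-dope:v1` and the mount point are assumptions, to be replaced with whatever you build from the repo's docker files:

```
# Sketch only: start a GPU-enabled container for local training.
# `nvidia-dope:v1` is a hypothetical tag for an image built from the repo's docker files.
docker run --gpus all -it --rm \
    -v ~/catkin_ws:/root/catkin_ws \
    nvidia-dope:v1 bash
```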
For network training, the best option is to use Google Colaboratory GPUs and train the network there. To do so, first upload the dataset as a zip file to Google Drive together with the training script `./scripts/train.py`.
The first thing to do in Colab is to select the GPU runtime: go to 'Runtime' at the top left, then 'Change runtime type', and select GPU.
You can use both Python and bash commands (for bash, start the command with '!'). Check the GPU selection using:

```
!nvidia-smi
```
Then mount Google Drive; all your cloud storage will become accessible at the specified path (in my case /content/drive):

```
from google.colab import drive
drive.mount('/content/drive')
```
Unzip your dataset using the bash command `unzip $PATH_TO_ZIP_DATASET -d $DESTINATION_PATH`; in my case:

```
!unzip /content/drive/My\ Drive/Colab\ Notebooks/DOPE/NVCapturedData.zip -d ./dataset/
```
Then train the network using `train.py` with these optional arguments:

```
--data DATA              path to training data
--datatest DATATEST      path to testing data
--object OBJECT          object of interest in the dataset
--workers WORKERS        number of data loading workers
--batchsize BATCHSIZE    input batch size
--imagesize IMAGESIZE    the height/width of the input image to the network
--lr LR                  learning rate, default=0.001
--noise NOISE            gaussian noise added to the image
--net NET                path to net (to continue training)
--namefile NAMEFILE      name to put on the file of the saved weights
--manualseed MANUALSEED  manual seed
--epochs EPOCHS          number of epochs to train
--loginterval LOGINTERVAL
--gpuids GPUIDS [GPUIDS ...]
                         GPUs to use
--outf OUTF              folder to output images and model checkpoints; it will
                         add a "train_" prefix to the name
--sigma SIGMA            keypoint creation size for sigma
--save                   save a visual batch and quit (for debugging purposes)
--pretrained PRETRAINED  whether to use VGG ImageNet pretrained weights
--nbupdates NBUPDATES    max number of updates to the network; overrides the
                         epoch number, otherwise the number of epochs is used
--datasize DATASIZE      randomly sample that number of entries in the dataset
                         folder
--option OPTION
```
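Since the help output above follows argparse conventions, you can print the full, up-to-date list yourself (path as used in the examples below):

```
!python3 /content/drive/My\ Drive/Colab\ Notebooks/DOPE/original/train.py --help
```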
In this case, if starting training without pretrained weights:

```
!python3 /content/drive/My\ Drive/Colab\ Notebooks/DOPE/original/train.py --data /content/dataset/ --outf hallonspona --gpuids 0 --epochs 5 --batchsize 24 --lr 0.001
```
With pretrained weights:

```
!python3 /content/drive/My\ Drive/Colab\ Notebooks/DOPE/original/train.py \
    --data /content/dataset/ \
    --net /content/drive/My\ Drive/Colab\ Notebooks/DOPE/net_epoch_5.pth \
    --outf hallonspona --gpuids 0 --pretrained True --epochs 5 --batchsize 24 --lr 0.0002
```
In particular, the batch size was set as high as GPU memory allowed (roughly 2x the GPU memory in GB, e.g. a batch size of 24 on a 12 GB GPU), and the number of epochs was set to 5 so that the weights could be backed up to Drive every 5 epochs using:

```
!cp /content/train_hallonspona/net_epoch_5.pth /content/drive/My\ Drive/Colab\ Notebooks/DOPE
```
Access to GPU runtimes is limited, and notebook execution may be interrupted suddenly (usually after 8+ hours, but almost at random), so it is better to keep partial backups as training goes on. Training is monitored by checking the loss on the training set and the belief-map visualizations.
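One way to automate these restarts is to chain short runs, resuming from the last checkpoint and copying it to Drive after each run. This is only a sketch under the paths used above; the three-run loop and the `net_run*.pth` backup names are assumptions:

```
# Sketch: chain 5-epoch runs, backing each checkpoint up to Drive.
DRIVE="/content/drive/My Drive/Colab Notebooks/DOPE"
NET=""                               # empty for the first run (training from scratch)
for run in 1 2 3; do
  python3 "$DRIVE/original/train.py" --data /content/dataset/ \
      --outf hallonspona --gpuids 0 --epochs 5 --batchsize 24 \
      ${NET:+--net "$NET" --pretrained True --lr 0.0002}
  cp /content/train_hallonspona/net_epoch_5.pth "$DRIVE/net_run${run}.pth"   # backup to Drive
  NET="$DRIVE/net_run${run}.pth"     # resume from this checkpoint next time
done
```

In Colab, this can be run in a single cell under the `%%bash` cell magic.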
Running inference on images from a camera is explained in detail below; the main things you need to do in order to use the net on custom objects are:
- Calibrate your camera by modifying `./config/camera_info.yaml`
- Associate your trained weights with your object by modifying `./config/config_pose.yaml` (a sketch follows this list)
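For the second point, here is a minimal sketch of what such a config could contain; the object name `hallonspona` and all values are illustrative, and the key names are documented in the official section further below:

```
# Hypothetical minimal config for a custom object; all values are examples.
cat <<'EOF' > ~/catkin_ws/src/dope/config/my_config.yaml
topic_camera: "/dope/webcam/image_raw"
topic_camera_info: "/dope/webcam/camera_info"
downscale_height: 400
weights: {hallonspona: "package://dope/weights/net_epoch_5.pth"}
dimensions: {hallonspona: [9.0, 6.0, 6.0]}    # object dimensions, in cm
class_ids: {hallonspona: 1}
draw_colors: {hallonspona: [13, 255, 128]}
EOF
# Pass it to the DOPE node via the optional `config` argument (documented below):
roslaunch dope dope.launch config:=$HOME/catkin_ws/src/dope/config/my_config.yaml
```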
RViz can then be used to display inference results in real time. After launching it, add the two image topics by clicking Add in the bottom left and selecting the topics `/dope/webcam/image_raw` and `/dope/rgb_points/image_raw`.
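A quick way to confirm both topics are actually being published before adding them in RViz:

```
rostopic list | grep dope
```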
This is the official DOPE ROS package for detection and 6-DoF pose estimation of known objects from an RGB camera. The network has been trained on the following YCB objects: cracker box, sugar box, tomato soup can, mustard bottle, potted meat can, and gelatin box. For more details, see our CoRL 2018 paper and video.
Note: The instructions below refer to inference only. Training code is also provided but not supported.
16/03/2020 - Added a wiki (thanks to @saratrajput)
03/07/2019 - ROS interface update (thanks to Martin Günther)
11/06/2019 - Added bleach YCB weights
We have tested on Ubuntu 16.04 with ROS Kinetic with an NVIDIA Titan X with Python 2.7. The code may work on other systems.
The following steps describe the native installation. Alternatively, use the provided Docker image and skip to Step #7.
1. **Install ROS**

   Follow these instructions. You can select any of the default configurations in step 1.4; even the ROS-Base (Bare Bones) package (`ros-kinetic-ros-base`) is enough.
2. **Create a catkin workspace** (if you do not already have one). To create a catkin workspace, follow these instructions:

   ```
   $ mkdir -p ~/catkin_ws/src   # Replace `catkin_ws` with the name of your workspace
   $ cd ~/catkin_ws/
   $ catkin_make
   ```
3. **Download the DOPE code**

   ```
   $ cd ~/catkin_ws/src
   $ git clone https://github.com/NVlabs/Deep_Object_Pose.git dope
   ```
4. **Install python dependencies**

   ```
   $ cd ~/catkin_ws/src/dope
   $ pip install -r requirements.txt
   ```
5. **Install ROS dependencies**

   ```
   $ cd ~/catkin_ws
   $ rosdep install --from-paths src -i --rosdistro kinetic
   $ sudo apt-get install ros-kinetic-rosbash ros-kinetic-ros-comm
   ```
6. **Build**

   ```
   $ cd ~/catkin_ws
   $ catkin_make
   ```
7. **Download the weights** and save them to the `weights` folder, i.e., `~/catkin_ws/src/dope/weights/`.
8. **Start ROS master**

   ```
   $ cd ~/catkin_ws
   $ source devel/setup.bash
   $ roscore
   ```
9. **Start camera node** (or start your own camera node)

   ```
   $ roslaunch dope camera.launch    # Publishes RGB images to `/dope/webcam_rgb_raw`
   ```

   The camera must publish a correct `camera_info` topic to enable DOPE to compute the correct poses. Basically all ROS drivers have a `camera_info_url` parameter where you can set the calibration info (but most ROS drivers include a reasonable default). For details on calibration and rectification of your camera, see the camera tutorial.
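
   If you run your own camera node instead, point its driver at your calibration file. For example, with the common `usb_cam` driver (the device and calibration paths below are placeholders):

   ```
   $ rosrun usb_cam usb_cam_node _video_device:=/dev/video0 \
         _camera_info_url:=file:///path/to/camera_info.yaml
   ```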
10. **Edit config info** (if desired) in `~/catkin_ws/src/dope/config/config_pose.yaml`

    * `topic_camera`: RGB topic to listen to
    * `topic_camera_info`: camera info topic to listen to
    * `topic_publishing`: topic namespace for publishing
    * `input_is_rectified`: whether the input images are rectified; it is strongly suggested to use a rectified input topic
    * `downscale_height`: if the input image is larger than this, scale it down to this pixel height. Very large input images eat up all the GPU memory and slow down inference. Also, DOPE works best when the object size (in pixels) has appeared in the training data (which is downscaled to 400 px). For these reasons, downscaling large input images to something reasonable (e.g., 400-500 px) improves memory consumption, inference speed and recognition results.
    * `weights`: dictionary of object names and their weights path names; comment out any line to disable detection/estimation of that object
    * `dimensions`: dictionary of dimensions for the objects (key values must match the `weights` names)
    * `class_ids`: dictionary of class ids to be used in the messages published on the `/dope/detected_objects` topic (key values must match the `weights` names)
    * `draw_colors`: dictionary of object colors (key values must match the `weights` names)
    * `model_transforms`: dictionary of transforms that are applied to the pose before publishing (key values must match the `weights` names)
    * `meshes`: dictionary of mesh filenames for visualization (key values must match the `weights` names)
    * `mesh_scales`: dictionary of scaling factors for the visualization meshes (key values must match the `weights` names)
    * `thresh_angle`: undocumented
    * `thresh_map`: undocumented
    * `sigma`: undocumented
    * `thresh_points`: threshold on the confidence for object detection; increase this value if you see too many false positives, reduce it if objects are not detected
11. **Start DOPE node**

    ```
    $ roslaunch dope dope.launch [config:=/path/to/my_config.yaml]   # Config file is optional; default is `config_pose.yaml`
    ```
12. **The following ROS topics are published** (assuming `topic_publishing == 'dope'`):

    ```
    /dope/webcam_rgb_raw         # RGB images from camera
    /dope/dimension_[obj_name]   # dimensions of object
    /dope/pose_[obj_name]        # timestamped pose of object
    /dope/rgb_points             # RGB images with detected cuboids overlaid
    /dope/detected_objects       # vision_msgs/Detection3DArray of all detected objects
    /dope/markers                # RViz visualization markers for all objects
    ```

    Note: `[obj_name]` is in {cracker, gelatin, meat, mustard, soup, sugar}
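
    To inspect one of these topics from the command line, e.g. to print a single pose message (object names per the note above):

    ```
    $ rostopic echo -n 1 /dope/pose_cracker
    ```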
13. **To debug in RViz**, run `rviz`, then add one or more of the following displays:

    * `Add > Image` to view the raw RGB image or the image with cuboids overlaid
    * `Add > Pose` to view the object coordinate frame in 3D
    * `Add > MarkerArray` to view the cuboids, meshes, etc. in 3D
    * `Add > Camera` to view the RGB image with the poses and markers from above

    If you do not have a coordinate frame set up, you can run this static transformation: `rosrun tf2_ros static_transform_publisher 0 0 0 0.7071 0 0 -0.7071 world <camera_frame_id>`, where `<camera_frame_id>` is the `frame_id` of your input camera messages. Make sure that in RViz's `Global Options` the `Fixed Frame` is set to `world`. Alternatively, you can skip the `static_transform_publisher` step and directly set the `Fixed Frame` to your `<camera_frame_id>`.
14. If `rosrun` does not find the package (`[rospack] Error: package 'dope' not found`), be sure that you called `source devel/setup.bash` as mentioned above. To find the package, run `rospack find dope`.
DOPE returns the poses of the objects in the camera coordinate frame. DOPE uses the aligned YCB models, which can be obtained using NVDU (see the `nvdu_ycb` command).
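A sketch of fetching them, assuming the `nvdu` package from the NVIDIA Dataset Utilities project (check the tool's help for the exact subcommands and flags; they are not confirmed here):

```
pip install nvdu    # NVIDIA Dataset Utilities
nvdu_ycb --help     # lists the options for obtaining the aligned YCB models
```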
If you use this tool in a research project, please cite as follows:
```
@inproceedings{tremblay2018corl:dope,
  author = {Jonathan Tremblay and Thang To and Balakumar Sundaralingam and Yu Xiang and Dieter Fox and Stan Birchfield},
  title = {Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects},
  booktitle = {Conference on Robot Learning (CoRL)},
  url = "https://arxiv.org/abs/1809.10790",
  year = 2018
}
```
Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license.
Thanks to Jeffrey Smith (jeffreys@nvidia.com) for creating the Docker image.
Jonathan Tremblay (jtremblay@nvidia.com), Stan Birchfield (sbirchfield@nvidia.com)