This repository holds the code for pRRTC: GPU-Parallel RRT-Connect for Fast, Consistent, and Low-Cost Motion Planning.
We introduce pRRTC, a GPU-parallel RRT-Connect algorithm. Our approach has three key improvements:
- Concurrent sampling, expansion and connection of start and goal trees via GPU multithreading
- SIMT-optimized collision checking to quickly validate edges (see the sketch below), inspired by the SIMD-optimized validation of Vector-Accelerated Motion Planning
- Efficient memory management between block- and thread-level parallelism, reducing expensive memory transfer overheads
Our empirical evaluations show that pRRTC achieves a 10x average speedup on constrained reaching tasks. pRRTC also demonstrates a 5.4x reduction in solution time standard deviation and 1.4x improvement in initial path costs compared to state-of-the-art motion planners in complex environments.
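To make the SIMT edge-validation idea concrete, here is a minimal CUDA sketch. This is not pRRTC's actual kernel: the names (`validate_edges`, `config_in_collision`) and the launch layout are hypothetical placeholders. One block validates one candidate edge; each of the `BATCH_SIZE` threads checks a single interpolated configuration along that edge, and a block-wide vote decides whether the whole edge is collision-free.

```cuda
#include <cuda_runtime.h>

// Placeholder for the robot-specific forward-kinematics + collision check that
// Cricket generates (sphere model vs. environment). Hypothetical signature.
__device__ bool config_in_collision(const float *q, int dof)
{
    return false;  // stub; a real robot header (e.g. fetch.cuh) provides this
}

// One block per candidate edge (a -> b); launch with blockDim.x == BATCH_SIZE,
// e.g. validate_edges<7, 16><<<num_edges, 16>>>(d_starts, d_ends, d_edge_free);
template <int DOF, int BATCH_SIZE>
__global__ void validate_edges(const float *starts, const float *ends, int *edge_free)
{
    const int edge = blockIdx.x;
    const int t    = threadIdx.x;  // which interpolation step this thread owns

    // Interpolate configuration t along the edge (the start is assumed valid,
    // so steps run from 1/BATCH_SIZE up to the endpoint).
    float q[DOF];
    const float alpha = float(t + 1) / float(BATCH_SIZE);
    for (int j = 0; j < DOF; ++j)
    {
        const float a = starts[edge * DOF + j];
        const float b = ends[edge * DOF + j];
        q[j] = a + alpha * (b - a);
    }

    const bool hit = config_in_collision(q, DOF);

    // Block-wide OR: the edge survives only if no thread found a collision.
    __shared__ int any_hit;
    if (t == 0) any_hit = 0;
    __syncthreads();
    if (hit) atomicOr(&any_hit, 1);
    __syncthreads();
    if (t == 0) edge_free[edge] = (any_hit == 0);
}
```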
pRRTC currently supports the 7-DoF Franka Emika Panda, the 8-DoF Fetch, and the 14-DoF Rethink Robotics Baxter. The forward kinematics and collision-checking functions for each robot were generated with Cricket.
To build pRRTC, run:

```
git clone git@github.com:CoMMALab/pRRTC.git
cd pRRTC
cmake -B build
cmake --build build
```
The repository comes with two benchmarking scripts: evaluate_mbm.cpp and single_mbm.cpp.
evaluate_mbm.cpp allows users to benchmark pRRTC's performance using Panda, Fetch, or Baxter on the entire MotionBenchMaker dataset. To run evaluate_mbm.cpp:
build/evaluate_mbm <robot> <experiment name>
single_mbm.cpp allows users to benchmark pRRTC's performance using Panda, Fetch, or Baxter on a single problem of MotionBenchMaker. To run single_mbm.cpp:
build/single_mbm <robot> <MBM problem name> <MBM problem index>
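For example, to benchmark the Panda on the full dataset and then on a single problem, one might run something like the following. The argument values (the lowercase robot name `panda`, the experiment label `trial1`, and the problem name `bookshelf_small`) are illustrative; use the robot and problem names that match your generated MBM JSON files.

```
build/evaluate_mbm panda trial1
build/single_mbm panda bookshelf_small 1
```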
The MotionBenchMaker JSON files are generated using the script detailed here.
pRRTC has the following parameters, which can be modified in the benchmarking scripts (see the sketch after this list):
- max_samples: maximum number of samples in trees
- max_iters: maximum number of planning iterations
- num_new_configs: number of new samples generated per iteration
- range: maximum RRT-Connect extension range
- granularity: number of discretized motions along an edge during collision checking. Note: this parameter must match the BATCH_SIZE parameter in the robot's header file (e.g., fetch.cuh) for correct results.
- balance: whether to enable tree balancing -- 0 for no balancing; 1 for distributed balancing, where each iteration may generate samples for one or both trees; 2 for single-sided balancing, where each iteration generates samples for only one tree
- tree_ratio: the threshold for distinguishing which tree is smaller -- if balance is set to 1, set tree_ratio to 0.5; if balance is set to 2, set tree_ratio to 1
- dynamic_domain: whether to enable dynamic domain sampling -- 0 for false; 1 for true
- dd_alpha: extent to which each radius is enlarged or shrunk per modification
- dd_radius: starting radius for dynamic domain sampling
- dd_min_radius: minimum radius for dynamic domain sampling
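As a rough illustration of how these knobs fit together, the sketch below collects them in one place. The struct name, field types, and example values are hypothetical and are not pRRTC's actual configuration interface; consult evaluate_mbm.cpp and single_mbm.cpp for the real names and defaults.

```cpp
// Hypothetical settings container; field names mirror the parameter list above,
// but the struct itself and the example values are illustrative only.
struct PlannerSettings
{
    int   max_samples     = 100000;  // maximum number of samples in the trees
    int   max_iters       = 1000;    // maximum number of planning iterations
    int   num_new_configs = 32;      // new samples generated per iteration
    float range           = 1.0f;    // maximum RRT-Connect extension range
    int   granularity     = 16;      // discretized motions per edge; must match
                                     // BATCH_SIZE in the robot header (e.g. fetch.cuh)
    int   balance         = 1;       // 0: none, 1: distributed, 2: single-sided
    float tree_ratio      = 0.5f;    // 0.5 when balance == 1, 1.0 when balance == 2
    int   dynamic_domain  = 1;       // 0: disabled, 1: enabled
    float dd_alpha        = 0.1f;    // how much each radius grows/shrinks per update
    float dd_radius       = 4.0f;    // starting dynamic-domain radius
    float dd_min_radius   = 1.0f;    // minimum dynamic-domain radius
};
```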
To add support for a new robot to pRRTC:

1. Use Foam to generate two spherized urdfs: a main one (e.g., `robot_spherized.urdf`) and an approximate one (e.g., `robot_spherized_approx.urdf`).
2. Clone Cricket and switch to the `gpu-cc-early-exit` branch.
3. Create a folder under `resources/<robot name>`.
4. Add the two spherized urdfs and the robot srdf file to this folder.
5. Create a json config file for approximate fkcc kernel generation, e.g. `resources/robot_approx.json`:

```json
{
"name": "Robot",
"urdf": "robot/robot_spherized_approx.urdf",
"srdf": "robot/robot.srdf",
"end_effector": "robot_grasptarget",
"batch_size": 16,
"template": "templates/prrtc_approx_template.hh",
"subtemplates": [],
"output": "robot_prrtc_approx.hh"
}
```

Make sure the config references the approximate urdf. The batch size should equal the number of discretized collision checks along each extension of pRRTC (i.e., the `granularity` parameter described above).
6. Repeat step 5 to create a config file for the main fkcc generation, e.g. `resources/robot_main.json`. See the panda, fetch, and baxter config files for examples.
7. After building Cricket, run the script `gpu_fkcc_gen.sh robot`. This will put the generated code into a file `robot_fk.hh`.
8. Add this file to pRRTC as `src/robot/robot.cuh`, and include it in `src/planning/pRRTC.cu`.
9. Add a template instantiation for your robot to the bottom of `src/planning/pRRTC.cu`.
10. Add your robot to `src/planning/Robots.hh`:
    a. Generate the robot struct from Cricket with `build/fkcc_gen robot_struct.json`, e.g. config file `robot_struct.json`:

    ```json
    {
        "name": "robot",
        "urdf": "robot/robot_spherized.urdf",
        "srdf": "robot/robot.srdf",
        "end_effector": "robot_grasptarget",
        "resolution": 32,
        "template": "templates/prrtc_robot_template.hh",
        "subtemplates": [],
        "output": "robot_struct.hh"
    }
    ```

    b. Copy the generated struct into `src/planning/Robots.hh`.
11. Recompile pRRTC.
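As a loose illustration of how steps 8-10 fit together, the sketch below is hypothetical: the struct `Robots::MyRobot`, the `plan_stub` function, and the file excerpts are placeholder names meant only to show where the include, the explicit template instantiation, and the generated struct go. The real identifiers are defined by `src/planning/pRRTC.cu` and `src/planning/Robots.hh`.

```cpp
// (step 10b) src/planning/Robots.hh: paste the struct emitted by build/fkcc_gen.
// The fields shown here are placeholders for whatever Cricket generates.
namespace Robots
{
    struct MyRobot
    {
        static constexpr int dimension = 7;  // e.g. a 7-DoF arm
    };
}

// (step 8) near the top of src/planning/pRRTC.cu:
// #include "src/robot/my_robot.cuh"   // the Cricket-generated fkcc kernels

// (step 9) at the bottom of src/planning/pRRTC.cu: the planner is templated on
// the robot struct, so each supported robot needs an explicit instantiation.
template <typename Robot>
void plan_stub()  // stand-in for pRRTC's real planner entry point
{
    static_assert(Robot::dimension > 0, "robot must define its degrees of freedom");
}

template void plan_stub<Robots::MyRobot>();  // explicit instantiation for the new robot
```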