
pRRTC: GPU-Parallel RRT-Connect


This repository holds the code for pRRTC: GPU-Parallel RRT-Connect for Fast, Consistent, and Low-Cost Motion Planning.

We introduce pRRTC, a GPU-based, parallel RRT-Connect algorithm. Our approach has three key improvements:

  • Concurrent sampling, expansion, and connection of the start and goal trees via GPU multithreading
  • SIMT-optimized collision checking to quickly validate edges, inspired by the SIMD-optimized validation of Vector-Accelerated Motion Planning
  • Efficient memory management between block- and thread-level parallelism, reducing expensive memory transfer overheads

Our empirical evaluation shows that pRRTC achieves a 10x average speedup on constrained reaching tasks, along with a 5.4x reduction in solution-time standard deviation and a 1.4x improvement in initial path cost compared to state-of-the-art motion planners in complex environments.

Supported Robots

pRRTC currently supports the 7-DoF Franka Emika Panda, the 8-DoF Fetch, and the 14-DoF Rethink Robotics Baxter. The forward kinematics and collision-checking functions for these robots were generated using Cricket.

Building Code

To build pRRTC, follow the instructions below:

git clone git@github.com:CoMMALab/pRRTC.git
cd pRRTC
cmake -B build
cmake --build build

Running Code

The repository comes with two benchmarking scripts: evaluate_mbm.cpp and single_mbm.cpp.

evaluate_mbm.cpp allows users to benchmark pRRTC's performance using Panda, Fetch, or Baxter on the entire MotionBenchMaker dataset. To run evaluate_mbm.cpp:

build/evaluate_mbm <robot> <experiment name>
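
For example, to benchmark the Panda over the whole dataset (the experiment name here is an arbitrary label of your choosing):

build/evaluate_mbm panda panda_full_run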

single_mbm.cpp allows users to benchmark pRRTC's performance using Panda, Fetch, or Baxter on a single problem of MotionBenchMaker. To run single_mbm.cpp:

build/single_mbm <robot> <MBM problem name> <MBM problem index>
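
For example (assuming the dataset contains a problem named bookshelf_small; substitute any MBM problem name and index):

build/single_mbm panda bookshelf_small 1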

The MotionBenchMaker JSON files are generated using the script detailed here.

Planner Configuration

pRRTC has the following parameters, which can be modified in the benchmarking scripts (see the sketch after this list):

  • max_samples: maximum number of samples in trees
  • max_iters: maximum number of planning iterations
  • num_new_configs: number of new samples generated per iteration
  • range: maximum RRT-Connect extension range
  • granularity: number of discretized motions checked along an edge during collision checking. Note: this parameter must match the BATCH_SIZE parameter in the robot's header file (e.g., fetch.cuh) for correct results.
  • balance: whether to enable tree balancing -- 0 for no balancing; 1 for distributed balancing, where each iteration may generate samples for one or two trees; 2 for single-sided balancing, where each iteration generates samples for one tree only
  • tree_ratio: the threshold for deciding which tree is smaller -- if balance is set to 1, set tree_ratio to 0.5; if balance is set to 2, set tree_ratio to 1
  • dynamic_domain: whether to enable dynamic domain sampling -- 0 for false; 1 for true
  • dd_alpha: extent to which each radius is enlarged or shrunk per modification
  • dd_radius: starting radius for dynamic domain sampling
  • dd_min_radius: minimum radius for dynamic domain sampling
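
As a concrete illustration, a benchmarking script could set these parameters roughly as follows. This is a hypothetical sketch: the struct and the default values below are assumptions for illustration (check evaluate_mbm.cpp or single_mbm.cpp for the actual settings object); only the parameter names and semantics come from the list above.

// Hypothetical settings sketch -- names mirror the parameter list above.
struct PlannerSettings {
    int max_samples = 1000000;    // maximum number of samples in the trees
    int max_iters = 10000;        // maximum number of planning iterations
    int num_new_configs = 32;     // new samples generated per iteration
    float range = 1.0f;           // maximum RRT-Connect extension range
    int granularity = 16;         // motions checked per edge; must equal BATCH_SIZE
                                  // in the robot header (e.g., fetch.cuh)
    int balance = 1;              // 0: none, 1: distributed, 2: single-sided
    float tree_ratio = 0.5f;      // 0.5 when balance == 1, 1.0 when balance == 2
    int dynamic_domain = 1;       // 0: disabled, 1: enabled
    float dd_alpha = 0.0001f;     // radius growth/shrink factor per modification
    float dd_radius = 4.0f;       // starting dynamic-domain radius
    float dd_min_radius = 1.0f;   // minimum dynamic-domain radius
};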

Adding a Robot

Generating FKCC kernels

  1. Use Foam to generate two spherized URDFs:
  • one with approximate geometry, i.e., one sphere per link. Ex. here
  • one with fine geometry. Ex. here
  2. Clone Cricket and switch to the gpu-cc-early-exit branch.

  3. Create a folder under resources/<robot name>.

  4. Add the two spherized URDFs and the robot SRDF file to this folder.

  5. Create a JSON config file for approximate FKCC kernel generation. Ex. resources/robot_approx.json:

{
    "name": "Robot",
    "urdf": "robot/robot_spherized_approx.urdf",
    "srdf": "robot/robot.srdf",
    "end_effector": "robot_grasptarget",
    "batch_size": 16,
    "template": "templates/prrtc_approx_template.hh",
    "subtemplates": [],
    "output": "robot_prrtc_approx.hh"
}

Make sure to reference the approximate URDF. The batch size should equal the number of discretized collision checks on each extension of pRRTC, i.e., the granularity parameter described above.

  6. Repeat step 5 and create a config file for the main FKCC generation. Ex. resources/robot_main.json. See the panda, fetch, and baxter config files for examples.

  7. After building Cricket, run the script gpu_fkcc_gen.sh robot. This writes the generated code to a file robot_fk.hh.

  8. Add this file to pRRTC as src/robot/robot.cuh, and include it in src/planning/pRRTC.cu.
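
Putting steps 7 and 8 together, the generation flow looks roughly like this. The invocation and paths are illustrative placeholders, not exact commands; adjust them to your checkouts of Cricket and pRRTC:

./gpu_fkcc_gen.sh robot                             # step 7: writes robot_fk.hh
cp robot_fk.hh /path/to/pRRTC/src/robot/robot.cuh   # step 8: add to pRRTC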

Integrating the generated code

  1. Add a template instantiation for your robot to the bottom of src/planning/pRRTC.cu (a hypothetical sketch follows these steps).

  2. Add your robot to src/planning/Robots.hh.

    a. Generate the robot struct from Cricket with build/fkcc_gen robot_struct.json. Ex. config file robot_struct.json:

    {
        "name": "robot",
        "urdf": "robot/robot_spherized.urdf",
        "srdf": "robot/robot.srdf",
        "end_effector": "robot_grasptarget",
        "resolution": 32,
        "template": "templates/prrtc_robot_template.hh",
        "subtemplates": [],
        "output": "robot_struct.hh"
    }
    

    b. Copy the generated struct into src/planning/Robots.hh.

  3. Recompile pRRTC.
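
For step 1 above, the instantiation is a single line naming your robot's struct. The line below is a hypothetical sketch: copy the actual pattern from the existing Panda, Fetch, and Baxter instantiations at the bottom of src/planning/pRRTC.cu, substituting only your struct's name (robots::Robot here is a placeholder).

// Hypothetical -- mirror the existing instantiations in pRRTC.cu,
// swapping in the struct generated in step 2a.
template class pRRTC<robots::Robot>;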