RobotPerf Benchmarks

Benchmark results 🤖 | Benchmark spec 📖 | Contributing 🌍 | Contact and support 📨

RobotPerf is an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems.

The project's mission is to build open, fair and useful robotics benchmarks that are technology agnostic, vendor-neutral and provide unbiased evaluations of robotics computing performance for hardware, software, and services. As a reference performance benchmarking suite in robotics, RobotPerf can be used to evaluate robotics computing performance across compute substrates including CPUs, GPUs, FPGAs and other compute accelerators. The benchmarks are designed to be representative of the performance of a robotic system and to be reproducible across different robotic systems. To that end, RobotPerf builds on top of ROS 2, the de facto standard for robot application development.

Refer to the Benchmark Specification for more details.

Why RobotPerf?

RobotPerf architecture diagram

The myriad combinations of robot hardware and robotics software make assessing robotic-system performance challenging, especially in an architecture-neutral, representative, and reproducible manner. RobotPerf addresses this issue by delivering a reference performance benchmarking suite for evaluating robotics computing performance across CPUs, GPUs, FPGAs and other compute accelerators.

Mission

Represented by a consortium of robotics leaders from industry, academia and research labs, RobotPerf is organized as an open project whose mission is to build open, fair and useful robotics benchmarks that are technology agnostic, vendor-neutral and provide unbiased evaluations of robotics computing performance for hardware, software, and services.

Vision

Benchmarking helps assess performance. Performance information can help roboticists design more efficient robotic systems and select the right hardware for each robotic application. It can also help in understanding the trade-offs between different algorithms that implement the same capability.

Standards

RobotPerf benchmarks align with robotics standards so that you don't spend time reinventing the wheel and re-developing what already works. Benchmarks are conducted using ROS 2 as the common baseline. RobotPerf also aligns with standardization initiatives within the ROS ecosystem related to computing performance and benchmarking, such as REP 2008 (ROS 2 Hardware Acceleration Architecture and Conventions) and REP 2014 (Benchmarking performance in ROS 2).
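To make the latency metric used throughout these benchmarks concrete, here is a minimal sketch of a probe node that measures per-message latency on a ROS 2 topic and reports the kind of summary statistics shown in the results below (mean, RMS, max, min). This is an illustrative example, not RobotPerf's measurement tooling: actual RobotPerf measurements follow the REP 2014 methodology, and the topic name /benchmark/image_out used here is a placeholder.

```python
# latency_probe.py -- illustrative sketch, not part of RobotPerf's tooling.
# Measures per-message latency (receive time minus header stamp) on a topic
# and reports mean/RMS/max/min over the collected samples.
import math

import rclpy
from rclpy.node import Node
from rclpy.time import Time
from sensor_msgs.msg import Image


class LatencyProbe(Node):
    def __init__(self):
        super().__init__('latency_probe')
        self.latencies_ms = []
        # '/benchmark/image_out' is a placeholder for the graph's output topic.
        self.create_subscription(Image, '/benchmark/image_out',
                                 self.on_image, 10)

    def on_image(self, msg):
        # Latency: time of reception minus the stamp set by the publisher.
        delta = self.get_clock().now() - Time.from_msg(msg.header.stamp)
        self.latencies_ms.append(delta.nanoseconds / 1e6)

    def report(self):
        n = len(self.latencies_ms)
        if n == 0:
            self.get_logger().warning('No samples collected.')
            return
        mean = sum(self.latencies_ms) / n
        rms = math.sqrt(sum(x * x for x in self.latencies_ms) / n)
        self.get_logger().info(
            f'Mean: {mean:.2f} ms, RMS: {rms:.2f} ms, '
            f'Max: {max(self.latencies_ms):.2f} ms, '
            f'Min: {min(self.latencies_ms):.2f} ms over {n} samples.')


def main():
    rclpy.init()
    probe = LatencyProbe()
    try:
        rclpy.spin(probe)  # Ctrl-C to stop collecting and print the report.
    except KeyboardInterrupt:
        pass
    probe.report()
    probe.destroy_node()
    rclpy.try_shutdown()


if __name__ == '__main__':
    main()
```

A probe like this only captures end-to-end topic latency; REP 2014-style tracing additionally exposes where time is spent inside the computational graph.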

Benchmarks

RobotPerf benchmarks aim to cover the complete robotics pipeline, including perception, localization, control, manipulation and navigation. New benchmarks will be added and new categories may appear over time. If you wish to contribute a new benchmark, please read the contributing guidelines.

a. Perception benchmarks
b. Localization benchmarks
c. Control benchmarks
d. Navigation benchmarks
e. Manipulation benchmarks

Perception

| ID | Summary | Metric | Hardware | Value | Category | Timestamp | Note | Data Source |
|----|---------|--------|----------|-------|----------|-----------|------|-------------|
| a1 | Perception computational graph composed of 2 dataflow-connected Components, rectify and resize. | latency (ms) | ROBOTCORE | 66.82 | edge | 14-10-2022 | | perception/image |
| a1 | Perception computational graph composed of 2 dataflow-connected Components, rectify and resize. | latency (ms) | Kria KR260 | 66.82 | edge | 14-10-2022 | | perception/image |
| a1 | Perception computational graph composed of 2 dataflow-connected Components, rectify and resize. | latency (ms) | Jetson Nano | 238.13 | edge | 14-10-2022 | | perception/image |
| a1 | Perception computational graph composed of 2 dataflow-connected Components, rectify and resize. | latency (ms) | Jetson AGX Xavier | 106.34 | edge | 14-10-2022 | | perception/image |
| a2 | Perception rectify ROS Component. | latency (ms) | ROBOTCORE | 66.82 | edge | 14-10-2022 | | perception/image |
| a3 | Perception computational graph to compute a disparity map for stereo images. | latency (ms) | 12th Gen Intel(R) Core(TM) i9-12900KF | 132.12 | edge | 25-04-2023 | Mean: 26.25 ms, RMS: 27.18 ms, Max: 132.12 ms, Min: 8.73 ms over 1124 samples. | perception/image3 |
| a5 | Perception resize ROS Component. | latency (ms) | Intel® Core™ i5-8250U CPU @ 1.60GHz × 8 | 33.68 | workstation | 08-05-2023 | | perception/image2 |
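As an illustration of what a graph like a1 looks like in practice, the launch sketch below composes the rectify and resize components from the ROS 2 image_pipeline stack into a single process. This is a simplified, hypothetical reconstruction rather than the actual RobotPerf launch file; the container and node names are arbitrary, and the remappings and parameters are placeholders that depend on the image_proc version in use.

```python
# a1_style_graph.launch.py -- hypothetical sketch, not the RobotPerf launch file.
# Composes rectify -> resize into one process; with intra-process communication
# enabled, the two Components exchange messages without serialization overhead.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='a1_perception_container',  # arbitrary placeholder name
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            # Undistorts the raw camera image using the camera calibration.
            ComposableNode(
                package='image_proc',
                plugin='image_proc::RectifyNode',
                name='rectify_node',
                # Placeholder remappings; adjust to your camera driver's topics.
                remappings=[
                    ('image', '/camera/image_raw'),
                    ('camera_info', '/camera/camera_info'),
                ],
                extra_arguments=[{'use_intra_process_comms': True}],
            ),
            # Downscales the rectified image produced by the rectify node.
            ComposableNode(
                package='image_proc',
                plugin='image_proc::ResizeNode',
                name='resize_node',
                # Example parameters: halve the image in both dimensions.
                parameters=[{'use_scale': True,
                             'scale_height': 0.5,
                             'scale_width': 0.5}],
                extra_arguments=[{'use_intra_process_comms': True}],
            ),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```

Running both Components in one container is what makes a graph like a1 "dataflow-connected": the output of rectify feeds resize directly within the same process.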

Contributing

Start by reading the benchmark specification. You can contribute to the project both as an individual and as an organization. If you are an individual, feel free to contribute by running the benchmarks and/or by contributing new benchmarks (see spec). Once ready, submit a pull request and/or raise issues as appropriate. If you are an organization willing to commit resources to the project and contribute to it, please contact here.

Contact and support

To get involved in the project and/or receive support running the RobotPerf benchmarks, contact here.
