
A framework built on NVIDIA Isaac Sim for evaluating and benchmarking imitation learning algorithms for robotic manipulation.


Manipulation Lab



Manipulation Lab is a modular benchmarking toolkit for robotic manipulation, built on top of NVIDIA Isaac Sim/Isaac Lab. It enables researchers to design, run, and evaluate robotic learning experiments in a reproducible and extensible manner. It was designed to address the fragmentation of the evaluation ecosystem: many benchmarks exist across disparate frameworks, introducing switching costs when evaluating along different axes.

Manipulation Lab addresses this by providing:

  • Flexibility: Support for multiple sensors, scenes, fixed-base single-arm manipulators, and language conditioning, enabling diverse evaluations.
  • Modularity: Tasks, models, observation/action spaces, and embodiments can be swapped via configuration without editing core code.
  • Configuration: Deep integration with Hydra parameterises most elements of the framework, enabling a low-code approach and reproducibility.
  • Benchmark-first Approach: New benchmarks can be introduced as Hydra-based configuration files ('manifests') and then executed*, providing a formal mechanism for introducing new benchmarks.

This approach encourages convergence on a shared framework, allowing researchers to flexibly introduce new tasks and then group them into new benchmarks.

* currently requires manual execution of individual evaluations.
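As an illustration, a manifest might group tasks and evaluation settings as follows. This is a hypothetical sketch: the actual manifest keys are defined by the framework's Hydra configuration, and every field name below is an assumption.

```yaml
# Hypothetical benchmark manifest sketch.
# Field names are illustrative, not taken from the repository.
benchmark_name: tabletop_suite
tasks:
  - room/stack_blocks
  - room/open_drawer
embodiment: franka_panda
sensors:
  - rgb_camera
evaluation:
  episodes_per_task: 50
  seed: 42
```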

Installation

Clone the repository:

git clone https://github.com/j9smith/manipulation-lab.git
cd manipulation-lab

Install the repository into the same Python environment as Isaac Sim/Isaac Lab:

pip install -e .

This will automatically install the required dependencies.

Usage

Running Evaluations

Each evaluation is parameterised via Hydra configuration under config/play_config.yaml and executed via scripts/play.py.

For example, to run the Stack Blocks task in the room scene:

python play.py task=room/stack_blocks

Evaluation results are automatically logged to Weights & Biases upon completion.

Teleoperation & Dataset Collection

Teleoperation is currently supported only with an Xbox controller. To record expert demonstrations, load the task with teleop=true:

python play.py task=room/stack_blocks teleop=true

Demonstrations are saved in HDF5 format following the RLDS schema.
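For orientation, the RLDS schema organises data as episodes of steps, where each step carries an observation, an action, a reward, and boundary flags. The sketch below shows that structure in plain Python; the field names follow the generic RLDS convention, and the concrete HDF5 key layout written by Manipulation Lab may differ.

```python
# Sketch of an RLDS-style episode as nested Python structures.
# Field names (is_first/is_last/is_terminal) follow the generic RLDS
# convention; the exact HDF5 layout used by Manipulation Lab may differ.

def make_step(observation, action, reward, is_first=False, is_last=False,
              is_terminal=False):
    """Build one RLDS-style step record."""
    return {
        "observation": observation,
        "action": action,
        "reward": reward,
        "is_first": is_first,        # first step of the episode
        "is_last": is_last,          # last step recorded
        "is_terminal": is_terminal,  # environment reached a terminal state
    }

def validate_episode(episode):
    """Check RLDS boundary flags: exactly one first and one last step."""
    steps = episode["steps"]
    assert steps, "episode must contain at least one step"
    assert steps[0]["is_first"] and sum(s["is_first"] for s in steps) == 1
    assert steps[-1]["is_last"] and sum(s["is_last"] for s in steps) == 1
    return True

# A two-step toy episode for a 7-DoF arm.
episode = {
    "steps": [
        make_step({"joint_pos": [0.0] * 7}, [0.1] * 7, 0.0, is_first=True),
        make_step({"joint_pos": [0.1] * 7}, [0.0] * 7, 1.0, is_last=True,
                  is_terminal=True),
    ],
}
```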

Training a Deterministic Policy

A training script for deterministic behavioural cloning is provided. It is parameterised via config/train_config.yaml and executed via scripts/train.py.

To train a model:

python train.py model=mlp encoder=resnet18

Training metrics are automatically logged to Weights & Biases.
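Deterministic behavioural cloning reduces to supervised regression: the policy is trained to minimise the mean-squared error between its predicted actions and the expert's. The toy sketch below illustrates only that objective; the repository's training loop uses the configured model and encoder, not the stand-in linear policy shown here.

```python
# Toy illustration of the deterministic behavioural-cloning (BC) objective:
# minimise mean-squared error between predicted and expert actions.
# The linear "policy" is a stand-in for the configured model/encoder.

def linear_policy(obs, weights):
    """Predict a scalar action as a weighted sum of observation features."""
    return sum(w * o for w, o in zip(weights, obs))

def bc_mse_loss(dataset, weights):
    """Mean-squared error between policy predictions and expert actions."""
    errors = [(linear_policy(obs, weights) - act) ** 2 for obs, act in dataset]
    return sum(errors) / len(errors)

# Expert demonstrations: (observation, action) pairs for action = 2*x0 + 1*x1.
demos = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]

perfect = bc_mse_loss(demos, [2.0, 1.0])    # exact expert weights -> loss 0.0
imperfect = bc_mse_loss(demos, [1.0, 1.0])  # mismatched weights -> loss 2/3
```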

Project Structure

manipulation-lab/
|
|--assets      # Robots, sensors, etc.
|--config      # Configuration files for different components of the framework
|--envs        # Task implementations grouped by their parent scene
|--manifests   # Benchmark manifests
|--models      # Imitation learning models, clients, encoders
|--scripts     # Framework implementation
