
AO-Grasp: Articulated Object Grasp Generation

Carlota Parés Morlans1*, Claire Chen1*, Yijia Weng1, Michelle Yi1, Yuying Huang1, Nick Heppert2, Linqi Zhou1, Leonidas Guibas1, Jeannette Bohg1

*Equal Contribution, 1Stanford University, USA, 2University of Freiburg, Germany

AO-Grasp project page

This repository contains:

  • Code for running AO-Grasp to get actionable grasps for interacting with articulated objects from partial point clouds. See installation and usage guides in this readme.

  • [Coming soon] Information on how to download the AO-Grasp dataset of actionable grasps on synthetic articulated objects from the PartNet-Mobility dataset. See this readme for more information.

Installation

AO-Grasp requires two conda environments, one for running inference to predict heatmaps, and one for running Contact-GraspNet. Follow the instructions below to set up both environments.

This code has been tested with Ubuntu 20.04 and CUDA 11.0, on a Quadro P5000 GPU. We note that we were unable to run the code, particularly the Contact-GraspNet inference, on a GeForce RTX 4090 GPU, due to a TensorFlow version incompatibility.

Step 1: Clone this repository

First, clone this repository and its submodule contact_graspnet:

git clone --recurse-submodules git@github.com:stanford-iprl-lab/ao-grasp.git

Step 2: Setting up the ao-grasp conda environment

  1. From within the ao-grasp/ directory, create a conda env named ao-grasp with the provided environment yaml file:
conda env create --name ao-grasp --file aograsp-environment.yml
  2. Activate the new conda env you just created:
conda activate ao-grasp
  3. Install PyTorch:
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
  4. Install the aograsp package as an editable package:
pip install -e .
  5. Install PointNet++. In the ao-grasp conda env, install PointNet2_PyTorch from the directory contained within this repo by running the following commands:
cd aograsp/models/Pointnet2_PyTorch/
pip install -r requirements.txt
pip install -e .
  6. Test the installation by predicting the per-point grasp likelihood scores on a provided test point cloud:
cd ../../../ # Navigate back to top-level directory
python run_pointscore_inference.py --pcd_path 'test_data/real/microwave_closed.ply'

This will save the predicted scores in output/point_score/microwave_closed.npz and a visualization of the scores in output/point_score_img/microwave_closed.png.
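If you want to inspect these scores programmatically, here is a minimal sketch for loading the saved .npz file. The key names are an assumption based on the "data"-keyed dictionary convention of the grasp proposal files described later in this readme; the sketch falls back to listing the archive's raw keys if the layout differs.

```python
# Minimal sketch for inspecting the saved point scores.
# Assumption: the file may store a pickled dictionary under a "data" key,
# like the grasp proposal files described later in this readme.
import numpy as np

score_path = "output/point_score/microwave_closed.npz"
archive = np.load(score_path, allow_pickle=True)

if "data" in archive.files:
    score_dict = archive["data"].item()
    print("Keys:", list(score_dict.keys()))
else:
    # Otherwise, just list the raw arrays stored in the archive.
    for key in archive.files:
        print(key, np.asarray(archive[key]).shape)
```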

Step 3: Setting up the cgn conda environment

  1. From within the ao-grasp/contact_graspnet directory, create a conda env named cgn with the provided environment yaml file:
cd contact_graspnet
conda env create --name cgn --file aograsp_cgn_environment.yml
  2. Download CGN checkpoints

Download trained models from here and copy them into the checkpoints/ folder.

  3. Activate the cgn conda environment and test the installation from the ao-grasp directory:
conda activate cgn
cd ..
python contact_graspnet/contact_graspnet/run_cgn_on_heatmap_file.py 'output/point_score/microwave_closed.npz' --viz_top_k 1

This will save an image of the top grasp proposal in output/grasp_proposals_img/microwave_closed.png.

Running AO-Grasp

Step 1: Running AO-Grasp on our test data

Running AO-Grasp inference to get grasps from point clouds requires running two scripts, each in its own conda environment. For your convenience, we have provided a bash script that takes a path to a partial point cloud (saved as an Open3D point cloud) and generates proposals by calling the two scripts in their respective conda environments. Note: If you named your AO-Grasp or CGN conda envs something other than the names used in our installation instructions, you will need to change the conda environment names in the bash script. See the TODOs in the bash script.

To run this bash script on a provided test point cloud (you may need to change the permissions on the script to make it executable):

./get_proposals_from_pcd.sh test_data/real/microwave_open.ply

This will save the grasp proposals in output/grasp_proposals/microwave_open.npz and a visualization of the top 10 grasp proposals in output/grasp_proposals_img/microwave_open.mp4.

We have provided a few real and synthetic test point clouds in the test_data/ directory.
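If you want to run the pipeline on several point clouds in one go, the sketch below simply loops over the .ply files in test_data/ and invokes the provided bash script for each one; outputs land in output/ as described above.

```python
# Batch sketch: run get_proposals_from_pcd.sh on every test point cloud.
import glob
import subprocess

for pcd_path in sorted(glob.glob("test_data/**/*.ply", recursive=True)):
    print(f"Generating grasp proposals for {pcd_path}")
    subprocess.run(["./get_proposals_from_pcd.sh", pcd_path], check=True)
```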

Format of saved grasp proposal files

A grasp_proposals/proposal.npz file contains a single Python dictionary with the following contents:

{
  "input": Input point cloud np.array of shape [4096, 3],
  "proposals": List of proposal tuples, where each tuple has the format (grasp_position, grasp_quaternion, grasp_likelihood_score),
  "heatmap": Grasp-likelihood scores for each point in "input", np.array of shape [4096,]
  "cgn_grasps": A list of grasps in the format generated by Contact-GraspNet. Only used to visaualize grasps using code from the original CGN repo.
}

To read a proposal.npz file, we recommend using the following line of code:

prop_dict = np.load("<path/to/proposal.npz>", allow_pickle=True)["data"].item()
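For example, the following sketch loads the proposal file produced by the test command above and pulls out the highest-scoring grasp (the explicit max over scores is only an assumption, in case the proposals are not already ordered by score):

```python
import numpy as np

prop_dict = np.load("output/grasp_proposals/microwave_open.npz", allow_pickle=True)["data"].item()

points = prop_dict["input"]        # (4096, 3) partial point cloud
heatmap = prop_dict["heatmap"]     # (4096,) per-point grasp-likelihood scores
proposals = prop_dict["proposals"]

# Each proposal is (grasp_position, grasp_quaternion, grasp_likelihood_score).
best_pos, best_quat, best_score = max(proposals, key=lambda p: p[2])
print(f"Top grasp at {best_pos}, quaternion {best_quat}, score {float(best_score):.3f}")
```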

Step 2: Running AO-Grasp on your own data

To run AO-Grasp on your own data using our provided script, you must save point clouds in the Open3D point cloud format. For best performance, here are a few things to look out for when using your own data (a minimal preprocessing sketch follows this list):

  • Point clouds must have 4096 points.

  • The object point cloud must be segmented out from the full scene.

  • Point clouds should be in an image coordinate system that is right-handed, with the positive Y-axis pointing down, the X-axis pointing right, and the Z-axis pointing away from the camera (i.e., towards the object). This is the default camera frame used by the Zed2 camera that we used to capture our real-world point clouds.

    Here is a bird's-eye view of one of our test point clouds, microwave_open.ply, with the camera frame visualized. You can see that the camera frame z-axis is pointing towards the object.

  • The object should be roughly 1-1.2 meters away from the camera. This does not need to be exact; AO-Grasp can handle variations in object-to-camera distance, but the model will likely not perform as well if the object is too close to or too far (~2 meters or more) from the camera.
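As a starting point for preparing your own data, here is a minimal preprocessing sketch using Open3D. It assumes the input point cloud is already segmented and expressed in the camera frame described above; the file names are placeholders, and random resampling to 4096 points is just one simple way to meet the point-count requirement.

```python
# Sketch: resample a segmented object point cloud to exactly 4096 points and
# save it as a .ply file that the provided script can consume.
# Assumptions: the input is already segmented and in the camera frame described
# above; the file names below are placeholders.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("my_object_segmented.ply")
points = np.asarray(pcd.points)

n_target = 4096
# Subsample without replacement if there are too many points; otherwise sample
# with replacement to pad up to 4096.
idx = np.random.choice(len(points), n_target, replace=len(points) < n_target)

out = o3d.geometry.PointCloud()
out.points = o3d.utility.Vector3dVector(points[idx])
o3d.io.write_point_cloud("my_object_4096.ply", out)
```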
