Learn from Interaction: Learning to Pick via Reinforcement Learning in Challenging Clutter

1. Overview

In this work, we present an end-to-end reinforcement learning (RL) framework that singulates and picks objects one by one from random clutter. Our approach incorporates object interaction directly into policy learning, and a gripper designed for this technique can change the relative lengths of its digits. This repository provides the implementation.

2. Prerequisites

2.1 Hardware requirements

Testing on a real robot (Section 4.2) uses a UR10 robot arm, an Intel RealSense L515 camera, and the gripper with adjustable relative digit lengths described above. Training and testing in simulation have no special hardware requirements.

2.2 Software requirements

The code is built with Python 3.6. Dependencies are listed in requirements.yaml and can be installed via Anaconda by running:

conda env create -n learn_interaction -f requirements.yaml
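Then activate the environment before running any of the commands below:

conda activate learn_interaction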

3. Training

If you want to train your own model, please run the following code:

python main.py --play-only=False
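As a rough sketch of how a boolean --play-only flag is typically wired up with argparse (the actual parsing in main.py may differ; everything other than the flag name is an assumption):

```python
# Rough sketch of a boolean --play-only flag; the actual main.py may differ.
import argparse

def str2bool(value):
    # Interpret common truthy strings from the command line.
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser(description="Learn from Interaction")
parser.add_argument("--play-only", type=str2bool, default=False,
                    help="False: train a new policy; True: evaluate a trained one.")
args = parser.parse_args()

if args.play_only:          # --play-only=True
    print("evaluation (play-only) mode")
else:                       # --play-only=False
    print("training mode")
```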

4. Testing

4.1 Test in Simulation

We provide a testing script to evaluate our trained model in simulation. The following command runs the test on three trained objects and reports the average grasp success rates.

python main.py --play-only=True
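For reference, the reported metric is the fraction of successful grasps over all attempts. A hypothetical illustration of that aggregation (not the repository's API; the numbers are toy values for illustration only):

```python
# Hypothetical illustration of an average grasp success rate; not the repository's API.
def average_success_rate(results):
    """results: list of (successful_grasps, attempted_grasps), one per test run."""
    successes = sum(s for s, _ in results)
    attempts = sum(a for _, a in results)
    return successes / attempts if attempts else 0.0

# Toy numbers for illustration only.
print(average_success_rate([(18, 20), (17, 20), (19, 20)]))  # 0.9
```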

4.2 Test on Real Robot (UR10)

Here we provide the steps to test our method on a real robot.

Robot control

The robot is controlled via this Python software.
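For illustration only, here is a minimal sketch of commanding a UR10 with the widely used python-urx package; this is an assumption, and the repository's actual control software (and robot IP address) may differ:

```python
# Illustrative only: basic UR10 motion with python-urx.
# The repository's actual control software and IP address may differ.
import urx

robot = urx.Robot("192.168.1.100")       # robot IP address (assumption)
try:
    pose = robot.getl()                  # current TCP pose [x, y, z, rx, ry, rz]
    pose[2] += 0.05                      # raise the tool by 5 cm
    robot.movel(pose, acc=0.1, vel=0.1)  # linear move at modest speed
finally:
    robot.close()
```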

Camera setup

To deploy RealSense L515 camera,

  1. Download and install the Intel RealSense SDK 2.0 (librealsense).
  2. Our camera settings can be found in real/640X480_L_short_default.json; a minimal capture sketch follows below.
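A minimal sketch of streaming 640x480 depth and color frames from the L515 with pyrealsense2; the resolution matches the JSON file above, while the stream formats and 30 FPS frame rate are assumptions:

```python
# Minimal pyrealsense2 sketch: stream 640x480 depth and color from the L515.
# The resolution matches real/640X480_L_short_default.json; the formats and
# 30 FPS frame rate below are assumptions.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_image = np.asanyarray(frames.get_depth_frame().get_data())
    color_image = np.asanyarray(frames.get_color_frame().get_data())
    print(depth_image.shape, color_image.shape)
finally:
    pipeline.stop()
```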

Start testing

Then run the following commands to start testing:

cd real
python test_in_real.py

Maintenance

For any technical issues, please contact: Chao Zhao (czhaobb@connect.ust.hk).
