Active Perception using Light Curtains for Autonomous Driving

Project Website: https://siddancha.github.io/projects/active-perception-light-curtains


This is the official code for our paper:

Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, and David Held.
Active Perception using Light Curtains for Autonomous Driving.
In European Conference on Computer Vision (ECCV), August 2020.

Installation

  1. Clone the repository.
git clone git@github.com:siddancha/active-perception-light-curtains.git
  2. Install pylc.
cd /path/to/second.pytorch/pylc
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && make
  3. Install spconv.

  4. Add the required paths to $PYTHONPATH.

export PYTHONPATH=$PYTHONPATH:/path/to/second.pytorch
export PYTHONPATH=$PYTHONPATH:/path/to/second.pytorch/pylc
export PYTHONPATH=$PYTHONPATH:/path/to/spconv
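
Once all three paths are set, a quick way to verify the setup is to try importing the packages. This is a minimal sanity check, not part of the official instructions; it assumes that the pylc build produces an importable Python module and that spconv has been built.

```bash
# Sanity check: these imports should succeed if the paths above are set
# correctly and pylc/spconv were built successfully.
python -c "import second; import pylc; import spconv; print('OK')"
```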

Data preparation

Download the Virtual KITTI and SYNTHIA-AL datasets into folders named vkitti and synthia. Then create the info files containing their metadata using the following commands:

export DATADIR=/path/to/synthia/and/vkitti/datasets

# create info files for Virtual KITTI dataset
python ./data/vkitti_dataset.py create_vkitti_info_file \
    --datapath=$DATADIR/vkitti

# create info files for the SYNTHIA dataset
python ./data/synthia_dataset.py create_synthia_info_file \
    --datapath=$DATADIR/synthia
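
If the commands succeed, the info files should appear inside the dataset folders. As a quick check, you can inspect one of them; this is a sketch, and the exact file name (vkitti_infos_train.pkl below) is an assumption based on SECOND's convention of pickled *_infos_*.pkl files.

```bash
# The info-file name below follows SECOND's *_infos_*.pkl convention and is
# an assumption; check the vkitti folder for the actual name.
python -c "
import pickle
with open('$DATADIR/vkitti/vkitti_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)
print(type(infos), len(infos))
"
```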

Training

To train a model, run the following commands:

cd second
python ./pytorch/train.py train \
    --config_path=./configs/{dataset}/second/{experiment}.yaml \
    --model_dir=/path/to/save/model \
    --display_step=100

where {dataset} is either vkitti or synthia, and {experiment} is the name of a config file in that directory. We will be releasing our pre-trained models shortly.
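
For example, a concrete invocation might look like the following. The experiment name baseline.yaml and the model directory are placeholders; substitute any config file that exists under second/configs/{dataset}/second/.

```bash
cd second
# "baseline.yaml" is a placeholder experiment name; pick an actual config file.
python ./pytorch/train.py train \
    --config_path=./configs/vkitti/second/baseline.yaml \
    --model_dir=$HOME/models/vkitti_baseline \
    --display_step=100
```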

Evaluation

To evaluate a model, run the following commands:

cd second
python ./pytorch/train.py evaluate \
    --config_path=./configs/{dataset}/second/{experiment}.yaml \
    --model_dir=/path/to/saved/model \
    --result_path=/path/to/save/evaluation/results \
    --info_path=/info/path/of/dataset/split
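
As with training, a concrete call might look like the sketch below. All paths and the experiment name are placeholders, and the info-file name is an assumption following SECOND's convention; point --info_path at the info file created during data preparation for the split you want to evaluate.

```bash
cd second
# Placeholder paths; the info-file name assumes SECOND's *_infos_*.pkl convention.
python ./pytorch/train.py evaluate \
    --config_path=./configs/synthia/second/baseline.yaml \
    --model_dir=$HOME/models/synthia_baseline \
    --result_path=$HOME/results/synthia_baseline \
    --info_path=$DATADIR/synthia/synthia_infos_val.pkl
```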

Launch all experiments on slurm

To facilitate reproducibility, we have created a script that launches all the experiments included in our paper on a compute cluster managed by Slurm. To launch all experiments, run the following commands:

cd second
python ./launch_all_exp.py

This will automatically schedule training for all experiments using the sbatch commands provided in second/sbatch.py.
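
If you are adapting this to a different cluster, the scheduling amounts to submitting one sbatch job per experiment. Below is a minimal illustrative job script; this is not the script from second/sbatch.py, and the partition name, resource limits, and paths are assumptions you should adjust for your own cluster.

```bash
#!/bin/bash
#SBATCH --job-name=lc-vkitti-baseline   # one job per experiment
#SBATCH --partition=gpu                 # assumed partition name; adjust
#SBATCH --gres=gpu:1
#SBATCH --time=48:00:00
#SBATCH --output=logs/%x-%j.out

# Placeholder paths and config; mirror the training command from above.
cd /path/to/second.pytorch/second
python ./pytorch/train.py train \
    --config_path=./configs/vkitti/second/baseline.yaml \
    --model_dir=$HOME/models/lc-vkitti-baseline \
    --display_step=100
```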

Notes

  • The codebase uses Python 3.7.
  • The codebase is built upon the SECOND repository.