Inception-Robotics/terrapn

Code Usage

This repository contains the main working code and model for TerraPN, described in the paper: https://ieeexplore.ieee.org/document/9981942

Steps to follow:

  1. Install ROS Melodic.

  2. Set up the conda environment using terrapn.yaml inside the conda folder.

  3. Set up an RGB image stream from a camera. outdoor_dwa contains the ROS callback functions that subscribe to images and to velocities from the robot's odometry and output a "surface costmap" (see the sketch after this list).

  4. Run outdoor_dwa.py within the terrapn conda environment.
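
For reference, the following is a minimal sketch of the subscriber/publisher structure described in step 3, written with rospy. The topic names (/camera/color/image_raw, /odom, /surface_costmap) are assumptions and may differ from the ones used in outdoor_dwa.py; the network inference step is left as a placeholder.

import rospy
from cv_bridge import CvBridge
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Image


class SurfaceCostmapSketch:
    """Sketch of the image/odometry callbacks feeding a surface-cost publisher."""

    def __init__(self):
        self.bridge = CvBridge()
        self.latest_velocity = None
        # Assumed topic names; adjust them to match your camera and robot drivers.
        rospy.Subscriber("/camera/color/image_raw", Image, self.image_callback, queue_size=1)
        rospy.Subscriber("/odom", Odometry, self.odom_callback, queue_size=1)
        self.costmap_pub = rospy.Publisher("/surface_costmap", Image, queue_size=1)

    def odom_callback(self, msg):
        # Keep the most recent linear and angular velocities from odometry.
        self.latest_velocity = (msg.twist.twist.linear.x, msg.twist.twist.angular.z)

    def image_callback(self, msg):
        if self.latest_velocity is None:
            return
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # The trained network would predict per-pixel surface costs from image
        # patches and the current velocity; the input image is republished here
        # as a placeholder for the predicted costmap.
        costmap = rgb
        self.costmap_pub.publish(self.bridge.cv2_to_imgmsg(costmap, encoding="bgr8"))


if __name__ == "__main__":
    rospy.init_node("surface_costmap_sketch")
    SurfaceCostmapSketch()
    rospy.spin()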

Training

  • The training code and the associated txt files needed to read the dataset are in the model folder, along with the network model.

  • Our dataset to train a new model can be found here.

To build the Docker container

  1. Go to the terrapn package
  2. Install the NVIDIA container runtime if it is not already installed.
  3. Run the following command for RTX 20** GPUs (it might work on other GPUs as well):
sudo docker build -t terrapn --build-arg conda_file=terrapn -f terrapnDockerfile .
  4. Run the following command for A-series GPUs:
sudo docker build -t terrapn --build-arg conda_file=terrapn-a -f terrapnDockerfile .

To run the Docker container

  1. Run the following command
sudo docker run -it --net=host --runtime=nvidia --gpus=all terrapn

Extracting Labels

With images displayed:
python scripts/dataset_preparation.py /path/to/bagfile --show

Without images:
python scripts/dataset_preparation.py /path/to/bagfile
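
The extraction script reads a ROS bag file. As a rough illustration of the kind of data it consumes, here is a minimal sketch using the rosbag Python API; the topic names (/odom, /imu/data) are assumptions, and dataset_preparation.py may read different or additional topics.

import sys

import rosbag


def dump_bag(bag_path):
    # Assumed topic names; the actual script may use different ones.
    topics = ["/odom", "/imu/data"]
    with rosbag.Bag(bag_path) as bag:
        for topic, msg, t in bag.read_messages(topics=topics):
            if topic == "/odom":
                # Robot velocities recorded while traversing the surface.
                print(t.to_sec(), "odom", msg.twist.twist.linear.x, msg.twist.twist.angular.z)
            else:
                # IMU readings (e.g. vertical acceleration) reflecting surface-induced vibration.
                print(t.to_sec(), "imu", msg.linear_acceleration.z)


if __name__ == "__main__":
    dump_bag(sys.argv[1])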
