# PencilNet

This repo will host the dataset, trained models, and implementation code of PencilNet.

## Abstract

In autonomous and mobile robotics, one of the main challenges is robust on-the-fly perception of the environment, which is often unknown and dynamic, as in autonomous drone racing. In this work, we propose a novel deep neural network-based perception method for racing gate detection -- PencilNet -- which relies on a lightweight neural network backbone on top of a pencil filter. This approach unifies predictions of a gate's 2D position, distance, and orientation in a single pose tuple. We show that our method is effective for zero-shot sim-to-real transfer learning that does not need any real-world training samples. Moreover, compared to state-of-the-art methods, our framework is extremely robust to the illumination changes commonly seen under rapid flight. A thorough set of experiments demonstrates the effectiveness of this approach in multiple challenging scenarios, where the drone completes various tracks under different lighting conditions.

## Dependencies

- tensorflow (for training and evaluation: version 2.8; for Nvidia Jetson TX2: version 1.5)
- numpy
- tensorboard
- pyaml
- ROS Melodic/Noetic

It is highly recommended to use a virtual environment (e.g., Anaconda; an example environment is provided in /misc/environment.yml). With the provided Anaconda environment, all training and evaluation code should work. The only exception is the ROS package, which depends on whether you are using ROS Melodic (Python 2.7) or Noetic (Python 3).
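As a quick, optional sanity check (not part of the repo), the following sketch verifies that the core Python dependencies are importable and at the expected versions inside the environment:

```python
# Optional sanity check of the core Python dependencies (illustrative only).
import numpy as np
import tensorflow as tf

print("tensorflow:", tf.__version__)  # 2.8 expected for training/evaluation (1.5 on Nvidia Jetson TX2)
print("numpy:", np.__version__)
assert tf.__version__.startswith("2.8"), "training/evaluation code expects TensorFlow 2.8"
```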

## Dataset

The simulation dataset was generated with Unreal Engine 4 from various environments, using our drone racing gate design. The print-ready gate pattern and its 3D mesh model will also be made available for interested researchers.

### Training dataset

31k RGB fisheye-distorted images, the annotation file (*.json), and the training and testing indices (0.9 and 0.1 of the data, respectively, stored as *.npy) can be downloaded here: RGB training dataset.

Although the training script can train PencilNet from RGB images, training is faster when using pre-converted pencil images. The corresponding pencil images of the same dataset can be downloaded here: Pencil training dataset
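The exact pencil-filter implementation lives in the code base and is not reproduced here; as a rough illustration only, a generic OpenCV pencil-sketch-style transform (grayscale, inverted Gaussian blur, color-dodge blend) looks like the sketch below. Do not treat it as the converter used to produce the dataset above.

```python
# Generic pencil-sketch-style transform with OpenCV -- an illustration only,
# NOT necessarily the pencil filter used by PencilNet.
import cv2

def pencil_like(bgr_image, blur_kernel=(21, 21)):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)    # drop color information
    inverted = 255 - gray                                  # invert intensities
    blurred = cv2.GaussianBlur(inverted, blur_kernel, 0)   # smooth the inverted image
    # Color-dodge blend: emphasizes edges and flattens uniform regions.
    return cv2.divide(gray, 255 - blurred, scale=256)

# Example usage (paths are placeholders):
# img = cv2.imread("rgb_image.png")
# cv2.imwrite("pencil_image.png", pencil_like(img))
```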

### Testing dataset

Multiple images from different simulation environments (that do not appear in the training dataset) and from real-world scenarios can be downloaded here:

Additionally, real-world images with significant motion blur, captured at different light intensities, can be downloaded here:

A test dataset with gates at the same pose but against different backgrounds, which can be used to test the consistency of a network, can be found here:

## Trained models

The following models are trained solely using the above RGB training dataset.
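Once a model is downloaded, it can typically be loaded with the standard Keras API. The sketch below is illustrative: it assumes a TensorFlow/Keras SavedModel directory and uses a placeholder path; the input resolution is taken from the loaded model itself, and the output layout depends on the model head.

```python
# Minimal inference sketch (assumes the downloaded model is a Keras/TensorFlow
# SavedModel; the path below is a placeholder).
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("./trained_model/your_downloaded_model", compile=False)
model.summary()

# Dummy batch with the model's expected input shape, just to exercise predict().
dummy_batch = np.zeros((1, *model.input_shape[1:]), dtype=np.float32)
predictions = model.predict(dummy_batch)
print(predictions.shape)  # layout depends on the model head (gate center, distance, orientation)
```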

## Training

For training, change the config inside train.py so that it points to your dataset folder:

config["dataset_folder"] = "~/your_training_data_set"

and also the directory for the trained model:

logger.set_save_dir("./trained_model")

You can also change other parameters of the config inside train.py, for example config['epochs'] = 500. Then you can train the PencilNet model:

python train.py
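Putting the pieces together, the relevant edits inside train.py might look like this (only the keys shown above come from this README; anything else would be an assumption):

```python
# Sketch of the edits inside train.py referenced above.
config["dataset_folder"] = "~/your_training_data_set"  # RGB or pencil training dataset
config["epochs"] = 500                                  # example of another config override

logger.set_save_dir("./trained_model")                  # where the trained model is saved
```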

For the other baselines, we will provide the training files and configurations in separate folders in this repo.

## Evaluation

There are two different evaluation scripts:

- Mean absolute error (MAE) evaluation (evaluate_mae.py): reports the model's MAE for the gate center, distance, and orientation of predicted gates against the ground truth (see the sketch after this list).
- False negative percentage (FN) evaluation (evaluate_FP.py): reports the percentage of false-negative errors of a model.
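For reference, the MAE reported by evaluate_mae.py can be pictured as follows; the arrays here are hypothetical and this is not the actual evaluation code:

```python
# Hedged sketch of the MAE metric over matched predictions and ground truth.
# Each row is hypothetical: (center_x, center_y, distance, orientation).
import numpy as np

pred = np.array([[0.52, 0.48, 3.1,  0.10],
                 [0.30, 0.70, 5.2, -0.35]])
gt   = np.array([[0.50, 0.50, 3.0,  0.12],
                 [0.28, 0.72, 5.0, -0.30]])

mae_center      = np.mean(np.abs(pred[:, :2] - gt[:, :2]))  # gate center error
mae_distance    = np.mean(np.abs(pred[:, 2]  - gt[:, 2]))   # distance error
mae_orientation = np.mean(np.abs(pred[:, 3]  - gt[:, 3]))   # orientation error
print(mae_center, mae_distance, mae_orientation)
```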

To use these scripts, simply modify the paths to the respective models (download links provided above), the result folder (where to store the results), the test datasets, and the names of the models in the 'main' section at the end of each file. For example:

result_folder_path = "./evaluation_results"

pencil_model_path = "../filter_models/2022-01-22-13-45-pretrain-single-gate-corrected"
sobel_model_path = "../filter_models/sobel-2022-06-06-17-30"
canny_model_path = "../filter_models/canny-2022-06-07-17-35"
GateNet_model_path = "../models/Baselines/GateNet/test_folder/2022-01-24-01-06-rgb-on-sim"
Adr_model_path = "../models/Baselines/ADRNet/test_folder/ADR-mod-2022-02-16-16-03_adr_mod"
DroNet_half_model_path = "../models/Baselines/Dronet/test_folder/2022-02-15-01-12-dronet-half"
DroNet_full_model_path = "../models/Baselines/Dronet/test_folder/2022-02-15-13-29-dronet-full"

base_test_data_folder_path = '/home/huy/dataset_ws/Test_data/RAL2022/rgb'
test_data = ["sim_outdoor_combined", "rgb_real_N_100_from_New_Night", "rgb_real_N_40", "rgb_real_N_20", "rgb_real_N_10"]

models = "GateNet sobel canny pencil".split()

Note that, due to a compatibility error, baseline models such as ADR, DroNet full, and DroNet half cannot be evaluated together with the filter models and must be run on their own. E.g.:

models = "ADR".split()

## ROS detection node

Build the package with catkin build

catkin build pencil_perception_module

Change the path to the model within perception.launch:

 <param name="model_path" value="please_input_your_path_to_perception_model"/>

Then, launch the node:

roslaunch pencil_perception_module perception.launch
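For orientation, a heavily stripped-down sketch of what such a detection node does (subscribe to the camera topic, run the network, publish a visualization image) is shown below. It is not the actual pencil_perception_module node; the topic names follow this README, everything else is illustrative.

```python
#!/usr/bin/env python
# Stripped-down sketch of a PencilNet-style detection node (illustrative only,
# not the actual pencil_perception_module node).
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
center_pub = None  # created in main()

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # ... apply the pencil filter and run the trained network here ...
    # For illustration, draw a dummy marker at the image center and republish.
    cv2.circle(frame, (frame.shape[1] // 2, frame.shape[0] // 2), 5, (0, 0, 255), -1)
    center_pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))

def main():
    global center_pub
    rospy.init_node("pencilnet_detection_sketch")
    center_pub = rospy.Publisher("/predicted_center_on_image", Image, queue_size=1)
    rospy.Subscriber("/camera/image_raw", Image, image_callback, queue_size=1)
    rospy.spin()

if __name__ == "__main__":
    main()
```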

## Bag file testing

Download a real-time test bag file here (size: 5 GB): Bagfile N-100

Run the bag file with the following options:

--topics /camera/image_raw /tf /tf_static /ground_truth/gate_pose_array /state_estimator/drone/filtered_odom_on_map /state_estimator/drone/pose_on_map /mavros/local_position/odom /mavros/vision_pose/pose /vicon/gate_1/gate_1 /vicon/gate_2/gate_2 /vicon/gate_3/gate_3 /vicon/gate_4/gate_4 --clock

The predicted gate center can be visualized using rqt_image_view under the topic /predicted_center_on_image.