We implemented the "3D Segment Mapping using Data-Driven Descriptors" work. SegMap is a state-of-the-art approach to the localization and mapping problem that builds a map representation from segments extracted from 3D point clouds. It leverages a data-driven descriptor in order to extract meaningful features that can also be used for reconstructing a dense 3D map of the environment and for extracting semantic information. This is particularly interesting for navigation tasks and for providing visual feedback to end-users such as robot operators, for example in search and rescue scenarios.
- labeled segments from LiDAR data
- mark up the data
- voxelize
- augment (see the voxelization/augmentation sketch after this list)
- compressed representation (data-driven descriptors) from LiDAR segments
- reconstruction
- semantic classification
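A rough sketch of the voxelization and augmentation steps, assuming each segment is an N x 3 numpy array of (x, y, z) points. The 32x32x16 grid follows the SegMap paper; the function names and the rotation-only augmentation are illustrative choices, not the exact code we used.

```python
import numpy as np

VOXEL_GRID = (32, 32, 16)  # grid resolution used in the SegMap paper

def voxelize(points, grid=VOXEL_GRID):
    """Scale a segment (N x 3 array of x, y, z) into a fixed-size occupancy grid."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    scale = (np.array(grid) - 1) / np.maximum(maxs - mins, 1e-6)
    # Use the smallest scale factor for all axes so the aspect ratio is preserved
    idx = np.floor((points - mins) * scale.min()).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    occupancy = np.zeros(grid, dtype=np.float32)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return occupancy

def augment_rotation(points, max_angle_deg=180):
    """Augmentation example: rotate a segment around the vertical (z) axis by a random angle."""
    angle = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg))
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=points.dtype)
    return points @ rot.T
```

Each labeled segment can be augmented several times and voxelized, so the networks are trained on fixed-size occupancy grids rather than raw point clouds.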
- SegMap provides a dataset based on the KITTI odometry dataset, with segments extracted by their previously proposed approach SegMatch: Segment based place recognition in 3D point clouds. The provided dataset was not fully and correctly labeled, so we relabeled all the data, converted it to *.npy for fast and convenient use, and partially uploaded it to the folder "datasets" (see the loading sketch after this list).
- Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild. Our idea was to train our model on perfect segments obtained from CAD models; however, at the voxelization step it became clear that this dataset does not suit us because the resulting grids are very sparse.
- Sydney Urban Objects Dataset. We obtained this dataset at the last stage of the project. It suits our needs and was used for tests.
- Vision meets Robotics: The KITTI Dataset. We tried to use the KITTI 3D object dataset, but we did not manage to extract valuable segments from its LiDAR data; this dataset was created for other goals.
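A hedged example of loading the re-exported *.npy data; the file names under "datasets" are hypothetical placeholders, not the actual layout of the folder.

```python
import numpy as np

# Hypothetical file names -- the actual files live in the "datasets" folder of this repo
segments = np.load("datasets/segments.npy", allow_pickle=True)  # array of N_i x 3 point arrays
labels = np.load("datasets/labels.npy")                         # integer class id per segment

print(len(segments), "segments,", len(np.unique(labels)), "classes")
```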
We develop a procedure for generating training data and detail the performance of the SegMap descriptor for localization, reconstruction and semantics extraction.
All experiments were performed on the "Lechuga machine" provided by the Mobile Robotics lab, which has three GeForce GTX 1080 Ti GPUs (compute capability 6.1).
SegMap provides open code, but it is not possible to compile it as-is due to many unresolved dependencies (Ubuntu 14 is required, with TensorFlow compiled by hand, ROS, catkin, etc.). We needed comparable results, so we took some parts of their code and, after many iterations, managed to compile it without those dependencies. The TensorFlow autoencoder and semantic models were obtained this way. We also provide some experiments with their model and implemented our own in Keras. We trained a zoo of our models on our datasets, SegMap and Sydney. The general scheme of our experiments is provided below.
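As a rough illustration of the Keras models mentioned above, the sketch below outlines a network in the spirit of the SegMap architecture: a 3D convolutional encoder producing a compact descriptor, a decoder reconstructing the occupancy grid, and a semantic classification head. The layer sizes, class count and losses here are assumptions for illustration, not our exact trained configuration.

```python
from keras import layers, models

N_CLASSES = 3          # e.g. vehicles, buildings, other -- adjust to the dataset
DESCRIPTOR_SIZE = 64   # size of the compressed representation

voxels = layers.Input(shape=(32, 32, 16, 1), name="voxel_grid")

# Encoder: 3D convolutions down to a compact descriptor
x = layers.Conv3D(32, 3, activation="relu", padding="same")(voxels)
x = layers.MaxPooling3D(2)(x)
x = layers.Conv3D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling3D(2)(x)
x = layers.Flatten()(x)
descriptor = layers.Dense(DESCRIPTOR_SIZE, activation="relu", name="descriptor")(x)

# Decoder: reconstruct the occupancy grid from the descriptor
d = layers.Dense(8 * 8 * 4 * 64, activation="relu")(descriptor)
d = layers.Reshape((8, 8, 4, 64))(d)
d = layers.Conv3DTranspose(32, 3, strides=2, activation="relu", padding="same")(d)
reconstruction = layers.Conv3DTranspose(1, 3, strides=2, activation="sigmoid",
                                        padding="same", name="reconstruction")(d)

# Semantic head: classify the segment from the same descriptor
semantics = layers.Dense(N_CLASSES, activation="softmax", name="semantics")(descriptor)

model = models.Model(inputs=voxels, outputs=[reconstruction, semantics])
model.compile(optimizer="adam",
              loss={"reconstruction": "binary_crossentropy",
                    "semantics": "sparse_categorical_crossentropy"})
model.summary()
```

Training with a joint reconstruction and classification loss follows the idea of the original work: the same descriptor has to support both dense map reconstruction and semantics extraction.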
- Tensorflow-gpu==1.8.0 (for comparable experiments)
- Keras (for our models)
- sklearn (pipeline, preprocessing)
- Anastasia Kishkun
- Alenicheva Alisa
- Grashchenkov Kirill
- Konstantin Pakulev
- Roman Zinoviev