Welcome to the Sensor Fusion course for ADAS and autonomous driving.
In this course we will be talking about sensor fusion, which is the process of taking data from multiple sensors and combining it to give us a better understanding of the world around us. We will mostly be focusing on three sensors: lidar, camera, and radar. By the end we will be fusing the data from these sensors to track multiple cars on the road, estimating their positions and speeds.
The Point Cloud Library (PCL) lets us partition an unorganized collection of 3D points so the subsections can be processed rapidly. When lidar point clouds are combined with camera images, we can positively identify objects and measure their distances, while radar lets us track their speed.
Lidar sensing gives us high resolution data by sending out thousands of laser signals. These lasers bounce off objects, returning to the sensor, where we can determine how far away objects are by timing how long it takes for the signal to return. We can also tell a little bit about the object that was hit by measuring the intensity of the returned signal. Each laser ray is in the infrared spectrum and is sent out at many different angles, usually in a 360 degree range. While lidar sensors give us highly accurate 3D models of the world around us, they are currently still very expensive, upwards of $60,000 for a standard unit.
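The time-of-flight timing described above boils down to one formula: the pulse travels to the object and back, so the one-way distance is half of the speed of light times the round-trip time. A minimal sketch (the function name is illustrative, not from the course code):

```cpp
#include <cassert>
#include <cmath>

// Speed of light in meters per second.
constexpr double kSpeedOfLight = 299792458.0;

// The lidar measures the round-trip time of the pulse; the pulse covers
// the sensor-to-object distance twice, hence the division by 2.
double distanceFromTimeOfFlight(double roundTripSeconds) {
    return kSpeedOfLight * roundTripSeconds / 2.0;
}
```

For example, a return delay of roughly 667 nanoseconds corresponds to an object about 100 meters away.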
Radar data is typically very sparse and limited in range, however it can directly tell us how fast an object is moving in a certain direction. This ability makes radar a very practical sensor for features like cruise control, where it is important to know how fast the car in front of you is traveling. Radar sensors are also very affordable and are common nowadays in newer cars.
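The direct velocity measurement comes from the Doppler effect: the frequency shift of the reflected wave is proportional to the target's radial speed. A sketch of the standard Doppler relation (the function name and the 77 GHz carrier in the example are my assumptions, typical of automotive radar, not values from the course):

```cpp
#include <cassert>
#include <cmath>

// Speed of light in meters per second.
constexpr double kSpeedOfLight = 299792458.0;

// Radial velocity of a target from the measured Doppler shift.
// dopplerShiftHz: frequency shift of the return signal in Hz.
// carrierHz: the radar's carrier frequency (~77 GHz for automotive radar).
// A positive result means the target is approaching.
double radialVelocity(double dopplerShiftHz, double carrierHz) {
    return dopplerShiftHz * kSpeedOfLight / (2.0 * carrierHz);
}
```

Note that this only gives the velocity component along the line of sight; a car crossing perpendicular to the beam produces no Doppler shift.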
By combining lidar's high resolution imaging with radar's ability to measure object velocity, sensor fusion gives us a better understanding of the surrounding environment than either sensor could alone.
For progress, please monitor my frequent GitHub commits:
https://github.com/UkiDLucas/SFND313_Lidar_Obstacle_Detection/commits/master
I can clearly see all the obstacles, yet the separation does not fully work.
https://www.youtube.com/watch?v=lGbHW8SMu24
In this project you will take everything that you have learned for processing point clouds and use it to detect cars and trucks on a narrow street using lidar. The detection pipeline should follow the methods covered: filtering, segmentation, clustering, and bounding boxes. The segmentation and clustering methods should also be created from scratch, using the previous lessons' guidelines for reference.
- Final project
Rendering the cloud
Read a few articles on machine learning on macOS and on CUDA (NVIDIA) vs. OpenCL (AMD) GPUs.
$ pip install -U plaidml-keras
Requirement already satisfied, skipping upgrade: keras==2.2.4 in /anaconda2/lib/python2.7/site-packages (from plaidml-keras) (2.2.4)
Successfully installed plaidml-0.6.3 plaidml-keras-0.6.3
$ plaidml-setup
-
Installing PlaidML
-
I have re-read documentation about Euclidean Cluster Extraction: http://pointclouds.org/documentation/tutorials/cluster_extraction.php
-
Added documentation to my project: https://github.com/UkiDLucas/SFND313_Lidar_Obstacle_Detection#ukis-class-progress
-
Renamed variables in kdtree.h to make code easier to read and understand.
Total class time spent: ~29.5 hours.
The KD-tree is used to drastically speed up the lookup of points in space.
A KD-tree is a form of binary tree that alternates the splitting dimension at each level: for 2D points (X, Y), take the point with the median X value and insert it as the root, then among the remaining points take the one with the median Y value and insert it as the first child, and continue recursively, alternating between X and Y.
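The alternating-dimension insertion can be sketched in a few lines. This is a simplified illustration, not the code from my kdtree.h; the struct and function names are my own:

```cpp
#include <array>
#include <cassert>
#include <memory>

// Minimal 2D KD-tree node: splits on X at even depths, Y at odd depths.
struct Node {
    std::array<float, 2> point;
    int id;
    std::unique_ptr<Node> left, right;
    Node(std::array<float, 2> p, int i) : point(p), id(i) {}
};

// Walk down the tree, comparing the coordinate that corresponds to the
// current depth, and attach a new node at the first empty slot.
void insert(std::unique_ptr<Node>& node, std::array<float, 2> point,
            int id, int depth = 0) {
    if (!node) {
        node = std::make_unique<Node>(point, id);
        return;
    }
    int dim = depth % 2;  // 0 = compare X, 1 = compare Y
    if (point[dim] < node->point[dim])
        insert(node->left, point, id, depth + 1);
    else
        insert(node->right, point, id, depth + 1);
}
```

Because each level splits the remaining space in two, a balanced KD-tree reduces a neighbor search from O(n) to roughly O(log n).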
I was able to implement a first version of RANSAC 3D that fits a plane (e.g. the road surface) to point cloud data.
Another happy milestone is that I am able to run it on macOS without Ubuntu (dual boot or remote).
You can see that the plane's upper boundary (green dots) mixes with the obstacles (red dots). The following image is after raising the iterations to 1,000 and lowering the distance threshold to 0.05 meters (5 cm); a happy medium lies somewhere in between.
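The core of RANSAC 3D is the point-to-plane distance test: fit a plane through three random points, then count how many points fall within the distance threshold. A sketch of that geometry (struct and method names are illustrative, not my repo's code):

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y, z; };

// Plane through three points, stored as Ax + By + Cz + D = 0.
// (A, B, C) is the cross product of two edge vectors of the triangle,
// i.e. the plane's normal vector.
struct Plane {
    double a, b, c, d;
    Plane(const Point& p1, const Point& p2, const Point& p3) {
        a = (p2.y - p1.y) * (p3.z - p1.z) - (p2.z - p1.z) * (p3.y - p1.y);
        b = (p2.z - p1.z) * (p3.x - p1.x) - (p2.x - p1.x) * (p3.z - p1.z);
        c = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
        d = -(a * p1.x + b * p1.y + c * p1.z);
    }
    // Perpendicular distance from a point to the plane. RANSAC counts the
    // point as an inlier when this is below the distance threshold.
    double distanceTo(const Point& p) const {
        return std::fabs(a * p.x + b * p.y + c * p.z + d)
             / std::sqrt(a * a + b * b + c * c);
    }
};
```

RANSAC repeats this for the chosen number of iterations and keeps the plane with the most inliers, which is why raising the iteration count improves the fit at the cost of runtime.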
SFND313_Lidar_Obstacle_Detection/src/quiz/ransac/ $ make clean && make && ./quizRansac
I am correctly detecting all 3 cars and the road underneath.
float clusterTolerance = 1.5; // e.g. less than 1.5 divides the car in two
int minClusterSize = 1;       // weed out single-point outliers (i.e. gravel)
int maxClusterSize = 500;     // my biggest car is 278 points
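To show what those three parameters control, here is a brute-force sketch of Euclidean clustering. The real pipeline uses a KD-tree for the neighbor search; this O(n²) version with my own names is only for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P { float x, y, z; };

// Group points whose chain of neighbors stays within `tolerance` meters,
// then keep only clusters whose size falls in [minSize, maxSize].
// Returns clusters as lists of point indices.
std::vector<std::vector<int>> euclideanCluster(
        const std::vector<P>& pts, float tolerance, int minSize, int maxSize) {
    std::vector<std::vector<int>> clusters;
    std::vector<bool> seen(pts.size(), false);
    for (std::size_t i = 0; i < pts.size(); ++i) {
        if (seen[i]) continue;
        std::vector<int> cluster{static_cast<int>(i)};
        seen[i] = true;
        // The cluster grows as we append neighbors of neighbors.
        for (std::size_t k = 0; k < cluster.size(); ++k) {
            const P& p = pts[cluster[k]];
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (seen[j]) continue;
                float dx = pts[j].x - p.x;
                float dy = pts[j].y - p.y;
                float dz = pts[j].z - p.z;
                if (std::sqrt(dx * dx + dy * dy + dz * dz) <= tolerance) {
                    cluster.push_back(static_cast<int>(j));
                    seen[j] = true;
                }
            }
        }
        int size = static_cast<int>(cluster.size());
        if (size >= minSize && size <= maxSize)
            clusters.push_back(cluster);
    }
    return clusters;
}
```

With a tolerance of 1.5 m, two points of the same car that are 1.6 m apart with no point between them end up in different clusters, which is exactly the "car divided in two" symptom noted in the comment above.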
- Euclidean Cluster Extraction: http://pointclouds.org/documentation/tutorials/cluster_extraction.php
- pcl Namespace Reference: http://docs.pointclouds.org/1.0.0/namespacepcl.html
- Distance between Point and a Line: https://brilliant.org/wiki/dot-product-distance-between-point-and-a-line/
- Extracting indices from a PointCloud: http://pointclouds.org/documentation/tutorials/extract_indices.php#extract-indices
See my blog post: https://ukidlucas.blogspot.com/2019/07/ubuntu-cmake.html
$> sudo apt install libpcl-dev
$> cd ~
$> git clone https://github.com/udacity/SFND_Lidar_Obstacle_Detection.git
$> cd SFND_Lidar_Obstacle_Detection
$> mkdir build && cd build
$> cmake ..
$> make
$> ./environment
http://www.pointclouds.org/downloads/windows.html
-
install homebrew
-
update homebrew
$> brew update
-
add homebrew science tap
$> brew tap brewsci/science
-
view pcl install options
$> brew options pcl
-
install PCL
$> brew install pcl
$ pwd
SFND313_Lidar_Obstacle_Detection/build
build $ cmake ../CMakeLists.txt
Compilation results in ERROR:
-- Checking for module 'glew'
-- No package 'glew' found
CMake Error at /usr/local/share/pcl-1.9/PCLConfig.cmake:58 (message):
simulation is required but glew was not found
build $ cd ..
(turi) uki 13:00 SFND313_Lidar_Obstacle_Detection $ edit CMakeCache.txt
Edit the lines (around line 279) to look similar to:
//GLEW library for OSX
GLEW_GLEW_LIBRARY:STRING=/usr/local/Cellar/glew/2.1.0
//GLEW include dir for OSX
GLEW_INCLUDE_DIR:STRING=/usr/local/Cellar/glew/2.1.0
SFND313_Lidar_Obstacle_Detection $ cd build/
(turi) uki 13:03 build $ cmake ../CMakeLists.txt
...
-- Configuring done
-- Generating done
-- Build files have been written to: /Volumes/DATA/_Drive/_REPOS/SFND313_Lidar_Obstacle_Detection
build $ cd ..
(turi) uki 13:09 SFND313_Lidar_Obstacle_Detection $ make clean && make
http://www.pointclouds.org/downloads/macosx.html
NOTE: very old version