CarND-System-Integration-Project

System Integration Project using ROS to move a car and detect and obey traffic lights.
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

  • Note: obstacle detection is excluded

The implemented modules are:

1. DBW (Drive-By-Wire) Node

  • This node generates the control commands sent to the simulator or the car (throttle, brake, and steering)
  • It receives the required twist commands as well as the current state of the car
  • A PID controller is used for the throttle/brake commands, while the steering command is handled by a yaw controller (both provided in the Udacity repo); see the sketch after this list.
  • Testing (uncomment the logging in line 110 of dbw_node.py):
    • Normally: throttle is 0.4457 and brake and steering are 0 (car is straight and stopped).
    • Manually increase speed: throttle decreases, then becomes 0, and brake increases
    • Manually go left or right --> steering is updated in the opposite direction
    • Update the command in the waypoint updater --> target speed is updated accordingly
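A minimal sketch of that control step, assuming a fixed loop rate: a PID on the velocity error drives the throttle, and the node switches to braking when the car is faster than the target. The class name, gains, and brake scaling below are illustrative; the actual dbw_node.py uses the PID and yaw controller classes provided in the Udacity repo.

```python
# Illustrative DBW control step (names, gains, and brake scaling are placeholders).

class SimplePID(object):
    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0
        self.last_error = 0.0

    def step(self, error, dt):
        # Integrate and differentiate the velocity error, then clamp the output.
        self.int_val += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        return max(self.min, min(self.max, val))


def control_step(pid, target_v, current_v, dt):
    """Return (throttle, brake) for one control cycle."""
    error = target_v - current_v
    throttle = pid.step(error, dt)
    brake = 0.0
    if error < 0.0:
        # Car is faster than the target: release the throttle and brake instead.
        throttle = 0.0
        brake = abs(error) * 200.0  # placeholder torque scaling (N*m)
    return throttle, brake
```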

2. Waypoint Updater Node

  • This node generates the final path to be followed by the control module (waypoints with the intended speed at each waypoint)
  • It receives information about traffic lights, the car position, and a map of waypoints.
  • Logic: if a red light is detected ahead, set the target speed of the next waypoints to 0. Otherwise follow a speed of 10 mph (see the sketch after this list).
  • Testing (uncomment the logging in line 97 of waypoint_updater.py):
    • When moving the car forward manually, the detected waypoint index starts at 272, increases gradually up to the number of waypoints (> 10000), and then wraps around to 0.
    • Position is updated correctly (x increases as we go forward)
    • Waypoint message is generated correctly with the target speed
    • When a red light is reported by the traffic light detector, a log message indicates this and the target speed is set to 0.
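A rough sketch of that logic, assuming the styx_msgs Waypoint message layout used in the capstone; the helper name and the lookahead constant are illustrative, not the exact code in waypoint_updater.py.

```python
# Illustrative waypoint-speed assignment (simplified).

LOOKAHEAD_WPS = 200
TARGET_SPEED_MPS = 10 * 0.44704  # 10 mph expressed in m/s


def build_final_waypoints(base_waypoints, closest_idx, red_light_idx):
    """Slice the next LOOKAHEAD_WPS waypoints and set a target speed on each."""
    final = base_waypoints[closest_idx:closest_idx + LOOKAHEAD_WPS]
    for i, wp in enumerate(final):
        if red_light_idx is not None and closest_idx + i >= red_light_idx:
            # A red light lies ahead: stop at its stop-line waypoint and beyond.
            wp.twist.twist.linear.x = 0.0
        else:
            wp.twist.twist.linear.x = TARGET_SPEED_MPS
    return final
```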

3. Traffic Light Detection Node

  • This node generates information about red traffic lights that are in front of and near the car
  • It receives images from the camera mounted on the car, as well as the car position and a map of waypoints
  • Logic: a state-of-the-art SSD trained by Google plus a simple OpenCV color-thresholding script, as described in https://github.com/cochoa0x1/system-integration. Due to TensorFlow version conflicts, ResNet was used in the final version instead of the faster SSD MobileNet. On modest hardware this led to some interesting problems caused by the lag; optimizations are needed. A thresholding sketch follows this list.
  • Note: when using it on my PC (an extremely slow and light Inspiron), it gave a warning that a GPU is better [it can work with a CPU, but performance will be degraded]
  • Testing (uncomment the logging in line 176 of tl_detector.py):
    • Approach red lights --> lights are detected and a message is published and received by the waypoint updater
    • Move away from them --> no new log messages
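The colour check on a cropped light can be as simple as the sketch below. It assumes a BGR crop of the detected light box; the HSV thresholds and the pixel-count cutoff are illustrative, not the tuned values from the referenced repo.

```python
# Illustrative red-light colour check on a cropped detection (thresholds are placeholders).
import cv2
import numpy as np


def looks_red(light_crop_bgr, min_red_pixels=40):
    """Return True if the cropped traffic-light image contains enough red pixels."""
    hsv = cv2.cvtColor(light_crop_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two HSV ranges.
    lower = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)
    return cv2.countNonZero(mask) >= min_red_pixels
```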

External dependencies

  • Please download the COCO classifier here and unzip it inside ros/src/tl_detector/light_classification (a sketch of loading the frozen graph follows this list).
  • Recommendation: use a GPU for the classifier to get better results
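Loading such a frozen detection graph typically looks like the sketch below, assuming TensorFlow 1.x; the file name is a placeholder for whatever the downloaded archive contains, and the tensor names follow the TensorFlow Object Detection API convention rather than being verified against this specific model.

```python
# Illustrative loading of a frozen TF 1.x detection graph (file name is a placeholder).
import tensorflow as tf

GRAPH_FILE = 'ros/src/tl_detector/light_classification/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_FILE, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)
# Standard Object Detection API tensor names (assumed).
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
scores = detection_graph.get_tensor_by_name('detection_scores:0')
```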

Note

  • This is an individual submission, intended to run only on the simulator (not on Carla)

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

    • Note: I used the VM provided by Udacity (as described here)
  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it (which I did).

  • Follow these instructions to install ROS

  • Dataspeed DBW

    • Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary). Again, the link provided above for the Udacity VM covers these items.
  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Usage

  1. Clone the project repository
git clone https://github.com/3omdawy/CarND-System-Integration-Project
  2. Install python dependencies
cd CarND-System-Integration-Project
pip install -r requirements.txt
  • Note: requirements.txt was adapted as done here, because I re-used the classifier from that repository.
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator (but take care not to delay too long after step 3, otherwise a timeout can occur)

Real world testing (not done by me)

  1. Download the training bag that was recorded on the Udacity self-driving car (a bag demonstrating the correct predictions in autonomous mode can be found here)
  2. Unzip the file
unzip traffic_light_bag_files.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
  4. Launch your project in site mode
cd CarND-System-Integration-Project/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real-life images
