TensorFlow Object Detection API
TensorFlow is available for Raspberry Pi 3 Model B in the repository below.
https://github.com/samjabrahams/tensorflow-on-raspberry-pi/
The instructions below work on the Raspberry Pi 3 Model B but unfortunately not on the Raspberry Pi Zero W. The reason appears to be that the TensorFlow build uses instructions that are not available on the ARMv6Z architecture (see the discussion here: https://github.com/samjabrahams/tensorflow-on-raspberry-pi/issues/43).
sudo apt-get update
sudo apt-get install python-pip python-dev
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.1.0/tensorflow-1.1.0-cp27-none-linux_armv7l.whl
sudo pip install tensorflow-1.1.0-cp27-none-linux_armv7l.whl
sudo pip uninstall mock
sudo pip install mock
As of July 2017, we cannot run TensorFlow on the Raspberry Pi Zero W directly, so this project uses the TensorFlow Object Detection API in the cloud. Running TensorFlow on the Raspberry Pi Zero W became possible later, and I am using it in the new version of the smart security camera project. If interested, check out the instructions here: https://github.com/salekd/rpizero_smart_camera3/wiki/TensorFlow-Object-Detection-API
If you want to run this example in AWS as a lambda function, make sure to enter the commands below in a Linux environment. For example, you can launch an Ubuntu server from the Amazon EC2 console: https://eu-west-1.console.aws.amazon.com/ec2
Select Create a new key pair and save the private key in the ~/.ssh folder.
Connect to the running instance using ssh.
ssh -i ~/.ssh/my_first_EC2_key.pem ubuntu@ec2-52-208-84-63.eu-west-1.compute.amazonaws.com
At the end of this exercise, do not forget to terminate the instance. From Instances click on Actions → Instance State → Terminate.
Alternatively, one can use Docker:
docker run -it ubuntu:16.04 /bin/bash
Start by cloning this repository and TensorFlow Models.
sudo apt-get install git
git clone https://github.com/salekd/rpizero_smart_camera.git
git clone https://github.com/tensorflow/models.git
The TensorFlow Object Detection API uses Protobufs to configure model and training parameters. Before the framework can be used, the Protobuf libraries must be compiled.
sudo apt-get install protobuf-compiler
cd models
protoc object_detection/protos/*.proto --python_out=.
cd ..
This page lists the available pre-trained models for object detection: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md
The models are trained on the COCO dataset http://mscoco.org/
Download the ssd_mobilenet_v1_coco model:
cd rpizero_smart_camera
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz
tar -zxvf ssd_mobilenet_v1_coco_11_06_2017.tar.gz
Create and activate a new virtual environment.
sudo apt-get install python2.7
sudo apt-get install virtualenv
virtualenv --python=python2.7 rpizero_smart_camera_env
source rpizero_smart_camera_env/bin/activate
Install the following packages.
pip install --upgrade pip
pip install tensorflow
pip install Pillow
You should be able to successfully test the object detection locally.
python object_detection_test.py
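For orientation, a minimal sketch of what such a local test can look like is shown below. The image path, the "person" check and the variable names are illustrative assumptions; the actual object_detection_test.py in this repository may differ in the details.

# Sketch of a local detection test; file names and threshold are placeholders.
import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_GRAPH = 'ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb'
PATH_TO_IMAGE = 'test_image.jpg'  # assumption: any local JPEG

# Load the frozen graph.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

# Run inference on a single image.
image = np.expand_dims(np.array(Image.open(PATH_TO_IMAGE)), axis=0)
with tf.Session(graph=detection_graph) as sess:
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': image})

# COCO class 1 corresponds to "person".
person_scores = scores[0][classes[0] == 1]
print('Best person score: %.2f' % (person_scores.max() if person_scores.size else 0.0))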
This repository contains a lambda function based on the TensorFlow Object Detection API. It works in the following way:
- Trigger the AWS Lambda function on a new image appearing in Amazon S3.
- Use a pre-trained model from TensorFlow Object Detection API to recognise humans in the image.
- If a human is recognised, an e-mail notification is sent using Amazon Simple Email Service.
The instructions on how to deploy a lambda function with non-standard dependencies on AWS are inspired by this post: https://medium.com/tooso/serving-tensorflow-predictions-with-python-and-aws-lambda-facb4ab87ddd
The lambda function code is in this file: https://github.com/salekd/rpizero_smart_camera/blob/master/object_detection_lambda.py
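The handler roughly follows the three steps above. Below is a simplified sketch for orientation only; the sender and recipient addresses, the 0.5 score threshold and the run_inference helper are illustrative placeholders, and the real implementation is in object_detection_lambda.py linked above.

# Illustrative sketch only; see object_detection_lambda.py for the real code.
import io
import boto3
import numpy as np
from PIL import Image

def lambda_handler(event, context):
    # The S3 trigger passes the bucket and key of the newly uploaded image.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Download the image from S3.
    s3 = boto3.client('s3')
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    image = np.expand_dims(np.array(Image.open(io.BytesIO(body))), axis=0)

    # Run the frozen SSD MobileNet graph (graph loading omitted here;
    # run_inference is a hypothetical helper).
    boxes, scores, classes, num = run_inference(image)

    # COCO class 1 is "person"; send an e-mail if one is detected with high confidence.
    if ((classes[0] == 1) & (scores[0] > 0.5)).any():
        ses = boto3.client('ses')
        ses.send_email(
            Source='camera@example.com',  # assumption: SES-verified address
            Destination={'ToAddresses': ['me@example.com']},  # placeholder recipient
            Message={'Subject': {'Data': 'Person detected: %s' % key},
                     'Body': {'Text': {'Data': 'A person was detected in %s.' % key}}})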
Copy all non-standard packages into a new folder called vendored.
mkdir vendored
cp -r rpizero_smart_camera_env/lib/python2.7/site-packages/* vendored
Manually create an __init__.py file in the google directory (see this issue: https://stackoverflow.com/questions/31308812/no-module-named-google-protobuf).
touch vendored/google/__init__.py
Add the object_detection package.
cp -r ../models/object_detection vendored
Create a .zip file with the lambda function and the model. The dependencies will go into a separate .zip file. Ideally, we would have a single .zip file with everything, but the AWS Lambda limit of 250 MB is too small. We bypass this by downloading the dependencies from S3 every time the lambda function is triggered (a sketch of this runtime bootstrap follows the commands below). Therefore, it is necessary to upload the vendored.zip file to the rpizero-smart-camera-archive S3 bucket.
sudo apt-get install zip
chmod 644 ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb
zip -r deployment_package.zip ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb object_detection_lambda.py
zip -r vendored.zip vendored --exclude \*.pyc
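At runtime, the lambda function has to make these dependencies importable again. A sketch of that bootstrap, following the approach from the post linked above, is shown below; it assumes vendored.zip sits at the top level of the rpizero-smart-camera-archive bucket and unpacks to /tmp/vendored, /tmp being the only writable location in Lambda. The exact code in this repository may differ.

# Sketch of the bootstrap that pulls the heavy dependencies from S3 at runtime.
import os
import sys
import zipfile
import boto3

VENDORED_BUCKET = 'rpizero-smart-camera-archive'
VENDORED_KEY = 'vendored.zip'          # assumption: object name in the bucket
VENDORED_DIR = '/tmp/vendored'         # /tmp is the only writable path in Lambda

def bootstrap_dependencies():
    # Skip the download when the container is warm and /tmp is still populated.
    if not os.path.isdir(VENDORED_DIR):
        archive = '/tmp/vendored.zip'
        boto3.client('s3').download_file(VENDORED_BUCKET, VENDORED_KEY, archive)
        with zipfile.ZipFile(archive) as z:
            z.extractall('/tmp')
    # Make the unpacked packages importable.
    if VENDORED_DIR not in sys.path:
        sys.path.insert(0, VENDORED_DIR)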
From the AWS Lambda console https://eu-west-1.console.aws.amazon.com/lambda/home create a new lambda function called rpizero-smart-camera-tf with the following settings (a boto3 sketch of the same configuration follows this list):
- For Code entry type choose Upload a .ZIP file.
- Handler: object_detection_lambda.lambda_handler
- Role: Choose an existing role
- Existing role: service-role/rpizero-smart-camera-role
- Add trigger for new objects appearing in the rpizero-smart-camera-upload S3 bucket.
- 512 MB memory and a 2 min timeout.
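For completeness, roughly the same configuration can also be applied programmatically with boto3. The sketch below is illustrative rather than the exact setup used here; the role ARN uses a placeholder account id.

# Optional: create the lambda function with boto3 instead of the console.
import boto3

client = boto3.client('lambda', region_name='eu-west-1')
with open('deployment_package.zip', 'rb') as f:
    client.create_function(
        FunctionName='rpizero-smart-camera-tf',
        Runtime='python2.7',
        Role='arn:aws:iam::123456789012:role/service-role/rpizero-smart-camera-role',  # placeholder account id
        Handler='object_detection_lambda.lambda_handler',
        Code={'ZipFile': f.read()},
        MemorySize=512,
        Timeout=120)
# The S3 trigger on the rpizero-smart-camera-upload bucket still has to be added,
# either in the console as above or via s3.put_bucket_notification_configuration.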