Face tracking using Raspberry Pi & Lego Mindstorms! The Pi Build Hat, Pi Camera, and Lego Mindstorms motors are combined to create a face-tracking robot, with a barebones Flask web interface for monitoring the bot headlessly. In short, the project provides:
- Instructions for a movable Lego robot using Mindstorms motors & sensors.
- Ability to use various face detection algorithms for the robot to follow your face.
- Ability to detect hand gestures.
- Web interface for monitoring the video feed and detections, allowing the robot to operate headlessly.
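At its core, following a face means turning the detected face's horizontal offset in the frame into a motor command. Purely as an illustration (this is not the project's actual control code; the function name, gain, and deadband are assumptions), a proportional controller could look like:

```python
def turn_speed(face_cx: float, frame_w: int, max_speed: int = 100, deadband: float = 0.05) -> int:
    """Map the face centre's horizontal position to a signed motor speed.

    face_cx: x-coordinate of the detected face centre, in pixels.
    frame_w: frame width in pixels.
    Returns 0 inside the deadband so the robot doesn't jitter around centre.
    """
    # Normalise the offset from frame centre to the range [-1, 1].
    offset = (face_cx - frame_w / 2) / (frame_w / 2)
    if abs(offset) < deadband:
        return 0
    return round(max_speed * offset)
```

A face centred in a 640-pixel frame yields speed 0, while a face at the right edge yields full speed in one direction and at the left edge full speed in the other.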
To replicate this project, the following hardware is required:
- A Mindstorms Robot Inventor (51515) set. This set is all that's needed to build the robot; it uses two medium motors and a distance sensor.
Note: other compatible devices can be found here.
- A Raspberry Pi 4B with a 64-bit OS.
Note: A Raspberry Pi 3 variant with a 64-bit OS might also work with reduced performance.
- The Build Hat Power Supply or a similar 8V charger. Optionally, a battery pack could be used.
- A Raspberry Pi-compatible camera. I used a cheap 5MP option like this, but official units should work as well (or even better).
- (Optional) A cooling fan. Due to the heavy processing done on the device, temperatures can get high; a small fan can be placed between the Build Hat and the Pi to remedy this.
To get your hardware ready, follow these steps:
- Build your Lego robot! Instructions are available here.
- Install the latest 64-bit Raspberry Pi OS. If you use a headless device (no monitor connected), don't forget to configure your WiFi and SSH settings so you can access the device over the network. Video.
- Install the Build Hat following the tutorial here. Don't forget to enable the Serial Port and disable the Serial Console for the Hat to function.
- Whilst you're in the settings menu, enable the Camera (or Legacy Camera) under Interface options.
Install dependencies and Python packages:
./setup.sh
pip install -r requirements.txt
Run the provided script to launch the web interface and start the bot:
python run.py
This will launch a web interface which can be accessed to monitor the camera feed at http://<pi_ip>:5000.
Note: Starting the program for the first time after a reboot can take longer due to the Hat's initialization. If something goes wrong the first time, just try again.
Different face detectors can be selected using the `-d` or `--detector` flag:
- `haarcascade`: Face detection using the OpenCV Haar feature-based cascade classifier (link).
- `yunet`: The OpenCV-based YuNet CNN model. More accurate and faster than `haarcascade` (link).
- `mediapipe`: A Google MediaPipe face detector (link).
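To show how such a flag is typically wired up, here is an illustrative `argparse` sketch (this is not necessarily how `run.py` parses its arguments, only the flag names and choices come from above):

```python
import argparse

# Sketch of a CLI accepting the detector flag described above.
parser = argparse.ArgumentParser(description="Face-tracking robot")
parser.add_argument(
    "-d", "--detector",
    choices=["haarcascade", "yunet", "mediapipe"],
    default="haarcascade",
    help="face detection backend to use",
)

# Simulate invoking the script with an explicit detector choice.
args = parser.parse_args(["--detector", "yunet"])
print(args.detector)  # yunet
```

So, for example, `python run.py -d yunet` would select the YuNet model.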
The gesture recogniser is based on the MediaPipe hand landmark detector and a gesture classifier proposed by Kazuhito00.
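The actual classifier used here is Kazuhito00's learned model. Purely to illustrate the idea of classifying a gesture from the 21 (x, y) hand landmarks MediaPipe returns, a much simpler rule-based sketch (the function names, thresholds, and labels below are made up for this example) could count extended fingers:

```python
# MediaPipe hand landmark indices for fingertips and the PIP joints below them.
FINGERTIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
PIP_JOINTS = [6, 10, 14, 18]   # corresponding PIP joints

def count_extended(landmarks: list[tuple[float, float]]) -> int:
    """Count extended fingers from 21 (x, y) landmarks.

    Assumes an upright hand in image coordinates (y grows downwards),
    so an extended fingertip sits above (smaller y than) its PIP joint.
    The thumb is ignored for simplicity.
    """
    return sum(
        1 for tip, pip in zip(FINGERTIPS, PIP_JOINTS)
        if landmarks[tip][1] < landmarks[pip][1]
    )

def classify(landmarks: list[tuple[float, float]]) -> str:
    n = count_extended(landmarks)
    return "open palm" if n >= 4 else "fist" if n == 0 else f"{n} fingers"
```

A learned classifier like Kazuhito00's is far more robust than such hand-written rules, since it handles rotated hands and ambiguous poses.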