I wrote a short article about this project on my website: https://brandonbonifacio.com/projects/live-computer-vision-on-arduino-with-aws-backend/
The project's title succinctly explains what I did, and the full System Diagram below helps illustrate it. I completed this project over my winter break (December 2023 - January 2024) to gain more experience with live ML deployment and cloud engineering.
In this project, I connected an OV7670 camera to an Arduino Nano 33 BLE Sense. I then built a cloud-based backend in AWS, with S3 for data storage and SageMaker for MLOps as the primary components. Using this backend, I trained a MobileNetV2 model in SageMaker, compressed it to 0.6 MB with TensorFlow Lite while maintaining over 98% test accuracy, and deployed it on the Arduino to capture and label images live. I also used S3 for model version control, storing every trained model artifact there.
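As a rough sketch of the compression step, here is how a trained Keras MobileNetV2 can be converted with TensorFlow Lite's post-training integer quantization, which is one common way to get a model of this size well under 1 MB (the file paths and the `calibration_images` list are hypothetical stand-ins for illustration, not my exact pipeline):

```python
import tensorflow as tf

# Load the trained Keras model (path is a hypothetical placeholder).
model = tf.keras.models.load_model("mobilenetv2_saltines.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Full integer quantization needs a representative dataset so the
# converter can calibrate activation ranges. `calibration_images` is a
# hypothetical list of preprocessed training frames (H x W x C arrays).
def representative_data_gen():
    for image in calibration_images[:100]:
        yield [image[None, ...].astype("float32")]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("saltine_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```

To run on the Arduino, the resulting `.tflite` file is typically converted to a C byte array (e.g. with `xxd -i saltine_classifier.tflite`) and compiled into the sketch alongside TensorFlow Lite Micro.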
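On the device itself the live labeling runs through TensorFlow Lite Micro in C++, but the logic mirrors this Python sketch using the standard `tf.lite.Interpreter` (the file name, class order, and frame shape are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="saltine_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> str:
    """Label a single camera frame; class order here is an assumption."""
    # Quantize the float frame to int8 using the model's input
    # scale/zero-point, clipping to the valid int8 range.
    scale, zero_point = input_details["quantization"]
    quantized = np.clip(
        np.round(frame / scale + zero_point), -128, 127
    ).astype(np.int8)
    interpreter.set_tensor(input_details["index"], quantized[None, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    return ["broken", "whole"][int(np.argmax(scores))]
```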
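For the version-control piece, one minimal pattern with boto3 (the bucket and key names here are hypothetical) is to upload each model artifact under a versioned prefix so older models remain retrievable:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/prefix scheme: every trained artifact gets its own
# version folder, so any previous model can be pulled back down later.
bucket = "saltine-models"
version = "v3"
s3.upload_file(
    "saltine_classifier.tflite",
    bucket,
    f"models/{version}/saltine_classifier.tflite",
)
```

Alternatively, S3's built-in bucket versioning can keep the history automatically; an explicit prefix scheme just makes model versions easier to browse.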
The model's purpose was to classify Saltine crackers placed under the camera as either broken or whole; below are some example images of whole and broken Saltines captured with the camera:
Below is a picture of my camera setup with a Saltine (and a hamster):
A huge thanks to the numerous free AWS tutorials on YouTube and on AWS's website, and to the Maker Pro website for its help with the hardware implementation.