The Arduino Uno Q is a "Dual-Soul" powerhouse, and this repository is dedicated to mastering its complexity. It is no longer just about blinking LEDs; it is about orchestrating a Qualcomm QRB2210 (Debian Linux) "Brain" and an STM32U585 (Zephyr RTOS) "Nervous System."
We document the journey of pushing this board beyond its documentation—from low-level bit-banging and Zephyr debugging to deploying Agentic AI frameworks that allow your Arduino to think, search the web, and reason.
If you've ever asked:
How do I bridge a Python-based AI Agent to my Arduino Sketch?
Why does the Zephyr core behave differently from a standard AVR?
How can I run local LLMs on the MPU while the MCU handles real-time sensors?
Why is my SPI/GPIO mapping not matching the datasheet?
…this repo is for you.
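To make the first question concrete up front: on the Uno Q the official tooling (Arduino App Lab) provides its own Linux-to-MCU bridge, but the underlying idea can be illustrated with a plain serial link. Here is a minimal sketch, assuming the MCU side answers a line-based `TEMP?` command on `/dev/ttyACM0` — both the port name and the protocol are assumptions, not the board's actual API:

```python
# Minimal Linux-side bridge: a Python agent asks the MCU for a sensor
# reading over a line-based serial protocol. Port name and command set
# are hypothetical; adapt them to your sketch.
import serial  # pip install pyserial

def query_mcu(command: str, port: str = "/dev/ttyACM0") -> str:
    """Send one command line to the sketch and return its one-line reply."""
    with serial.Serial(port, 115200, timeout=2) as link:
        link.write((command + "\n").encode("ascii"))
        return link.readline().decode("ascii").strip()

if __name__ == "__main__":
    # The sketch is assumed to answer "TEMP?" with e.g. "23.5"
    print("MCU temperature:", query_mcu("TEMP?"))
```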
In this repo you will find how to:
- Display text on an SPI-driven OLED display
- Get a local Ollama model working
- Attach a USB webcam
- Perform Google MediaPipe face detection (a minimal sketch follows this list)
- Interpret the detected face with a Large Language Model
- Run a small version of the OpenClaw agent, using a sketch as a tool
- Drive servos from the device
- …and, in the future, more applications
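To give a flavor of the face-detection item above, here is a minimal sketch using OpenCV and the classic MediaPipe `solutions` API. The camera index 0 is an assumption; the repo's own examples may differ:

```python
# Minimal MediaPipe face detection on frames from a USB webcam.
# Assumes the webcam is video device 0; press 'q' to quit.
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
cap = cv2.VideoCapture(0)

with mp_face.FaceDetection(model_selection=0, min_detection_confidence=0.5) as detector:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, OpenCV delivers BGR
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for det in results.detections or []:
            box = det.location_data.relative_bounding_box
            h, w, _ = frame.shape
            x, y = int(box.xmin * w), int(box.ymin * h)
            cv2.rectangle(frame, (x, y),
                          (x + int(box.width * w), y + int(box.height * h)),
                          (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```

On a headless Uno Q you would log or count detections instead of calling `cv2.imshow`.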
| MediaPipe Face Detection | Low-Code Graphical Flowise-Based LLM |
|---|---|
| VisualAI | FlowiseFaceMesh |
The chapters below show the integration of GenAI models such as qwen3:0.6b and video streaming from a USB webcam.
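A local Ollama instance exposes a small HTTP API on port 11434. Assuming qwen3:0.6b has already been pulled, a single non-streaming generation looks like this (the prompt is just an example):

```python
# One blocking generation against a local Ollama server.
# Assumes `ollama pull qwen3:0.6b` has been run beforehand.
import requests

def ask_llm(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen3:0.6b", "prompt": prompt, "stream": False},
        timeout=300,  # answers on this board take ~90 s (see below), leave headroom
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_llm("Summarize in one sentence what an MCU does."))
```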
The board combines Qualcomm's advanced QRB2210 microprocessor (MPU), running a full Debian Linux OS with upstream support, with the real-time responsiveness of an STM32U585 microcontroller (MCU) running Arduino sketches on top of Zephyr OS. Extending this combination with a Large Language Model yields some interesting project ideas. Because the hardware is not very powerful and the LLM runs locally from a USB-C-attached SD card, the average response time of the LLM is about 90 seconds; we should take this into account for our ideas.
The following ideas are therefore engineered so that the LLM only runs occasionally, while the STM32 MCU handles all real-time tasks:
**Idea 1 – Daily Maintenance Report**

- **Concept:** Use various sensors (thermocouples, accelerometers, current sensors) from the Arduino Lab to monitor a small motor, 3D printer, or household appliance. The MCU collects all the raw, real-time data. Once a day, the MPU sends a summarized batch of this sensor data to the local LLM.
- **LLM Role (90 s wait):** The LLM acts as an expert diagnostic engineer. It analyzes the raw data summary ("Vibration spike at 14:30", "Temperature 5°C above baseline at 16:00") and generates a plain-language daily maintenance report with suggested interventions (a sketch of this daily job follows below).
- **Bricks Used:** Accelerometer, temperature sensor, relays/LEDs for alerts.
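A sketch of the MPU-side daily job, reusing the `ask_llm` helper from the Ollama example above. The log format and the `read_daily_log` helper are hypothetical placeholders:

```python
# Hypothetical daily job on the MPU: summarize the MCU's sensor log
# and ask the local LLM for a maintenance report.
from datetime import date

def read_daily_log() -> list[str]:
    # Placeholder: in a real setup the MCU streams these lines all day
    return [
        "Vibration spike at 14:30",
        "Temperature 5 C above baseline at 16:00",
    ]

def daily_report() -> str:
    events = "\n".join(read_daily_log())
    prompt = (
        "You are an expert diagnostic engineer. Based on today's sensor "
        f"events for a small motor ({date.today()}):\n{events}\n"
        "Write a short plain-language maintenance report with suggested "
        "interventions."
    )
    return ask_llm(prompt)  # defined in the Ollama example above
```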
**Idea 2 – Smart Sentry**

- **Concept:** The MCU monitors a PIR sensor and a magnetic door switch. When an event is triggered (e.g., the door is opened), the MCU records a timestamp and perhaps a low-res image/audio snippet (if a camera/mic is attached to the MPU). The LLM is then queried with the event context.
- **LLM Role (90 s wait):** It acts as the "Sentry Analyst." The LLM takes the context ("Door opened at 02:45 after 3 days of no activity") and generates a response classifying the event and suggesting the next step: "Likely a false alarm, but recommend manual visual inspection. Logged as Severity 2." This classification takes time but is valuable (a non-blocking pattern follows below).
- **Bricks Used:** PIR sensor, magnetic switch, perhaps a small speaker/buzzer for pre-LLM alerts.
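Because an answer takes around 90 seconds, the pre-LLM alert must not wait for the model. One way to keep the fast path responsive is to fire the classification in a background thread; `sound_buzzer` is a hypothetical stand-in for whatever alert the MCU side actually provides:

```python
# Event handler pattern: alert immediately, classify in the background.
import threading

def sound_buzzer() -> None:
    print("BEEP (pre-LLM alert)")  # placeholder for the real MCU command

def classify_event(context: str) -> None:
    verdict = ask_llm(  # defined in the Ollama example above
        "You are the Sentry Analyst. Classify this event and suggest a "
        f"next step with a severity level: {context}"
    )
    print("LLM verdict:", verdict)

def on_door_opened(context: str) -> None:
    sound_buzzer()  # fast path: fires within milliseconds
    # Slow path: the ~90 s LLM call runs without blocking new events
    threading.Thread(target=classify_event, args=(context,), daemon=True).start()

if __name__ == "__main__":
    on_door_opened("Door opened at 02:45 after 3 days of no activity")
    input("LLM is thinking... press Enter to exit.\n")
```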
✔ Embedded developers ✔ Zephyr / RTOS users ✔ Arduino users moving beyond AVR ✔ Debugging & reverse-engineering enthusiasts ✔ Anyone frustrated by undocumented behavior
