Rider-Pi

RIDER.AI: embedded Python logic for a self-balancing robot, with real-time sensor fusion, motor control, and local execution.

This is not an official repository for the Rider-Pi robot; it is a sandbox for practicing robot programming.

Overview

Rider-Pi is a comprehensive robotics platform built on Raspberry Pi, featuring:

  • Autonomous Navigation - Rekonesans (reconnaissance) mode with obstacle avoidance, SLAM mapping, and return-to-home capability
  • Vision System - Real-time object detection, face tracking, and depth estimation
  • Voice Interaction - Voice commands, text-to-speech, and conversational AI integration
  • Expressive Face - Animated LCD display with emotions and reactions
  • Motion Control - Quadruped movement with balance and height control
  • Web Interface - Comprehensive web UI for control and monitoring
  • Modular Architecture - Event-driven design using ZMQ message bus

Key Features

πŸ€– Autonomous Navigation (Rekonesans Epic)

Multi-stage autonomous exploration system:

  • Stage 1: Reactive obstacle avoidance (STOP and AVOID strategies)
  • Stage 2: Position tracking via odometry (IMU + dead reckoning fusion)
  • Stage 3: SLAM mapping with occupancy grid
  • Stage 4: Path planning and return-to-home using the A* algorithm (a minimal sketch follows this list)
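
Stage 4 names A* explicitly. As a rough illustration only (not this repository's implementation), a minimal A* search over a 2-D occupancy grid might look like the sketch below; the grid layout, the uniform step cost, and the function name are assumptions made for the example.

import heapq

def astar(grid, start, goal):
    """Minimal A* over a 2-D occupancy grid.

    grid[y][x] is truthy for occupied cells; start and goal are (x, y).
    Returns a list of (x, y) cells from start to goal, or None.
    """
    def h(a, b):  # Manhattan distance: admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), start)]   # (f-score, cell) min-heap
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:                   # walk parent links back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                continue                   # off the map
            if grid[ny][nx]:
                continue                   # occupied cell
            ng = g_cost[cell] + 1          # uniform step cost of one cell
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                came_from[nxt] = cell
                heapq.heappush(open_set, (ng + h(nxt, goal), nxt))
    return None                            # goal unreachable

Return-to-home is then just astar(grid, current_cell, home_cell) followed by steering along the returned cells.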

πŸ‘οΈ Vision System

  • Face and person detection (HOG, TFLite, SSD); a HOG sketch follows this list
  • Follow-me tracking (face and hand tracking)
  • Obstacle detection with ROI analysis
  • Depth estimation for mapping
  • Edge TPU (Coral) acceleration support
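
The HOG path above can be illustrated with OpenCV's built-in pedestrian detector. This is a generic sketch, not the repository's vision service: the camera index, working resolution, and confidence threshold are all assumptions.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)                   # default camera; the robot may differ
while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, (320, 240))   # downscale: HOG is slow on a Pi
    boxes, weights = hog.detectMultiScale(small, winStride=(8, 8))
    for (x, y, w, h), score in zip(boxes, weights):
        if float(score) > 0.5:              # illustrative threshold, tune per deployment
            cv2.rectangle(small, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("persons", small)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()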

πŸ—£οΈ Voice & Chat

  • Streaming and file-based voice modes
  • ASR (Automatic Speech Recognition)
  • Conversational AI (OpenAI, Google Gemini)
  • TTS (Text-to-Speech) with multiple backends
  • Push-to-Talk (PTT) support
  • Keyword spotting and voice activity detection (a minimal VAD sketch follows)
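
The README does not say which VAD backend is used; the simplest form of voice activity detection is an energy threshold, sketched below with illustrative frame size and threshold values.

import numpy as np

def is_speech(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy-based VAD: a frame counts as speech when its RMS
    energy exceeds a fixed threshold (float32 PCM in [-1, 1])."""
    return float(np.sqrt(np.mean(np.square(frame)))) > threshold

def speech_frames(audio: np.ndarray, sample_rate: int = 16000, frame_ms: int = 30):
    """Yield (time_in_seconds, is_speech) for consecutive fixed-size frames."""
    hop = sample_rate * frame_ms // 1000
    for start in range(0, len(audio) - hop + 1, hop):
        yield start / sample_rate, is_speech(audio[start:start + hop])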

😊 Animated Face

  • LCD display with expressive animations
  • Emotions: happy, sad, neutral, surprised, angry
  • Eye movements and blinking
  • Responsive to events and sentiment (see the sketch below)
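
How the face picks an emotion is not documented here; one plausible shape, assuming a sentiment score in [-1, 1] from the chat layer, is a simple threshold mapping onto the emotions listed above (all thresholds are made up for the example).

def pick_emotion(sentiment: float, surprised: bool = False) -> str:
    """Map a sentiment score in [-1, 1] to one of the face's emotions."""
    if surprised:                 # event-driven override, e.g. a sudden obstacle
        return "surprised"
    if sentiment > 0.3:
        return "happy"
    if sentiment < -0.6:
        return "angry"
    if sentiment < -0.3:
        return "sad"
    return "neutral"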

πŸ•ΉοΈ Web Interface

  • Live camera preview
  • Manual movement controls
  • Balance and height adjustment
  • Vision tracking controls
  • Autonomous navigation dashboard
  • Real-time event logging
  • Multi-language support (Polish, English)

πŸ—οΈ Architecture

  • Modular Design: Independent services communicating via a ZMQ message bus (see the pub/sub sketch after this list)
  • Event-Driven: Publish-subscribe pattern for loose coupling
  • REST API: Unified HTTP API on port 8080
  • Systemd Integration: Managed services for reliability
  • Simulation Mode: Development without hardware
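
The pub/sub pattern over ZMQ can be sketched with pyzmq as below. The endpoint addresses and topic names are placeholders for illustration; the real definitions live in common/bus.py.

import zmq

ctx = zmq.Context.instance()

# Publisher side: a service emitting an obstacle event onto the bus
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:5555")        # broker frontend address (assumed)
pub.send_multipart([b"vision.obstacle", b'{"distance_m": 0.4}'])

# Subscriber side: another service reacting to all vision events
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")        # broker backend address (assumed)
sub.setsockopt(zmq.SUBSCRIBE, b"vision.")  # prefix-match subscription
topic, payload = sub.recv_multipart()

Because subscribers filter by topic prefix, services stay loosely coupled: the navigator can consume vision events without knowing which detector produced them.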

πŸ”§ Service Control (App Logic Core)

  • Single source of truth for feature orchestration in apps/app_logic_core (FeatureManager).
  • Systemd operations are wrapped by common/systemd_ctrl.py.
  • A thin API (/api/logic/feature/<name>) and CLI (scripts/robot_ctl.py start|stop <feature>); a usage sketch follows this list.
  • Web UI calls the API only; business logic stays in the core layer.
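
The endpoint path above comes from this README; the HTTP method, payload, and response shape below are assumptions about the API, shown only to make the call pattern concrete.

import requests

ROBOT = "http://robot-ip:8080"   # substitute the robot's actual address

def set_feature(name: str, action: str) -> dict:
    """Start or stop a feature stack via the App Logic Core API (assumed shape)."""
    resp = requests.post(f"{ROBOT}/api/logic/feature/{name}",
                         json={"action": action}, timeout=5)
    resp.raise_for_status()
    return resp.json()

set_feature("s3_follow_me_face", "start")  # ~ robot_ctl.py start s3_follow_me_face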

Quick Start

Prerequisites

  • Raspberry Pi 4 (or compatible)
  • Python 3.9+
  • XGO quadruped robot (or simulator mode)
  • Camera module (optional for vision features)

Installation

# Clone repository
git clone https://github.com/mpieniak01/Rider-Pi.git
cd Rider-Pi

# Install dependencies
pip3 install -r requirements-dev.txt

# Initialize configuration files from templates
make config-init

# Configure environment (copy and edit)
cp .env.example .env

# Customize configuration files as needed
nano config/vision.toml      # Vision system paths
nano config/voice_web.toml   # Voice model paths

Running Services

# Start core services
sudo systemctl start rider-broker      # Message bus
sudo systemctl start rider-api         # REST API server

# Start optional services
sudo systemctl start rider-vision      # Vision system
sudo systemctl start rider-odometry    # Position tracking
sudo systemctl start rider-mapper      # SLAM mapping
sudo systemctl start rider-voice       # Voice interaction

# Start/stop feature stacks via CLI (App Logic Core)
sudo python3 scripts/robot_ctl.py start s3_follow_me_face
sudo python3 scripts/robot_ctl.py stop s4_recon

# Check current scenario state snapshot
sudo python3 scripts/robot_ctl.py status

Web Interface

Open a browser at http://<robot-ip>:8080/control.html (replace <robot-ip> with the robot's address).

Project Structure

Rider-Pi/
├── apps/               # Application modules
│   ├── camera/         # Camera capture
│   ├── chat/           # Chat and NLU
│   ├── mapper/         # SLAM mapping (Stage 3)
│   ├── motion/         # Movement control
│   ├── navigator/      # Autonomous navigation (Stages 1 & 4)
│   ├── odometry/       # Position tracking (Stage 2)
│   ├── ui/             # Face animations
│   ├── vision/         # Vision and detection
│   ├── voice/          # Voice processing
│   └── app_logic_core/ # FeatureManager façade (App Logic Core)
├── services/           # System services
│   ├── api_server.py   # REST API
│   ├── broker.py       # ZMQ message broker
│   ├── api_core/       # API endpoints
│   └── core/           # Core business logic (FeatureManager implementation)
├── common/             # Shared utilities
│   ├── bus.py          # Message bus definitions
│   └── systemd_ctrl.py # Systemd wrapper (start/stop/status)
├── config/             # Configuration files
├── docs/               # Documentation
│   ├── api/            # API documentation
│   ├── apps/           # Application docs
│   ├── modules/        # Module documentation
│   └── ui/             # Web UI documentation
├── drivers/            # Hardware drivers
├── scripts/            # Operational scripts
├── systemd/            # Service definitions
│   └── legacy/         # Deprecated/legacy units (manual install)
├── tests/              # Test suite
└── web/                # Web interfaces

Documentation

Detailed documentation lives in the docs/ directory (see the api/, apps/, modules/, and ui/ subfolders above).

Development

Running Tests

# Run all tests
pytest tests/ -v

# Run specific module tests
pytest tests/test_navigator.py -v
pytest tests/test_odometry.py -v
pytest tests/test_mapper.py -v

# Skip audio tests that require ALSA
ALSA_SKIP_LSOF=1 pytest tests/ -v

Linting

# Run ruff linter
ruff check --fix

# Format code
ruff format

Simulation Mode

Run without hardware using the simulator:

export RIDER_SIMULATOR=1
python3 -m apps.navigator.main
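
A hypothetical pattern for honoring the flag inside a driver (the real drivers/ modules may structure this differently):

import os

SIMULATOR = os.environ.get("RIDER_SIMULATOR") == "1"

def send_motion(vx: float, yaw_rate: float) -> None:
    """Forward a motion command, or just log it in simulator mode."""
    if SIMULATOR:
        # No hardware attached: log the command instead of sending it
        print(f"[sim] vx={vx:.2f} m/s, yaw_rate={yaw_rate:.2f} rad/s")
    else:
        ...  # the real XGO driver call would go here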

Contributing

This is a personal learning project. Contributions, suggestions, and feedback are welcome!

Acknowledgments

  • XGO Robot platform
  • OpenCV, TensorFlow Lite for vision
  • OpenAI, Google Gemini for AI features
  • ZMQ for messaging infrastructure

πŸ“ License

Distributed under the MIT License. See LICENSE for more information.

Copyright (c) 2025-2026 Maciej Pieniak
