This is not an official repository for the Rider-Pi robot.
It is a sandbox for practicing robot programming.
Rider-Pi is a comprehensive robotics platform built on Raspberry Pi, featuring:
- Autonomous Navigation - Rekonesans (reconnaissance) mode with obstacle avoidance, SLAM mapping, and return-to-home capability
- Vision System - Real-time object detection, face tracking, and depth estimation
- Voice Interaction - Voice commands, text-to-speech, and conversational AI integration
- Expressive Face - Animated LCD display with emotions and reactions
- Motion Control - Quadruped movement with balance and height control
- Web Interface - Comprehensive web UI for control and monitoring
- Modular Architecture - Event-driven design using ZMQ message bus
Multi-stage autonomous exploration system:
- Stage 1: Reactive obstacle avoidance (STOP and AVOID strategies)
- Stage 2: Position tracking via odometry (IMU + dead-reckoning fusion; see the sketch after this list)
- Stage 3: SLAM mapping with occupancy grid
- Stage 4: Path planning and return-to-home using A* algorithm
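As a rough illustration of the Stage 2 building block, the sketch below integrates a 2D pose by dead reckoning from a forward speed and an IMU yaw rate. The function and variable names are hypothetical; the actual fusion in apps/odometry may differ.

```python
import math

def integrate_pose(x, y, heading, v, omega, dt):
    """Advance a 2D pose one step by dead reckoning.

    v     -- forward speed (m/s), e.g. from gait/wheel estimates
    omega -- yaw rate (rad/s), e.g. from the IMU gyroscope
    dt    -- time step (s)
    """
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# Example: creep forward at 0.2 m/s while turning gently for 1 s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_pose(*pose, v=0.2, omega=0.1, dt=0.1)
print(pose)
```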
- Face and person detection (HOG, TFLite, SSD; a HOG sketch follows this list)
- Follow-me tracking (face and hand tracking)
- Obstacle detection with ROI analysis
- Depth estimation for mapping
- Edge TPU (Coral) acceleration support
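Of the detection backends listed above, the classic HOG people detector ships with OpenCV and can be exercised in a few lines. This is a standalone sketch, not the repository's actual detector wiring:

```python
import cv2

# OpenCV's built-in HOG + linear-SVM people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # default camera
ret, frame = cap.read()
if ret:
    # Returns bounding boxes and confidence weights
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("people.jpg", frame)
cap.release()
```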
- Streaming and file-based voice modes
- ASR (Automatic Speech Recognition)
- Conversational AI (OpenAI, Google Gemini)
- TTS (Text-to-Speech) with multiple backends
- Push-to-Talk (PTT) support
- Keyword spotting and voice activity detection
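Voice activity detection can be as simple as an energy threshold. The sketch below shows that baseline form for orientation only; the repository's VAD and keyword-spotting backends may be more sophisticated:

```python
import numpy as np

def is_speech(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy-based VAD on one frame of mono PCM samples
    normalized to [-1.0, 1.0]."""
    rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2))
    return rms > threshold

# Example: a silent frame vs. a loud sine burst
quiet = np.zeros(1600)
loud = 0.5 * np.sin(np.linspace(0.0, 100.0, 1600))
print(is_speech(quiet), is_speech(loud))  # False True
```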
- LCD display with expressive animations
- Emotions: happy, sad, neutral, surprised, angry
- Eye movements and blinking
- Responsive to events and sentiment
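For illustration, here is a hypothetical sketch of how another module might ask the face to display an emotion over the message bus. The topic name and broker endpoint are assumptions, not the repository's actual contract (see common/bus.py for the real definitions):

```python
import json
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:5555")  # assumed broker ingress port

# Assumed topic and payload shape for an emotion event
event = {"emotion": "happy", "source": "chat"}
pub.send_multipart([b"ui.face.emotion", json.dumps(event).encode()])
```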
- Live camera preview
- Manual movement controls
- Balance and height adjustment
- Vision tracking controls
- Autonomous navigation dashboard
- Real-time event logging
- Multi-language support (Polish, English)
- Modular Design: Independent services communicating via ZMQ message bus
- Event-Driven: Publish-subscribe pattern for loose coupling (see the sketch after this list)
- REST API: Unified HTTP API on port 8080
- Systemd Integration: Managed services for reliability
- Simulation Mode: Development without hardware
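The subscribe side of the publish-subscribe pattern is equally small in pyzmq. A minimal sketch, again with an assumed endpoint and topic prefix:

```python
import json
import zmq

ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")    # assumed broker egress port
sub.setsockopt(zmq.SUBSCRIBE, b"ui.")  # all UI-related topics

while True:
    topic, payload = sub.recv_multipart()
    print(topic.decode(), json.loads(payload))
```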
- Single source of truth for feature orchestration in `apps/app_logic_core` (FeatureManager).
- Systemd operations are wrapped by `common/systemd_ctrl.py`.
- Thin API `/api/logic/feature/<name>` and CLI `scripts/robot_ctl.py start|stop <feature>`.
- Web UI calls the API only; business logic stays in the core layer.
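As a hedged sketch of that layering, the snippet below calls the feature endpoint from Python with only the standard library. The HTTP method and empty payload are assumptions; check docs/api/ for the actual contract:

```python
import urllib.request

# Assumed: the feature endpoint accepts a bare POST to start a feature
url = "http://robot-ip:8080/api/logic/feature/s3_follow_me_face"
req = urllib.request.Request(url, data=b"", method="POST")
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, resp.read().decode())
```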
- Raspberry Pi 4 (or compatible)
- Python 3.9+
- XGO quadruped robot (or simulator mode)
- Camera module (optional for vision features)

```bash
# Clone repository
git clone https://github.com/mpieniak01/Rider-Pi.git
cd Rider-Pi
# Install dependencies
pip3 install -r requirements-dev.txt
# Initialize configuration files from templates
make config-init
# Configure environment (copy and edit)
cp .env.example .env
# Customize configuration files as needed
nano config/vision.toml # Vision system paths
nano config/voice_web.toml # Voice model paths
```

```bash
# Start core services
sudo systemctl start rider-broker # Message bus
sudo systemctl start rider-api # REST API server
# Start optional services
sudo systemctl start rider-vision # Vision system
sudo systemctl start rider-odometry # Position tracking
sudo systemctl start rider-mapper # SLAM mapping
sudo systemctl start rider-voice # Voice interaction
# Start/stop feature stacks via CLI (App Logic Core)
sudo python3 scripts/robot_ctl.py start s3_follow_me_face
sudo python3 scripts/robot_ctl.py stop s4_recon
# Check current scenario state snapshot
sudo python3 scripts/robot_ctl.py status
```

Then open a browser at http://robot-ip:8080/control.html.
```
Rider-Pi/
├── apps/                 # Application modules
│   ├── camera/           # Camera capture
│   ├── chat/             # Chat and NLU
│   ├── mapper/           # SLAM mapping (Stage 3)
│   ├── motion/           # Movement control
│   ├── navigator/        # Autonomous navigation (Stages 1 & 4)
│   ├── odometry/         # Position tracking (Stage 2)
│   ├── ui/               # Face animations
│   ├── vision/           # Vision and detection
│   ├── voice/            # Voice processing
│   └── app_logic_core/   # FeatureManager façade (App Logic Core)
├── services/             # System services
│   ├── api_server.py     # REST API
│   ├── broker.py         # ZMQ message broker
│   ├── api_core/         # API endpoints
│   └── core/             # Core business logic (FeatureManager implementation)
├── common/               # Shared utilities
│   ├── bus.py            # Message bus definitions
│   └── systemd_ctrl.py   # Systemd wrapper (start/stop/status)
├── config/               # Configuration files
├── docs/                 # Documentation
│   ├── api/              # API documentation
│   ├── apps/             # Application docs
│   ├── modules/          # Module documentation
│   └── ui/               # Web UI documentation
├── drivers/              # Hardware drivers
├── scripts/              # Operational scripts
├── systemd/              # Service definitions
│   └── legacy/           # Deprecated/legacy units (manual install)
├── tests/                # Test suite
└── web/                  # Web interfaces
```
- Documentation Index - Complete documentation index
- Architecture - System architecture and design
- Project Vision - Project goals and roadmap
- Configuration - Configuration management with TOML templates
- API Documentation - REST API endpoints
- App Logic Core - Feature orchestration and FeatureManager
- Application Modules - Detailed module documentation
- Web UI Documentation - Web interface guides
- Systemd Services - Service mappings
- Scripts - Operational and development scripts

```bash
# Run all tests
pytest tests/ -v
# Run specific module tests
pytest tests/test_navigator.py -v
pytest tests/test_odometry.py -v
pytest tests/test_mapper.py -v
# Skip audio tests (requires ALSA)
ALSA_SKIP_LSOF=1 pytest tests/ -v
```

```bash
# Run ruff linter
ruff check --fix
# Format code
ruff format
```

Run without hardware using the simulator:

```bash
export RIDER_SIMULATOR=1
python3 -m apps.navigator.main
```

This is a personal learning project. Contributions, suggestions, and feedback are welcome!
- XGO Robot platform
- OpenCV, TensorFlow Lite for vision
- OpenAI, Google Gemini for AI features
- ZMQ for messaging infrastructure
Distributed under the MIT License. See LICENSE for more information.
Copyright (c) 2025-2026 Maciej Pieniak
