PC-side client infrastructure for the Rider-PI robot, providing:
- REST API adapter for consuming Rider-PI endpoints
- ZMQ subscriber for real-time data streams
- Local SQLite cache for buffering data
- FastAPI web server replicating the Rider-PI UI
- AI Provider Layer with real ML models (Voice, Vision, Text)
- Production-ready deployment with Docker and CI/CD
This project now includes:
- ✅ Real AI Models: Whisper ASR, Piper TTS, YOLOv8 Vision, Ollama LLM
- ✅ Docker Deployment: Complete stack with Redis, Prometheus, Grafana
- ✅ CI/CD Pipeline: Automated testing, security scanning, Docker builds
- ✅ Health Probes: Kubernetes-ready liveness and readiness endpoints
- ✅ Automatic Fallback: Mock mode when models unavailable
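The automatic-fallback behaviour can be sketched roughly as below. The `MockASR`, `WhisperASR`, and `load_asr_provider` names are illustrative, not the project's actual API; only the "use the real model if it loads, otherwise mock" pattern is taken from the feature list above.

```python
class MockASR:
    """Stand-in transcriber used when the real model is unavailable."""

    def transcribe(self, audio: bytes) -> str:
        return "[mock transcript]"


class WhisperASR:
    """Thin wrapper around a loaded Whisper model (illustrative only)."""

    def __init__(self, model):
        self.model = model

    def transcribe(self, audio) -> str:
        # openai-whisper returns a dict with a "text" field
        return self.model.transcribe(audio)["text"]


def load_asr_provider():
    """Load the real model if possible, otherwise fall back to mock mode."""
    try:
        import whisper  # heavyweight optional dependency
        return WhisperASR(whisper.load_model("base"))
    except Exception:
        # Missing package or model files -> degrade gracefully to mock mode
        return MockASR()
```

Calling code only ever sees an object with a `transcribe` method, so the rest of the pipeline does not need to know whether it is running against the real model or the mock.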
```bash
# Create .env file
echo "RIDER_PI_HOST=192.168.1.100" > .env

# Start the full stack
docker-compose up -d

# Access services
# Rider-PC UI: http://localhost:8000
# Prometheus: http://localhost:9090
# Grafana: http://localhost:3000
```

```bash
# Install dependencies
pip install -r requirements.txt

# Run in mock mode (no AI models required)
python -m pc_client.main
```

- Python: develop with Python 3.11 (CI target) while keeping code compatible with Rider-PI’s Python 3.9.
- Tooling: install lightweight dev deps and hooks:

  ```bash
  pip install -r requirements-ci.txt
  pre-commit install
  ```

- Checks: use the Make targets to stay aligned with CI:
  - `make lint` → `ruff check .`
  - `make format` → `ruff format .`
  - `make test` → pytest suite with async/timeouts configured like GitHub Actions.
- CI split: every push to `main` goes through the Quick Checks workflow (ruff + short unit tests). The full pipeline (`unit-tests`, `e2e-tests`, `css-ui-audit`, Copilot setup) runs on pull requests.
- Copilot / agent flow: run `./config/agent/run_tests.sh` to recreate the environment used by the GitHub Copilot coding agent. The script installs dependencies from `config/agent/constraints.txt` and runs pytest.
- Detailed PR checklist: `docs/CONTRIBUTING.md`.
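The `.env` values from the quick start (`RIDER_PI_HOST`, `RIDER_PI_PORT`) have to reach the client somehow; a minimal sketch of reading them from the environment follows. The defaults shown here are assumptions for illustration, not the project's actual defaults.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class RiderPiConfig:
    """Connection settings for the Rider-PI device."""
    host: str
    port: int


def load_config() -> RiderPiConfig:
    # Defaults below are placeholders; adjust to your deployment.
    return RiderPiConfig(
        host=os.environ.get("RIDER_PI_HOST", "127.0.0.1"),
        port=int(os.environ.get("RIDER_PI_PORT", "8000")),
    )
```

A frozen dataclass keeps the settings immutable once loaded, so a typo'd host cannot be silently patched mid-run.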
- Rider-PI Integration:
  - UI (`web/control.html`) now mirrors Rider-Pi, including “AI Mode” and “Provider Control” cards.
  - Backend proxies `/api/system/ai-mode` and `/api/providers/*` to Rider-Pi via the `RestAdapter`, caching results for offline development.
  - To test locally with the real device, update `.env` (`RIDER_PI_HOST`, `RIDER_PI_PORT`) and run `make start`, then open `http://localhost:8000/web/control.html`.
  - Vision offload is now wired end-to-end: set `ENABLE_PROVIDERS=true`, `ENABLE_TASK_QUEUE=true`, `ENABLE_VISION_OFFLOAD=true`, and point `TELEMETRY_ZMQ_HOST` at Rider-Pi so the PC publishes `vision.obstacle.enhanced` after processing `vision.frame.offload`.
  - Voice offload mirrors the same flow: `ENABLE_VOICE_OFFLOAD=true` lets Rider-PC consume `voice.asr.request`/`voice.tts.request`, run Whisper/Piper (or mock), and publish `voice.asr.result`/`voice.tts.chunk` back to Rider-Pi for immediate playback.
  - Text/LLM integration exposes `/providers/text/generate` plus a capability handshake (`GET /providers/capabilities`) so Rider-PI knows which domains/versions Rider-PC supports before switching over.
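The vision and voice offload flows above boil down to topic-based routing: consume a request topic, run the (possibly mock) provider, and publish the matching result topic. A transport-free sketch of that dispatch follows; the topic names come from the list above, but the handler bodies are mocks (the real services invoke YOLOv8/Whisper/Piper and carry the messages over ZMQ).

```python
from typing import Callable, Dict, Tuple

# Maps an incoming offload topic to (result topic, handler).
# Handlers here are mocks; the real ones call the AI providers.
ROUTES: Dict[str, Tuple[str, Callable[[bytes], bytes]]] = {
    "vision.frame.offload": ("vision.obstacle.enhanced",
                             lambda frame: b'{"obstacles": []}'),
    "voice.asr.request":    ("voice.asr.result",
                             lambda audio: b'{"text": "[mock transcript]"}'),
    "voice.tts.request":    ("voice.tts.chunk",
                             lambda text: b"\x00\x00"),  # mock audio chunk
}


def handle(topic: str, payload: bytes) -> Tuple[str, bytes]:
    """Dispatch one offload message; return (result_topic, result_payload)."""
    result_topic, handler = ROUTES[topic]
    return result_topic, handler(payload)
```

Keeping the routing table separate from the transport makes the request/result pairing unit-testable without a live ZMQ connection to the robot.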
- Activate the virtualenv before checks: `source .venv/bin/activate || true`.
- Type check: `mypy` (expected: `Success: no issues found in 75 source files`).
- Quick test run: `pytest -q`.
📚 Full Documentation - Complete documentation and guides
- English UI coverage: all `/web/*.html` screens now rely on `web/assets/i18n.js` keys, the Polish strings have English fallbacks, and the chat/chat-pc/control/system scripts no longer declare duplicate helpers.
- Verification: run `node scripts/check_i18n.mjs` to ensure every key includes an English value, and `pytest` (inside `.venv`) covers the back-end regressions tied to UI rendering.
- Notes: run `?lang=en` against `view`, `control`, `navigation`, `system`, `models`, `project`, `assistant`, `chat`, `chat-pc`, `google_home`, `mode`, and `providers` to keep translations in sync.
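The coverage rule that `node scripts/check_i18n.mjs` enforces ("every key includes an English value") fits in a few lines. This sketch assumes a nested `{key: {lang: text}}` shape, which may differ from the actual `i18n.js` layout:

```python
def missing_english(translations: dict) -> list:
    """Return the keys that lack a non-empty English ('en') value."""
    return [key for key, langs in translations.items()
            if not langs.get("en")]


# Hypothetical keys for illustration: one entry is missing its English fallback.
i18n = {
    "control.title": {"pl": "Sterowanie", "en": "Control"},
    "system.reboot": {"pl": "Uruchom ponownie"},
}
# missing_english(i18n) == ["system.reboot"]
```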
- Quick Start Guide - Get started quickly
- AI Model Configuration - Set up real AI models
- Architecture - System architecture overview
- Configuration Hub - Central configuration guide
- API Documentation - REST API reference
- Replication Notes - Notes for replicating the project
- AI Model Configuration - Whisper, Piper, YOLOv8, Ollama setup
- Security Configuration - WireGuard VPN, mTLS setup
- Task Queue Configuration - Redis, RabbitMQ setup
- Monitoring Configuration - Prometheus, Grafana setup
- PC Offload Integration - Enabling AI mode / provider parity between Rider-Pi and Rider-PC
- Service Management - Operations, monitoring, troubleshooting
- Future Work - Planned improvements and development
Distributed under the MIT License. See LICENSE for more information.
Copyright (c) 2025-2026 Maciej Pieniak