
Rider-PC - FastAPI control server & AI provider that replicates the robot's nervous system, offloading heavy ML models (Vision, Voice) to a dedicated unit.



Rider-PC Client

PC-side client infrastructure for the Rider-PI robot, providing:

  • REST API adapter for consuming Rider-PI endpoints
  • ZMQ subscriber for real-time data streams
  • Local SQLite cache for buffering data
  • FastAPI web server replicating the Rider-PI UI
  • AI Provider Layer with real ML models (Voice, Vision, Text)
  • Production-ready deployment with Docker and CI/CD
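As a rough sketch of how the ZMQ subscriber and local SQLite cache listed above could fit together (the table layout, function names, and topic strings are illustrative assumptions, not the actual `pc_client` schema):

```python
import json
import sqlite3
import time

# Illustrative schema -- the real pc_client cache layout may differ.
def open_cache(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages ("
        " topic TEXT, payload TEXT, received_at REAL)"
    )
    return conn

def buffer_message(conn, topic, payload):
    """Store one stream message for later replay or offline use."""
    conn.execute(
        "INSERT INTO messages VALUES (?, ?, ?)",
        (topic, json.dumps(payload), time.time()),
    )
    conn.commit()

def latest(conn, topic):
    """Return the most recently buffered payload for a topic, or None."""
    row = conn.execute(
        "SELECT payload FROM messages WHERE topic = ?"
        " ORDER BY rowid DESC LIMIT 1",
        (topic,),
    ).fetchone()
    return json.loads(row[0]) if row else None
```

In this pattern, each message arriving on the ZMQ subscriber is written through `buffer_message`, and the web UI can serve `latest()` even when the robot is briefly unreachable.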

🎉 Phase 4 Complete: Real AI Models & Production Deployment

This project now includes:

  • Real AI Models: Whisper ASR, Piper TTS, YOLOv8 Vision, Ollama LLM
  • Docker Deployment: Complete stack with Redis, Prometheus, Grafana
  • CI/CD Pipeline: Automated testing, security scanning, Docker builds
  • Health Probes: Kubernetes-ready liveness and readiness endpoints
  • Automatic Fallback: Mock mode when models unavailable
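The automatic-fallback behaviour can be illustrated roughly as follows; the provider class names and the `transcribe` signature are hypothetical, not the actual `pc_client` API:

```python
class MockASR:
    """Stand-in used when the real Whisper model is unavailable."""

    def transcribe(self, audio_bytes):
        return {"text": "", "mock": True}

def load_asr_provider():
    """Prefer the real model; fall back to the mock automatically."""
    try:
        import whisper  # heavyweight optional dependency
        model = whisper.load_model("base")
    except Exception:
        # Import or model load failed -> run in mock mode.
        return MockASR()

    class WhisperASR:
        def transcribe(self, audio_bytes):
            # Real decoding elided; the result shape mirrors the mock.
            return {"text": model.transcribe(audio_bytes)["text"], "mock": False}

    return WhisperASR()
```

Because both implementations expose the same `transcribe` shape, callers never need to know whether the real model loaded.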

Quick Start

Option 1: Docker (Recommended)

# Create .env file
echo "RIDER_PI_HOST=192.168.1.100" > .env

# Start the full stack
docker-compose up -d

# Access services
# Rider-PC UI: http://localhost:8000
# Prometheus: http://localhost:9090
# Grafana: http://localhost:3000
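For orientation, a compose file matching the stack above might look roughly like this; the image tags, ports, and build context are assumptions, not the repository's actual docker-compose.yml:

```yaml
services:
  rider-pc:
    build: .            # assumed: image built from this repository
    env_file: .env      # provides RIDER_PI_HOST etc.
    ports: ["8000:8000"]
  redis:
    image: redis:7-alpine
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```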

Option 2: Local Development

# Install dependencies
pip install -r requirements.txt

# Run in mock mode (no AI models required)
python -m pc_client.main

Development Workflow

  • Python: develop with Python 3.11 (CI target) while keeping code compatible with Rider-PI’s Python 3.9.
  • Tooling: install lightweight dev deps and hooks:
    pip install -r requirements-ci.txt
    pre-commit install
  • Checks: use the Make targets to stay aligned with CI:
    • make lint → ruff check .
    • make format → ruff format .
    • make test → pytest suite with async/timeouts configured like GitHub Actions.
  • CI split: every push to main goes through the Quick Checks workflow (ruff + short unit tests). The full pipeline (unit-tests, e2e-tests, css-ui-audit, Copilot setup) is run on pull requests.
  • Copilot / agent flow: run ./config/agent/run_tests.sh to recreate the environment used by the GitHub Copilot coding agent.
    • The script installs dependencies from config/agent/constraints.txt and runs pytest.
    • Detailed PR checklist: docs/CONTRIBUTING.md.
  • Rider-PI Integration:
    • UI (web/control.html) now mirrors Rider-Pi, including “AI Mode” and “Provider Control” cards.
    • Backend proxies /api/system/ai-mode and /api/providers/* to Rider-Pi via the RestAdapter, caching results for offline development.
    • To test locally with the real device, update .env (RIDER_PI_HOST, RIDER_PI_PORT) and run make start, then open http://localhost:8000/web/control.html.
    • Vision offload is now wired end-to-end: set ENABLE_PROVIDERS=true, ENABLE_TASK_QUEUE=true, ENABLE_VISION_OFFLOAD=true, and point TELEMETRY_ZMQ_HOST at Rider-Pi so the PC publishes vision.obstacle.enhanced after processing vision.frame.offload.
    • Voice offload mirrors the same flow: ENABLE_VOICE_OFFLOAD=true lets Rider-PC consume voice.asr.request / voice.tts.request, run Whisper/Piper (or mock), and publish voice.asr.result / voice.tts.chunk back to Rider-Pi for immediate playback.
    • Text/LLM integration exposes /providers/text/generate plus a capability handshake (GET /providers/capabilities) so Rider-PI knows which domains/versions Rider-PC supports before switching over.
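The vision-offload loop described above can be sketched as a pure message transform. The topic names match the bullets; the payload fields and the detector stub are illustrative placeholders, not the real bus schema:

```python
def mock_detect(frame_bytes):
    # Placeholder for the YOLOv8 inference step.
    return [] if not frame_bytes else [{"label": "obstacle", "confidence": 0.9}]

def handle_vision_offload(message):
    """Turn a vision.frame.offload payload into a vision.obstacle.enhanced one.

    `obstacles` stands in for real YOLOv8 output; the actual payload
    fields on the ZMQ bus may differ.
    """
    detections = mock_detect(message.get("frame", b""))
    return {
        "topic": "vision.obstacle.enhanced",
        "frame_id": message.get("frame_id"),
        "obstacles": detections,
    }
```

In the real flow, the PC subscribes to vision.frame.offload on TELEMETRY_ZMQ_HOST, runs inference, and publishes the enhanced result back for the robot to act on.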

Tests & Maintenance

  • Activate the virtualenv before checks: source .venv/bin/activate || true.
  • Type check: mypy (expected: Success: no issues found in 75 source files).
  • Quick test run: pytest -q.

Documentation

📚 Full Documentation - Complete documentation and guides

Localization & Translation Status

  • English UI coverage: all /web/*.html screens now rely on web/assets/i18n.js keys, the Polish strings have English fallbacks, and the chat/chat-pc/control/system scripts no longer declare duplicate helpers.
  • Verification: run node scripts/check_i18n.mjs to ensure every key includes an English value, and pytest (inside .venv) covers the back-end regressions tied to UI rendering.
  • Notes: open each screen with ?lang=en (view, control, navigation, system, models, project, assistant, chat, chat-pc, google_home, mode, and providers) to keep translations in sync.
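The English-fallback rule described above can be sketched like this; the keys and strings are invented, and the real catalogue lives in web/assets/i18n.js rather than in Python:

```python
# Illustrative catalogue: each key maps language codes to strings.
CATALOG = {
    "control.title": {"en": "Control", "pl": "Sterowanie"},
    "system.reboot": {"en": "Reboot"},  # Polish entry missing
}

def t(key, lang="en"):
    """Look up a UI string, falling back to English, then to the key itself."""
    entry = CATALOG.get(key, {})
    return entry.get(lang) or entry.get("en") or key
```

This is the invariant `node scripts/check_i18n.mjs` enforces: every key must carry an English value so the fallback never dead-ends.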

Quick Links

Configuration Guides

Operations

See Also

📝 License

Distributed under the MIT License. See LICENSE for more information.

Copyright (c) 2025-2026 Maciej Pieniak
