Stateless task processor with server-driven bootstrap.
Part of the runqy distributed task queue system.
- Server-driven bootstrap — Workers receive all config (Redis, code repo, queue routing) from the central server
- Automatic code deployment — Git clone, virtualenv creation, pip install on startup
- Two execution modes — Long-running process for ML inference, or one-shot per task
- Multi-queue support — Process tasks from multiple queues with priority weighting
- Health monitoring — Heartbeat and process health tracked in Redis
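Priority weighting across queues can be sketched with a smooth weighted round-robin picker. This is a common scheduling technique shown for illustration only; runqy's actual selection strategy may differ, and the queue names and weights here are hypothetical:

```python
# Smooth weighted round-robin over queues -- an illustrative
# selection strategy, not necessarily runqy's actual algorithm.
def weighted_rr(weights: dict[str, int]):
    """Yield queue names so each appears in proportion to its weight."""
    current = {q: 0 for q in weights}
    total = sum(weights.values())
    while True:
        # Each round, every queue gains its weight; the leader is
        # picked and pays back the total, spreading picks evenly.
        for q in current:
            current[q] += weights[q]
        best = max(current, key=current.get)
        current[best] -= total
        yield best

# Hypothetical queues: "inference" gets 3 of every 4 picks.
picker = weighted_rr({"inference": 3, "default": 1})
order = [next(picker) for _ in range(8)]
```

The "smooth" variant interleaves queues (inference, inference, default, inference, ...) rather than draining the heavy queue in bursts, which keeps low-priority queues from starving between rounds.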
Push code to GitHub. Workers deploy themselves. No SSH. No Docker builds. No CI pipelines to maintain.
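The deployment steps above (git clone, virtualenv creation, pip install) can be sketched roughly as follows. This is a simplified illustration, not the worker's actual implementation: error handling, caching, and repository auth are omitted, and all paths are placeholders.

```python
import subprocess
import venv
from pathlib import Path

def bootstrap(repo_url: str, workdir: Path) -> Path:
    """Clone the task code, create a virtualenv, and install deps.

    A rough sketch of what a worker might do on startup; the real
    runqy-worker logic is more involved.
    """
    src = workdir / "src"
    if not src.exists():
        # Fetch the task code from the configured repository.
        subprocess.run(["git", "clone", repo_url, str(src)], check=True)

    # Equivalent of: python -m venv <workdir>/venv
    env_dir = workdir / "venv"
    venv.create(env_dir, with_pip=True)

    # Install the repo's dependencies into the fresh environment.
    pip = env_dir / "bin" / "pip"
    reqs = src / "requirements.txt"
    if reqs.exists():
        subprocess.run([str(pip), "install", "-r", str(reqs)], check=True)
    return env_dir
```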
Pull the Docker image:

```sh
docker pull ghcr.io/publikey/runqy-worker:latest
```

For GPU/ML workloads:

```sh
docker pull ghcr.io/publikey/runqy-worker:inference
```

Or download a binary from GitHub Releases:

```sh
curl -LO https://github.com/publikey/runqy-worker/releases/latest/download/runqy-worker_latest_linux_amd64.tar.gz
tar -xzf runqy-worker_latest_linux_amd64.tar.gz
```

See the Installation Guide for all platforms and options.
- Create `config.yml`:

```yaml
server:
  url: "http://localhost:3000"
  api_key: "your-api-key"

worker:
  queue: "inference"
```

- Run the worker:

```sh
./runqy-worker -config config.yml
```

The worker connects to the server, pulls your code from Git, installs dependencies, and starts processing tasks.
See Configuration Reference for all options.
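Because the bootstrap is server-driven, the local `config.yml` stays tiny: everything else (Redis connection, code repo, queue routing) arrives from the central server. A sketch of consuming such a bootstrap payload is below; the field names and JSON shape are illustrative assumptions, not runqy's actual wire format:

```python
import json
from dataclasses import dataclass

@dataclass
class BootstrapConfig:
    """Typed view of a server bootstrap response (hypothetical shape)."""
    redis_url: str
    repo_url: str
    queues: list[str]

def parse_bootstrap(raw: str) -> BootstrapConfig:
    """Turn the server's JSON bootstrap response into typed config."""
    data = json.loads(raw)
    return BootstrapConfig(
        redis_url=data["redis_url"],
        repo_url=data["repo_url"],
        queues=list(data.get("queues", [])),
    )

# Example payload a server might send back after the worker
# authenticates with its API key (field names are assumptions).
payload = (
    '{"redis_url": "redis://localhost:6379",'
    ' "repo_url": "git@github.com:acme/tasks.git",'
    ' "queues": ["inference"]}'
)
cfg = parse_bootstrap(payload)
```

Centralizing config this way means rotating a Redis password or repointing the code repo is a server-side change; workers pick it up on their next bootstrap without any per-host edits.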
- runqy — Central server with CLI and monitoring dashboard
- runqy-python — Python SDK (`runqy-task`)
- Documentation — Full documentation
MIT License — see LICENSE for details.
