Open-source task queue for AI workloads. Deploy workers anywhere, from your laptop to the cloud.
Documentation · Website · Examples · Contributing
🌍 Workers run anywhere — Your laptop, on-prem servers, AWS, Azure, Runpod, any machine with an internet connection. Learn more →
🚀 Zero-touch deployment — Workers pull code from Git, install dependencies, and start processing automatically. No manual setup. Learn more →
📄 Simple YAML config — Define a queue in a few lines. One YAML file, one queue. Learn more →
🔐 Built-in secrets — Pass secrets to workers via encrypted env vars. Learn more →
🐍 Go server + Python SDK — Robust Go server, familiar Python developer experience. Learn more →
📊 Web monitoring UI — Real-time dashboard with Prometheus metrics. Learn more →
| Feature | Runqy | Celery | Temporal | Modal | BullMQ | Inngest |
|---|---|---|---|---|---|---|
| Self-hosted | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ |
| Workers anywhere | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Auto-deploy from Git | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Deployment YAML | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Built-in secrets | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Monitoring UI | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
Get Runqy running in under 60 seconds:
```bash
# 1. Start the stack
curl -O https://raw.githubusercontent.com/Publikey/runqy/main/docker-compose.quickstart.yml
docker-compose -f docker-compose.quickstart.yml up -d

# 2. Enqueue a task
pip install runqy-python
python -c "
from runqy_python import RunqyClient
client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('quickstart-oneshot', {'message': 'Hello World!'})
print(f'Task ID: {task.task_id}')
"

# 3. Check results
open http://localhost:3000/monitoring/
```

See the Quickstart Guide for the full walkthrough.
A queue is a simple YAML file:
```yaml
queues:
  image-resize:
    priority: 5
    deployment:
      # Worker code: https://github.com/acme/image-worker
      git_url: "https://github.com/acme/image-worker.git"
      branch: "main"
      startup_cmd: "python main.py"
      mode: "one_shot"
```

Deploy it:

```bash
runqy config create -f queue.yaml
```

See the Queue Configuration Reference for all options.
```python
from runqy import task, load

@load
def setup():
    """Load models once when worker starts"""
    import torch
    return torch.load('my_model.pt')

@task
def process_image(image_url: str, model) -> dict:
    """Runs on every task execution"""
    result = model.predict(image_url)
    return {"prediction": result, "confidence": 0.95}
```

See the Python SDK Reference for the full API.
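To make the lifecycle concrete, here is a toy, self-contained sketch (not the real runtime — `MiniWorker`, `setup`, and `classify` are illustrative stand-ins): the `@load` function runs once per worker, and its return value is handed to every `@task` invocation, so heavy resources like models are loaded a single time.

```python
class MiniWorker:
    """Toy model of the @load/@task lifecycle (illustration only)."""

    def __init__(self, setup_fn, task_fn):
        self.resource = setup_fn()  # @load: runs once at worker start
        self.task_fn = task_fn

    def handle(self, payload):
        # @task: runs per task, with the preloaded resource injected
        return self.task_fn(payload, self.resource)


def setup():
    # stand-in for loading a heavy model once
    return {"weights": [0.1, 0.2, 0.7]}


def classify(image_url, model):
    return {"input": image_url, "n_weights": len(model["weights"])}


worker = MiniWorker(setup, classify)
print(worker.handle("img001.jpg"))  # {'input': 'img001.jpg', 'n_weights': 3}
```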
Three ways to enqueue:
```bash
# CLI
runqy task enqueue -q image-resize -p '{"image":"img001.jpg","size":256}'

# REST API
curl -s -X POST localhost:3000/queue/add \
  -H "X-API-Key: dev-api-key" \
  -d '{"queue":"image-resize","data":{"image":"img002.jpg"}}'
```

```python
# Python SDK
from runqy_python import RunqyClient
client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('image-resize', {'image': 'img003.jpg'})
```

See the API Reference for all endpoints.
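If you would rather not install the SDK, the REST call above can also be made from the Python standard library. The endpoint and `X-API-Key` header come from the curl example; a running server is needed for the commented-out final lines:

```python
import json
import urllib.request

payload = {"queue": "image-resize", "data": {"image": "img004.jpg"}}
req = urllib.request.Request(
    "http://localhost:3000/queue/add",
    data=json.dumps(payload).encode(),
    headers={"X-API-Key": "dev-api-key", "Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # requires a running server
#     print(resp.read().decode())
```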
Explore real-world use cases:
- quickstart-oneshot — Simple task execution
- quickstart-longrunning — Long-running worker processes
- data-pipeline — Multi-step data processing (API calls, ETL)
- webhook-processor — Event-driven webhook handling (Stripe, GitHub)
- scheduled-tasks — Cron-like healthchecks, reports, and cleanup
- multi-queue — Priority-based routing (critical, standard, bulk)
- gpu-inference — GPU-accelerated image generation with Stable Diffusion
- star-runqy — Vault secrets management tutorial
Linux/macOS:

```bash
curl -fsSL https://raw.githubusercontent.com/publikey/runqy/main/install.sh | sh
```

Windows (PowerShell):

```powershell
iwr https://raw.githubusercontent.com/publikey/runqy/main/install.ps1 -useb | iex
```

Docker:

```bash
docker pull ghcr.io/publikey/runqy:latest
```

Build from source:

```bash
git clone https://github.com/Publikey/runqy.git
cd runqy
go build -o runqy ./app
```

See the Installation Guide for detailed instructions.
Requires Redis + PostgreSQL.
Configure the server via environment variables:
```bash
export REDIS_HOST=localhost:6379
export RUNQY_API_KEY=your-secret-key
```

See the Configuration Reference for all options.
Manage your deployment locally or remotely:
```bash
runqy queue list                                    # List all queues
runqy config create -f queue.yaml                   # Deploy a queue
runqy task enqueue -q myqueue -p '{"key":"value"}'  # Enqueue task
runqy task list myqueue                             # List tasks
runqy task get myqueue <task_id>                    # Get task result
runqy worker list                                   # List active workers
```

See the CLI Reference for all commands.
Access the built-in web dashboard at /monitoring:
📊 More screenshots
Queue Overview — Status, pending/active/completed counts, latency per queue:
Workers — CPU/RAM usage, assigned queues, heartbeat status:
Runqy also exposes Prometheus metrics at /metrics. See the Monitoring Guide for Grafana dashboards and alerting.
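The exposition format served at /metrics is plain text, so it is easy to inspect without Prometheus itself. This sketch parses sample lines into a dict; the metric names shown are hypothetical, not Runqy's actual metric names:

```python
def parse_metrics(text):
    """Parse simple Prometheus exposition-format lines into {name: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics


sample = """\
# HELP tasks_completed_total Completed tasks (hypothetical metric)
# TYPE tasks_completed_total counter
tasks_completed_total{queue="image-resize"} 42
queue_depth{queue="image-resize"} 3
"""
print(parse_metrics(sample))
```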
Tasks flow from clients → runqy server → queues → workers running anywhere. Workers are stateless and pull code from Git on startup.
Zero-touch Deployment: Workers connect to the server, pull your code from Git, install dependencies, and start processing — no manual setup required.
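That clone-install-run sequence can be sketched as a list of commands. This is illustrative only: the `requirements.txt` path, the `pip` step, and the exact flags are assumptions, not what the actual worker runtime executes.

```python
import shlex


def deployment_steps(git_url, branch, startup_cmd, workdir="worker"):
    """Sketch of the zero-touch sequence: clone, install deps, start."""
    return [
        ["git", "clone", "--branch", branch, git_url, workdir],
        ["pip", "install", "-r", f"{workdir}/requirements.txt"],  # assumed
        shlex.split(startup_cmd),
    ]


for step in deployment_steps(
    "https://github.com/acme/image-worker.git", "main", "python main.py"
):
    print(" ".join(step))
```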
- 📖 Documentation — Complete guides and API reference
- 🌐 Website — Project homepage
- 🐍 Python SDK — Client library
- 🔧 Worker Runtime — Task processor
- 🤝 Contributing — How to contribute
- 📄 License — MIT License
Your workers, your machines, your rules.
Built on asynq • Made with ❤️ for AI developers





