
The open-source distributed task queue. From simple scripts to GPU-intensive inference—your workers handle it all. Runqy distributes the tasks.


runqy logo

runqy

Open-source task queue for AI workloads. Deploy workers anywhere, from your laptop to the cloud.


Documentation · Website · Examples · Contributing


Runqy demo — from zero to task result in 90 seconds


Why Runqy?

🌍 Workers run anywhere — Your laptop, on-prem servers, AWS, Azure, Runpod, any machine with an internet connection. Learn more →
🚀 Zero-touch deployment — Workers pull code from Git, install dependencies, and start processing automatically. No manual setup. Learn more →
📄 Simple YAML config — Define a queue in a few lines. One YAML file, one queue. Learn more →
🔐 Built-in secrets — Pass secrets to workers via encrypted env vars. Learn more →
🐍 Go server + Python SDK — Robust Go server, familiar Python developer experience. Learn more →
📊 Web monitoring UI — Real-time dashboard with Prometheus metrics. Learn more →

Feature Comparison

Runqy is compared against Celery, Temporal, Modal, BullMQ, and Inngest along six dimensions:

- Self-hosted
- Workers anywhere
- Auto-deploy from Git
- Deployment YAML
- Built-in secrets
- Monitoring UI

Quick Start

Get Runqy running in under 60 seconds:

# 1. Start the stack
curl -O https://raw.githubusercontent.com/Publikey/runqy/main/docker-compose.quickstart.yml
docker-compose -f docker-compose.quickstart.yml up -d

# 2. Enqueue a task
pip install runqy-python
python -c "
from runqy_python import RunqyClient
client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('quickstart-oneshot', {'message': 'Hello World!'})
print(f'Task ID: {task.task_id}')
"

# 3. Check results
open http://localhost:3000/monitoring/

See the Quickstart Guide for the full walkthrough.

Define a Queue

A queue is a simple YAML file:

queues:
  image-resize:
    priority: 5
    deployment:
      # Worker code: https://github.com/acme/image-worker
      git_url: "https://github.com/acme/image-worker.git"
      branch: "main"
      startup_cmd: "python main.py"
      mode: "one_shot"

Deploy it:

runqy config create -f queue.yaml

See the Queue Configuration Reference for all options.
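Before handing a file to `runqy config create`, it can be handy to sanity-check its shape. A minimal sketch in plain Python — the field names come from the example above, but the validation rules themselves are illustrative, not part of Runqy:

```python
# Illustrative pre-flight check for a queue config, represented as the
# Python dict you would get from parsing the YAML above.

REQUIRED_DEPLOYMENT_KEYS = {"git_url", "startup_cmd", "mode"}

def validate_queue_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    queues = config.get("queues")
    if not isinstance(queues, dict) or not queues:
        return ["top-level 'queues' mapping is missing or empty"]
    for name, queue in queues.items():
        deployment = queue.get("deployment", {})
        missing = REQUIRED_DEPLOYMENT_KEYS - deployment.keys()
        if missing:
            problems.append(f"queue '{name}': deployment missing {sorted(missing)}")
    return problems

config = {
    "queues": {
        "image-resize": {
            "priority": 5,
            "deployment": {
                "git_url": "https://github.com/acme/image-worker.git",
                "branch": "main",
                "startup_cmd": "python main.py",
                "mode": "one_shot",
            },
        }
    }
}

print(validate_queue_config(config))  # → []
```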

Write a Task

from runqy import task, load

@load
def setup():
    """Load models once when worker starts"""
    import torch
    return torch.load('my_model.pt')

@task
def process_image(image_url: str, model) -> dict:
    """Runs on every task execution"""
    result = model.predict(image_url)
    return {"prediction": result, "confidence": 0.95}

See the Python SDK Reference for the full API.
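Conceptually, the worker calls the `@load` function once at startup and passes its return value into every `@task` invocation. A toy plain-Python sketch of that contract — not the actual SDK internals, just the lifecycle the decorators imply:

```python
# Toy illustration of the load-once / run-per-task contract.
# The dict below stands in for a real model; run_worker stands in
# for the SDK's dispatch loop.

def setup():
    """Pretend this loads a heavy model once at worker startup."""
    return {"name": "dummy-model", "threshold": 0.5}

def process_image(image_url: str, model: dict) -> dict:
    """Per-task handler; the model argument is injected by the worker."""
    return {"input": image_url, "model": model["name"]}

def run_worker(tasks):
    model = setup()                   # @load: runs exactly once
    return [process_image(t, model)   # @task: runs once per task
            for t in tasks]

results = run_worker(["img001.jpg", "img002.jpg"])
print(results[0])  # → {'input': 'img001.jpg', 'model': 'dummy-model'}
```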

Enqueue Tasks

Three ways to enqueue:

# CLI
runqy task enqueue -q image-resize -p '{"image":"img001.jpg","size":256}'

# REST API
curl -s -X POST http://localhost:3000/queue/add \
  -H "X-API-Key: dev-api-key" \
  -H "Content-Type: application/json" \
  -d '{"queue":"image-resize","data":{"image":"img002.jpg"}}'

# Python SDK
from runqy_python import RunqyClient
client = RunqyClient('http://localhost:3000', api_key='dev-api-key')
task = client.enqueue('image-resize', {'image': 'img003.jpg'})

See the API Reference for all endpoints.
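The REST call can also be issued from Python without the SDK. A small helper that assembles the request pieces — the endpoint path, header name, and body shape are taken from the curl example; the helper itself is illustrative:

```python
import json

def build_enqueue_request(queue: str, data: dict, api_key: str,
                          base_url: str = "http://localhost:3000"):
    """Return (url, headers, body) matching the REST example above."""
    url = f"{base_url}/queue/add"
    headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"queue": queue, "data": data})
    return url, headers, body

url, headers, body = build_enqueue_request(
    "image-resize", {"image": "img002.jpg"}, "dev-api-key")
print(url)  # → http://localhost:3000/queue/add
# To actually send it:
# urllib.request.urlopen(urllib.request.Request(url, body.encode(), headers))
```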


Examples

Explore real-world use cases:


Installation

Quick Install

Linux/macOS:

curl -fsSL https://raw.githubusercontent.com/publikey/runqy/main/install.sh | sh

Windows (PowerShell):

iwr https://raw.githubusercontent.com/publikey/runqy/main/install.ps1 -useb | iex

Docker

docker pull ghcr.io/publikey/runqy:latest

From Source

git clone https://github.com/Publikey/runqy.git
cd runqy
go build -o runqy ./app

See the Installation Guide for detailed instructions.

Requirements

  • Redis + PostgreSQL

Server Configuration

Configure the server via environment variables:

export REDIS_HOST=localhost:6379
export RUNQY_API_KEY=your-secret-key

See the Configuration Reference for all options.
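Worker- or tool-side code can pick these up with standard environment-variable handling. A minimal sketch — the variable names come from above, while the fallback defaults are purely illustrative:

```python
import os

def load_server_config(env=os.environ) -> dict:
    """Read Runqy server settings, falling back to local-dev defaults."""
    return {
        "redis_host": env.get("REDIS_HOST", "localhost:6379"),
        "api_key": env.get("RUNQY_API_KEY", "dev-api-key"),
    }

cfg = load_server_config({"REDIS_HOST": "redis:6379"})
print(cfg["redis_host"])  # → redis:6379
```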

CLI Reference

Manage your deployment locally or remotely:

runqy queue list                    # List all queues
runqy config create -f queue.yaml   # Deploy a queue

runqy task enqueue -q myqueue -p '{"key":"value"}'  # Enqueue task
runqy task list myqueue                              # List tasks
runqy task get myqueue <task_id>                     # Get task result

runqy worker list                   # List active workers

See the CLI Reference for all commands.

Monitoring

Access the built-in web dashboard at /monitoring:

Runqy Dashboard

📊 More screenshots

Queue Overview — Status, pending/active/completed counts, latency per queue:

Runqy Queues

Workers — CPU/RAM usage, assigned queues, heartbeat status:

Runqy Workers

Runqy also exposes Prometheus metrics at /metrics. See the Monitoring Guide for Grafana dashboards and alerting.

Architecture

Tasks flow from clients → runqy server → queues → workers running anywhere. Workers are stateless and pull code from Git on startup.

runqy architecture

Zero-touch Deployment: Workers connect to the server, pull your code from Git, install dependencies, and start processing — no manual setup required.

zero-touch deployment
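The client → queue → worker flow described above can be mimicked end-to-end with an in-process queue. A toy sketch — no Runqy code involved, purely to illustrate how stateless workers drain a queue:

```python
import queue

task_queue = queue.Queue()            # stands in for a Runqy queue

def client_enqueue(q, payload):       # client → server → queue
    q.put(payload)

def worker_loop(q, results):          # stateless worker pulling tasks
    while not q.empty():
        payload = q.get()
        results.append({"done": payload["image"]})
        q.task_done()

results = []
client_enqueue(task_queue, {"image": "img001.jpg"})
client_enqueue(task_queue, {"image": "img002.jpg"})
worker_loop(task_queue, results)
print(len(results))  # → 2
```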

Links


Your workers, your machines, your rules.
Built on asynq • Made with ❤️ for AI developers
