Distributed job queue written in Go using NATS, Traefik and Docker.
- Run NATS with JetStream:

```shell
docker network create nats
docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats --http_port 8222 -js # -js enables JetStream
```
- Create the dummy job binary:

```shell
cd go-jobqueue/dummy-job
go build .
```

- Run the server:

```shell
cd go-jobqueue
go run .
```

Each job gets a new job ID (UUIDv4):
```shell
curl -X POST http://localhost:3000/v1/job
```

```json
{
  "id": "db9e2022-3a98-407c-a75f-dec42972c94b",
  "name": "job_output_subject.db9e2022-3a98-407c-a75f-dec42972c94b"
}
```

```shell
curl -X GET http://localhost:3000/v1/job/<job-id> && echo
```

```json
{
  "id": "<job-id>",
  "name": "job_output_subject.<job-id>"
}
```
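The responses above suggest the server derives the per-job subject name from the job ID. A minimal sketch of how that could look in Go (the `job_output_subject` prefix is taken from the responses above; the helper names are hypothetical, and the UUID is built from `crypto/rand` to keep the sketch stdlib-only):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newJobID returns a random UUIDv4 string.
func newJobID() string {
	b := make([]byte, 16)
	rand.Read(b)
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// subjectForJob builds the per-job NATS subject seen in the API responses.
func subjectForJob(jobID string) string {
	return "job_output_subject." + jobID
}

func main() {
	id := newJobID()
	fmt.Println(id)
	fmt.Println(subjectForJob(id))
}
```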
Job output can be streamed via a WebSocket client. The easiest way is to:
- open the browser
- point it at localhost:3000
- enter the job ID

Note: each new WebSocket client will read the job output from the very beginning.
- Docker and docker compose
- NATS
- Make (optional, for using the Makefile)
```shell
# Build all images and start services
make build
make up

# Or use docker-compose directly
docker-compose build
docker-compose up -d
```

- Main Application: http://localhost
- Traefik Dashboard: http://localhost:8080 (disabled by default)
- NATS Monitoring: http://localhost:8222
```shell
# Create a job
curl -X POST http://localhost/v1/job

# Check service status
make status
```

The Docker setup includes:
- Go Application Instances: Horizontally scalable workers
- NATS Server: Message broker with JetStream enabled
- Traefik: Load balancer and reverse proxy
- Docker Network: Isolated network for all services
Each instance:
- Runs on internal port 3000
- Connects to NATS via Docker service discovery
- Can be scaled independently
- Serves both API and UI
- Port: 4222 (client), 8222 (monitoring, disabled in production)
- JetStream: Enabled for persistent streams
- Network: Isolated within Docker network
- HTTP Port: 80
- HTTPS Port: 443
- Dashboard: 8080 (Disabled in production)
- Auto-discovery: Automatically detects services via Docker labels
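Auto-discovery via Docker labels might look like the following docker-compose sketch (service name, router name, and rule are illustrative, not taken from the repo):

```yaml
services:
  app:
    build: .
    expose:
      - "3000"   # internal only; Traefik routes traffic to it
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`localhost`)"
      - "traefik.http.services.app.loadbalancer.server.port=3000"
```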
```shell
# Scale to 5 instances
make scale N=5

# Or use docker-compose directly
docker-compose up -d --scale app=5
```

```shell
# Rebuild and restart
make build
make up
```

- Default: `nats://nats:4222` (Docker service name)
- Override: set to an external NATS server if needed
- Format: `nats://hostname:port`
```shell
make status
docker-compose ps
```

```shell
# All services
make logs

# Specific service
docker-compose logs -f app-1
```

```shell
# Stop and remove everything
make clean

# Or step by step
make down
docker system prune -f
```

- We create a single common stream across all jobs (check constants/constants.go).
- In a stream, each new job gets a new subject name.
- A stream consumer belongs to the stream, not to a subject inside it. As such, a stream consumer receives messages across all subjects.
- A client consuming messages from a consumer has to ACK each message. If it fails to ACK, NATS makes the message available for redelivery after a default 30-second timeout.
- For each request to stream output, a brand-new consumer is created.
- This is done because each new NATS consumer replays logs from the very beginning of the job.
- If we shared a single consumer for a job, parallel requests to stream output would see disjoint outputs, since each message is delivered to only one of them (e.g. if 2 people are watching a stock and the stock goes 1, 3, 4, 2, 5, one of the watchers might get 1, 4, 5 while the other gets 3, 2).
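The shared-consumer pitfall above can be sketched in plain Go, with slices standing in for NATS delivery (this is an illustration of the delivery semantics, not the nats.go API): a shared consumer splits messages between its clients, while a fresh per-request consumer replays the full sequence to every client.

```go
package main

import "fmt"

// sharedDelivery mimics two clients pulling from one shared consumer:
// each message is delivered to exactly one of them.
func sharedDelivery(msgs []int) (a, b []int) {
	for i, m := range msgs {
		if i%2 == 0 {
			a = append(a, m)
		} else {
			b = append(b, m)
		}
	}
	return a, b
}

// perRequestDelivery mimics creating a fresh consumer per request:
// every client replays the stream from the beginning.
func perRequestDelivery(msgs []int, clients int) [][]int {
	out := make([][]int, clients)
	for c := range out {
		out[c] = append([]int(nil), msgs...)
	}
	return out
}

func main() {
	stream := []int{1, 3, 4, 2, 5}
	a, b := sharedDelivery(stream)
	fmt.Println("shared:", a, b) // each watcher sees only part of the stream
	all := perRequestDelivery(stream, 2)
	fmt.Println("per-request:", all[0], all[1]) // both see the full stream
}
```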
- filter job output stream by job ID
- keep a limit on number of messages in NATS stream
- allow multiple job types (today we only trigger the dummy job)
