170 changes: 170 additions & 0 deletions compose/staging/README.md
@@ -0,0 +1,170 @@
# Staging Deployment

Deploy the Antenna platform with local Redis, RabbitMQ, and NATS containers.
The database always runs outside the app stack: a dedicated server, a managed
service, or the optional local Postgres container included here.

## Quick Start (single instance)

### 1. Configure environment files

Copy the examples and fill in the values:

```bash
# Django settings
cp .envs/.production/.django-example .envs/.production/.django

# Database credentials
cat > .envs/.production/.postgres << 'EOF'
POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_DB=antenna_staging
POSTGRES_USER=antenna
POSTGRES_PASSWORD=<generate-a-password>
EOF

# Database host IP
cat > .envs/.production/.compose << 'EOF'
DATABASE_IP=host-gateway
EOF
```
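One way to fill in the `<generate-a-password>` placeholders above is with the `openssl` CLI (assuming it is installed on the host; the variable names here are illustrative):

```bash
# Generate random credentials (assumes the openssl CLI is available)
POSTGRES_PASSWORD=$(openssl rand -hex 24)
RABBITMQ_PASS=$(openssl rand -hex 24)
echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD"
```

`rand -hex 24` yields 48 hexadecimal characters, which is safe to embed unquoted in env files and URLs.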

Key settings to configure in `.envs/.production/.django`:

| Variable | Example | Notes |
|---|---|---|
| `DJANGO_SECRET_KEY` | `<random-string>` | Generate with `python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"` |
| `DJANGO_ALLOWED_HOSTS` | `*` or `api.staging.example.com` | |
| `REDIS_URL` | `redis://redis:6379/0` | Always use `redis` hostname (local container) |
| `CELERY_BROKER_URL` | `amqp://antenna:password@rabbitmq:5672/` | Always use `rabbitmq` hostname |
| `RABBITMQ_DEFAULT_USER` | `antenna` | Must match the user in `CELERY_BROKER_URL` |
| `RABBITMQ_DEFAULT_PASS` | `<password>` | Must match the password in `CELERY_BROKER_URL` |
| `NATS_URL` | `nats://nats:4222` | Always use `nats` hostname |
| `CELERY_FLOWER_USER` | `flower` | Basic auth for the Flower web UI |
| `CELERY_FLOWER_PASSWORD` | `<password>` | |
| `SENDGRID_API_KEY` | `placeholder` | Set a real key to enable email delivery, or any non-empty placeholder to disable it |
| `DJANGO_AWS_STORAGE_BUCKET_NAME` | `my-bucket` | S3-compatible object storage for media/static files |
| `DJANGO_SUPERUSER_EMAIL` | `admin@example.com` | Used by `create_demo_project` command |
| `DJANGO_SUPERUSER_PASSWORD` | `<password>` | Used by `create_demo_project` command |
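Note that the RabbitMQ credentials must match in both places; a mismatch is a common cause of Celery connection failures. The password below is illustrative:

```bash
CELERY_BROKER_URL=amqp://antenna:s3cret@rabbitmq:5672/
RABBITMQ_DEFAULT_USER=antenna
RABBITMQ_DEFAULT_PASS=s3cret
```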

### 2. Start the database

If you have an external database, set `DATABASE_IP` in `.envs/.production/.compose`
to its IP address and skip this step.

For a local database container:

```bash
docker compose -f compose/staging/docker-compose.db.yml up -d

# Set DATABASE_IP to reach the host-published port from app containers
echo "DATABASE_IP=host-gateway" > .envs/.production/.compose
```

Verify the database is ready:

```bash
docker compose -f compose/staging/docker-compose.db.yml logs
# Should show: "database system is ready to accept connections"
```
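An explicit readiness probe can be more reliable than scanning logs. This assumes the built image ships `pg_isready`, as the official Postgres images do:

```bash
# Exit code 0 means the server is accepting connections
docker compose -f compose/staging/docker-compose.db.yml \
  exec postgres pg_isready -U antenna
```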

### 3. Build and start the app

```bash
docker compose -f docker-compose.staging.yml \
--env-file .envs/.production/.compose build django

docker compose -f docker-compose.staging.yml \
--env-file .envs/.production/.compose up -d
```

### 4. Run migrations and create an admin user

```bash
# Shorthand for the compose command
COMPOSE="docker compose -f docker-compose.staging.yml --env-file .envs/.production/.compose"

# Apply database migrations
$COMPOSE run --rm django python manage.py migrate

# Create demo project with sample data and admin user
$COMPOSE run --rm django python manage.py create_demo_project

# Or create only an admin user (reads DJANGO_SUPERUSER_EMAIL/PASSWORD from the env)
$COMPOSE run --rm django python manage.py createsuperuser --noinput
```

### 5. Verify

```bash
# API root
curl http://localhost:5001/api/v2/

# Django admin
# Open http://localhost:5001/admin/ in a browser

# Flower (Celery monitoring)
# Open http://localhost:5550/ in a browser

# NATS health (internal, but reachable via docker exec)
docker compose -f docker-compose.staging.yml \
--env-file .envs/.production/.compose \
exec nats wget -qO- http://localhost:8222/healthz
```
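In scripted checks (e.g., CI branch previews), a simple retry loop can wait for the API to come up before asserting anything. The timeout and URL here are assumptions, not part of the deployment:

```bash
# Wait up to ~60s for the API root to respond (hypothetical smoke check)
for i in $(seq 1 30); do
  curl -fsS http://localhost:5001/api/v2/ > /dev/null && break
  sleep 2
done
```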

## Multiple Instances on the Same Host

Internal services (Redis, RabbitMQ, NATS) don't publish host ports, so they
never conflict between instances. Each compose project gets its own isolated
Docker network.

Only Django and Flower publish host ports. Override them with environment
variables and use a unique project name (`-p`):

```bash
# Instance A (defaults: Django on 5001, Flower on 5550)
docker compose -p antenna-main \
-f docker-compose.staging.yml \
--env-file .envs/.production/.compose up -d

# Instance B (custom ports)
DJANGO_PORT=5002 FLOWER_PORT=5551 \
docker compose -p antenna-feature-xyz \
-f docker-compose.staging.yml \
--env-file path/to/other/.compose up -d
```
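To see which host ports each instance actually bound, query by project name (assuming Docker Compose v2, which can list a project's containers by label alone):

```bash
docker compose -p antenna-main ps
docker compose -p antenna-feature-xyz ps
```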

Each instance needs its own:
- `.envs/.production/.compose` (can share `DATABASE_IP` if using the same DB server)
- `.envs/.production/.postgres` (use a different `POSTGRES_DB` per instance)
- `.envs/.production/.django` (can share most settings, but use unique `DJANGO_SECRET_KEY`)

If using the local database container, each instance needs its own DB container
too (or share one by creating multiple databases in it).
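To share one DB container between instances, extra databases can be created in place with `createdb`; the database name below is hypothetical:

```bash
docker compose -f compose/staging/docker-compose.db.yml \
  exec postgres createdb -U antenna antenna_feature_xyz
# Then point the second instance's .postgres file at it:
# POSTGRES_DB=antenna_feature_xyz
```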

## Stopping and Cleaning Up

```bash
# Stop the app stack
docker compose -f docker-compose.staging.yml \
--env-file .envs/.production/.compose down

# Stop the local database (data is preserved in a Docker volume)
docker compose -f compose/staging/docker-compose.db.yml down

# Remove everything including database data
docker compose -f compose/staging/docker-compose.db.yml down -v
```

## Database Options

The staging compose supports any PostgreSQL database reachable by IP:

| Option | `DATABASE_IP` | Notes |
|---|---|---|
| Local container | `host-gateway` | Use `compose/staging/docker-compose.db.yml` |
| Dedicated VM | `<server-ip>` | Best performance for shared environments |
| Managed service | `<service-ip>` | Cloud-hosted PostgreSQL |

Set `POSTGRES_HOST=db` in `.envs/.production/.postgres` — the `extra_hosts`
directive in the compose file maps `db` to whatever `DATABASE_IP` resolves to.
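To confirm the mapping from inside the app container, resolve `db` and compare the result with `DATABASE_IP` (a quick sanity check, not part of the deployment):

```bash
docker compose -f docker-compose.staging.yml \
  --env-file .envs/.production/.compose \
  run --rm django python -c "import socket; print(socket.gethostbyname('db'))"
```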
38 changes: 38 additions & 0 deletions compose/staging/docker-compose.db.yml
@@ -0,0 +1,38 @@
# Optional local PostgreSQL for staging environments.
#
# Use this when you don't have an external database (e.g., for local testing
# or isolated branch previews). Publishes PostgreSQL on localhost:5432.
#
# Usage:
# # Start the database first
# docker compose -f compose/staging/docker-compose.db.yml up -d
#
# # Then start the app stack
# docker compose -f docker-compose.staging.yml --env-file .envs/.production/.compose up -d
#
# The app connects to the database via extra_hosts (db → DATABASE_IP).
# Set DATABASE_IP to the Docker bridge gateway so the app container can
# reach the host-published port:
#
# .envs/.production/.compose:
# DATABASE_IP=host-gateway # Recommended (resolves to host on all platforms)
#
# .envs/.production/.postgres:
# POSTGRES_HOST=db # resolves via extra_hosts to DATABASE_IP

volumes:
staging_postgres_data: {}

services:
postgres:
build:
context: ../../
dockerfile: ./compose/local/postgres/Dockerfile
volumes:
- staging_postgres_data:/var/lib/postgresql/data
- ../../data/db/snapshots:/backups
env_file:
- ../../.envs/.production/.postgres
ports:
- "127.0.0.1:5432:5432"
restart: always
89 changes: 50 additions & 39 deletions docker-compose.staging.yml
@@ -1,79 +1,90 @@
# Identical to production.yml, but with the following differences:
# Uses the django production settings file, but staging .env file.
# Uses "local" database
# Staging / demo / branch preview deployment.
#
# 1. The database is a service in the Docker Compose configuration rather than external as in production.
# 2. Redis is a service in the Docker Compose configuration rather than external as in production.
# 3. Port 5001 is exposed for the Django application.

volumes:
ami_local_postgres_data: {}
# Like production, but runs Redis, RabbitMQ, and NATS as local containers
# instead of requiring external infrastructure services.
# Database is always external — set DATABASE_IP in .envs/.production/.compose.
#
# Usage:
# docker compose -f docker-compose.staging.yml --env-file .envs/.production/.compose up -d
#
# For a local database, see compose/staging/docker-compose.db.yml.
#
# Multiple instances: This compose file can run multiple instances on the same
# host (e.g., branch previews, worktrees) by setting a unique project name and
# overriding the published ports:
#
# DJANGO_PORT=5002 FLOWER_PORT=5551 \
# docker compose -p my-preview -f docker-compose.staging.yml \
# --env-file .envs/.production/.compose up -d
#
# Internal services (Redis, RabbitMQ, NATS) do not publish host ports, so they
# never conflict between instances. Each compose project gets its own isolated
# Docker network.
#
# Required env files:
# .envs/.production/.compose — DATABASE_IP
# .envs/.production/.django — Django settings, CELERY_BROKER_URL, NATS_URL, etc.
# .envs/.production/.postgres — POSTGRES_HOST=db, POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD

services:
django: &django
build:
context: .
# This is the most important setting to test the production configuration of Django.
dockerfile: ./compose/production/django/Dockerfile

image: insectai/ami_backend
depends_on:
- postgres
- redis
# - nats
- rabbitmq
- nats
env_file:
- ./.envs/.production/.django
- ./.envs/.local/.postgres
- ./.envs/.production/.postgres
volumes:
- ./config:/app/config
ports:
- "5001:5000"
- "${DJANGO_PORT:-5001}:5000"
extra_hosts:
- "db:${DATABASE_IP:?Set DATABASE_IP in .envs/.production/.compose}"
command: /start
restart: always

postgres:
build:
context: .
# There is not a local/staging version of the Postgres Dockerfile.
dockerfile: ./compose/local/postgres/Dockerfile
# Share the local Postgres image with the staging configuration.
# Production uses an external Postgres service.
volumes:
- ami_local_postgres_data:/var/lib/postgresql/data
- ./data/db/snapshots:/backups
env_file:
- ./.envs/.local/.postgres
restart: always

redis:
image: redis:6
restart: always

celeryworker:
<<: *django
scale: 1
ports: []
command: /start-celeryworker
restart: always

celerybeat:
<<: *django
ports: []
command: /start-celerybeat
restart: always

flower:
<<: *django
ports:
- "5550:5555"
- "${FLOWER_PORT:-5550}:5555"
command: /start-flower
restart: always
volumes:
- ./data/flower/:/data/

redis:
image: redis:6
restart: always

rabbitmq:
image: rabbitmq:3.13-management-alpine
hostname: rabbitmq
env_file:
- ./.envs/.production/.django
restart: always

nats:
image: nats:2.10-alpine
container_name: ami_local_nats
hostname: nats
ports:
- "4222:4222" # Client port
- "8222:8222" # HTTP monitoring port
command: ["-js", "-m", "8222"] # Enable JetStream and monitoring
command: ["-js", "-m", "8222"]
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8222/healthz"]
interval: 10s
2 changes: 1 addition & 1 deletion requirements/base.txt
@@ -98,5 +98,5 @@ pytest-django==4.5.2 # https://github.com/pytest-dev/pytest-django
# ------------------------------------------------------------------------------

newrelic==9.6.0
gunicorn==20.1.0 # https://github.com/benoitc/gunicorn
gunicorn==23.0.0 # https://github.com/benoitc/gunicorn
# psycopg[c]==3.1.9 # https://github.com/psycopg/psycopg