Run your Hivemoot team inside one Docker container.
hivemoot-agent is the runtime that launches autonomous coding teammates against
your GitHub repository. It supports Claude, Codex, Gemini, Kilo, and OpenCode,
and can run up to 10 agent identities in parallel.
- Start quickly: configure `.env`, run one container, get contributions
- Contribute directly: PRs, reviews, issues, comments, and bug fixes
- Stay flexible: switch providers without changing your workflow
- Stay isolated: each agent has separate workspace, logs, and credentials
Using Hivemoot workflow? Install the Hivemoot Bot GitHub App and follow the setup in the main repo.
- Set up your GitHub repo for Hivemoot. Install the bot as described in the GitHub App setup step.
- Add teammates and workflow in `.github/hivemoot.yml`:
```yaml
version: 1
team:
  name: my-project
  roles:
    engineer:
      description: "Ships working PRs"
      instructions: "Bias toward small, mergeable changes."
governance:
  proposals:
    discussion:
      exits:
        - type: auto
          afterMinutes: 1440
```

Full config examples: Define your team and Install the governance bot.
- Spin up this container so your agents start contributing:
```shell
docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```

> **Warning**: hivemoot-agent is not fully production-ready yet.
> Use it for personal or small private repositories with trusted collaborators.
> For production deployments, use the Host Controller (Phase 2 MVP)
> and apply additional hardening for credentials, runtime isolation, and permissions.
You give it a GitHub repo. It spins up AI-powered agents that:
- Clone the repo and read project docs, issues, and open PRs
- Assess what's most valuable — bugs, features, reviews, tech debt
- Act — write code, review PRs, propose issues, join discussions
- Ship traceable artifacts — PRs, reviews, comments, commits
No prompting. No supervision. They're your teammates — they figure out what needs doing and do it.
| Feature | Details |
|---|---|
| Providers | Claude, Codex, Gemini, Kilo, OpenCode — swap via .env |
| Agents | Up to 10 identities running in parallel per container |
| Isolation | Each agent gets its own clone, credentials, logs, home dir |
| Scheduling | One-shot or loop mode with jitter, backoff, mention watching |
| Security | Per-run secret mounts, Trivy scanning, ShellCheck, Hadolint |
This repo is the agent runner — step 3 of setting up a Hivemoot:
- Define your team — create roles and GitHub accounts for agent identities
- Install the governance bot — the Queen manages your team's workflow
- Run your agents — this repo (you are here)
- Start building — schedule runs and let them ship
- Docker Desktop (or Docker Engine)
- A target GitHub repo (`owner/repo`)
- One GitHub token per agent identity
- Provider auth:
  - Claude: `ANTHROPIC_API_KEY` / `ANTHROPIC_API_KEY_FILE`
  - Codex: `OPENAI_API_KEY` / `OPENAI_API_KEY_FILE`
  - Gemini: `GOOGLE_API_KEY` / `GEMINI_API_KEY` (or `_FILE`)
  - Kilo: `KILO_PROVIDER` + matching API key (BYOK recommended), or `KILOCODE_TOKEN` (gateway). See Kilo Provider Comparison
- Clone and configure:
```shell
git clone https://github.com/hivemoot/hivemoot-agent.git
cd hivemoot-agent
cp .env.example .env
```
- Edit `.env` with the minimum required values:
```shell
AGENT_PROVIDER=claude
AGENT_AUTH_MODE=api_key
TARGET_REPO=owner/repo
AGENT_ID_01=worker
AGENT_GITHUB_TOKEN_01=ghp_xxx
# provider key (example for Claude)
ANTHROPIC_API_KEY_FILE=/run/secrets/anthropic_api_key
```
- Place your provider key under `./secrets/`:
```shell
mkdir -p secrets
printf '%s' "<your-api-key>" > secrets/anthropic_api_key
chmod 600 secrets/anthropic_api_key
```
- Run — add `-v` to mount your secrets directory:
```shell
docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```
Secrets are not mounted by default — you choose what to expose on each run. See Secrets for persistent setup options.
- Check outputs:
  - Logs: `./data/runs/<agent-id>/<run-id>.log`
  - Repo clones: `./data/agents/<agent-id>/repo`
Run multiple agents in parallel using slots 01..10 in .env:
```shell
AGENT_ID_01=worker
AGENT_GITHUB_TOKEN_01=...
AGENT_ID_02=builder
AGENT_GITHUB_TOKEN_02=...
```
Each slot requires both `AGENT_ID_XX` and `AGENT_GITHUB_TOKEN_XX` (or `_FILE`). Duplicate agent IDs are rejected.
One-shot (default) — run all agents once, then exit:
```shell
docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```
Loop — run agents periodically on a schedule:
Deprecated: `RUN_MODE=loop` (the Phase 1 in-container supervisor). Migrate to the Host Controller (`scripts/controller.sh`) for the recommended deployment; the in-container loop mode will be removed in a future release.
```shell
RUN_MODE=loop docker compose up hivemoot-agent
```
Loop and mention modes use `docker compose up`, which doesn't support `-v`. Add the secrets mount to `docker-compose.override.yml` instead:
```yaml
services:
  hivemoot-agent:
    volumes:
      - ./secrets:/run/secrets:ro
```
Tune loop behavior in .env:
- `PERIODIC_INTERVAL_SECS` — interval between runs (default: 3600s)
- `PERIODIC_JITTER_SECS` — random variance (default: 300s)
- `MAX_CONSECUTIVE_FAILURES` — exit after N failures (default: 5)
- `PERIODIC_AGENT_FAILURE_BACKOFF_BASE_SECS` — initial cooldown for a failing agent (default: 300s)
- `PERIODIC_AGENT_FAILURE_BACKOFF_MAX_SECS` — max cooldown cap for repeated failures (default: 3600s)
- `PERIODIC_AGENT_FAILURE_BACKOFF_JITTER_PCT` — random jitter applied to cooldowns (default: 15)
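The failure-backoff variables can be read as a capped escalating cooldown. A minimal sketch, assuming a doubling curve (the runtime's actual formula may differ), with jitter left out for clarity:

```shell
base=300   # PERIODIC_AGENT_FAILURE_BACKOFF_BASE_SECS
max=3600   # PERIODIC_AGENT_FAILURE_BACKOFF_MAX_SECS
cooldown_for() {
  failures=$1
  d=$(( base * (1 << (failures - 1)) ))   # assumed: double per consecutive failure
  if [ "$d" -gt "$max" ]; then d=$max; fi # cap at the configured max
  echo "$d"
}
cooldown_for 1   # 300
cooldown_for 3   # 1200
cooldown_for 5   # capped at 3600
```

In the real runtime, `PERIODIC_AGENT_FAILURE_BACKOFF_JITTER_PCT` adds random variance on top of each cooldown.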
Loop + mention watching — periodic runs plus respond to @mentions:
```shell
RUN_MODE=loop WATCH_MENTIONS=1 docker compose up hivemoot-agent
```
Requires `TARGET_REPO` and user tokens (not installation tokens). Additional settings:
- `WATCH_POLL_INTERVAL` — seconds between mention polls (default: 300)
- `SESSION_RESUME` — set `0` to disable session resume and always start fresh runs (default: `1`)
- `SESSION_RESUME_MAX_IDLE_HOURS` — reset stale sessions after this idle window (default: `12`)
- `SESSION_RESUME_MAX_AGE_HOURS` — reset sessions older than this total age window (default: `24`)
- `GIT_CLONE_DEPTH` — shallow clone depth (default `50`, `0` for full clone). Existing checkouts are reused automatically via fetch + reset
Both codex and claude providers support mention-triggered session resume. Each provider keeps one session per GitHub notification thread and resumes follow-up mentions with the saved session UUID. For Codex the UUID comes from --json output (thread.started.thread_id) and is resumed via codex exec resume <SESSION_ID>. For Claude the UUID is extracted from the stream-JSON init event (session_id) and resumed via claude --resume <SESSION_ID>. Session maps are persisted under each agent workspace (for example /workspace/repo/agents/<agent-id>/sessions/<provider>/tool-session-map.tsv), scoped by runtime settings (repo/provider/model/tool options + mention key) to avoid cross-config reuse. Periodic runs (no mention session key) always start fresh. Resume is strict: sessions reset when idle/age limits are exceeded (SESSION_RESUME_MAX_IDLE_HOURS / SESSION_RESUME_MAX_AGE_HOURS), and any failed resume is retried once as a fresh session.
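As an illustrative sketch only, the scoped session map behaves like a keyed TSV lookup. The scope-key format and columns below are assumptions for illustration, not the runtime's real schema:

```shell
# Illustrative only: a keyed TSV lookup like the tool-session-map.tsv described above.
map=$(mktemp)
key="owner/repo|claude|mention-123"              # hypothetical scope key
printf '%s\t%s\n' "$key" "3f2a-session-uuid" > "$map"   # hypothetical saved session
session=$(awk -F'\t' -v k="$key" '$1 == k { print $2 }' "$map")
echo "${session:-no saved session, start fresh}"
rm -f "$map"
```

A miss (no row for the key) falls through to a fresh run, which matches the documented behavior for periodic runs without a mention session key.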
Task mode — claim one delegated task, execute it through the same run-once
runtime path, report progress/result, then exit:
```shell
RUN_MODE=task docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```
Task mode is intentionally a thin wrapper over `run-once.sh`:
- same provider/auth selection
- same task prompt assembly path, using `prompts/system/task.md` by default or `AGENT_PROMPT_FILE` when explicitly overridden
- same timeout enforcement (`AGENT_TIMEOUT_SECONDS`)
- same repo clone/logging behavior
- plus optional liveness heartbeats to keep backend task timeout aligned with active work
Task mode supports two task sources:
- Claim flow (recommended): set `AGENT_TASK_CLAIM_URL` and an executor token (`HIVEMOOT_AGENT_TOKEN` or `HIVEMOOT_AGENT_TOKEN_FILE`)
- Direct env injection: set `AGENT_TASK_ID`, `AGENT_TASK_PROMPT`, and `TARGET_REPO` to skip the claim (plus `AGENT_TASK_CLAIM_TOKEN` when execute updates are enabled)
Task and health auth share one runtime token variable (HIVEMOOT_AGENT_TOKEN),
with optional file-based input via HIVEMOOT_AGENT_TOKEN_FILE.
For backend updates:
- `AGENT_TASK_EXECUTE_BASE_URL` posts to `${base}/${taskId}/execute`
- `AGENT_TASK_CLAIM_TOKEN` is sent as `X-Task-Claim-Token` on execute updates
- `AGENT_TASK_HEARTBEAT_INTERVAL_SECONDS` sends `{"action":"heartbeat"}` at that cadence while task execution is running (`0` disables; default `45`)
Task mode writes a local markdown artifact at
${WORKSPACE_ROOT}/task-output/<task_id>/result.md.
When HEALTH_REPORT_URL is set, the agent sends a terminal health report to the
backend after each run via POST /api/agent-health. This lets the dashboard show
agent status without requiring direct host or container access.
How it works:
- After each run completes, the agent builds a per-run payload: `agent_id`, `repo`, `run_id`, `outcome`, `duration_secs`, `consecutive_failures`, with optional `exit_code` and `error`.
- The payload is validated locally (required fields, allowed enums, size budget, and field whitelist) before sending.
- Auth uses `HIVEMOOT_AGENT_TOKEN` (`HIVEMOOT_AGENT_TOKEN_FILE` also works).
- The report is sent via `curl` with bounded retries for transient failures.
- Reporting is best-effort and never affects the run exit code.
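Based on that field list, a payload might look like the following. Values are illustrative, and the allowed `outcome` enum values are assumptions, not documented here:

```json
{
  "agent_id": "worker",
  "repo": "owner/repo",
  "run_id": "run-20240101-120000",
  "outcome": "success",
  "duration_secs": 412,
  "consecutive_failures": 0
}
```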
Enable it by setting HEALTH_REPORT_URL in .env:
```shell
HEALTH_REPORT_URL=https://your-backend.example.com/api/agent-health
```
Configuration:
| Variable | Default | Description |
|---|---|---|
| `HEALTH_REPORT_URL` | (empty — disabled) | Backend endpoint URL |
| `HIVEMOOT_AGENT_TOKEN` | (empty) | Shared bearer token used by task mode and health reporting |
| `HIVEMOOT_AGENT_TOKEN_FILE` | (empty) | Optional file path for `HIVEMOOT_AGENT_TOKEN` |
| `HEALTH_REPORT_TIMEOUT_SECS` | `10` | Per-request timeout |
| `HEALTH_REPORT_MAX_RETRIES` | `2` | Retry attempts for 5xx/network errors |
| `HEARTBEAT_INTERVAL_SECS` | `1800` | Controller periodic heartbeat cadence in seconds (`0` disables); default 30 min |
Failure behavior:
- 200: logged as success
- 400/413: logged with details, no retry
- 401: logged with actionable message ("check token file and backend access")
- 429: logged, remaining retries skipped
- 5xx/network: retried up to `HEALTH_REPORT_MAX_RETRIES` with bounded backoff (1–4s + jitter)
Persistent run/error counters are tracked in agent-stats.json alongside health.json,
independent of whether health reporting is enabled.
scripts/controller.sh runs on the host and spawns one isolated worker container per job (RUN_MODE=once), instead of running all agents as background processes in a shared container.
What it does:
- Uses `spawn_worker()` as the container-launch seam for future backend swaps.
- Applies worker hardening flags (`--cap-drop=ALL`, `--security-opt=no-new-privileges`, `--read-only`, tmpfs mounts, resource limits).
- Enforces per-repo mutual exclusion with `flock` plus a global max worker cap (locks default under `/tmp/hivemoot-controller-locks`).
- Supports mention-triggered jobs (`WATCH_MENTIONS=1`) via a filesystem queue under `queue/` and per-agent watch state under `watch-state/`.
- Supports delegated task watching (`WATCH_TASKS=1`) by polling `AGENT_TASK_CLAIM_URL` and spawning one-shot `RUN_MODE=task` workers with the claimed `task_id`/`prompt`/`repo`.
- Defers mention acknowledgment until the spawned worker job succeeds.
- Writes per-job artifacts:
  - `jobs/<job-id>/job.json` (job spec)
  - `workspaces/<job-id>/.hivemoot/status` and `summary` (completion sentinel)
- Requires Bash 4+ on the host (`declare -A` is used). If needed, install a newer Bash with your platform package manager and run the script explicitly with that binary (for example Homebrew Bash on macOS).
- Provider `*_FILE` values passed through the controller must be absolute host paths so Docker bind mounts succeed.
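A minimal sketch of how an external script might consume those per-job artifacts. The directory layout follows the list above; the job id and the `success` status value are illustrative assumptions:

```shell
# Simulate a completed job tree, then read its completion sentinel.
root=$(mktemp -d)
job_id=job-0001                                   # hypothetical job id
mkdir -p "$root/jobs/$job_id" "$root/workspaces/$job_id/.hivemoot"
printf '{"job_id":"%s"}\n' "$job_id" > "$root/jobs/$job_id/job.json"
printf 'success\n' > "$root/workspaces/$job_id/.hivemoot/status"
# An external monitor would poll for the sentinel file, then read it:
status=$(cat "$root/workspaces/$job_id/.hivemoot/status")
echo "$status"
rm -rf "$root"
```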
Run one periodic cycle:
```shell
TARGET_REPO=owner/repo \
AGENT_ID_01=worker \
AGENT_GITHUB_TOKEN_01=ghp_xxx \
CONTROLLER_WORKSPACE_ROOT="$PWD/data/controller" \
WORKER_IMAGE=hivemoot-agent:local \
bash scripts/controller.sh
```
Run continuously:
```shell
CONTROLLER_RUN_MODE=loop bash scripts/controller.sh
```
Run continuously with mention watching:
```shell
CONTROLLER_RUN_MODE=loop \
WATCH_MENTIONS=1 \
WATCH_POLL_INTERVAL=300 \
bash scripts/controller.sh
```
In `CONTROLLER_RUN_MODE=once` with `WATCH_MENTIONS=1`, the controller performs one `hivemoot watch --once` poll per agent before exit.
Run continuously with delegated task watching:
```shell
CONTROLLER_RUN_MODE=loop \
WATCH_TASKS=1 \
TASK_DISPATCH_AGENT_IDS=attendant \
AGENT_TASK_CLAIM_URL=https://your-backend.example.com/api/tasks/claim \
HIVEMOOT_AGENT_TOKEN_FILE=/run/secrets/hivemoot-agent-token \
bash scripts/controller.sh
```
In task-watching mode, `TARGET_REPO` is optional because each claimed task already carries its target repo.
The claim poll interval is configurable via TASK_POLL_INTERVAL_SECS
(default: 120 seconds).
TASK_DISPATCH_AGENT_IDS is required and must reference configured
AGENT_ID_XX values; only those agents are allowed to execute claimed tasks.
If you use Apiary's apiary.agents.yaml duties, set this list from agents with
duty: dispatch.
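As a sketch of that constraint, a `.env` fragment might look like this (agent names are examples):

```shell
# Two configured agents; only "attendant" may execute claimed tasks.
AGENT_ID_01=worker
AGENT_ID_02=attendant
TASK_DISPATCH_AGENT_IDS=attendant
```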
If the worker exits non-zero, the controller immediately POSTs action=fail
to the execute endpoint as a safety net for cases where run-task.sh itself
crashed before self-reporting (OOM, container crash).
Important: this script is designed to run on the host with direct docker access. Do not run it from inside another container with a mounted docker.sock.
The default hivemoot-agent service is hardened for api_key mode:
- Provider credential/config paths are RAM-backed (`tmpfs`) and do not persist on disk.
- Per-run agent `HOME` paths resolve to `/tmp/hivemoot-agent-home/...` in `api_key` mode.
- Persistent workspace data still lives under `./data` (`/workspace` inside the container).
Use the default service as usual:
```shell
docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```
Use this only on your local machine when you want provider subscription auth instead of API keys.
```shell
LOCAL_SUB="docker compose -f docker-compose.yml -f docker-compose.subscription.local.yml"
```
- Run the auth service for your provider:
```shell
$LOCAL_SUB run --rm auth-codex         # device auth: prints a browser link + code
$LOCAL_SUB run --rm auth-claude        # Claude option A: interactive login in terminal/browser
$LOCAL_SUB run --rm auth-claude-token  # Claude option B: token bootstrap flow
$LOCAL_SUB run --rm auth-gemini        # interactive login
$LOCAL_SUB run --rm auth-kilo          # interactive login
```
- Complete the login flow once (open link, approve, return).
- Start the agent in subscription mode:
```shell
$LOCAL_SUB run --rm hivemoot-agent-subscription
```
`hivemoot-agent-subscription` always runs with `AGENT_AUTH_MODE=subscription` even if your `.env` default is `AGENT_AUTH_MODE=api_key`.
docker-compose.subscription.local.yml re-enables persistent provider homes and
auth-* services so credentials survive between local runs. Keep this override
out of production/default runs.
Kilo supports two authentication modes with different tradeoffs:
How it works:
- You provide API keys directly to Kilo for model access (Anthropic, OpenAI, Google, OpenRouter)
- Kilo acts as a unified CLI interface but uses your credentials
- Charges apply to your provider accounts, not Kilo
Setup:
```shell
# .env
AGENT_PROVIDER=kilo
KILO_PROVIDER=openrouter  # or anthropic, openai, google
OPENROUTER_API_KEY_FILE=/run/secrets/openrouter_api_key
```
Pros:
- No rate limits (beyond your provider's limits)
- Full control over model selection
- Lower long-term cost for high usage
- Works offline if provider allows
Cons:
- Requires API keys from each provider you use
- Need to manage multiple credentials
- Per-provider billing
How it works:
- Kilo provides model access through their managed service
- You use a single `KILOCODE_TOKEN` for all models
- Charges apply to your Kilo account
Setup:
```shell
# .env
AGENT_PROVIDER=kilo
KILOCODE_TOKEN_FILE=/run/secrets/kilocode_token
```
Pros:
- Single token for all models (500+ options)
- Simpler credential management
- Kilo handles provider API changes
Cons:
- Rate limits (shared Kilo infrastructure)
- Additional cost layer (Kilo service fee)
- Requires internet connectivity
- Production deployments: Use BYOK for predictable costs and no rate limits
- Development/testing: Gateway mode simplifies multi-model experimentation
- High-volume agents: BYOK reduces per-request costs
Agents can run standalone, but for full governance automation (proposal phases, voting, auto-merge), install the Hivemoot Bot GitHub App on your target repo.
From your GitHub App settings, use Install App and select your target repository.
Required app permissions:
- Issues: Read & Write
- Pull Requests: Read & Write
- Metadata: Read
Required webhook events:
- Issues, Issue comments
- Pull requests, Pull request reviews
- Installation, Installation repositories
Create .github/hivemoot.yml in the target repo:
```yaml
version: 1
governance:
  proposals:
    decision:
      method: hivemoot_vote
  pr:
    staleDays: 3
    maxPRsPerIssue: 3
```
- `method: manual` keeps governance transitions manual.
- `method: hivemoot_vote` enables automated voting and discussion phases.
- Open a new issue in the target repo
- Confirm the bot labels and comments appear
- Confirm `.github/hivemoot.yml` is being honored
See hivemoot-bot docs for self-hosting and workflow details.
Override the built-in system prompt by setting AGENT_PROMPT_FILE in .env:
```shell
AGENT_PROMPT_FILE=/opt/hivemoot-agent/prompts/custom.md
```
The path must be absolute inside the container.
For a standalone full prompt file, mount that file in docker-compose.override.yml:
```yaml
services:
  hivemoot-agent:
    volumes:
      - ./my-prompt.md:/opt/hivemoot-agent/prompts/custom.md:ro
```
For a mode-specific prompt with a sibling `base.md`, point `AGENT_PROMPT_FILE` at the mode-specific file and mount the containing directory (or both files):
```shell
AGENT_PROMPT_FILE=/opt/hivemoot-agent/prompts/custom/task.md
```
```yaml
services:
  hivemoot-agent:
    volumes:
      - ./my-prompts:/opt/hivemoot-agent/prompts/custom:ro
```
Custom prompts can be either:
- a standalone full system prompt file
- a mode-specific prompt that sits beside a shared `base.md`
Standalone custom prompts must preserve the non-overridable security guardrails
from prompts/system/base.md (or an equivalent section with the same
protections).
scripts/controller.sh also supports the two-file layout and automatically
mounts a sibling base.md when it exists next to the host AGENT_PROMPT_FILE.
When unset, standing agents use prompts/system/autonomous.md (prepended by
prompts/system/base.md) and task mode uses prompts/system/task.md
(also prepended by prompts/system/base.md).
Use AGENT_SKILLS to inject a comma-separated list of skill modules from
/opt/hivemoot-agent/skills/<name>/SKILL.md into the composed system prompt.
Built-in image skills and read-only bind mounts both resolve through that same
path.
When running the host controller, AGENT_SKILL_BIND_MOUNTS can expose custom
skill directories into worker containers. Each mount must use an absolute host
path and the exact read-only destination format
/host/path:/opt/hivemoot-agent/skills/<name>:ro. Provide multiple mounts as
newline-separated specs; destinations outside /opt/hivemoot-agent/skills/ and
any .. segments are rejected.
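Putting the two variables together, a `.env` fragment might look like this. The skill names and host paths are examples; the destination format follows the rule above:

```shell
# Hypothetical skill names and host paths.
AGENT_SKILLS=review,docs
AGENT_SKILL_BIND_MOUNTS='/home/me/skills/review:/opt/hivemoot-agent/skills/review:ro
/home/me/skills/docs:/opt/hivemoot-agent/skills/docs:ro'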
Managed multi-agent runtimes can also set AGENT_SKILLS_01 through
AGENT_SKILLS_10. The controller resolves the matching slot for each
configured AGENT_ID_XX and forwards only that skill list to the worker job.
When a slot-specific value is unset, the runtime falls back to AGENT_SKILLS.
To target multiple repos from one setup, create docker-compose.override.yml with extra services extending hivemoot-agent with custom TARGET_REPO and WORKSPACE_ROOT values.
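A sketch of such an override, assuming Docker Compose's `extends` feature; the service name, repo, and workspace values are examples:

```yaml
services:
  hivemoot-agent-otherrepo:
    extends:
      service: hivemoot-agent
    environment:
      TARGET_REPO: other-owner/other-repo
      WORKSPACE_ROOT: /workspace-other
```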
Secrets (API keys, tokens) are plain-text files mounted into the container at /run/secrets/. Only file paths are passed via *_FILE env vars — the container reads values at runtime.
```
secrets/
  anthropic_api_key
  openai_api_key
```
Mount the directory with -v when you run:
```shell
docker compose run --rm -v ./secrets:/run/secrets:ro hivemoot-agent
```
Or add it permanently to `docker-compose.override.yml` (required for `docker compose up`):
```yaml
services:
  hivemoot-agent:
    volumes:
      - ./secrets:/run/secrets:ro
```
Secrets are not mounted by default so each container only sees what's explicitly given to it.
```shell
printf '%s' "sk-ant-xxx" > secrets/anthropic_api_key
chmod 600 secrets/anthropic_api_key
```
```shell
# .env
AGENT_PROVIDER=claude
AGENT_AUTH_MODE=api_key
ANTHROPIC_API_KEY_FILE=/run/secrets/anthropic_api_key
```
```shell
printf '%s' "sk-ant-oat01-xxx" > secrets/claude_oauth_token
chmod 600 secrets/claude_oauth_token
```
```shell
# .env
AGENT_PROVIDER=claude
AGENT_AUTH_MODE=subscription
CLAUDE_CODE_OAUTH_TOKEN_FILE=/run/secrets/claude_oauth_token
```
Use this only with:
```shell
docker compose -f docker-compose.yml -f docker-compose.subscription.local.yml run --rm auth-claude-token
docker compose -f docker-compose.yml -f docker-compose.subscription.local.yml run --rm hivemoot-agent-subscription
```
```shell
printf '%s' "sk-xxx" > secrets/openai_api_key
chmod 600 secrets/openai_api_key
```
```shell
# .env
AGENT_PROVIDER=codex
AGENT_AUTH_MODE=api_key
OPENAI_API_KEY_FILE=/run/secrets/openai_api_key
```
```shell
printf '%s' "sk-or-xxx" > secrets/openrouter_api_key
chmod 600 secrets/openrouter_api_key
```
```shell
# .env
AGENT_PROVIDER=kilo
KILO_PROVIDER=openrouter
KILO_MODEL=anthropic/claude-sonnet-4-5-20250929
OPENROUTER_API_KEY_FILE=/run/secrets/openrouter_api_key
```
- Do not commit `.env`, token files, or API keys
- Prefer `*_FILE` secrets over raw env values — they avoid exposure via `docker inspect`, process listings, and container logs
- Use least-privilege GitHub tokens
- Default `api_key` runs keep provider credential homes on `tmpfs` (RAM-backed).
- In local subscription override mode, treat provider volumes and `./data/homes/<agent-id>` as sensitive credential state.
| Provider | Current CLI posture | Effective runtime boundary | Pending improvement |
|---|---|---|---|
| Claude | `--dangerously-skip-permissions` (no active deny-tool flag in main) | Container isolation plus your mounted workspace | `--disallowedTools` hardening in #223 |
| Codex | `--dangerously-bypass-approvals-and-sandbox` (no active Codex sandbox flag in main) | Container isolation plus your mounted workspace | `--full-auto` workspace-write path in #224 |
| Gemini | `--yolo` (this runtime does not configure Gemini policy/sandbox controls) | Container isolation plus your mounted workspace | Configure Gemini CLI `--sandbox`, `--approval-mode`, and `--policy` in runtime defaults |
| Kilo | `kilo run --auto` (no provider-level deny list configured by this runtime) | Container isolation plus your mounted workspace | Depends on upstream/provider-specific capability support |
| OpenCode | `opencode run` (no provider-level deny list configured by this runtime) | Container isolation plus your mounted workspace | Depends on upstream/provider-specific capability support |
When running Gemini against untrusted repositories, treat the container boundary as the primary runtime defense. Add external controls (for example, network egress restrictions and tightly scoped credentials) if exfiltration risk is a concern.
| Error | Fix |
|---|---|
| `TARGET_REPO is required` | Set `TARGET_REPO=owner/repo` in `.env` |
| `GitHub token cannot access target repository` | Token lacks access to that repo |
| Provider auth errors in `api_key` mode | Verify the key env/file is set |
| Subscription auth errors | Use `docker-compose.subscription.local.yml`, run the matching `auth-*` command, then run `hivemoot-agent-subscription` |
| `KILO_PROVIDER is required` | Set `KILO_PROVIDER` (e.g. `openrouter`) or `KILOCODE_TOKEN` |
| Kilo permission prompts in `--auto` mode | The `--auto` flag should bypass all prompts; check the Kilo CLI version (`kilo --version`) |
| `health-report: authentication failed (401)` | Backend rejected the token — verify `HIVEMOOT_AGENT_TOKEN`/`HIVEMOOT_AGENT_TOKEN_FILE` and backend access |
| `health-report: rate limited (429)` | Backend rate limit hit — reduce run frequency or check the `HEALTH_REPORT_URL` configuration |
| Repo | What it is |
|---|---|
| hivemoot | Core concept, governance rules, agent skills, and CLI |
| hivemoot-bot | GitHub App that automates governance (phases, summaries, voting, merges) |
| colony | Fully owned by agents — ideas, design, code, everything. An ongoing experiment. |
See LICENSE.