by TetiAI
Native Linux sandboxing built for the AI era.
HiveBox turns Alpine Linux into a platform where sandboxing is a first-class primitive. No containers, no VMs — just direct kernel isolation (namespaces, cgroups, seccomp, Landlock) exposed through a simple CLI and REST API.
Each hivebox is a lightweight, isolated Linux sandbox with its own OpenCode AI agent. Launch thousands on a single host — each running arbitrary commands in complete isolation with near-zero overhead, and each with a dedicated AI coding assistant accessible via API or the built-in web dashboard.
The AI industry has a proportionality problem. Every major platform spins up full Ubuntu containers — mounted filesystems, browsers, CLI tools, isolated networks — for every user interaction. A .pptx, a .docx, a .xlsx, a PDF, a CSV export. Tasks that take milliseconds with a lightweight library are burning entire virtual machines in the cloud.
To generate a 10-slide presentation, a small virtual computer boots. To format a Word document with headers, a container spins up. To create a spreadsheet with a few formulas, an isolated network is allocated. None of this requires a sandbox. It requires a function call.
The hidden cost is real:
- A single AI query consumes 10x more energy than a traditional search
- Running workloads in Docker containers produces a measurable increase in energy consumption vs. bare Linux, due to I/O overhead
- U.S. AI data centers are projected to emit 24-44 million metric tons of CO2 per year by 2030 and to consume as much water as 6-10 million Americans (Cornell, 2025)
But when you do need isolation — running untrusted code, AI agents executing arbitrary commands, multi-tenant environments — you need it done right, and you need it done light.
HiveBox is the proportionate answer: kernel-native isolation with near-zero overhead. No container runtime. No hypervisor. Just the Linux kernel doing what it already knows how to do.
| | Docker | gVisor | Firecracker | Daytona | HiveBox |
|---|---|---|---|---|---|
| Memory per sandbox | ~50 MB | ~30 MB | ~5 MB | ~50 MB+ (Docker-based) | ~2-5 MB |
| Startup time | ~1 s | ~0.5 s | ~125 ms | ~90 ms (warm) | ~10-50 ms |
| Isolation | Namespaces | Syscall emulation | Hypervisor | Docker (managed) | NS + seccomp + Landlock |
| Self-hosted | Yes | Yes | Yes | Yes (cloud also available, pay-per-second) | Yes |
| Complexity | High | Medium | Medium | High (requires Docker Compose) | Low |
Same security. A fraction of the resources. The smartest infrastructure is the one that uses the minimum needed for the task.
One command on any Linux VPS (Ubuntu, Debian, Alpine, etc.):
```bash
curl -fsSL https://raw.githubusercontent.com/TetiAI/hivebox/main/scripts/setup.sh | sudo bash
```

This installs Docker (if needed), pulls the latest image, generates an API key, and starts HiveBox. The dashboard is available at `http://your-server:7070/dashboard`.
To pass custom configuration:
```bash
curl -fsSL https://raw.githubusercontent.com/TetiAI/hivebox/main/scripts/setup.sh | \
  HIVEBOX_API_KEY=your-key \
  HIVEBOX_OPENCODE_API_KEY=sk-... \
  HIVEBOX_OPENCODE_MODEL=your-model \
  HIVEBOX_PACKAGES="python3 nodejs git" \
  sudo -E bash
```

`HIVEBOX_PACKAGES` installs extra Alpine packages into every sandbox at startup. All settings are saved in `/opt/hivebox/.env` and persist across restarts.
```bash
# Create a hivebox
hivebox create --name myagent --memory 512m

# Execute commands in it
hivebox exec myagent -- echo "hello from the hivebox"
hivebox exec myagent -- pip install requests numpy
hivebox exec myagent -- python3 scraper.py

# Check status
hivebox list

# Clean up
hivebox destroy myagent
```

```bash
# Start the daemon
hivebox daemon --port 7070 --api-key mysecretkey

# Create a hivebox
curl -X POST http://localhost:7070/api/v1/hiveboxes \
  -H "Authorization: Bearer mysecretkey" \
  -H "Content-Type: application/json" \
  -d '{"name": "api-hivebox", "timeout": 3600}'

# Execute a command
curl -X POST http://localhost:7070/api/v1/hiveboxes/api-hivebox/exec \
  -H "Authorization: Bearer mysecretkey" \
  -H "Content-Type: application/json" \
  -d '{"command": "python3 -c \"print(1+1)\""}'

# Destroy
curl -X DELETE http://localhost:7070/api/v1/hiveboxes/api-hivebox \
  -H "Authorization: Bearer mysecretkey"
```

HiveBox includes a built-in MCP (Model Context Protocol) server that bridges any AI coding agent into a sandbox. No custom agent loop — just plug your favorite MCP-compatible client (OpenCode, Claude Code, etc.) into any hivebox.
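Even without MCP, the plain REST endpoints above are enough to script sandboxes from any language. A minimal Python sketch of the create/exec/destroy flow using only the standard library; the base URL and `mysecretkey` key are the placeholders from the curl examples, and response shapes are assumed to be JSON:

```python
import json
import urllib.request

# Placeholders from the curl examples above.
BASE = "http://localhost:7070/api/v1"
API_KEY = "mysecretkey"

def request(method, path, body=None):
    """Build an authenticated request against the HiveBox REST API."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"{BASE}{path}",
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

def create_hivebox(name, timeout=3600):
    with urllib.request.urlopen(request("POST", "/hiveboxes",
                                        {"name": name, "timeout": timeout})) as resp:
        return json.load(resp)

def exec_in(name, command):
    with urllib.request.urlopen(request("POST", f"/hiveboxes/{name}/exec",
                                        {"command": command})) as resp:
        return json.load(resp)

def destroy(name):
    urllib.request.urlopen(request("DELETE", f"/hiveboxes/{name}"))

# Usage (requires a running daemon):
#   create_hivebox("demo")
#   print(exec_in("demo", "echo hello"))
#   destroy("demo")
```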
```text
[AI Client] --HTTP--> [hivebox daemon :7070/mcp] --nsenter--> [Sandbox]
```
Setup:
- Start the daemon and create a sandbox:
```bash
hivebox daemon --port 7070 --api-key mysecretkey
hivebox create --name myworkspace --memory 512m
```

- Configure your AI client to connect via MCP over HTTP (no local binary needed):
```json
{
  "mcpServers": {
    "sandbox": {
      "url": "http://your-server:7070/api/v1/hiveboxes/{HIVEBOX_ID}/mcp",
      "headers": { "Authorization": "Bearer mysecretkey" }
    }
  }
}
```

- The AI client now has 16 tools to operate inside the sandbox:
`exec`, `read_file`, `read_multiple_files`, `write_file`, `edit_file`, `list_directory`, `directory_tree`, `search_files`, `glob`, `get_file_info`, `create_directory`, `move_file`, `upload_file`, `download_file`, `read_media_file`, `list_directory_with_sizes`.
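MCP over HTTP is JSON-RPC under the hood, so a tool call can be issued without any MCP SDK. A minimal Python sketch, assuming the endpoint accepts the standard `tools/call` framing and that the `exec` tool takes a `command` argument (neither wire detail is spelled out above, so treat both as assumptions):

```python
import json
import urllib.request

# Placeholders from the setup steps above.
MCP_URL = "http://localhost:7070/api/v1/hiveboxes/myworkspace/mcp"
API_KEY = "mysecretkey"

def call_tool(name, arguments, request_id=1):
    """Build a standard MCP tools/call JSON-RPC 2.0 payload."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def send(payload):
    """POST a JSON-RPC payload to the sandbox's MCP endpoint."""
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(payload).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running daemon and sandbox):
#   send(call_tool("exec", {"command": "uname -a"}))
```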
Each sandbox automatically gets its own OpenCode AI agent, accessible through the same daemon port. No extra setup — create a sandbox and start chatting.
OpenCode doesn't run inside the sandbox — it runs as a separate `opencode serve` process on the host, connected to the hivebox via MCP. The daemon proxies all requests through port 7070, so each sandbox's agent is accessible at `/api/v1/hiveboxes/{id}/opencode/` without exposing internal ports.
```bash
# Create a sandbox (opencode starts automatically)
curl -X POST http://localhost:7070/api/v1/hiveboxes \
  -H "Authorization: Bearer mysecretkey" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-agent"}'

# Connect to the real-time event stream (SSE)
curl -N http://localhost:7070/api/v1/hiveboxes/my-agent/opencode/event \
  -H "Authorization: Bearer mysecretkey"

# Create a session
curl -X POST http://localhost:7070/api/v1/hiveboxes/my-agent/opencode/session \
  -H "Authorization: Bearer mysecretkey" \
  -H "Content-Type: application/json" -d '{}'

# Send a prompt (response streams via SSE)
curl -X POST http://localhost:7070/api/v1/hiveboxes/my-agent/opencode/session/{SESSION_ID}/prompt_async \
  -H "Authorization: Bearer mysecretkey" \
  -H "Content-Type: application/json" \
  -d '{"parts":[{"type":"text","text":"create a hello world script"}]}'
```

OpenCode is enabled by default. Disable it with `HIVEBOX_OPENCODE=false`.
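The `/opencode/event` stream is plain SSE, so it can be consumed with the standard library alone. A minimal Python sketch; the URL and key are the placeholders from the curl examples, and the event payload schema is whatever OpenCode emits (not documented here):

```python
import json
import urllib.request

# Placeholders from the curl examples above.
EVENT_URL = "http://localhost:7070/api/v1/hiveboxes/my-agent/opencode/event"
API_KEY = "mysecretkey"

def parse_sse_data(line):
    """Return the JSON payload of a `data:` SSE line, else None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None  # ignore comments, `event:` fields, keep-alives
    return json.loads(line[len("data:"):].strip())

def stream_events():
    """Yield decoded events from the agent's SSE endpoint."""
    req = urllib.request.Request(
        EVENT_URL, headers={"Authorization": f"Bearer {API_KEY}"}
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # the stream stays open; iterate line by line
            event = parse_sse_data(raw.decode("utf-8"))
            if event is not None:
                yield event

# Usage (requires a running daemon and sandbox):
#   for event in stream_events():
#       print(event)
```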
HiveBox includes a built-in web dashboard for managing hiveboxes from the browser. Start the daemon and navigate to:
http://localhost:7070/dashboard
The dashboard provides:
- Login with API key
- Create, list, and destroy hiveboxes
- Built-in terminal to execute commands in any hivebox
- Agent playground with real-time SSE streaming for each hivebox
- Real-time hivebox status (uptime, TTL, resource usage, command count)
- Auto-refresh every 5 seconds
No external dependencies — everything is embedded in the binary.
HiveBox uses three components:
- Alpine Linux host — stock, unmodified Alpine (or any Linux with kernel 5.15+)
- `hivebox` binary — single static Rust binary (~5 MB) managing hivebox lifecycle
- Rootfs image — single Alpine base squashfs image mounted with overlayfs
Every hivebox is protected by six independent security layers:
```text
+---------------------------------------------------+
|                   User Command                    |
+---------------------------------------------------+
|  6. seccomp-BPF   — syscall allow-list            |
|  5. Capabilities  — minimal privilege set         |
|  4. Landlock LSM  — filesystem path control       |
|  3. pivot_root    — filesystem isolation          |
|  2. cgroup v2     — resource limits               |
|  1. Namespaces    — process/net/mount/user        |
+---------------------------------------------------+
|                   Linux Kernel                    |
+---------------------------------------------------+
```
- Namespaces (PID, Mount, Network, User, UTS, IPC) — complete process isolation
- cgroup v2 — memory, CPU, PID limits; swap disabled
- pivot_root — host filesystem becomes unreachable
- Landlock LSM — path-based filesystem access control
- Capability dropping — minimal privilege set + `NO_NEW_PRIVS`
- seccomp-BPF — syscall allow-list (~80 syscalls by default, ~40 in strict mode)
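These layers compose ordinary kernel interfaces. For instance, the `NO_NEW_PRIVS` flag is a single `prctl(2)` call that any unprivileged process can make on itself, after which `execve()` can never grant more privileges (setuid binaries and file capabilities stop escalating). A minimal Python sketch via ctypes, purely illustrative: the actual `hivebox` binary is Rust, and this is not its code.

```python
import ctypes

# Constants from <linux/prctl.h>.
PR_SET_NO_NEW_PRIVS = 38
PR_GET_NO_NEW_PRIVS = 39

_libc = ctypes.CDLL(None, use_errno=True)  # libc of the running process

def set_no_new_privs():
    """Irreversibly forbid privilege escalation for this process and its children."""
    return _libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)

def get_no_new_privs():
    """Return 1 if NO_NEW_PRIVS is set on this process, 0 otherwise."""
    return _libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0)
```

Sandboxes typically set this flag before installing a seccomp filter, since an unprivileged process may only attach a filter once escalation is ruled out.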
Each hivebox uses overlayfs over squashfs base images:
```text
merged/   (rw)          <- hivebox sees this
+-- upper/    (tmpfs)   <- writes go here (vanishes on destroy)
+-- squashfs/ (ro)      <- shared Alpine base across all hiveboxes
```
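The copy-on-write semantics sketched above can be modeled in a few lines: reads fall through to the shared read-only base until a write lands in the per-sandbox upper layer, and deletions are recorded as whiteouts rather than touching the base. A toy Python model, illustrative only, not how the kernel implements overlayfs:

```python
class Overlay:
    """Toy model of overlayfs copy-on-write lookup semantics."""

    def __init__(self, lower):
        self.lower = dict(lower)  # read-only squashfs base (shared)
        self.upper = {}           # tmpfs upper layer (per hivebox)
        self.whiteouts = set()    # deletions recorded in the upper layer

    def read(self, path):
        if path in self.whiteouts:
            raise FileNotFoundError(path)   # whiteout hides the base file
        if path in self.upper:
            return self.upper[path]         # upper layer shadows the base
        return self.lower[path]             # fall through to the base

    def write(self, path, data):
        self.whiteouts.discard(path)
        self.upper[path] = data             # copy-up: base is never modified

    def delete(self, path):
        self.upper.pop(path, None)
        self.whiteouts.add(path)
```

Destroying a hivebox just drops the tmpfs upper layer, which is why teardown is instant and the base image stays pristine for every other sandbox.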
Three modes: `none` (default, no network), `isolated` (NAT to internet), `shared` (bridge between hiveboxes).
Requires a Rust toolchain. Target platform: Linux with kernel 5.15+.
```bash
# Development build
cargo build

# Static release build (for Alpine deployment)
cargo build --release --target x86_64-unknown-linux-musl

# The binary is fully static — copy it anywhere
cp target/x86_64-unknown-linux-musl/release/hivebox /usr/bin/
```

```bash
# Build the image
docker build -t hivebox .

# Run with privileged mode and host cgroup namespace
docker run --privileged --cgroupns=host -p 7070:7070 \
  -e HIVEBOX_API_KEY=your-secret-key \
  -e HIVEBOX_PACKAGES="python3 nodejs git" \
  hivebox
```

Environment variables:
| Variable | Description | Default |
|---|---|---|
| `HIVEBOX_API_KEY` | API key for authentication (`Authorization: Bearer <key>`) | (none — auth disabled) |
| `HIVEBOX_OPENCODE` | Enable OpenCode agent per sandbox (`true`/`false`) | `true` |
| `HIVEBOX_OPENCODE_BASE_URL` | Default LLM base URL for all sandboxes | (opencode default) |
| `HIVEBOX_OPENCODE_API_KEY` | Default LLM API key for all sandboxes | (opencode default) |
| `HIVEBOX_OPENCODE_MODEL` | Default LLM model for all sandboxes | (opencode default) |
| `HIVEBOX_OPENCODE_SKILLS_PATH` | Path to skills directory (mount custom skills) | `/root/.config/opencode/skills` |
| `HIVEBOX_OPENCODE_MCPS` | JSON of global MCP servers added to every sandbox | (none) |
| `HIVEBOX_PACKAGES` | Extra Alpine packages to install at container startup | (none) |
| `HIVEBOX_PIP_PACKAGES` | Extra pip packages to install at container startup | (none) |
| `HIVEBOX_NPM_PACKAGES` | Extra npm global packages to install at container startup | (none) |
All LLM and MCP settings can be overridden per-sandbox at creation time via the API (see API docs).
Note:
- `--privileged` is required for Linux namespaces, cgroups, and mount operations.
- `--cgroupns=host` gives access to the host cgroup hierarchy, needed for memory/CPU/PID limits on hiveboxes. Without it, hiveboxes still work but without resource limits.
```bash
sudo bash scripts/build-images.sh
```

This builds the Alpine base squashfs image. To pre-install packages in all hiveboxes, add them to the Dockerfile or the base image build script. For per-hivebox packages, use `hivebox exec <name> -- apk add <package>`.
```bash
# Direct
hivebox daemon --port 7070 --api-key your-secret-key

# OpenRC (Alpine)
sudo cp config/hivebox.openrc /etc/init.d/hivebox
sudo rc-update add hivebox default
sudo rc-service hivebox start
```

See docs/deployment.md for full production setup.
| Method | Path | Description |
|---|---|---|
| `GET` | `/healthz` | Health check (no auth) |
| `POST` | `/api/v1/hiveboxes` | Create hivebox |
| `GET` | `/api/v1/hiveboxes` | List hiveboxes |
| `GET` | `/api/v1/hiveboxes/:id` | Get hivebox details |
| `POST` | `/api/v1/hiveboxes/:id/exec` | Execute command |
| `POST` | `/api/v1/hiveboxes/:id/mcp` | MCP over HTTP |
| `PUT` | `/api/v1/hiveboxes/:id/files` | Upload file |
| `GET` | `/api/v1/hiveboxes/:id/files` | Download file |
| `DELETE` | `/api/v1/hiveboxes/:id` | Destroy hivebox |
| `GET` | `/api/v1/analytics` | Metrics history |
| `ANY` | `/api/v1/hiveboxes/:id/opencode/*` | OpenCode agent proxy |
See docs/api.md for full API reference.
- Architecture — design, filesystem layout, networking
- API Reference — REST API endpoints and examples
- CLI Reference — all commands and flags
- Deployment Guide — production setup, OpenRC, systemd
- Security Model — threat model, security layers, limitations
HiveBox is experimental software under active development. It is not production-ready and should not be used in critical or production environments. The authors assume no responsibility for any damage, data loss, or security issues that may arise from its use. Use at your own risk.
BSD 3-Clause — see LICENSE.
