Decision Infrastructure for AI agents.
Stop agents before they make expensive mistakes.
Try it in 10 seconds
```bash
npx dashclaw-demo
```

No setup. Opens Decision Replay automatically.
Works with:
LangChain • CrewAI • OpenClaw • OpenAI • Anthropic • AutoGen • Claude Code • Codex • Gemini CLI • Custom agents
Intercept decisions. Enforce policies. Record evidence.
Agent → DashClaw → External Systems
DashClaw sits between your agents and your external systems. It evaluates policies before an agent action executes and records verifiable evidence of every decision.
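The interception pattern can be sketched as follows. This is an illustrative stub, not the real SDK: a local risk threshold stands in for DashClaw's server-side policy evaluation, and the function names are invented for the example (the actual SDK calls appear in the Quickstart below).

```javascript
// Illustrative stub: a local risk threshold stands in for DashClaw's
// server-side policy evaluation (not the real SDK API).
function guard(action) {
  return action.risk_score < 70
    ? { allowed: true }
    : { allowed: false, reason: 'risk above threshold' };
}

// The key property: policy is evaluated BEFORE the action runs, so a
// blocked action produces a decision record but no side effects.
function executeGoverned(action, execute) {
  const decision = guard(action);
  if (!decision.allowed) {
    return { status: 'blocked', reason: decision.reason };
  }
  return { status: 'completed', result: execute() };
}

console.log(executeGoverned({ action_type: 'deploy', risk_score: 85 }, () => 'deployed'));
// → { status: 'blocked', reason: 'risk above threshold' }
```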
$0 to deploy — Vercel free tier + Neon free tier. Click the button, add the Neon integration when prompted, fill in the environment variables, and you're live. Database schema is created automatically during the build — no manual migration step required.
- Open your app — Visit `https://your-app.vercel.app` and sign in.
- Copy the snippet — Mission Control shows a ready-to-run code example with your API key and base URL pre-filled.
- Run it — `node --env-file=.env demo.js` and watch governance happen.
- Live decision stream — Create a free Upstash Redis instance and add `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` to your Vercel env vars. Without this, Mission Control uses in-memory events (fine for getting started, but they won't persist across serverless invocations).
- Verify at /setup — Open `https://your-app.vercel.app/setup` to confirm all systems are green.
Three ways to get governed — pick what fits your workflow:
Give your AI agent the dashclaw-platform-intelligence skill and it instruments itself — no code changes, no manual wiring. The agent registers with DashClaw, sets up guard checks, records decisions, and starts tracking assumptions automatically.
```bash
# Download the skill into your agent's skill directory
cp -r public/downloads/dashclaw-platform-intelligence .claude/skills/
```

Set two environment variables and your agent is governed on its next run:

```bash
export DASHCLAW_BASE_URL=https://your-dashclaw-instance.com
export DASHCLAW_API_KEY=your_api_key
```

This is the fastest path. We gave our own OpenClaw agent the skill and it put itself on DashClaw in one conversation.
Govern every Bash, Edit, Write, and MultiEdit call Claude Code makes — no SDK instrumentation needed:
```bash
cp hooks/dashclaw_pretool.py .claude/hooks/
cp hooks/dashclaw_posttool.py .claude/hooks/
```

Set `DASHCLAW_BASE_URL`, `DASHCLAW_API_KEY`, and `DASHCLAW_HOOK_MODE=enforce`. Every tool call becomes a governed, replayable decision record. See hooks/README.md for the full guide.
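Claude Code runs hooks registered in `.claude/settings.json`. A sketch wiring the copied scripts to the four tools named above (the `python3` invocation and relative paths are assumptions; adjust to your setup, and see hooks/README.md for the authoritative wiring):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Edit|Write|MultiEdit",
        "hooks": [{ "type": "command", "command": "python3 .claude/hooks/dashclaw_pretool.py" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash|Edit|Write|MultiEdit",
        "hooks": [{ "type": "command", "command": "python3 .claude/hooks/dashclaw_posttool.py" }]
      }
    ]
  }
}
```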
For custom agents where you want precise control over what gets governed:
```bash
npm install dashclaw   # Node.js
pip install dashclaw   # Python
```

The 4-step governance loop — Guard, Record, Verify, Outcome — is covered in the Quickstart below.
DashClaw is not observability. It is control before execution.
AI agents generate actions from goals and context; they do not follow deterministic code paths. Debugging alone is therefore insufficient: agents require governance.
DashClaw provides decision infrastructure to:
- Intercept risky agent actions.
- Enforce policy checks before execution.
- Require human approval (HITL) for sensitive operations.
- Record verifiable decision evidence to detect reasoning drift.
- Track agent learning velocity — the only platform that measures whether your agents are getting better or worse over time.
Run DashClaw instantly with one command:

```bash
npx dashclaw-demo
```

What happens:
- A local DashClaw demo runtime starts automatically.
- A demo agent attempts a high-risk production deploy.
- DashClaw intercepts the decision and blocks the action before execution.
- Your browser opens directly to the Decision Replay showing the governance trail.
No repo clone. No environment variables. No configuration. Just one command.
- 🔴 High risk score (85)
- 🛑 Policy requires approval before deploy
- 🧠 Assumptions recorded by the agent
- 📊 Full decision timeline with outcome
Mission Control — Real-time strategic posture, decision timeline, and intervention feed.
Approval Queue — Human-in-the-loop intervention with risk scores and one-click Allow / Deny.
Guard Policies — Declarative rules that govern agent behavior before actions execute.
Drift Detection — Statistical behavioral drift analysis with critical alerts when agents deviate from baselines.
Fastest: Install the dashclaw-platform-intelligence skill and let your agent instrument itself.
Hands-on: Use the OpenAI Governed Agent Starter to see the SDK in a real customer communication workflow:
```bash
cd examples/openai-governed-agent
npm install && cp .env.example .env
# Add your DASHCLAW_API_KEY to .env
node index.js
```

Node.js:

```bash
npm install dashclaw
```

Python:

```bash
pip install dashclaw
```

Node.js:
```javascript
import { DashClaw, GuardBlockedError, ApprovalDeniedError } from 'dashclaw';

const claw = new DashClaw({
  baseUrl: process.env.DASHCLAW_BASE_URL, // or your DashClaw instance URL
  apiKey: process.env.DASHCLAW_API_KEY,
  agentId: 'my-agent'
});
```

Python:
```python
import os
from dashclaw.client import DashClaw, GuardBlockedError, ApprovalDeniedError

claw = DashClaw(
    base_url=os.environ["DASHCLAW_BASE_URL"],
    api_key=os.environ["DASHCLAW_API_KEY"],
    agent_id="my-agent"
)
```

The minimal governance loop wraps your agent's real-world actions:
```javascript
// 1. Guard -> "Can I do X?"
const decision = await claw.guard({
  action_type: 'database_query',
  risk_score: 50
});

// 2. Record -> "I am attempting X."
const action = await claw.createAction({
  action_type: 'database_query',
  declared_goal: 'Extract user statistics'
});

// 3. Verify -> "I believe Y is true while doing X."
await claw.recordAssumption({
  action_id: action.action_id,
  assumption: 'The database is read-only for these credentials'
});

try {
  // Execute the real action here...
  // ...
  // 4. Outcome -> "X finished with result Z."
  await claw.updateOutcome(action.action_id, { status: 'completed' });
} catch (error) {
  await claw.updateOutcome(action.action_id, { status: 'failed', error_message: error.message });
}
```

Learning loop: The guard response includes a `learning` field with your agent's historical performance — recent scores, drift status, and patterns learned from past outcomes. Your agent gets smarter every cycle.
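For example, an agent could fold that history into its next decision. The `learning` object below is mocked, and its field names are assumptions for illustration rather than the documented schema; in practice it would come from `await claw.guard(...)`:

```javascript
// Mocked guard response; the real one comes from `await claw.guard(...)`.
// Field names inside `learning` are assumptions for illustration.
const decision = {
  allowed: true,
  learning: {
    recent_scores: [62, 55, 48],  // assumed: recent risk scores, newest last
    drift_status: 'stable',       // assumed: baseline drift verdict
    patterns: ['off-peak deploys succeed more often']
  }
};

// Trend check: is risk trending down across recent decisions?
const scores = decision.learning.recent_scores;
const improving = scores[scores.length - 1] < scores[0];
console.log(improving ? 'agent improving' : 'agent regressing');
// → agent improving
```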
Approve agent actions from the terminal without opening a browser. This is the primary interface for developers using Claude Code, Codex, Gemini CLI, or any terminal-first workflow.
```bash
npm install -g @dashclaw/cli
```

```bash
dashclaw approvals                           # interactive inbox for all pending actions
dashclaw approve <actionId>                  # approve a specific action
dashclaw deny <actionId> --reason "Outside change window"
```

When an agent calls `waitForApproval()`, the SDK prints a structured block to stdout showing the action ID, policy name, risk score, declared goal, and a replay link. Approve from any terminal and the agent unblocks instantly via SSE. The browser dashboard reflects the same decision within one second.
Every governed action has a permanent replay URL:
`<DASHCLAW_BASE_URL>/replay/<actionId>`
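In code, a replay link can be assembled from the same template. The base URL and action ID below are placeholders; in practice they come from your environment and from `createAction()`:

```javascript
// Placeholder values; in practice these come from your env and createAction().
const baseUrl = 'https://your-dashclaw-instance.com';
const actionId = 'act_123';

const replayUrl = `${baseUrl}/replay/${actionId}`;
console.log(replayUrl);
// → https://your-dashclaw-instance.com/replay/act_123
```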
DashClaw includes a standalone Python integration test agent that exercises the major DashClaw SDK methods directly against a running instance.
To run it locally:
```bash
export DASHCLAW_API_KEY="your-api-key"
export DASHCLAW_BASE_URL="http://localhost:3000"

# Run the full SDK test agent
python scripts/test-sdk-agent.py --full
```

See the script comments for more flags and usage.
The fastest path to self-host DashClaw is via Vercel + Neon.
- Fork this repo.
- Deploy to Vercel and connect a free Neon Postgres database.
- Run the interactive setup to configure secrets and run migrations:

  ```bash
  node scripts/setup.mjs
  ```
- Your instance is live. Grab your API key from the dashboard and point your first agent at it.
For the complete API surface, check out the SDK Reference.

