Distributed, fault-tolerant memory system for AI agents — knowledge graphs, semantic search, LLM extraction, real-time hivemind channels, all replicated via Raft consensus.
Quick Start | Features | Architecture | API | Contributing
Your AI agents forget everything between sessions. When you run agent swarms, each agent has its own isolated memory. There's no shared consciousness.
HiveMindDB fixes that. It gives your agents persistent, replicated, shared memory — built on RaftTimeDB for fault tolerance.
```
Agent 1 learns something ──► HiveMindDB ──► Raft consensus
                                  ↓
Agent 2 knows it instantly ◄── real-time push ◄── all nodes
Agent 3 knows it instantly ◄── real-time push ◄── identical
```
Run just HiveMindDB as a single container — in-memory storage with local snapshots. No Raft replication, no SpacetimeDB. Good for local dev and single-agent setups.
```shell
git clone https://github.com/NodeNestor/HiveMindDB.git
cd HiveMindDB
docker build -f deploy/docker/Dockerfile -t hiveminddb .
docker run -d --name hiveminddb -p 8100:8100 -v hivemind-data:/data hiveminddb
```

Connect your agent to http://localhost:8100.
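Once the container is up, any HTTP client can talk to it. Here is a minimal Python sketch against the `POST /api/v1/memories` endpoint listed in the API section below — note that the payload field names (`content`, `user`, `topic`) are assumptions for illustration, not the documented schema:

```python
import json
import urllib.request

BASE = "http://localhost:8100/api/v1"

def memory_payload(content: str, user: str, topic: str = "general") -> bytes:
    # NOTE: these field names are assumptions -- check the API reference
    # of your HiveMindDB version for the real schema.
    return json.dumps({"content": content, "user": user, "topic": topic}).encode()

def add_memory(content: str, user: str, topic: str = "general") -> dict:
    """POST a new memory to a running HiveMindDB instance."""
    req = urllib.request.Request(
        BASE + "/memories",
        data=memory_payload(content, user, topic),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```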
For a fault-tolerant multi-node deployment with Raft consensus, you need all of the NodeNestor repos:
```shell
# Clone all repos into the same parent directory
git clone https://github.com/NodeNestor/HiveMindDB.git
git clone https://github.com/NodeNestor/RaftTimeDB.git
git clone https://github.com/NodeNestor/CodeGate.git
git clone https://github.com/NodeNestor/AgentCore.git

# Use the integration docker-compose (builds everything from source)
# Get it from: https://github.com/NodeNestor/AgentCore/blob/main/examples/docker-compose.swarm.yml
docker compose up -d --build
```

To build from source instead:

```shell
git clone https://github.com/NodeNestor/HiveMindDB.git
cd HiveMindDB
cargo build --release -p hiveminddb   # server binary
cargo build --release -p hmdb         # CLI tool

# Install MCP server dependencies
cd crates/mcp-server && npm install

# Run
./target/release/hiveminddb --listen-addr 0.0.0.0:8100
```

With Claude Code:

```shell
# 1. Start HiveMindDB (build from source or Docker — see above)
docker run -d --name hiveminddb -p 8100:8100 -v hivemind-data:/data hiveminddb

# 2. Clone the repo (needed for the MCP server)
git clone https://github.com/NodeNestor/HiveMindDB.git
cd HiveMindDB/crates/mcp-server && npm install

# 3. Register MCP server with Claude Code
claude mcp add-json hiveminddb '{"command":"node","args":["<path-to>/HiveMindDB/crates/mcp-server/src/index.js","--url","http://localhost:8100"]}'

# Done. Claude Code now has 20 memory tools: remember, recall, search, extract, etc.
```

Note: Replace `<path-to>` with the actual path where you cloned HiveMindDB.
For fully automatic memory — no manual remember calls needed:
```shell
# 1. Start HiveMindDB + register MCP server (see "With Claude Code" above)

# 2. Install hooks (from AgentCore repo)
git clone https://github.com/NodeNestor/AgentCore.git   # if not already cloned
mkdir -p ~/.claude/hooks/hivemind
cp AgentCore/hooks/hivemind/* ~/.claude/hooks/hivemind/
chmod +x ~/.claude/hooks/hivemind/*.sh
```

Then add to `~/.claude/settings.json`:

```json
{
  "env": {
    "HIVEMINDDB_URL": "http://localhost:8100",
    "AGENT_ID": "my-claude",
    "AGENT_NAME": "My Local Claude"
  },
  "hooks": {
    "SessionStart": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "bash ~/.claude/hooks/hivemind/session-start.sh",
        "timeout": 10000
      }]
    }],
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "bash ~/.claude/hooks/hivemind/prompt-search.sh",
        "timeout": 5000
      }]
    }],
    "PostToolUse": [{
      "matcher": "Edit|Write",
      "hooks": [{
        "type": "command",
        "command": "bash ~/.claude/hooks/hivemind/track-changes.sh",
        "timeout": 5000,
        "async": true
      }]
    }],
    "Stop": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "bash ~/.claude/hooks/hivemind/session-stop.sh",
        "timeout": 5000,
        "async": true
      }]
    }]
  }
}
```

Note: Timeouts are in milliseconds. On Windows, prefix commands with `bash` (e.g. `bash ~/.claude/hooks/hivemind/session-start.sh`). Requires `curl` and `jq` on your PATH.
What the hooks do:
| Hook | Event | Effect |
|---|---|---|
| `session-start.sh` | Session starts | Registers agent, recalls recent memories, injects as context |
| `prompt-search.sh` | User submits prompt | Semantic search → injects relevant memories (RAG) |
| `track-changes.sh` | File edited/written | Logs file change as memory (async, non-blocking) |
| `session-stop.sh` | Claude stops | Sends heartbeat (async, non-blocking) |
Required env vars: `HIVEMINDDB_URL`, `AGENT_ID`, `AGENT_NAME`
AgentCore auto-installs hooks and MCP tools when HIVEMINDDB_URL is set:
```shell
# Add to your docker-compose.yml or .env
HIVEMINDDB_URL=http://hivemind:8100
MEMORY_PROVIDER=hiveminddb
```

That's it. The entrypoint module `52-memory-hooks.sh` handles everything:

- Copies hook scripts to the agent
- Merges hook config into `settings.json`
- Auto-discovers MCP tools from `library.json`
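To make the wiring concrete, here is a minimal compose sketch connecting an AgentCore agent to HiveMindDB. The two environment variables come from the snippet above; the service layout and the `agentcore` image name are assumptions for illustration:

```yaml
services:
  hivemind:
    image: hiveminddb          # built from the HiveMindDB Dockerfile
    ports:
      - "8100:8100"
    volumes:
      - hivemind-data:/data

  agent:
    image: agentcore           # hypothetical AgentCore image name
    environment:
      HIVEMINDDB_URL: http://hivemind:8100
      MEMORY_PROVIDER: hiveminddb
    depends_on:
      - hivemind

volumes:
  hivemind-data:
```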
```shell
# Use CodeGate as the LLM provider for extraction
HIVEMIND_LLM_PROVIDER=codegate   # Uses http://localhost:9212/v1

# Or point at your CodeGate instance directly:
HIVEMIND_LLM_PROVIDER=http://codegate:9212/v1
```

Example session:

```
> Remember that the user prefers Rust over Python for new projects.
✓ Stored memory #1 under topic "preferences"

> What does the user prefer?
Found 1 result:
  #1 [score: 0.95] User prefers Rust over Python for new projects
     tags: preferences

> Extract knowledge from this conversation
Added 3 memories:
  #2: User is building RaftTimeDB
  #3: User prefers dark mode
  #4: User works with SpacetimeDB
Added 2 entities:
  RaftTimeDB (Project)
  SpacetimeDB (Technology)
Added 1 relationship:
  RaftTimeDB --uses--> SpacetimeDB

> Who maintains RaftTimeDB?
Entity: ludde (Person)
  --maintains--> RaftTimeDB (Project)
  --prefers--> Rust (Language)
```
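The graph queries shown above map onto REST endpoints listed in the API section (`/entities/find`, `/graph/traverse`). A hedged Python sketch — the JSON field names (`name`, `entity_id`, `depth`) are assumptions, not the documented request schema:

```python
import json
import urllib.request

BASE = "http://localhost:8100/api/v1"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request against the HiveMindDB REST API."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def find_entity(name: str) -> dict:
    # POST /entities/find -- the "name" field is an assumption
    with urllib.request.urlopen(build_request("/entities/find", {"name": name})) as r:
        return json.load(r)

def traverse(entity_id: int, depth: int = 3) -> dict:
    # POST /graph/traverse -- mirrors `hmdb traverse <id> --depth <n>`;
    # the JSON field names are assumptions
    payload = {"entity_id": entity_id, "depth": depth}
    with urllib.request.urlopen(build_request("/graph/traverse", payload)) as r:
        return json.load(r)
```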
| Feature | Description |
|---|---|
| Persistent Memory | Facts, preferences, and knowledge survive across sessions |
| Knowledge Graph | Entities + typed relationships with graph traversal |
| Hybrid Search | Keyword + vector similarity — local embeddings by default (22M param ONNX model, CPU-only, zero config) |
| LLM Extraction | Automatically extract facts, entities, and relationships from conversations |
| Bi-Temporal | Old facts are invalidated, not deleted — query "what did we know last Tuesday?" |
| Hivemind Channels | Agents subscribe to channels, get real-time WebSocket updates |
| Conflict Resolution | LLM determines ADD/UPDATE/NOOP for new facts vs existing knowledge |
| Full Audit Trail | Every memory change is recorded — who changed what, when, and why |
| Snapshot Persistence | Periodic JSON snapshots to disk, auto-restore on restart |
| Raft Replication | Optional RaftTimeDB replication for multi-node fault tolerance |
| MCP Native | Drop-in MCP server for Claude Code, OpenCode, Aider (20 tools) |
| AgentCore Compatible | Same remember/recall/forget/search interface |
| CodeGate Support | Use your CodeGate proxy for LLM and embedding calls |
| REST + WebSocket API | Works with any HTTP client or agent framework |
| Graceful Shutdown | Ctrl+C saves final snapshot, drains connections cleanly |
```
┌─────────────────────────────────────────────┐
│           Your AI Agent Swarm               │
│     Claude Code · OpenCode · Aider · …      │
└──────────────────┬──────────────────────────┘
                   │ MCP / REST / WebSocket
┌──────────────────▼──────────────────────────┐
│            HiveMindDB Sidecar               │
│  Memory Engine · Knowledge Graph · Channels │
│  LLM Extraction · Vector Embeddings         │
│  Snapshot Persistence · Replication Client  │
└──────────────────┬──────────────────────────┘
                   │ WebSocket (replication)
┌──────────────────▼──────────────────────────┐
│        RaftTimeDB (Raft Consensus)          │
│   Multi-shard · Leader forwarding · TLS     │
└──────────────────┬──────────────────────────┘
                   │
┌──────────────────▼──────────────────────────┐
│     SpacetimeDB (Deterministic Storage)     │
│      WASM module · Tables · Reducers        │
└─────────────────────────────────────────────┘
```
```shell
hmdb status                                 # Cluster stats + embedding/extraction info
hmdb add "User prefers Rust" --user ludde   # Add a memory
hmdb search "what does the user prefer?"    # Hybrid search
hmdb extract "User said they prefer Rust"   # LLM extraction
hmdb extract --file conversation.json       # Extract from conversation file
hmdb entity "RaftTimeDB"                    # Entity + relationships
hmdb traverse 1 --depth 3                   # Graph traversal
hmdb history 42                             # Audit trail
hmdb forget 42 --reason "outdated"          # Invalidate
hmdb channels                               # List channels
hmdb agents                                 # List agents
```

| Tool | Description |
|---|---|
| `remember` | Store memory under a topic |
| `recall` | Recall all memories for a topic |
| `forget` | Invalidate all memories for a topic |
| `search` | Hybrid search (keyword + vector) |
| `list_topics` | List topics with counts |
| Tool | Description |
|---|---|
| `memory_add` | Add memory with full metadata |
| `memory_search` | Hybrid search with filters |
| `memory_history` | Full audit trail |
| `extract` | LLM knowledge extraction from conversation |
| `graph_add_entity` | Add knowledge graph entity |
| `graph_add_relation` | Create entity relationship |
| `graph_query` | Find entity + relationships |
| `graph_traverse` | Graph traversal from entity |
| `channel_create` | Create hivemind channel |
| `channel_share` | Share memory to channel |
| `channel_list` | List all channels |
| `agent_register` | Register agent in hivemind |
| `agent_status` | List agents + status |
| `hivemind_status` | Full cluster status |
Full REST API at http://localhost:8100/api/v1/:
| Endpoint | Method | Description |
|---|---|---|
| `/memories` | POST/GET | Add/list memories |
| `/memories/:id` | GET/PUT/DELETE | Get, update, invalidate |
| `/memories/:id/history` | GET | Audit trail |
| `/search` | POST | Hybrid search (keyword + vector) |
| `/extract` | POST | LLM knowledge extraction |
| `/entities` | POST | Add entity |
| `/entities/:id` | GET | Get entity |
| `/entities/find` | POST | Find by name |
| `/entities/:id/relationships` | GET | Entity relationships |
| `/relationships` | POST | Add relationship |
| `/graph/traverse` | POST | Graph traversal |
| `/channels` | POST/GET | Create/list channels |
| `/channels/:id/share` | POST | Share memory to channel |
| `/agents/register` | POST | Register agent |
| `/agents` | GET | List agents |
| `/agents/:id/heartbeat` | POST | Agent heartbeat |
| `/status` | GET | Cluster stats |
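For instance, hybrid search goes through `POST /search`. A Python sketch — the request body fields (`query`, `limit`) are assumptions for illustration:

```python
import json
import urllib.request

def search_payload(query: str, limit: int = 5) -> bytes:
    # NOTE: "query"/"limit" field names are assumptions, not the documented schema
    return json.dumps({"query": query, "limit": limit}).encode()

def search(query: str, limit: int = 5,
           base: str = "http://localhost:8100/api/v1") -> dict:
    """Run a hybrid (keyword + vector) search against a running instance."""
    req = urllib.request.Request(
        base + "/search",
        data=search_payload(query, limit),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```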
`POST /api/v1/relationships`

```json
{
  "source_entity_id": 1,
  "target_entity_id": 2,
  "relation_type": "uses",
  "created_by": "agent-1"
}
```

Required: `source_entity_id`, `target_entity_id`, `relation_type`, `created_by`

`POST /api/v1/channels`

```json
{
  "name": "general",
  "created_by": "agent-1",
  "description": "General discussion channel",
  "channel_type": "broadcast"
}
```

Required: `name`, `created_by`. Optional: `description`, `channel_type`

`POST /api/v1/agents/register`

```json
{
  "agent_id": "agent-1",
  "name": "Claude Code",
  "agent_type": "claude",
  "capabilities": ["code", "memory"],
  "metadata": {}
}
```

Required: `agent_id`, `name`, `agent_type`. Optional: `capabilities`, `metadata`
WebSocket at ws://localhost:8100/ws for real-time channel subscriptions.
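A subscriber sketch using the third-party `websockets` package — only the `ws://localhost:8100/ws` endpoint comes from this document; the subscription message shape (`action`/`channel`) is a hypothetical format for illustration:

```python
import json

def subscribe_msg(channel: str) -> str:
    # Hypothetical subscription message -- the "action"/"channel" fields
    # are assumptions, not a documented protocol.
    return json.dumps({"action": "subscribe", "channel": channel})

async def listen(channel: str) -> None:
    """Subscribe to a hivemind channel and print pushed events as they arrive."""
    import websockets  # third-party: pip install websockets
    async with websockets.connect("ws://localhost:8100/ws") as ws:
        await ws.send(subscribe_msg(channel))
        async for raw in ws:
            print("event:", json.loads(raw))
```

Run it with `asyncio.run(listen("general"))` while a HiveMindDB instance is up.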
| Env Var | Default | Description |
|---|---|---|
| `HIVEMIND_LISTEN_ADDR` | `0.0.0.0:8100` | API address |
| `HIVEMIND_RTDB_URL` | `ws://127.0.0.1:3001` | RaftTimeDB URL |
| `HIVEMIND_LLM_PROVIDER` | `anthropic` | LLM provider (openai/anthropic/ollama/codegate/URL) |
| `HIVEMIND_LLM_API_KEY` | - | LLM API key |
| `HIVEMIND_LLM_MODEL` | `claude-sonnet-4-20250514` | LLM model |
| `HIVEMIND_EMBEDDING_MODEL` | `local:all-MiniLM-L6-v2` | Embedding model (see below) |
| `HIVEMIND_EMBEDDING_API_KEY` | - | Embedding API key (not needed for local) |
| `HIVEMIND_DATA_DIR` | `./data` | Snapshot directory |
| `HIVEMIND_SNAPSHOT_INTERVAL` | `60` | Snapshot interval (seconds) |
| `HIVEMIND_ENABLE_REPLICATION` | `false` | Enable Raft replication |
HiveMindDB includes built-in local embeddings powered by fastembed (ONNX Runtime, CPU-only). No external API key or service needed — embeddings just work out of the box.
Default model: all-MiniLM-L6-v2 (22M params, 384 dimensions, ~22MB download on first run)
| Provider | Format | Example |
|---|---|---|
| Local (default) | `local:<model>` | `local:all-MiniLM-L6-v2` |
| OpenAI | `openai:<model>` | `openai:text-embedding-3-small` |
| Ollama | `ollama:<model>` | `ollama:nomic-embed-text` |
| CodeGate | `codegate:<model>` | `codegate:text-embedding-3-small` |
| Custom URL | `http://host:port/v1` | Any OpenAI-compatible endpoint |
| Model | Params | Dims | Best For |
|---|---|---|---|
| `all-MiniLM-L6-v2` | 22M | 384 | General purpose (default, fastest) |
| `bge-small-en-v1.5` | 33M | 384 | English, high quality |
| `snowflake-arctic-embed-xs` | ~22M | 384 | Lightweight, fast |
| `nomic-embed-text-v1.5` | 137M | 768 | Multilingual, highest quality |
| `jina-embeddings-v2-base-code` | 137M | 768 | Code-aware |
| `embedding-gemma-300m` | 300M | - | Google, multilingual |
See embeddings.rs for the full list of 20+ supported models.
To build without the local embedding engine (smaller binary, API-only):
```shell
cargo build --no-default-features
```

Then set `HIVEMIND_EMBEDDING_MODEL=openai:text-embedding-3-small` and provide an API key.
```shell
cargo build                           # Build core + CLI
cargo test                            # Run all tests
cd crates/mcp-server && npm install   # Install MCP server deps
```

Contributions welcome! See CONTRIBUTING.md for guidelines.