skynetcmd/m3-memory

🧠 M3 Memory



Your AI agents finally remember things between sessions.

M3 Memory gives Claude Code, Gemini CLI, and Aider persistent, private memory that runs entirely on your hardware. No cloud. No API keys. No subscriptions.

  • 🔒 100% private — everything stays on your machine, works fully offline
  • ⚡ One config line — pip install and a single JSON block, that's it
  • 🧠 Persistent across sessions and devices — your agent picks up right where it left off

⚡ Quick Start (1 minute)

Prerequisites: Python 3.11+, and a local embedding server — Ollama, LM Studio, or any OpenAI-compatible endpoint. Ollama is the easiest way to start:

ollama pull nomic-embed-text && ollama serve

Install:

pip install m3-memory

Add to your MCP config:

{
  "mcpServers": {
    "memory": { "command": "mcp-memory" }
  }
}

Restart your agent. It now has memory.

✅ Claude Code   ✅ Gemini CLI   ✅ Aider   ✅ OpenClaw

Done.


😩 The Problem

Every time you start a new session, your AI agent has amnesia. It forgets your project structure, your preferences, the decisions you made together yesterday.

You paste the same context. You re-explain the same architecture. You correct the same mistakes.

Worse, when facts change — a port number, a dependency version, a deployment target — there's no mechanism to update what the agent "knows." Old and new information coexist. The agent picks whichever it sees first. Contradictions accumulate silently, and you don't notice until something breaks.

Agents that rely on file-based memory (like OpenClaw) face an additional problem: performance tends to degrade as the number of memory files grows. More files mean slower reads, slower context loading, and eventually a system that bogs down under its own history.

This is the default experience with every major coding agent today.

✅ With M3 Memory

Your agents don't forget anymore — and you don't have to repeat yourself to your AI. Architecture decisions, server configs, debugging history, your preferences — all remembered, all searchable, all persistent across sessions and devices.

When facts change, M3 detects the contradiction, updates the record, and preserves the full history. No stale data. No manual cleanup. No "actually, I told you yesterday..."

You don't change how you work. You don't manage memory. You just talk to your agent, and it knows what it should know.


💡 The Moment It Clicks

Session 1:

You: "Our API server runs on port 8080."

Session 2 (three days later):

You: "We moved the API to port 9000."

Session 3 (a week later):

You: "What port is the API on?"


Without M3:

Agent: "I don't have that information. Could you tell me what port your API runs on?"

With M3:

Agent: "Port 9000. (Updated from 8080 — the change was recorded on March 12th.)"


No prompts. No manual logic. The contradiction was detected and resolved automatically. The full history is preserved.


🎯 Who This Is For

Use M3 Memory if you:

  • Use Claude Code, Gemini CLI, Aider, or any MCP-compatible agent
  • Want persistent memory that survives across sessions and devices
  • Prefer local-first — no cloud dependency, no API costs, works offline
  • Don't want to build and maintain memory infrastructure yourself
  • Care about privacy and data ownership
  • Work across multiple machines and want your agent's knowledge to follow you

Not for you if:

  • You're building LangChain or CrewAI pipelines — consider Mem0, which integrates natively with those frameworks
  • You want a full stateful agent runtime with its own orchestration — consider Letta
  • You only need short-term chat context within a single session

🎯 Use Cases

  • 🤖 Coding agents — Remember architecture decisions, configs, and debugging steps across sessions — stop re-explaining your project every time
  • 🧠 Personal assistants — Persist user preferences, goals, and history long-term — your agent learns who you are
  • 🧑‍💻 Dev workflows — Track environment changes, server configs, and fixes over time — build institutional knowledge automatically
  • 🌐 Multi-device setups — You're debugging a deployment issue at a coffee shop. Claude Code recalls the architecture decisions from last week, the server configs from yesterday, and the troubleshooting steps that worked before — all from local SQLite, no internet required. Later, at your Windows desktop at home, Gemini CLI picks up exactly where you left off. Same memories. Same knowledge graph. Synced the moment you hit the local network.

✨ Features

🔍 Hybrid Search

TL;DR: You get the right memory, not just a similar one. Three-stage pipeline: FTS5 keyword matching, semantic vector similarity, and MMR diversity re-ranking. Results scored with full breakdown via memory_suggest. Better recall than vector-only search, especially for technical content with exact names and versions.
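The diversity stage can be pictured with a small sketch of Maximal Marginal Relevance re-ranking. This is illustrative only: the function names, the lam weight, and the data shapes are assumptions, not M3's actual code.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr_rerank(query_vec, candidates, lam=0.7, k=5):
    """Greedy MMR: each pick balances relevance to the query (lam)
    against similarity to results already selected (1 - lam), so
    near-duplicate memories don't crowd out distinct ones."""
    selected, pool = [], list(candidates)  # candidates: (memory_id, vector)
    while pool and len(selected) < k:
        best = max(
            pool,
            key=lambda item: lam * cosine(query_vec, item[1])
            - (1 - lam) * max((cosine(item[1], s[1]) for s in selected), default=0.0),
        )
        selected.append(best)
        pool.remove(best)
    return [mid for mid, _ in selected]
```

With two near-duplicate candidates and one distinct but less similar one, MMR keeps the best match and promotes the distinct result over the duplicate.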

🚫 Automatic Contradiction Detection

TL;DR: Old facts fix themselves. Write conflicting information and M3 detects the contradiction automatically. The outdated memory is superseded via bitemporal versioning, a supersedes relationship is recorded, and the full history is preserved. No stale data. No manual cleanup.

⏳ Bitemporal History

TL;DR: Time-travel debugging for your agent's knowledge. Query as_of="2026-01-15" to see exactly what your agent believed on any past date. Every change is tracked with both the time the fact was true and the time it was recorded. Essential for compliance audits and understanding how your agent's knowledge evolved.
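Conceptually, the as_of query reduces to an interval check over valid-time columns. A minimal sketch (the real tool presumably queries SQLite; these row shapes are invented for illustration):

```python
def as_of(rows, day):
    """Return the facts believed true on a given ISO date.
    Each row is (content, valid_from, valid_to); a valid_to of None
    means the fact is still current. ISO date strings sort
    lexicographically, so plain string comparison works."""
    return [
        content
        for content, valid_from, valid_to in rows
        if valid_from <= day and (valid_to is None or day < valid_to)
    ]
```

Querying a date before the change returns the old fact; querying after it returns only the replacement.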

🕸️ Knowledge Graph

TL;DR: Memories connect to each other automatically. Related facts are linked on write when cosine similarity exceeds 0.7. Eight relationship types: related, supports, contradicts, extends, supersedes, references, consolidates, message. Traverse up to 3 hops with memory_graph to explore connected knowledge.
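The auto-linking rule is simple enough to sketch: compare the new memory's embedding against every stored one and record an edge when similarity clears the threshold. Function names and data shapes here are assumptions, not M3's internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def auto_link(new_id, new_vec, existing, threshold=0.7):
    """On write, emit a 'related' edge to every stored memory whose
    embedding clears the cosine-similarity threshold."""
    return [
        (new_id, "related", mem_id)
        for mem_id, vec in existing.items()
        if cosine(new_vec, vec) > threshold
    ]
```

A new memory close to one stored vector and orthogonal to another gets exactly one edge.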

🔄 Cross-Device Sync

TL;DR: Same memory on every machine. Write on your MacBook, continue on your Windows desktop. Bi-directional delta sync across SQLite, PostgreSQL, and ChromaDB. Your agent's knowledge follows you — no cloud intermediary required.

🛡️ GDPR Built-In

TL;DR: Compliance as MCP tools, not afterthoughts. Two dedicated tools handle the legal requirements your agents must respect:

  • gdpr_forget — Article 17 (Right to Erasure): permanently hard-deletes all memories for a user, no trace left behind
  • gdpr_export — Article 20 (Data Portability): exports everything stored for a user as portable JSON, ready to hand over on request

No custom implementation needed. Call the tool, it's done.
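The semantics of the two tools amount to a delete-by-owner and a dump-by-owner. A minimal sketch over an in-memory store (the real tools operate on M3's databases; these shapes are illustrative):

```python
import json

def gdpr_forget(store: dict, user_id: str) -> int:
    """Article 17 sketch: hard-delete every memory owned by the user
    and report how many records were erased."""
    doomed = [mid for mid, m in store.items() if m["user_id"] == user_id]
    for mid in doomed:
        del store[mid]
    return len(doomed)

def gdpr_export(store: dict, user_id: str) -> str:
    """Article 20 sketch: everything stored for the user as portable JSON."""
    return json.dumps([m for m in store.values() if m["user_id"] == user_id])
```

Export before forget if the user wants their data handed over; after gdpr_forget, only other users' records remain.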

🔒 Fully Local + Private

TL;DR: Your data never leaves your machine. Local embeddings via Ollama, LM Studio, or any OpenAI-compatible endpoint. Zero cloud calls. Zero API costs. Works completely offline.

🧹 Self-Maintaining

TL;DR: Memory stays clean without you thinking about it. Automatic decay, expiry purging, orphan pruning, deduplication, and retention enforcement. Run memory_maintenance periodically, or let it handle itself. Old memories consolidate into LLM-generated summaries when a category gets too large.
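The README names decay but not its formula; a common choice, shown here purely as an assumption, is exponential decay with a half-life, so stale entries sink in rankings and eventually qualify for purging:

```python
from datetime import datetime, timezone

def decayed_importance(importance, last_access, half_life_days=30.0, now=None):
    """Illustrative exponential decay: a memory's effective importance
    halves every half_life_days since it was last accessed."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - last_access).total_seconds() / 86400
    return importance * 0.5 ** (age_days / half_life_days)
```

A memory untouched for exactly one half-life retains half its original importance.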


🧰 Start Simple

M3 Memory ships 25 tools, but you don't need most of them to get started. Your agent will discover and use them automatically.

Begin with three: memory_write, memory_search, and memory_update. That covers 90% of daily use. The rest — knowledge graph traversal, deduplication, GDPR compliance, cross-device sync — is there when you need it.


🧰 Core Tools

| Tool | What it does |
| --- | --- |
| memory_write | Store a memory — facts, decisions, preferences, configs, observations |
| memory_search | Retrieve relevant memories using hybrid search |
| memory_suggest | Same as search, but with full score breakdown (vector, BM25, MMR) |
| memory_get | Fetch a specific memory by ID |
| memory_update | Refine existing knowledge — content, title, metadata, importance |

Full list of all 25 tools


🆚 How It Compares

| Feature | M3-Memory | Mem0 | Letta | LangChain Memory |
| --- | --- | --- | --- | --- |
| Local-first | ✅ 100% | ⚠️ partial | ✅ good | ⚠️ partial |
| MCP native | ✅ 25 tools | ⚠️ wrappers | ⚠️ indirect | ❌ no |
| Contradiction handling | ✅ automatic | ⚠️ LLM-based | ⚠️ agent-driven | ⚠️ manual |
| GDPR tools | ✅ built-in | ⚠️ supported | ⚠️ via tools | ❌ custom |
| Cross-device sync | ✅ built-in | ⚠️ limited | ⚠️ git-based | ⚠️ limited |
| Setup | ✅ 1 line | ⚠️ SDK needed | ❌ full runtime | ❌ framework only |
| Cost | ✅ free, MIT | ⚠️ $249/mo Pro | ⚠️ OSS + SaaS | ✅ free |

🏗️ Architecture

graph TD
    subgraph "🤖 AI Agents"
        C[Claude Code]
        G[Gemini CLI]
        A[Aider / OpenClaw]
    end

    subgraph "🌉 MCP Bridge"
        MB[memory_bridge.py — 25 MCP tools]
    end

    subgraph "💾 Storage Layers"
        SQ[(SQLite — Local L1)]
        PG[(PostgreSQL — Sync L2)]
        CH[(ChromaDB — Federated L3)]
    end

    C & G & A <--> MB
    MB <--> SQ
    SQ <-->|Bi-directional Delta Sync| PG
    SQ <-->|Push/Pull| CH

The Memory Write Pipeline

sequenceDiagram
    participant A as Agent
    participant M as M3 Memory
    participant L as Local LLM
    participant S as SQLite

    A->>M: memory_write(content)
    M->>M: Safety Check (XSS / injection / poisoning)
    M->>L: Generate Embedding
    L-->>M: Vector [0.12, -0.05, ...]
    M->>M: Contradiction Detection
    M->>M: Auto-Link Related Memories
    M->>M: SHA-256 Content Hash
    M->>S: Store Memory + Vector
    S-->>M: Success
    M-->>A: Created: <uuid>
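One concrete step from the diagram, the SHA-256 content hash, can be sketched as a duplicate gate before storage. The normalization shown (strip plus lowercase) is a guess at what counts as the same content, not M3's documented behavior:

```python
import hashlib

def content_hash(content: str) -> str:
    """SHA-256 over normalized content; strip + lowercase is an
    assumed normalization, for illustration only."""
    return hashlib.sha256(content.strip().lower().encode("utf-8")).hexdigest()

def is_duplicate(content: str, seen_hashes: set) -> bool:
    """Reject an exact re-write of an existing memory before storage."""
    h = content_hash(content)
    if h in seen_hashes:
        return True
    seen_hashes.add(h)
    return False
```

The same fact written twice, even with different whitespace or casing, hashes identically and is caught on the second write.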

🎬 See It in Action

Demo 1 — Contradiction resolution

Your agent writes two conflicting facts. The old one is automatically superseded — no manual cleanup:

memory_write: "API server runs on port 8080"
memory_write: "API server moved to port 9000"

→ Port 8080 memory superseded. History preserved. Agent now knows port 9000.

Demo: agent writes conflicting facts — old memory auto-superseded, full history preserved


Demo 2 — Hybrid search across 1,000 memories

memory_search: "database connection config"

→ Returns FTS5 keyword matches + semantic neighbors + MMR-diversified results
   with full score breakdown (vector, BM25, MMR)

Demo: memory_search returns FTS5 + vector + MMR ranked results with score breakdown


Demo 3 — Cross-device sync

[MacBook] memory_write: "Deploy target changed to us-east-2"
[Windows desktop] memory_search: "deploy target"

→ Same memory. Instantly available. No cloud intermediary.

Demo: memory written on MacBook appears on Windows desktop via SQLite→PostgreSQL sync

GIFs coming soon — contribute a recording or watch #showcase.


📚 Documentation

| File | Purpose |
| --- | --- |
| QUICKSTART.md | Plain-English guide — new here? Start here |
| CORE_FEATURES.md | Feature overview |
| ARCHITECTURE.md | Full system internals + all 25 MCP tools |
| TECHNICAL_DETAILS.md | Deep dive: search pipeline, schema, sync, security |
| COMPARISON.md | M3 vs Mem0 vs Letta vs LangChain Memory vs Zep |
| ENVIRONMENT_VARIABLES.md | Config and credential setup |
| ROADMAP.md | Upcoming milestones |
| CHANGELOG.md | Release history |
| CONTRIBUTING.md | How to contribute |
| GOOD_FIRST_ISSUES.md | Good first issues for new contributors |

🤝 Community

Discord

Get help, share your setup, and follow development. M3_Bot is live — use !ask <question> in any channel.


🛣️ Roadmap

| Milestone | Highlights |
| --- | --- |
| v0.2 | Docker image · auto MCP Registry · CLI polish |
| v0.3 | Local web dashboard · Prometheus metrics · search explain mode |
| v0.4 | Multi-agent shared namespaces · P2P encrypted sync |
| v1.0 | Public benchmark suite · stable Python SDK · full docs site |

Vote on features → ROADMAP.md


🧩 Project Structure

bin/          MCP bridge, core engine, sync, and maintenance scripts
m3_memory/    Python package — CLI entry point (mcp-memory)
memory/       SQLite database and migrations
docs/         Architecture diagrams and install guides
examples/     Demo notebooks and ready-to-paste mcp.json configs
tests/        End-to-end test suite (41 tests)

🚀 Next Steps

  1. Star the repo — helps others find it
  2. 🧪 Try a real session — install, write a memory, close your agent, reopen it, and search
  3. 💬 Share feedback — what worked, what didn't
  4. 🐛 Open an issue — bugs, questions, feature requests
  5. 🤝 Contribute — good first issues listed

🤝 Contributing

See CONTRIBUTING.md · Good first issues: GOOD_FIRST_ISSUES.md



Your AI should remember. Your data should stay yours.

M3 Memory: the foundation for agents that don't forget.
