10 changes: 4 additions & 6 deletions config.yml
@@ -185,19 +185,17 @@ models:
words: words
segments: segments
memory:
provider: chronicle
provider: openmemory_mcp
timeout_seconds: 1200
extraction:
enabled: true
prompt: 'Extract important information from this conversation and return a JSON
prompt: Extract important information from this conversation and return a JSON
object with an array named "facts". Include personal preferences, plans, names,
dates, locations, numbers, and key details hehehe. Keep items concise and useful.
Comment on lines +192 to 194
⚠️ Potential issue | 🟡 Minor

Remove debug text from extraction prompt.

The prompt contains "hehehe" which appears to be leftover debug/test text and should be removed before merging.

🔎 Proposed fix
     prompt: Extract important information from this conversation and return a JSON
       object with an array named "facts". Include personal preferences, plans, names,
-      dates, locations, numbers, and key details hehehe. Keep items concise and useful.
+      dates, locations, numbers, and key details. Keep items concise and useful.


'
openmemory_mcp:
server_url: http://localhost:8765
server_url: http://host.docker.internal:8765
client_name: chronicle
user_id: default
user_id: openmemory
timeout: 30
mycelia:
api_url: http://localhost:5173
21 changes: 18 additions & 3 deletions extras/openmemory-mcp/.env.template
@@ -4,8 +4,23 @@
# Required: OpenAI API Key for memory processing
OPENAI_API_KEY=

# Optional: User identifier (defaults to system username)
# Optional: User identifier (defaults to 'openmemory')
USER=openmemory
OPENMEMORY_USER_ID=openmemory

# Optional: Frontend URL (if using UI)
NEXT_PUBLIC_API_URL=http://localhost:8765
# Optional: API Key for OpenMemory MCP server authentication
API_KEY=

# Optional: Frontend configuration
NEXT_PUBLIC_API_URL=http://localhost:8765
NEXT_PUBLIC_USER_ID=openmemory

# Neo4j Configuration (graph store for OpenMemory)
# Default credentials: neo4j/taketheredpillNe0
# Access Neo4j browser at http://localhost:7474
NEO4J_URL=bolt://neo4j:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=taketheredpillNe0

# Qdrant Configuration (vector store for OpenMemory)
QDRANT_URL=http://mem0_store:6333
4 changes: 3 additions & 1 deletion extras/openmemory-mcp/.gitignore
@@ -1 +1,3 @@
cache/
cache/
mem0-fork/
.env
87 changes: 70 additions & 17 deletions extras/openmemory-mcp/README.md
@@ -2,6 +2,8 @@

This directory contains a local deployment of the OpenMemory MCP (Model Context Protocol) server, which can be used as an alternative memory provider for Chronicle.

**Note:** This deployment builds from the [Ushadow-io/mem0](https://github.com/Ushadow-io/mem0) fork instead of the official mem0.ai release, providing custom features and enhancements.

## What is OpenMemory MCP?

OpenMemory MCP is a memory service from mem0.ai that provides:
@@ -13,23 +15,41 @@ OpenMemory MCP is a memory service from mem0.ai that provides:

## Quick Start

### 1. Configure Environment
### 1. Run Setup Script

The setup script will:
- Clone the Ushadow-io/mem0 fork
- Configure your environment with API keys
- Prepare the service for deployment

```bash
./setup.sh
```

Or provide the API key directly:
```bash
cp .env.template .env
# Edit .env and add your OPENAI_API_KEY
./setup.sh --openai-api-key your-api-key-here
```

### 2. Start Services

The docker-compose.yml is located in the fork directory. You can start services using:

**Option A: Using Chronicle's unified service manager (Recommended)**
```bash
# Start backend only (recommended)
./run.sh
# From project root
uv run --with-requirements setup-requirements.txt python services.py start openmemory-mcp --build
```

# Or start with UI (optional)
./run.sh --with-ui
**Option B: Manually from the fork directory**
```bash
# From extras/openmemory-mcp
cd mem0-fork/openmemory
docker compose up --build -d
```

**Note:** The first build may take several minutes as Docker builds the services from source.

### 3. Configure Chronicle

In your Chronicle backend `.env` file:
@@ -48,13 +68,20 @@ The deployment includes:
- FastAPI backend with MCP protocol support
- Memory extraction using OpenAI
- REST API and MCP endpoints
- Development mode with hot-reload enabled

2. **Qdrant Vector Database** (port 6334)
2. **Qdrant Vector Database** (port 6333)
- Stores memory embeddings
- Enables semantic search
- Isolated from main Chronicle Qdrant
- Note: Uses same port as Chronicle's Qdrant (services are isolated by Docker network)

3. **OpenMemory UI** (port 3001, optional)
3. **Neo4j Graph Database** (ports 7474, 7687)
- Advanced graph-based memory features
- APOC and Graph Data Science plugins enabled
- Web browser interface for visualization
- Default credentials: `neo4j/taketheredpillNe0`

4. **OpenMemory UI** (port 3333)
- Web interface for memory management
- View and search memories
- Debug and testing interface
@@ -64,10 +91,15 @@ The deployment includes:
- **MCP Server**: http://localhost:8765
- REST API: `/api/v1/memories`
- MCP SSE: `/mcp/{client_name}/sse/{user_id}`

- **Qdrant Dashboard**: http://localhost:6334/dashboard
- API Docs: http://localhost:8765/docs

- **Qdrant Dashboard**: http://localhost:6333/dashboard

- **UI** (if enabled): http://localhost:3001
- **Neo4j Browser**: http://localhost:7474
- Username: `neo4j`
- Password: `taketheredpillNe0`

- **OpenMemory UI**: http://localhost:3333

## How It Works with Chronicle

@@ -82,7 +114,22 @@ This replaces Chronicle's built-in memory processing with OpenMemory's implement

## Managing Services

**Using Chronicle's unified service manager (from project root):**
```bash
# View status
uv run --with-requirements setup-requirements.txt python services.py status

# Stop services
uv run --with-requirements setup-requirements.txt python services.py stop openmemory-mcp

# Restart services
uv run --with-requirements setup-requirements.txt python services.py restart openmemory-mcp --build
```

**Manually from the fork directory:**
```bash
cd extras/openmemory-mcp/mem0-fork/openmemory

# View logs
docker compose logs -f

@@ -140,10 +187,16 @@ This test verifies:

### Port Conflicts

If ports are already in use, edit `docker-compose.yml`:
- Change `8765:8765` to another port for MCP server
- Change `6334:6333` to another port for Qdrant
- Update Chronicle's `OPENMEMORY_MCP_URL` accordingly
**Qdrant Port Note**: OpenMemory uses port 6333 for Qdrant, same as Chronicle's main Qdrant. However, they are isolated by Docker networks and won't conflict. Services communicate via container names, not localhost ports.

If you need to change ports, edit `mem0-fork/openmemory/docker-compose.yml`:
- MCP Server: Change `8765:8765` to another port
- Qdrant: Change `6333:6333` to another port
- Neo4j Browser: Change `7474:7474` to another port
- Neo4j Bolt: Change `7687:7687` to another port
- UI: Change `3333:3000` to another port

Update Chronicle's `OPENMEMORY_MCP_URL` if you change the MCP server port.
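
If a default does clash on your host, each remap is a one-line change on the host side of the mapping. A sketch (service names assumed from this repo's compose files; adjust to match the fork's actual file):

```yaml
# mem0-fork/openmemory/docker-compose.yml (fragment)
services:
  openmemory-mcp:
    ports:
      - "18765:8765"   # host 18765 -> container 8765 (MCP server)
  mem0_store:
    ports:
      - "6335:6333"    # host 6335 -> container 6333 (Qdrant)
```

Only the left-hand (host) port changes; container ports and container-to-container traffic are unaffected.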

### Memory Not Working

81 changes: 64 additions & 17 deletions extras/openmemory-mcp/docker-compose.yml
@@ -1,50 +1,97 @@
services:
# Qdrant vector database for OpenMemory (following original naming)
# Qdrant vector database for OpenMemory
mem0_store:
image: qdrant/qdrant
ports:
- "6335:6333" # Different port to avoid conflict with main Qdrant
- "6335:6333" # Different port to avoid conflict with main Chronicle Qdrant
volumes:
- ./data/mem0_storage:/qdrant/storage
networks:
- mem0-network
restart: unless-stopped

# OpenMemory MCP Server (official Docker image)
# OpenMemory MCP API Server (pre-built image from fork)
openmemory-mcp:
image: mem0/openmemory-mcp:latest
image: ghcr.io/ushadow-io/u-mem0-api:latest
env_file:
- .env
environment:
- USER=${USER:-openmemory}
- API_KEY=${API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY}
- OPENMEMORY_USER_ID=${OPENMEMORY_USER_ID:-openmemory}
- QDRANT_URL=http://mem0_store:6333
- NEO4J_URL=${NEO4J_URL:-bolt://neo4j:7687}
- NEO4J_USERNAME=${NEO4J_USERNAME:-neo4j}
- NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
Comment on lines +23 to +25
⚠️ Potential issue | 🔴 Critical

Critical: Hardcoded Neo4j password is a major security vulnerability.

The password taketheredpillNe0 is hardcoded in both the docker-compose file and .env.template, and Neo4j is exposed on ports 7474 and 7687. This creates a significant security risk since:

  • The password is publicly visible in the repository
  • Neo4j is accessible from the host network
  • An attacker could easily discover and exploit this credential
🔎 Recommended fixes

Option 1 (Recommended): Remove defaults entirely and require users to set secure passwords:

-      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
+      - NEO4J_PASSWORD=${NEO4J_PASSWORD:?NEO4J_PASSWORD must be set in .env}

-      - NEO4J_AUTH=neo4j/taketheredpillNe0
+      - NEO4J_AUTH=${NEO4J_USERNAME:-neo4j}/${NEO4J_PASSWORD:?NEO4J_PASSWORD required}

Option 2: Generate a random password during setup:

Add to your setup script:

# Generate secure random password if not set
if [ -z "$NEO4J_PASSWORD" ]; then
  NEO4J_PASSWORD=$(openssl rand -base64 32)
  echo "NEO4J_PASSWORD=$NEO4J_PASSWORD" >> .env
fi

Option 3: Use Docker secrets (for production):

secrets:
  neo4j_password:
    file: ./secrets/neo4j_password.txt   # contains: neo4j/<strong-password>

services:
  neo4j:
    secrets:
      - neo4j_password
    environment:
      # The official Neo4j image reads *_FILE variants from mounted secrets
      - NEO4J_AUTH_FILE=/run/secrets/neo4j_password
Also update the README to emphasize that users must change the default password before deployment.
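
The `${VAR:-default}` versus `${VAR:?message}` expansions used in the options above are plain POSIX shell parameter expansion, and Compose applies the same syntax when resolving `environment:` entries. A minimal demonstration:

```shell
#!/bin/sh
unset NEO4J_PASSWORD

# :- substitutes a fallback when the variable is unset or empty
echo "${NEO4J_PASSWORD:-fallback}"   # prints: fallback

# :? aborts with the message when the variable is unset or empty;
# run in a subshell here so the demo script itself survives
( echo "${NEO4J_PASSWORD:?must be set}" ) 2>/dev/null || echo "aborted"

NEO4J_PASSWORD=s3cret
echo "${NEO4J_PASSWORD:-fallback}"   # prints: s3cret
```

This is why switching the compose default to `:?` turns a missing `.env` value into a hard failure at `docker compose up` time instead of a silently insecure fallback.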

Also applies to: 65-65


depends_on:
- mem0_store
- neo4j
ports:
- "8765:8765"
networks:
- mem0-network
- chronicle-network
restart: unless-stopped
healthcheck:
test: ["CMD", "python", "-c", "import requests; exit(0 if requests.get('http://localhost:8765/docs').status_code == 200 else 1)"]
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8765/docs')"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
- default
- chronicle-network
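
The revised healthcheck uses only the standard library (`urllib.request` instead of `requests`, which may not be installed in the image); `urlopen` raises on connection failures and non-2xx responses, so the command's exit status doubles as the health signal. A self-contained sketch of the same logic against a throwaway local server (a stand-in for the container, not the real MCP service):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DocsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 on /docs, mimicking a healthy FastAPI docs endpoint
        self.send_response(200 if self.path == "/docs" else 404)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def check_health(url: str) -> bool:
    """Mirror of the compose healthcheck: True iff GET succeeds (2xx)."""
    try:
        urllib.request.urlopen(url)
        return True
    except Exception:
        return False

server = HTTPServer(("127.0.0.1", 0), DocsHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

print(check_health(f"http://127.0.0.1:{port}/docs"))     # True
print(check_health(f"http://127.0.0.1:{port}/missing"))  # False
```

In the container, the one-liner simply lets the exception propagate: a traceback exits non-zero and Docker marks the service unhealthy.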

# OpenMemory UI (starts by default with the MCP server)
# OpenMemory UI (pre-built image from fork)
openmemory-ui:
image: mem0/openmemory-ui:latest
image: ghcr.io/ushadow-io/u-mem0-ui:latest
ports:
- "3001:3000" # Different port to avoid conflict
- "3333:3000"
environment:
- NEXT_PUBLIC_API_URL=http://localhost:8765
- NEXT_PUBLIC_USER_ID=openmemory
- NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL:-http://localhost:8765}
- NEXT_PUBLIC_USER_ID=${NEXT_PUBLIC_USER_ID:-openmemory}
depends_on:
- openmemory-mcp
networks:
- mem0-network
restart: unless-stopped

# Neo4j graph database for advanced memory features
neo4j:
image: neo4j:latest
container_name: neo4j-mem0
volumes:
- ./data/neo4j_data:/data
- ./data/neo4j_logs:/logs
- ./data/neo4j_config:/config
environment:
- NEO4J_AUTH=neo4j/taketheredpillNe0
- NEO4J_server_memory_heap_initial__size=1G
- NEO4J_server_memory_heap_max__size=2G
- NEO4J_server_memory_pagecache_size=1G
- NEO4J_apoc_export_file_enabled=true
- NEO4J_apoc_import_file_enabled=true
- NEO4J_apoc_import_file_use__neo4j__config=true
- NEO4J_PLUGINS=["apoc", "graph-data-science"]
- NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
Comment on lines +69 to +73
⚠️ Potential issue | 🟠 Major

Security risk: Unrestricted procedures and file operations enabled.

Lines 69-73 enable APOC file import/export and mark all APOC/GDS procedures as unrestricted. This configuration:

  • Allows procedures to read/write files on the host filesystem through volume mounts
  • Bypasses Neo4j's security sandboxing
  • Could be exploited if an attacker gains access to execute Cypher queries

Consider restricting procedures to only those explicitly needed:

-      - NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
+      - NEO4J_dbms_security_procedures_unrestricted=apoc.load.*,apoc.export.*

If full unrestricted access is genuinely required, document why in a comment and add authentication/network isolation safeguards.


ports:
- "7474:7474"
- "7687:7687"
networks:
- mem0-network
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:7474"]
interval: 30s
timeout: 10s
retries: 5
start_period: 40s

volumes:
mem0_storage:
neo4j_data:
neo4j_logs:
neo4j_config:

networks:
default:
name: openmemory-mcp_default
mem0-network:
driver: bridge
chronicle-network:
external: true
external: true
3 changes: 2 additions & 1 deletion extras/openmemory-mcp/run.sh
@@ -47,7 +47,8 @@ fi

# Start services
echo "🚀 Starting OpenMemory MCP services..."
docker compose up -d $PROFILE
echo " (Building from Ushadow-io/mem0 fork...)"
docker compose up --build -d $PROFILE

# Wait for services to be ready
echo "⏳ Waiting for services to be ready..."