Update OpenMemory MCP configuration to build from Ushadow-io/mem0 for… #228
AnkushMalaker wants to merge 6 commits into dev
Conversation
…k, add setup script for cloning, and enhance README with setup instructions.
Walkthrough

Switched memory provider to OpenMemory MCP and updated memory-related config; added Mycelia timeout; replaced/expanded OpenMemory MCP extras (env template, docker-compose, README, setup/run scripts) to use a mem0 fork, Neo4j, and Qdrant; updated service ports and networking; adjusted services.py service metadata and env checks.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
thestumonkey left a comment
use the right compose values from openmemory/docker-compose
Should probably submodule in instead of cloning.
the folder naming is bad. why is it openmemory-mcp? it's not an mcp server.
"mem0-fork" is a terrible name.
can submodule it in or call it mem0-ushadow-fork
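If the submodule route wins out, the wiring is small. A sketch of the resulting `.gitmodules` entry, assuming the path `extras/openmemory-mcp/mem0` and the fork URL implied by this thread (both are guesses, not taken from the PR):

```
[submodule "extras/openmemory-mcp/mem0"]
	path = extras/openmemory-mcp/mem0
	url = https://github.com/Ushadow-io/mem0.git
```

Fresh clones would then run `git submodule update --init` instead of the setup script's clone step, and the superproject pins an exact fork commit, which also helps the reproducibility concern raised about `:latest` tags.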
The flagged compose snippet:

```yaml
openmemory-mcp:
  image: mem0/openmemory-mcp:latest
  build:
    context: ./mem0-fork/openmemory/api
```
where is this coming from? doesn't match the file system
#228 (comment): as explained here, it's created during the setup.sh flow
extras/openmemory-mcp/setup.sh
Outdated
The flagged setup.sh snippet:

```shell
echo "======================"

# Clone the mem0 fork if not already present
MEM0_DIR="mem0-fork"
```
Although this is fine for testing, I think this is confusing and should just be in the openmemory folder, as we already have many mem types
This is in the openmemory-mcp folder, I will rename the root level service as extras/openmemory
Is that what you're saying?
I'll rename in another PR as it'll show too many changes in this one
The flagged diff hunk:

```diff
-# OpenMemory MCP Server (official Docker image)
+# OpenMemory MCP Server (built from Ushadow-io/mem0 fork)
 openmemory-mcp:
```
You need to use the compose file from the fork
I should just be able to "run" your built docker image instead of having to clone and build it, yes. But between cloning and a submodule, I'd prefer a submodule if it were more permanent or if files in this project required access from that fork.

Clarifying the flow: using the wizard.sh flow, the service setups are run.
- Changed memory provider from `chronicle` to `openmemory_mcp` in `config.yml`.
- Updated server URL for OpenMemory MCP to use `http://host.docker.internal:8765`.
- Modified service path in `services.py` to reflect new directory structure.
- Enhanced `.env.template` with additional configuration options for OpenMemory.
- Removed obsolete `docker-compose.yml` file for OpenMemory MCP.
- Updated README with new service start instructions and port configurations.
- Improved setup script to configure the `.env` file correctly for OpenMemory.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
extras/openmemory-mcp/run.sh (1)
58-58: Fix missing directory path in docker compose execution.

The run.sh script executes `docker compose up` from `extras/openmemory-mcp/`, but the docker-compose.yml is located in `mem0-fork/openmemory/`. Add a directory change before executing docker compose:

```shell
# After line 50, before docker compose up:
cd mem0-fork/openmemory
```

Alternatively, use the `-f` flag to specify the compose file path:

```shell
docker compose -f mem0-fork/openmemory/docker-compose.yml up --build -d $PROFILE
```

Without this fix, `docker compose up` will fail to find the docker-compose.yml file. Additionally, after applying this fix, the grep pattern `openmemory-mcp` will correctly match the fork's service container names.
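Either way, run.sh could also fail fast with a clear message when the fork has not been cloned yet. A sketch of that guard (the `check_compose` helper and paths are invented for the example; the demo simulates setup.sh by touching the file):

```shell
#!/bin/sh
set -eu

workdir=$(mktemp -d)
COMPOSE_FILE="$workdir/mem0-fork/openmemory/docker-compose.yml"

# Report whether the fork's compose file exists before trying docker compose
check_compose() {
  if [ ! -f "$1" ]; then
    echo "missing: run setup.sh first"
    return 1
  fi
  echo "found"
}

check_compose "$COMPOSE_FILE" || true   # before setup: prints "missing: run setup.sh first"
mkdir -p "$(dirname "$COMPOSE_FILE")"
: > "$COMPOSE_FILE"                     # simulate setup.sh cloning the fork
check_compose "$COMPOSE_FILE"           # after setup: prints "found"
```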
🧹 Nitpick comments (2)
extras/openmemory-mcp/README.md (1)
94-102: Consider wrapping bare URLs in angle brackets.

For better Markdown compatibility, wrap bare URLs in angle brackets to prevent potential parsing issues.

🔎 Proposed fix

```diff
- - API Docs: http://localhost:8765/docs
+ - API Docs: <http://localhost:8765/docs>
-- **Qdrant Dashboard**: http://localhost:6333/dashboard
+- **Qdrant Dashboard**: <http://localhost:6333/dashboard>
-- **Neo4j Browser**: http://localhost:7474
+- **Neo4j Browser**: <http://localhost:7474>
-- **OpenMemory UI**: http://localhost:3333
+- **OpenMemory UI**: <http://localhost:3333>
```

extras/openmemory-mcp/setup.sh (1)
86-100: Consider adding cleanup for temporary file.

The temporary file created with `mktemp` may not be cleaned up if the script is interrupted between creation and `mv`. Consider adding a trap to ensure cleanup.

🔎 Proposed fix

```diff
+# Cleanup trap for temporary files
+cleanup() {
+    rm -f "$temp_file" 2>/dev/null
+}
+trap cleanup EXIT
+
 # Update .env file safely using awk - replace existing line or append if missing
 temp_file=$(mktemp)
```
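For illustration, here is a self-contained sketch of that replace-or-append pattern with the cleanup trap in place (the `set_env_var` helper and the file contents are invented for the example, not taken from setup.sh):

```shell
#!/bin/sh
set -eu

# Demo .env file with one existing key
env_file=$(mktemp)
printf 'USER=old\nOTHER=x\n' > "$env_file"

# Temp file plus trap, so an interrupt between mktemp and mv leaves nothing behind
temp_file=$(mktemp)
trap 'rm -f "$temp_file"' EXIT

# Replace KEY=... in FILE if present, append it otherwise
set_env_var() {  # usage: set_env_var KEY VALUE FILE
  awk -v key="$1" -v val="$2" '
    BEGIN { FS = OFS = "=" }
    $1 == key { $0 = key "=" val; seen = 1 }
    { print }
    END { if (!seen) print key "=" val }
  ' "$3" > "$temp_file" && mv "$temp_file" "$3"
}

set_env_var USER openmemory "$env_file"   # existing key: replaced in place
set_env_var API_PORT 8765 "$env_file"     # missing key: appended
cat "$env_file"
# USER=openmemory
# OTHER=x
# API_PORT=8765
```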
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- config.yml
- extras/openmemory-mcp/.env.template
- extras/openmemory-mcp/.gitignore
- extras/openmemory-mcp/README.md
- extras/openmemory-mcp/docker-compose.yml
- extras/openmemory-mcp/run.sh
- extras/openmemory-mcp/setup.sh
- services.py
💤 Files with no reviewable changes (1)
- extras/openmemory-mcp/docker-compose.yml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Black formatter with 100-character line length for Python code
Use isort for Python import organization
ALL imports must be at the top of the file after the docstring - never import modules in the middle of functions or files
Group imports in Python files: standard library, third-party, then local imports
Use lazy imports sparingly and only when absolutely necessary for circular import issues in Python
Always raise errors in Python, never silently ignore - use explicit error handling with proper exceptions rather than silent failures
Avoid defensive `hasattr()` checks in Python - research and understand input/response or class structure instead
Files:
services.py
🧠 Learnings (3)
📚 Learning: 2025-12-20T01:11:51.495Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T01:11:51.495Z
Learning: Check if src/ is volume mounted in Docker - if not, run `docker compose build` so code changes are reflected (do not simply run `docker compose restart`)
Applied to files:
extras/openmemory-mcp/run.sh
📚 Learning: 2025-12-20T01:11:51.495Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T01:11:51.495Z
Learning: Use `docker compose build` without `--no-cache` by default for faster builds - only use `--no-cache` when explicitly needed for cached layer issues or troubleshooting
Applied to files:
extras/openmemory-mcp/run.sh
📚 Learning: 2025-12-08T23:52:34.959Z
Learnt from: AnkushMalaker
Repo: chronicler-ai/chronicle PR: 178
File: backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py:218-223
Timestamp: 2025-12-08T23:52:34.959Z
Learning: In Python code (chronicle project), prefer logging.exception() inside except blocks to automatically log the full stack trace. When re-raising exceptions, always chain with 'raise ... from e' to preserve the original context; use 'raise ... from None' only if you explicitly want to suppress the context. This improves debuggability across Python files.
Applied to files:
services.py
🪛 dotenv-linter (4.0.0)
extras/openmemory-mcp/.env.template
[warning] 9-9: [UnorderedKey] The OPENMEMORY_USER_ID key should go before the USER key
(UnorderedKey)
🪛 markdownlint-cli2 (0.18.1)
extras/openmemory-mcp/README.md
38-38: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
44-44: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
94-94: Bare URL used
(MD034, no-bare-urls)
96-96: Bare URL used
(MD034, no-bare-urls)
98-98: Bare URL used
(MD034, no-bare-urls)
102-102: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (12)
extras/openmemory-mcp/.gitignore (1)
1-3: LGTM!

The `.gitignore` entries appropriately exclude the cloned fork directory and environment files containing secrets.

extras/openmemory-mcp/run.sh (1)
50-51: LGTM!

The `--build` flag addition aligns with the fork-based workflow, ensuring images are built from source. The informative message provides useful context. Based on learnings, using `--build` without `--no-cache` is the correct approach for faster builds.
7-20: LGTM!

The template is well-organized with clear section comments. The logical grouping of related variables (USER before OPENMEMORY_USER_ID) takes precedence over alphabetical ordering. Neo4j credentials in comments are appropriate for a local development template.
services.py (2)
36-41: LGTM!

The path update to `extras/openmemory-mcp/mem0-fork/openmemory` aligns with the fork-based workflow introduced by setup.sh. The expanded port list correctly reflects the new services (Neo4j browser 7474, Neo4j Bolt 7687, UI 3333).
48-54: The .env path handling for openmemory-mcp is correct. The official openmemory documentation confirms the .env file is located in the api/ subdirectory, which matches the code's check for `(service_path / 'api' / '.env').exists()` when `service_name == 'openmemory-mcp'`.

config.yml (2)
188-188: LGTM!

The provider switch from `chronicle` to `openmemory_mcp` aligns with the PR objectives.
196-198: LGTM!

The `host.docker.internal` URL is correct for Docker containers accessing host services, and the `user_id: openmemory` aligns with the defaults in .env.template.

extras/openmemory-mcp/README.md (2)
5-6: LGTM!

Clear note about using the fork, which sets expectations appropriately.
36-51: LGTM!

Good documentation structure with both unified service manager and manual options. The first-build timing note is helpful for user expectations.
extras/openmemory-mcp/setup.sh (3)
1-4: LGTM!

Good use of strict error handling with `set -euo pipefail`.
25-43: LGTM!

Good fork management with conditional cloning and optional update prompt. The error handling on clone failure is appropriate.
113-124: LGTM!

Clear post-setup instructions with all relevant endpoints and configuration guidance for both Docker and local development.
The flagged lines in config.yml:

```yaml
prompt: Extract important information from this conversation and return a JSON
  object with an array named "facts". Include personal preferences, plans, names,
  dates, locations, numbers, and key details hehehe. Keep items concise and useful.
```
Remove debug text from extraction prompt.
The prompt contains "hehehe" which appears to be leftover debug/test text and should be removed before merging.
🔎 Proposed fix

```diff
 prompt: Extract important information from this conversation and return a JSON
 object with an array named "facts". Include personal preferences, plans, names,
-dates, locations, numbers, and key details hehehe. Keep items concise and useful.
+dates, locations, numbers, and key details. Keep items concise and useful.
```
🤖 Prompt for AI Agents
In config.yml around lines 192 to 194, the prompt string includes leftover debug
text "hehehe"; remove that token so the prompt reads cleanly (e.g., "Extract
important information from this conversation and return a JSON object with an
array named \"facts\". Include personal preferences, plans, names, dates,
locations, numbers, and key details. Keep items concise and useful."). Update
the file by deleting the "hehehe" text and fixing any surrounding
whitespace/punctuation so the sentence remains grammatically correct, then run a
quick lint/check to ensure YAML validity.
- Introduced a new `docker-compose.yml` file to manage services for OpenMemory MCP, including Qdrant and Neo4j.
- Enhanced `.env.template` with additional configuration options for Neo4j and Qdrant.
- Updated `setup.sh` to reflect the use of pre-built Docker images and simplified the setup process.
- Improved user instructions for starting services and accessing various components.
…d-lite into fix/open-memory-fork
Actionable comments posted: 2
♻️ Duplicate comments (1)
extras/openmemory-mcp/.env.template (1)
19-23: Critical: Same hardcoded password issue applies here.

This template contains the same hardcoded Neo4j password flagged in docker-compose.yml. The comment "Default credentials" suggests these are meant to be changed, but:
- There's no validation requiring users to change it
- The setup scripts don't enforce or prompt for a secure password
- Users may overlook this and deploy with the default
Ensure your setup script (setup.sh) validates that users have changed this password or generates a secure random password automatically. See the recommendations in the docker-compose.yml review.
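One way setup.sh could enforce that is a guard like the sketch below. The function name is invented; the default string is the one quoted in this review:

```shell
#!/bin/sh
set -eu

# Fail when NEO4J_PASSWORD is unset or still the shipped template default
require_custom_password() {
  pw="${1:-}"
  if [ -z "$pw" ] || [ "$pw" = "taketheredpillNe0" ]; then
    echo "refusing to start: NEO4J_PASSWORD is unset or still the template default" >&2
    return 1
  fi
  echo "password check passed"
}

require_custom_password "taketheredpillNe0" 2>/dev/null || echo "default rejected"
require_custom_password "correct-horse-battery-staple"
# default rejected
# password check passed
```

A variant that instead generates a random value works too; the key point is that the stack never comes up with the published default.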
🧹 Nitpick comments (3)
extras/openmemory-mcp/docker-compose.yml (1)
15-15: Pin Docker image versions instead of using `:latest` tags.

All services use `:latest` tags, which reduces reproducibility and can introduce unexpected breaking changes or security vulnerabilities when images are pulled at different times.

🔎 Recommended approach
Pin specific image versions or use commit SHAs for the fork images:

```diff
-  image: ghcr.io/ushadow-io/u-mem0-api:latest
+  image: ghcr.io/ushadow-io/u-mem0-api:v1.0.0  # or specific commit SHA
-  image: ghcr.io/ushadow-io/u-mem0-ui:latest
+  image: ghcr.io/ushadow-io/u-mem0-ui:v1.0.0
-  image: neo4j:latest
+  image: neo4j:5.15.0  # or another stable version
```

Also applies to: 44-44, 58-58
extras/openmemory-mcp/.env.template (2)
8-9: Consider consolidating redundant user identifier variables.

Both `USER` and `OPENMEMORY_USER_ID` are set to the same value. If they serve the same purpose, consider using a single variable to reduce configuration complexity and potential inconsistencies.

If they are genuinely used by different components, add a comment explaining the distinction:

```diff
-# Optional: User identifier (defaults to 'openmemory')
+# Optional: User identifier for MCP server (defaults to 'openmemory')
 USER=openmemory
+# User identifier for OpenMemory frontend
 OPENMEMORY_USER_ID=openmemory
```
1-26: Static analysis: Optional key ordering suggestions.

The dotenv-linter tool suggests reordering keys alphabetically (e.g., `OPENMEMORY_USER_ID` before `USER`, `NEO4J_PASSWORD` before `NEO4J_URL`). While this is a stylistic preference that can improve consistency, it's not critical for functionality.

You can safely ignore these warnings if you prefer logical grouping over alphabetical ordering. However, if you want to satisfy the linter:

🔎 Alphabetically ordered version

```diff
-USER=openmemory
 OPENMEMORY_USER_ID=openmemory
+USER=openmemory
-NEO4J_URL=bolt://neo4j:7687
-NEO4J_USERNAME=neo4j
 NEO4J_PASSWORD=taketheredpillNe0
+NEO4J_URL=bolt://neo4j:7687
+NEO4J_USERNAME=neo4j
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- extras/openmemory-mcp/.env.template
- extras/openmemory-mcp/docker-compose.yml
- extras/openmemory-mcp/setup.sh
- services.py
🚧 Files skipped from review as they are similar to previous changes (2)
- extras/openmemory-mcp/setup.sh
- services.py
🧰 Additional context used
🪛 dotenv-linter (4.0.0)
extras/openmemory-mcp/.env.template
[warning] 9-9: [UnorderedKey] The OPENMEMORY_USER_ID key should go before the USER key
(UnorderedKey)
[warning] 23-23: [UnorderedKey] The NEO4J_PASSWORD key should go before the NEO4J_URL key
(UnorderedKey)
🔇 Additional comments (1)
extras/openmemory-mcp/docker-compose.yml (1)
36-36: Verify healthcheck tool availability in container images.

The neo4j:latest image includes `wget` in its default configuration, so the healthcheck on line 81 is valid. However, the ghcr.io/ushadow-io/u-mem0-api:latest image cannot be independently verified (not publicly discoverable), though as a FastAPI Python application it necessarily includes python3 for execution. If the u-mem0-api image differs significantly from expected Python base images, consider alternative healthchecks using `curl` or direct endpoint testing.
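If `wget` and `curl` turn out to be missing from the API image, a python3-based healthcheck avoids the extra dependency. A sketch, assuming the port 8765 and `/docs` endpoint mentioned elsewhere in this review:

```yaml
healthcheck:
  # Exits non-zero (unhealthy) if the HTTP request fails or times out
  test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8765/docs', timeout=3)"]
  interval: 30s
  timeout: 5s
  retries: 3
```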
The flagged environment entries:

```yaml
- NEO4J_URL=${NEO4J_URL:-bolt://neo4j:7687}
- NEO4J_USERNAME=${NEO4J_USERNAME:-neo4j}
- NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
```
Critical: Hardcoded Neo4j password is a major security vulnerability.
The password taketheredpillNe0 is hardcoded in both the docker-compose file and .env.template, and Neo4j is exposed on ports 7474 and 7687. This creates a significant security risk since:
- The password is publicly visible in the repository
- Neo4j is accessible from the host network
- An attacker could easily discover and exploit this credential
🔎 Recommended fixes
Option 1 (Recommended): Remove defaults entirely and require users to set secure passwords:
```diff
-      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-taketheredpillNe0}
+      - NEO4J_PASSWORD=${NEO4J_PASSWORD:?NEO4J_PASSWORD must be set in .env}
-      - NEO4J_AUTH=neo4j/taketheredpillNe0
+      - NEO4J_AUTH=${NEO4J_USERNAME:-neo4j}/${NEO4J_PASSWORD:?NEO4J_PASSWORD required}
```

Option 2: Generate a random password during setup. Add to your setup script:
```shell
# Generate secure random password if not set
if [ -z "$NEO4J_PASSWORD" ]; then
  NEO4J_PASSWORD=$(openssl rand -base64 32)
  echo "NEO4J_PASSWORD=$NEO4J_PASSWORD" >> .env
fi
```

Option 3: Use Docker secrets (for production):
```yaml
secrets:
  neo4j_password:
    file: ./secrets/neo4j_password.txt

services:
  neo4j:
    secrets:
      - neo4j_password
    environment:
      # The secret file contains "neo4j/<password>"; the neo4j image reads
      # *_FILE environment variables from mounted secret files.
      - NEO4J_AUTH_FILE=/run/secrets/neo4j_password
```

Also update the README to emphasize that users must change the default password before deployment.
Also applies to: 65-65
🤖 Prompt for AI Agents
In extras/openmemory-mcp/docker-compose.yml (around lines 23-25 and line 65)
there is a hardcoded Neo4j password; remove that default value so the compose
file does not provide a fallback password, require NEO4J_PASSWORD to be supplied
from the environment (and update .env.template to leave it empty), and update
README to instruct users to set a strong password before deployment; for
production recommend using Docker secrets or an init script that generates and
stores a random password if none is provided, and avoid binding Neo4j to the
host network/ports by default unless explicitly configured for local dev.
The flagged environment entries:

```yaml
- NEO4J_apoc_export_file_enabled=true
- NEO4J_apoc_import_file_enabled=true
- NEO4J_apoc_import_file_use__neo4j__config=true
- NEO4J_PLUGINS=["apoc", "graph-data-science"]
- NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
```
Security risk: Unrestricted procedures and file operations enabled.
Lines 69-73 enable APOC file import/export and mark all APOC/GDS procedures as unrestricted. This configuration:
- Allows procedures to read/write files on the host filesystem through volume mounts
- Bypasses Neo4j's security sandboxing
- Could be exploited if an attacker gains access to execute Cypher queries
Consider restricting procedures to only those explicitly needed:
```diff
-  - NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.*
+  - NEO4J_dbms_security_procedures_unrestricted=apoc.load.*,apoc.export.*
```

If full unrestricted access is genuinely required, document why in a comment and add authentication/network isolation safeguards.
🤖 Prompt for AI Agents
In extras/openmemory-mcp/docker-compose.yml around lines 69-73, the compose file
currently enables APOC file import/export and marks all apoc/gds procedures as
unrestricted which exposes host file access and bypasses Neo4j sandboxing;
change these env vars to only enable the specific APOC/GDS procedures you need
(remove the broad NEO4J_dbms_security_procedures_unrestricted=apoc.*,gds.* and
instead list exact procedure names), disable file import/export unless required
(remove or set
NEO4J_apoc_import_file_enabled/NEO4J_apoc_export_file_enabled=false) or
constrain import paths to allowed directories, and if you truly need
unrestricted access add an inline comment documenting why plus enforce stronger
authentication and network isolation (e.g., limit container network access and
require strong credentials).