Mcp#182

Closed
thestumonkey wants to merge 22 commits into SimpleOpenSoftware:main from Ushadow-io:mcp

Conversation


@thestumonkey thestumonkey commented Dec 9, 2025

Added an MCP server and a settings page to configure API keys

Summary by CodeRabbit

Release Notes

New Features

  • Added comprehensive setup guides for Linux, macOS, and Windows platforms
  • Introduced interactive setup wizard for streamlined configuration (run make wizard)
  • Enabled multi-environment management for simultaneous deployments
  • Integrated Mycelia as an optional memory provider
  • Added user API key generation and management for external integrations
  • New Settings page for account and integration management
  • Enhanced timeline visualization with multiple view options (Classic, Timeline, Gantt)
  • Added memory detail page with comprehensive metadata display

Documentation & Configuration

  • New installation guide consolidating cross-platform setup steps
  • Advanced setup guides for Caddy, Tailscale, SSL/TLS, and multi-environment deployments
  • Modular Docker Compose structure with shared infrastructure support
  • Environment variable templates for secrets and configuration management

UI & UX Improvements

  • Settings page for API key management
  • Multiple timeline visualization options for memory and conversation data
  • Memory search and filtering enhancements
  • Improved navigation with dedicated routers for conversations and memories



coderabbitai bot commented Dec 9, 2025

Walkthrough

This PR introduces comprehensive Mycelia memory provider integration and modularizes the Docker Compose configuration into a base-plus-overrides structure. It restructures the memory service architecture from a monolithic module to services.memory with pluggable providers, adds an MCP server for conversation data access, and implements Mycelia OAuth synchronization and API key generation. It also expands the frontend UI with new timeline visualizations (Gantt-based and D3-based), provides platform-specific setup guides, and establishes interactive setup wizard documentation.

Changes

Cohort / File(s) Summary
Configuration & Secrets
.env.secrets.template, .gitmodules
New template for environment secrets (auth, API keys, services); two new git submodules for Mycelia and OpenMemory extras.
Git & Ignore Patterns
.gitignore
Extended to exclude env secrets, backups, data directories, SSL certificates, IDE/tool directories, and deprecated compose files.
Setup Documentation
INSTALL.md, WIZARD.md, CLAUDE.md
New comprehensive installer guide, interactive setup wizard documentation, updated main docs with Quick Setup and Mycelia references.
Platform-Specific Guides
Docs/setup/prerequisites.md, Docs/setup/linux.md, Docs/setup/macos.md, Docs/setup/windows-*.md, Docs/setup/tailscale.md
Complete end-to-end setup guides for Linux, macOS, Windows (WSL2 and Git Bash), Tailscale integration, and prerequisites checklist.
Advanced Setup Guides
Docs/setup/advanced/*
Multi-environment architecture documentation, Caddy reverse proxy setup, environment management, SSL/TLS setup, Tailscale serve guide.
Development Documentation
Docs/development/*
Docker setup summary, setup completion guide, setup wizard summary, development docs index.
Makefile & Build
Makefile, README.md
Major expansion with wizard, environment management, infrastructure controls, Caddy management, Mycelia sync targets; added installer guide link.
Docker Compose Modularization
backends/advanced/docker-compose.yml, backends/advanced/compose/backend.yml, backends/advanced/compose/frontend.yml, backends/advanced/compose/overrides/*.yml
Restructured from monolithic to modular include-based system with base services (backend, frontend) and environment-specific overrides (dev, prod, test); updated network from internal to external chronicle-network.
Docker & Nginx
backends/advanced/Dockerfile, backends/advanced/docker-compose*.yml, backends/advanced/generate-caddyfile.sh, backends/advanced/nginx.conf
Updated Node and Dockerfile configurations (node:22, legacy peer deps), Caddyfile generation script, nginx sub_filter for path-based routing.
Environment Configuration
backends/advanced/.env.template, backends/advanced/src/advanced_omi_backend/app_config.py, backends/advanced/src/advanced_omi_backend/database.py
Added Mycelia provider configuration (MYCELIA_URL, MYCELIA_DB, MYCELIA_TIMEOUT); introduced MONGODB_DATABASE environment variable; added database getter utilities.
Memory Service Restructuring
backends/advanced/src/advanced_omi_backend/services/memory/
New services-level memory module with base classes, config builder, prompt templates; moved from old memory/ module; supports pluggable providers (FriendLite, OpenMemory MCP, Mycelia).
Mycelia Provider Implementation
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
New Mycelia memory service with JWT auth, LLM-based fact extraction, optional temporal entity enrichment, semantic search, and CRUD operations.
Mycelia Synchronization & OAuth
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py, backends/advanced/scripts/create_mycelia_api_key.py, backends/advanced/scripts/sync_friendlite_mycelia.py
New sync service for OAuth credential creation, API key generation with hashing/salting, user-centric sync operations, orphan object detection and reassignment.
MCP Server Implementation
backends/advanced/src/advanced_omi_backend/services/mcp_server.py
New MCP protocol server exposing conversation data (listing, retrieval, segments, audio) with user scoping and SSE support.
Auth & JWT
backends/advanced/src/advanced_omi_backend/auth.py
Added generate_jwt_for_user utility for external service tokens.
Models & Data
backends/advanced/src/advanced_omi_backend/models/conversation.py, backends/advanced/src/advanced_omi_backend/models/user.py
Added MYCELIA enum to MemoryProvider; added api_key and api_key_created_at fields to User.
Backend Routes & Endpoints
backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py, backends/advanced/src/advanced_omi_backend/routers/modules/system_routes.py, backends/advanced/src/advanced_omi_backend/routers/modules/user_routes.py
New POST /memories endpoint for memory creation; new admin endpoints for memory provider management (GET/POST /admin/memory/provider); new user endpoints for API key generation/revocation (POST/DELETE /me/api-key).
Backend Controllers
backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py, backends/advanced/src/advanced_omi_backend/controllers/system_controller.py
Added add_memory endpoint controller; added get/set_memory_provider controllers for dynamic provider switching.
Workers & Job Management
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py, backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py, backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py, backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py, backends/advanced/src/advanced_omi_backend/utils/job_utils.py
Added zombie job detection logic; added speech validation after transcription with auto-deletion on failure; updated memory service import paths.
Transcription & Speech Detection
backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py, backends/advanced/src/advanced_omi_backend/services/transcription/
Increased speech detection thresholds (min_words 5→10, min_confidence 0.5→0.7, added min_duration); updated transcription provider import paths.
Memory Service Provider Chain
backends/advanced/src/advanced_omi_backend/services/memory/service_factory.py, backends/advanced/src/advanced_omi_backend/services/memory/base.py, backends/advanced/src/advanced_omi_backend/services/memory/providers/
Updated service factory to support Mycelia; extended base interfaces with get_memory, update_memory, delete_memory, initialization patterns; added MemoryEntry.to_dict serialization.
MCP Client & OpenMemory
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py, backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py
Extended MCPClient with user_email support, get_memory, update_memory; expanded OpenMemory MCP provider with same operations.
Mycelia Documentation
backends/advanced/Docs/mycelia-*.md, backends/advanced/Docs/README.md
New guides for Mycelia authentication/ownership model, auto-login, setup; updated main docs with Mycelia references.
Frontend Dependencies
backends/advanced/webui/package.json
Added d3, frappe-gantt, react-gantt-timeline, react-vertical-timeline-component, xss, prop-types with corresponding type definitions.
Frontend Styling
backends/advanced/webui/public/frappe-gantt.css, backends/advanced/webui/index.html
New Gantt chart CSS theming; added frappe-gantt.css stylesheet link.
Frontend Core
backends/advanced/webui/src/App.tsx, backends/advanced/webui/src/contexts/AuthContext.tsx
Added basename routing logic; extended User type with api_key fields; introduced getStorageKey abstraction for token management; added Mycelia JWT token persistence.
Frontend Pages & Components
backends/advanced/webui/src/pages/Conversations*.tsx, backends/advanced/webui/src/pages/Memories*.tsx, backends/advanced/webui/src/pages/Memory*.tsx, backends/advanced/webui/src/pages/Settings.tsx, backends/advanced/webui/src/pages/*Timeline*.tsx
New router components (ConversationsRouter, MemoriesRouter); new detail pages (MemoryDetail, Settings); new timeline visualizations (ConversationsTimeline, FrappeGanttTimeline, MyceliaTimeline, ReactGanttTimeline); updated Memories with navigation and metadata.
Frontend Hooks & Utilities
backends/advanced/webui/src/hooks/use*.ts, backends/advanced/webui/src/layout/Layout.tsx
Updated audio recording hooks for BACKEND_URL derivation and getStorageKey usage; new useD3Zoom hook for D3 zoom behavior; updated Layout navigation with Timeline and Settings.
Dockerfile & Webui
backends/advanced/webui/Dockerfile*
Updated Node image (18→22), npm ci→npm install with legacy peer deps, VITE_BACKEND_URL→VITE_BASE_PATH, dev server port (5173→3020), host binding for Docker access.
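
The base-plus-overrides restructuring summarized above typically hinges on Compose's top-level include element; a hypothetical sketch of the root file, assuming the file names from the table (contents are illustrative, not the PR's actual configuration):

```yaml
# backends/advanced/docker-compose.yml (illustrative sketch)
include:
  - compose/backend.yml    # base backend service
  - compose/frontend.yml   # base frontend service

networks:
  chronicle-network:
    external: true         # shared network created outside this file
```

Environment-specific overrides (e.g. files under compose/overrides/) would then be layered the standard Compose way: docker compose -f docker-compose.yml -f compose/overrides/dev.yml up.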

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–75 minutes

Areas requiring special attention:

  • backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py — Complex LLM-based fact extraction, temporal entity enrichment, BSON ID/date parsing, JWT authentication, and Mycelia API integration; requires validation of OpenAI integration and semantic search logic.
  • backends/advanced/src/advanced_omi_backend/services/mcp_server.py — New MCP protocol implementation with user scoping, audio retrieval, and SSE handling; review MCP tool definitions, authentication flow, and response serialization.
  • backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py & related scripts — API key generation, hashing/salting, MongoDB interactions, and user synchronization workflows; verify cryptographic correctness and orphan reassignment logic.
  • backends/advanced/docker-compose.yml & overrides — Modularization from monolithic to include-based structure; verify compose syntax, network connectivity, and service dependencies across dev/prod/test profiles.
  • Mycelia memory provider chain (service_factory, base.py, mcp_client.py, openmemory_mcp.py) — Extensive interface expansion and initialization refactoring; ensure backward compatibility and consistent error handling across providers.
  • Frontend timeline components (FrappeGanttTimeline, MyceliaTimeline, ReactGanttTimeline, ConversationsTimeline) — New D3 and third-party visualization libraries; verify data transformation, event handling, zoom/pan behavior, and responsive resizing.
  • Memory service import path migration — Widespread changes from advanced_omi_backend.memory to advanced_omi_backend.services.memory; spot-check all controller, job, and route imports for consistency.
  • Speech detection and zombie job handling — Updated thresholds in conversation_utils; verify speech validation logic in transcription_jobs and job liveness checks don't introduce regressions.

Possibly related PRs

  • chronicler-ai/chronicle#131 — Both modify advanced backend initialization, Caddy/nginx setup (Caddyfile.template), and .gitignore entries.
  • chronicler-ai/chronicle#121 — Both modify Beanie models, queue/RQ processing, and audio/conversation service flows.
  • chronicler-ai/chronicle#69 — Both modify memory provider architecture, MemoryProvider enums, and service factory with MCP/OpenMemory integration.

Suggested reviewers

  • AnkushMalaker

🐰 The wizard waves their staff with glee,
Mycelia blooms in memory!
Timelines dance and compose flows free,
While APIs spring from Docker's spree. 🎭✨

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive. The title 'Mcp' is vague and does not convey meaningful information about the extensive changeset, which includes the MCP server implementation, settings pages, Mycelia integration, multi-environment setup, and comprehensive documentation additions. Resolution: replace it with a descriptive title summarizing the main change, such as 'Add MCP server with API key management and settings UI' or 'Implement MCP server, Mycelia integration, and multi-environment architecture'.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 93.55%, which is sufficient; the required threshold is 80.00%.

Comment @coderabbitai help to get the list of available commands and usage tips.

@thestumonkey
Contributor Author

@RabbitAI review


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py (1)

50-58: Docstring and config must match actual env behavior.

The docstring correctly documents SPEECH_DETECTION_MIN_DURATION (default: 10.0), but currently that env var isn’t actually read anywhere; min_duration is only obtained via settings.get("min_duration", 10.0). Once get_speech_detection_settings() is updated to include a min_duration key (using SPEECH_DETECTION_MIN_DURATION and the new default), this docstring will be accurate and discoverable. Until then, the documented knob doesn’t work.

Aligning the config wiring as suggested in config.py will fully realize what’s documented here.
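
A minimal sketch of the missing wiring, assuming the env-var names follow the existing SPEECH_DETECTION_* pattern described in the comment (the function body here is illustrative, not the project's actual implementation):

```python
import os


def get_speech_detection_settings() -> dict:
    """Build speech detection thresholds from environment variables.

    Wires up SPEECH_DETECTION_MIN_DURATION so the documented knob
    actually takes effect; defaults match the values in the docstring.
    """
    return {
        "min_words": int(os.getenv("SPEECH_DETECTION_MIN_WORDS", "10")),
        "min_confidence": float(os.getenv("SPEECH_DETECTION_MIN_CONFIDENCE", "0.7")),
        "min_duration": float(os.getenv("SPEECH_DETECTION_MIN_DURATION", "10.0")),
    }
```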

backends/advanced/src/advanced_omi_backend/services/memory/providers/vector_stores.py (1)

243-268: Remove in-function import uuid and improve exception logging

In both delete_memory and update_memory you re-import uuid inside the method bodies (Lines 247 and 328) even though it’s already imported at the module level (Line 11). This violates the project guideline of keeping all imports at the top of the file and is unnecessary duplication.

Also, for new code paths it’s preferable to log full stack traces when catching exceptions.

Concrete suggestions:

@@     async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
-        try:
-            # Convert memory_id to proper format for Qdrant
-            import uuid
+        try:
+            # Convert memory_id to proper format for Qdrant
             try:
                 # Try to parse as UUID first
                 uuid.UUID(memory_id)
@@
-        except Exception as e:
-            memory_logger.error(f"Qdrant delete memory failed: {e}")
+        except Exception as e:
+            memory_logger.exception(f"Qdrant delete memory failed: {e}")
             return False
@@     async def update_memory(
-        try:
+        try:
@@
-            # Convert memory_id to proper format for Qdrant
-            # Qdrant accepts either UUID strings or unsigned integers
-            import uuid
+            # Convert memory_id to proper format for Qdrant
+            # Qdrant accepts either UUID strings or unsigned integers
             try:
@@
-        except Exception as e:
-            memory_logger.error(f"Qdrant update memory failed: {e}")
+        except Exception as e:
+            memory_logger.exception(f"Qdrant update memory failed: {e}")
             return False

This keeps imports at the top of the file and gives you stack traces for easier debugging. Based on learnings, using logger.exception here is preferred.

Also applies to: 311-354

backends/advanced/src/advanced_omi_backend/models/user.py (1)

22-29: Avoid storing or returning raw API keys on the User model

The additions:

class UserRead(...):
    ...
    api_key: Optional[str] = None
    api_key_created_at: Optional[datetime] = None

class User(..., Document):
    ...
    # API key for MCP access
    api_key: Optional[str] = None
    api_key_created_at: Optional[datetime] = None

suggest that the full MCP API key is:

  • Persisted directly on the users collection; and
  • Exposed via the UserRead schema.

If api_key is a bearer secret, this is a significant security risk (DB compromise or accidental logging/serialization leaks the key). Best practice is to:

  • Store only a hashed version of the key (plus metadata like created_at, last_used_at), and
  • Return the cleartext key only once at creation time, using a dedicated response model, not on the generic UserRead.

Consider refactoring to a separate UserApiKey model/collection with hashed_key, name, created_at, etc., and adjusting the API so UserRead never contains the raw secret.

Also applies to: 62-69
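
A stdlib-only sketch of the hash-at-rest pattern recommended here (the helper names and the unsalted SHA-256 key format are assumptions for illustration; the project may prefer a salted scheme or a dedicated library):

```python
import hashlib
import secrets


def generate_api_key() -> tuple[str, str]:
    """Create an API key; return (cleartext, sha256_hex).

    Only the hash is persisted on the key document; the cleartext
    is returned to the caller exactly once at creation time.
    """
    cleartext = secrets.token_urlsafe(32)
    hashed = hashlib.sha256(cleartext.encode()).hexdigest()
    return cleartext, hashed


def verify_api_key(candidate: str, stored_hash: str) -> bool:
    """Check a presented key against the stored hash in constant time."""
    candidate_hash = hashlib.sha256(candidate.encode()).hexdigest()
    return secrets.compare_digest(candidate_hash, stored_hash)
```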

backends/advanced/webui/src/pages/Memories.tsx (1)

163-168: Fix truthy check on memory.score in threshold filter

Current logic:

const thresholdFilteredMemories = isSemanticFilterActive && relevanceThreshold > 0
  ? currentMemories.filter(memory => {
      if (!memory.score) return true // If no score, show it
      const relevancePercentage = memory.score * 100
      return relevancePercentage >= relevanceThreshold
    })
  : currentMemories

treats a score of 0 as "no score" because !0 evaluates to true, so those memories bypass the threshold filter. If you ever get low-scoring results, they won't be filtered out.

Use an explicit null/undefined check instead:

-      if (!memory.score) return true // If no score, show it
+      if (memory.score == null) return true // If no score, show it
       const relevancePercentage = memory.score * 100
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (1)

47-52: Move os import to the top of the file.

The import os statement on line 48 violates the coding guideline that all imports must be at the top of the file. Move this import alongside the other standard library imports.

 import logging
 import uuid
+import os
 from typing import List, Dict, Any, Optional
 import httpx

Then update the __init__ method:

         # Use custom CA certificate if available
-        import os
         ca_bundle = os.getenv('REQUESTS_CA_BUNDLE')

As per coding guidelines, imports must be at the top of the file.

backends/advanced/webui/src/contexts/AuthContext.tsx (1)

27-73: Also clear mycelia_jwt_token when the access token is invalid

You correctly introduce getStorageKey and store a Mycelia JWT alongside the main access token on login:

localStorage.setItem(getStorageKey('token'), access_token)
localStorage.setItem(getStorageKey('mycelia_jwt_token'), access_token)

However, in the initialization/verification path, if authApi.getMe() fails, you only clear the main token:

localStorage.removeItem(getStorageKey('token'))
setToken(null)
setUser(null)

This leaves a potentially stale mycelia_jwt_token in localStorage, which could let the Mycelia auto-login flow proceed even after the backend has rejected the session token.

Recommend clearing both keys in the failure path to keep them in sync:

       } catch (error) {
         console.error('❌ AuthContext: Token verification failed:', error)
         // Token is invalid, clear it
-        localStorage.removeItem(getStorageKey('token'))
+        localStorage.removeItem(getStorageKey('token'))
+        localStorage.removeItem(getStorageKey('mycelia_jwt_token'))
         setToken(null)
         setUser(null)
       }

You might also consider doing the same in any other place where you intentionally invalidate the access token.

backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (2)

369-388: delete_memory does not use user_id/user_email for MCP context, unlike peer methods.

The get_memory and update_memory methods properly switch the MCP client's user context before operations, but delete_memory ignores the provided user_id and user_email parameters. This may cause the deletion to occur under the wrong user context.

-    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
+    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         """Delete a specific memory by ID.
         
         Args:
             memory_id: Unique identifier of the memory to delete
+            user_id: Optional user identifier for filtering
+            user_email: Optional user email (not used by OpenMemory MCP)
             
         Returns:
             True if successfully deleted, False otherwise
         """
         if not self._initialized:
             await self.initialize()
         
+        # Update MCP client user context for this operation
+        original_user_id = self.mcp_client.user_id
+        self.mcp_client.user_id = user_id or self.user_id
+
         try:
             success = await self.mcp_client.delete_memory(memory_id)
             if success:
                 memory_logger.info(f"🗑️ Deleted memory {memory_id} via MCP")
             return success
         except Exception as e:
-            memory_logger.error(f"Delete memory failed: {e}")
+            memory_logger.exception(f"Delete memory failed: {e}")
             return False
+        finally:
+            # Restore original user_id
+            self.mcp_client.user_id = original_user_id

400-416: Bug: delete_all_user_memories uses self.user_id instead of the passed user_id parameter.

Line 404 sets self.mcp_client.user_id = self.user_id but should use the function's user_id parameter to delete memories for the correct user.

         # Update MCP client user context for this operation
         original_user_id = self.mcp_client.user_id
-        self.mcp_client.user_id = self.user_id  # Use configured user ID
+        self.mcp_client.user_id = user_id  # Use the passed user_id parameter
🟡 Minor comments (18)
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py-502-505 (1)

502-505: Move import to the top of the file.

The import statement at line 503 violates the coding guideline requiring all imports at the top of the file. Importing inside loops causes unnecessary overhead on each iteration.

As per coding guidelines, move the import to the top of the file:

 import asyncio
 import os
 import logging
 import time
 from typing import Dict, Any
+from rq import get_current_job
 
 from advanced_omi_backend.models.job import JobPriority, BaseRQJob, async_job
-
+from advanced_omi_backend.utils.job_utils import check_job_alive
 from advanced_omi_backend.controllers.queue_controller import (
     transcription_queue,
     redis_conn,
     JOB_RESULT_TTL,
     REDIS_URL,
 )

Then remove the import from inside the loop:

     while True:
         # Check if job still exists in Redis (detect zombie state)
-        from advanced_omi_backend.utils.job_utils import check_job_alive
         if not await check_job_alive(redis_client, current_job):
             break

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py-196-263 (1)

196-263: Move imports to the top of the file and consider more specific exception handling.

This block contains multiple imports inside the function (lines 197, 215-216, 221) which violates the coding guideline requiring all imports at the top of the file. Additionally, the bare Exception catches at lines 241 and 246 should ideally be more specific.

As per coding guidelines, move all imports to the top of the file:

 import asyncio
 import os
 import logging
 import time
 from typing import Dict, Any
+from rq import get_current_job
+from rq.job import Job
 
 from advanced_omi_backend.models.job import JobPriority, BaseRQJob, async_job
-
 from advanced_omi_backend.controllers.queue_controller import (
     transcription_queue,
     redis_conn,
     JOB_RESULT_TTL,
     REDIS_URL,
 )
+from advanced_omi_backend.utils.conversation_utils import (
+    analyze_speech,
+    mark_conversation_deleted,
+)

For the exception handling at lines 235-242, consider catching more specific exceptions if possible:

                 for job_id in job_patterns:
                     try:
                         dependent_job = Job.fetch(job_id, connection=redis_conn)
                         if dependent_job and dependent_job.get_status() in ['queued', 'deferred', 'scheduled']:
                             dependent_job.cancel()
                             cancelled_jobs.append(job_id)
                             logger.info(f"✅ Cancelled dependent job: {job_id}")
-                    except Exception as e:
+                    except (NoSuchJobError, InvalidJobOperation) as e:
                         logger.debug(f"Job {job_id} not found or already completed: {e}")

Note: If catching specific RQ exceptions is not feasible in this context, the current broad exception handling is acceptable for this cleanup code, but add a comment explaining why.

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py-284-287 (1)

284-287: Move import to the top of the file.

The import statement at line 285 violates the coding guideline that requires all imports to be at the top of the file after the docstring. Importing inside loops causes unnecessary overhead on each iteration.

As per coding guidelines, move the import to the top of the file:

 import asyncio
 import logging
 import time, os
 from datetime import datetime
 from typing import Dict, Any
 from rq.job import Job
 from advanced_omi_backend.models.job import async_job
 from advanced_omi_backend.controllers.queue_controller import redis_conn
+from rq import get_current_job
+from advanced_omi_backend.utils.job_utils import check_job_alive

Then remove the import from inside the loop:

     while True:
         # Check if job still exists in Redis (detect zombie state)
-        from advanced_omi_backend.utils.job_utils import check_job_alive
         if not await check_job_alive(redis_client, current_job):
             break

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py-263-266 (1)

263-266: Move imports to the top of the file.

The import statements at lines 264-265 violate the coding guideline that requires all imports to be at the top of the file after the docstring.

As per coding guidelines, move these imports to the top of the file:

 import asyncio
 import os
 import logging
 import time
 from typing import Dict, Any, Optional
+from rq import get_current_job
 
 from advanced_omi_backend.models.job import JobPriority, async_job
-
+from advanced_omi_backend.utils.job_utils import check_job_alive
 from advanced_omi_backend.controllers.queue_controller import (
     default_queue,
     JOB_RESULT_TTL,
 )

Then remove these imports from inside the function:

-    # Get current job for zombie detection
-    from rq import get_current_job
-    from advanced_omi_backend.utils.job_utils import check_job_alive
+    # Get current job for zombie detection
     current_job = get_current_job()

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx-603-611 (1)

603-611: Duplicate CSS class in select element.

The focus:ring-2 class is duplicated on Line 606.

- className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
+ className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
backends/advanced/webui/public/frappe-gantt.css-1-1 (1)

1-1: Missing var() function for CSS variable.

The .holiday-label selector uses background:--g-weekend-label-color which is invalid. CSS variables must be wrapped in the var() function.

Find and replace in this minified CSS:

- background:--g-weekend-label-color
+ background:var(--g-weekend-label-color)
backends/advanced/src/advanced_omi_backend/auth.py-101-133 (1)

101-133: Move imports to the top of the file per coding guidelines

The function violates the project's Python guidelines: from datetime import datetime, timedelta and import jwt must be moved to the top of the file with other imports (grouped after standard library imports), not inside the function.

Additionally, a minor optimization: compute now = datetime.utcnow() once at the start of the payload creation and use it for both iat and exp = now + timedelta(hours=24) to ensure tight timestamp alignment.

PyJWT v2.0+ returns str directly from jwt.encode(), so the -> str annotation is correct; if using an older PyJWT version, you may need to add .decode("utf-8").

backends/advanced/src/advanced_omi_backend/services/memory/prompts.py-553-554 (1)

553-554: Module-level prompt initialization captures stale date.

TEMPORAL_ENTITY_EXTRACTION_PROMPT is initialized with datetime.now() at module import time. In long-running processes, this date context will become stale. The prompt includes "Today's date" which would be incorrect after midnight.

Consider generating the prompt dynamically when needed, or documenting this limitation:

-TEMPORAL_ENTITY_EXTRACTION_PROMPT = build_temporal_extraction_prompt(datetime.now())
+# Note: For long-running processes, call build_temporal_extraction_prompt(datetime.now())
+# directly to get fresh date context. This cached version is for convenience.
+TEMPORAL_ENTITY_EXTRACTION_PROMPT = build_temporal_extraction_prompt(datetime.now())

Or better, make it a function that callers invoke:

def get_temporal_extraction_prompt() -> str:
    """Get temporal extraction prompt with current date context."""
    return build_temporal_extraction_prompt(datetime.now())
Docs/setup/windows-gitbash.md-216-222 (1)

216-222: The make installation method via llvm-mingw is problematic for POSIX-style Makefiles.

mingw32-make (a native Windows build of GNU Make) executes recipes using cmd.exe batch semantics and Windows PATH conventions, unlike the POSIX-compatible make provided by MSYS2. This causes incompatibilities with shell syntax, path handling, and Unix tool assumptions—issues that manifest in many standard Makefiles and autotools-generated build files.

Since Git Bash runs within the MSYS2 environment, the recommended approach is to install make directly via MSYS2:

pacman -S make

Alternatively, use Chocolatey (choco install make) or Scoop (scoop install make), which provide Windows builds that work reliably with Git Bash. The Git for Windows SDK also includes a compatible make installation.

Avoid relying on mingw32-make unless the project specifically requires native MinGW build semantics (e.g., CMake "MinGW Makefiles" generator).

backends/advanced/webui/src/pages/MemoriesRouter.tsx-12-17 (1)

12-17: Handle token removal when token becomes null.

The current logic only stores the token when it's truthy, but doesn't remove the old token from localStorage when the token becomes null or undefined. This could leave stale tokens in storage.

Apply this diff to handle token removal:

   useEffect(() => {
     // Store JWT in localStorage for potential direct Mycelia access
     if (token) {
       localStorage.setItem('mycelia_jwt_token', token)
+    } else {
+      localStorage.removeItem('mycelia_jwt_token')
     }
   }, [token])
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py-145-145 (1)

145-145: Remove duplicate import.

This import is duplicated from line 49 and should be removed. The import should only appear once at the top of the file (see previous comment).

Once the imports are moved to the top of the file as suggested in the previous comment, remove this duplicate import statement.

As per coding guidelines: "NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring".

backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py-49-49 (1)

49-49: Move import to the top of the file.

The import statement is placed in the middle of the function. Per coding guidelines, ALL imports must be at the top of the file after the docstring, not in the middle of functions.

Apply this diff to move the import to the top:

 """
 Memory-related RQ job functions.
 
 This module contains jobs related to memory extraction and processing.
 """
 
 import logging
 import time
 import uuid
 from datetime import UTC, datetime
 from typing import Dict, Any
 
 from advanced_omi_backend.models.job import JobPriority, BaseRQJob, async_job
 from advanced_omi_backend.controllers.queue_controller import (
     memory_queue,
     JOB_RESULT_TTL,
 )
+from advanced_omi_backend.models.conversation import Conversation
+from advanced_omi_backend.services.memory import get_memory_service
+from advanced_omi_backend.users import get_user_by_id

Then remove the imports from inside the function at lines 48-50.

As per coding guidelines: "NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring".

Committable suggestion skipped: line range outside the PR's diff.

Docs/setup/prerequisites.md-343-347 (1)

343-347: Confirm GitHub Issues link points to the intended project.

The “Getting Help” section links to https://github.com/BasedHardware/Friend/issues, while this repository is chronicler-ai/chronicle and the doc is titled “Chronicle Prerequisites”. If support for Chronicle should use a different tracker (e.g., this repo’s Issues), it might be worth updating or clarifying that link to avoid confusing users.

Docs/setup/tailscale.md-689-689 (1)

689-689: Incorrect repository reference in help link.

The Chronicle Issues link points to BasedHardware/Friend but this is the chronicler-ai/chronicle repository. Update to reference the correct repository.

-- **Chronicle Issues**: https://github.com/BasedHardware/Friend/issues
+- **Chronicle Issues**: https://github.com/chronicler-ai/chronicle/issues
backends/advanced/webui/src/pages/ConversationsTimeline.tsx-219-230 (1)

219-230: Potential runtime error in ISO timestamp detection.

Line 221 uses timestamp.split('T')[1].includes('-') without checking if the split produces two elements. This will throw if timestamp is a date string without a time component (e.g., "2025-12-09").

Apply this diff to make the detection more robust:

   const formatDate = (timestamp: number | string): Date => {
     if (typeof timestamp === 'string') {
-      const isoString = timestamp.endsWith('Z') || timestamp.includes('+') || timestamp.includes('T') && timestamp.split('T')[1].includes('-')
+      const isoString = timestamp.endsWith('Z') || timestamp.includes('+') || 
+        (timestamp.includes('T') && timestamp.split('T')[1]?.includes('-'))
         ? timestamp
         : timestamp + 'Z'
       return new Date(isoString)
     }
     if (timestamp === 0) {
       return new Date()
     }
     return new Date(timestamp * 1000)
   }
backends/advanced/src/advanced_omi_backend/routers/modules/user_routes.py-54-56 (1)

54-56: Misleading comment about API key length.

secrets.token_urlsafe(32) generates 43 URL-safe base64 characters, not "64 hex characters". The 32 refers to bytes of randomness.

-        # Generate a secure random API key (32 bytes = 64 hex characters)
+        # Generate a secure random API key (32 bytes of entropy, ~43 characters)
         new_api_key = secrets.token_urlsafe(32)
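
A quick standard-library check confirms the actual length:

```python
import secrets

# 32 random bytes encode to 43 URL-safe base64 characters
# (44 characters minus one stripped "=" padding character), not 64 hex.
key = secrets.token_urlsafe(32)
print(len(key))  # 43
```
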
backends/advanced/webui/src/pages/MyceliaTimeline.tsx-269-274 (1)

269-274: Fragile memory ID extraction - assumes no hyphens in original ID.

The ID parsing assumes format "memory-id-rangeIndex" and uses slice(0, -1).join('-'). If the original memory ID contains hyphens (e.g., UUIDs like abc-def-123-0), this will incorrectly truncate it.

Consider storing the original memory ID in the task data structure or using a different delimiter:

+interface TimelineTask {
+  id: string
+  memoryId: string  // Store original memory ID
+  name: string
+  ...
+}

 timeRanges.forEach((range, index) => {
   tasks.push({
     id: `${memory.id}-${index}`,
+    memoryId: memory.id,
     name: range.name || memory.metadata?.name || memory.content.substring(0, 30),
     ...
   })
 })

 .on('click', function(event, d) {
   event.stopPropagation()
-  const memoryId = d.id.split('-').slice(0, -1).join('-')
-  navigate(`/memories/${memoryId}`)
+  navigate(`/memories/${d.memoryId}`)
 })

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py-174-174 (1)

174-174: Use time.time() instead of deprecated asyncio.get_event_loop().time().

asyncio.get_event_loop() is deprecated when no event loop is running (Python 3.10+), and the loop clock is monotonic with an arbitrary epoch, so it makes a poor source for wall-clock IDs. For generating unique IDs, use time.time() or time.monotonic() directly.

+import time
+
 async def add_memory(content: str, user: User, source_id: Optional[str] = None):
     ...
-        memory_source_id = source_id or f"manual_{user.user_id}_{int(asyncio.get_event_loop().time())}"
+        memory_source_id = source_id or f"manual_{user.user_id}_{int(time.time())}"

Committable suggestion skipped: line range outside the PR's diff.
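
The replacement can be sketched in two lines (the user id below is illustrative, not from the controller):

```python
import time

# Sketch of the suggested replacement: epoch seconds from time.time()
# give a meaningful, roughly-unique suffix for a manual source id.
user_id = "user-123"
memory_source_id = f"manual_{user_id}_{int(time.time())}"
print(memory_source_id)  # e.g. manual_user-123_1733760000
```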

Comment on lines +1 to +11
#!/usr/bin/env python3
"""Create a proper Mycelia API key (not OAuth client) for Friend-Lite user."""

import os
import sys
import secrets
import hashlib
from pymongo import MongoClient
from bson import ObjectId
from datetime import datetime

🛠️ Refactor suggestion | 🟠 Major

Move all imports to the top of the file.

Lines 26, 138, and 266 import modules (base64, re) inside functions. As per coding guidelines, ALL imports must be at the top of the file.

 #!/usr/bin/env python3
 """Create a proper Mycelia API key (not OAuth client) for Friend-Lite user."""

+import base64
 import os
+import re
 import sys
 import secrets
 import hashlib
 from pymongo import MongoClient
 from bson import ObjectId
 from datetime import datetime

Then remove the inline imports at lines 26, 138, and 266.

🤖 Prompt for AI Agents
In backends/advanced/scripts/create_mycelia_api_key.py around lines 1 to 11 and
referencing inline imports at lines 26, 138, and 266, move the inline imports
(base64 and re) to the top of the file with the other imports, i.e., add import
base64 and import re alongside os, sys, secrets, hashlib, etc., and then remove
the inline import statements at lines 26, 138, and 266 so all imports are
consolidated at the file top per guidelines.

Comment on lines +53 to +56
        print(f"📊 Connected to MongoDB:")
        print(f"   URL: {mongo_url}")
        print(f"   Friend-Lite DB: {friendlite_db}")
        print(f"   Mycelia DB: {mycelia_db}\n")
⚠️ Potential issue | 🟠 Major

Avoid printing full MongoDB URL - may contain credentials.

The MongoDB connection string may contain username and password. Consider masking or truncating sensitive parts before printing.

         print(f"📊 Connected to MongoDB:")
-        print(f"   URL: {mongo_url}")
+        # Mask credentials in URL for security
+        from urllib.parse import urlparse
+        parsed = urlparse(mongo_url)
+        safe_url = f"{parsed.scheme}://{parsed.hostname}:{parsed.port or 27017}"
+        print(f"   URL: {safe_url}")
         print(f"   Friend-Lite DB: {friendlite_db}")
         print(f"   Mycelia DB: {mycelia_db}\n")

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.14.8)

53-53: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In backends/advanced/scripts/sync_friendlite_mycelia.py around lines 53 to 56,
the script prints the full MongoDB connection URL which may contain
username/password; instead parse the URI and redact credentials before logging
(e.g., replace the user:pass portion with user:**** or remove credentials
entirely, or log only the host(s) and port(s) and the database names). Update
the prints to build a sanitized_mongo_url from the parsed URI (masking the
password or the entire userinfo portion) and log that sanitized string along
with friendlite_db and mycelia_db.
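
The redaction can be done with urllib.parse alone; the connection string below is a made-up example, not one from the script:

```python
from urllib.parse import urlparse

def sanitize_mongo_url(mongo_url: str) -> str:
    """Drop the userinfo (credentials) portion, keeping scheme, host, port."""
    parsed = urlparse(mongo_url)
    host = parsed.hostname or "localhost"
    port = parsed.port or 27017
    return f"{parsed.scheme}://{host}:{port}"

print(sanitize_mongo_url("mongodb://admin:s3cret@db.example.com:27017/friend-lite"))
# mongodb://db.example.com:27017
```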

Comment on lines 31 to 36
 # Default speech detection settings
 DEFAULT_SPEECH_DETECTION_SETTINGS = {
-    "min_words": 5,  # Minimum words to create conversation
-    "min_confidence": 0.5,  # Word confidence threshold (unified)
-    "min_duration": 2.0,  # Minimum speech duration (seconds)
+    "min_words": 10,  # Minimum words to create conversation (increased from 5)
+    "min_confidence": 0.7,  # Word confidence threshold (increased from 0.5)
+    "min_duration": 10.0,  # Minimum speech duration in seconds (increased from 2.0)
 }
⚠️ Potential issue | 🟠 Major

Wire min_duration through get_speech_detection_settings (env + defaults).

You’ve added min_duration to DEFAULT_SPEECH_DETECTION_SETTINGS and analyze_speech’s docstring mentions SPEECH_DETECTION_MIN_DURATION, but get_speech_detection_settings() currently only exposes min_words and min_confidence. As a result:

  • analyze_speech always falls back to the hard-coded default 10.0 via settings.get("min_duration", 10.0).
  • Setting SPEECH_DETECTION_MIN_DURATION in the environment has no effect, which is surprising given the docstring.

Recommend wiring it through here:

 def get_speech_detection_settings():
     """Get speech detection settings from environment or defaults."""
 
     return {
-        "min_words": int(os.getenv("SPEECH_DETECTION_MIN_WORDS", DEFAULT_SPEECH_DETECTION_SETTINGS["min_words"])),
-        "min_confidence": float(os.getenv("SPEECH_DETECTION_MIN_CONFIDENCE", DEFAULT_SPEECH_DETECTION_SETTINGS["min_confidence"])),
+        "min_words": int(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_WORDS",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_words"],
+            )
+        ),
+        "min_confidence": float(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_CONFIDENCE",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_confidence"],
+            )
+        ),
+        "min_duration": float(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_DURATION",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_duration"],
+            )
+        ),
     }

This keeps the stricter defaults while making the duration threshold actually configurable and aligned with the docstring in analyze_speech.

Also applies to: 134-140

🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/config.py around lines 31-36 (and
also update the analogous block at lines 134-140), add wiring for "min_duration"
into get_speech_detection_settings so the env var SPEECH_DETECTION_MIN_DURATION
overrides the DEFAULT_SPEECH_DETECTION_SETTINGS value; specifically, read the
env var (parse to float with a safe default), include it in the returned
settings dict alongside min_words and min_confidence, and ensure the function
returns the key "min_duration" so analyze_speech uses the configured value
rather than always falling back to 10.0.
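
The wiring the review asks for can be sketched in a few lines (env var names follow the analyze_speech docstring; defaults mirror the stricter values from the diff above):

```python
import os

# Defaults mirror the stricter values introduced by the PR.
DEFAULT_SPEECH_DETECTION_SETTINGS = {
    "min_words": 10,
    "min_confidence": 0.7,
    "min_duration": 10.0,
}

def get_speech_detection_settings() -> dict:
    """Read speech detection settings from the environment, with defaults."""
    d = DEFAULT_SPEECH_DETECTION_SETTINGS
    return {
        "min_words": int(os.getenv("SPEECH_DETECTION_MIN_WORDS", d["min_words"])),
        "min_confidence": float(os.getenv("SPEECH_DETECTION_MIN_CONFIDENCE", d["min_confidence"])),
        "min_duration": float(os.getenv("SPEECH_DETECTION_MIN_DURATION", d["min_duration"])),
    }

os.environ["SPEECH_DETECTION_MIN_DURATION"] = "5.5"
print(get_speech_detection_settings()["min_duration"])  # 5.5
```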

Comment on lines +46 to 52
+    mongodb_database = os.getenv("MONGODB_DATABASE", "friend-lite")
     client = AsyncIOMotorClient(mongodb_uri)
     try:
-        database = client.get_default_database("friend-lite")
+        database = client.get_default_database(mongodb_database)
     except ConfigurationError:
-        database = client["friend-lite"]
+        database = client[mongodb_database]
         raise
⚠️ Potential issue | 🔴 Critical

Fix the exception handling logic - fallback is unreachable.

The code assigns a fallback database on line 51 when get_default_database() fails, but immediately re-raises the exception on line 52. This prevents the fallback from being used and causes initialization to fail despite having a valid fallback database.

Apply this diff to fix the fallback logic:

             mongodb_database = os.getenv("MONGODB_DATABASE", "friend-lite")
             client = AsyncIOMotorClient(mongodb_uri)
             try:
                 database = client.get_default_database(mongodb_database)
             except ConfigurationError:
                 database = client[mongodb_database]
-                raise
-            _beanie_initialized = True
+            
             # Initialize Beanie
             await init_beanie(
                 database=database,
                 document_models=[User, Conversation, AudioFile],
             )

Note: The duplicate _beanie_initialized = True on lines 53 and 60 should also be cleaned up - only the assignment on line 60 (after successful init_beanie) is needed.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/models/job.py around lines 46 to
52, the except block currently sets a fallback database but immediately
re-raises the ConfigurationError, making the fallback unreachable; remove the
"raise" so the fallback assignment (database = client[mongodb_database]) is used
when get_default_database() fails, and ensure only a single assignment of
_beanie_initialized = True remains (remove the earlier duplicate and keep the
assignment after successful init_beanie).
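
The reachable-fallback pattern the review recommends can be demonstrated with a stub standing in for AsyncIOMotorClient (class names here are illustrative):

```python
# Sketch: once the bare `raise` is removed, the except branch's fallback
# assignment is actually used instead of aborting initialization.
class ConfigurationError(Exception):
    """Stands in for pymongo.errors.ConfigurationError."""

class StubClient:
    def get_default_database(self, name):
        # Simulate a MongoDB URI with no default database configured
        raise ConfigurationError("No default database name defined")
    def __getitem__(self, name):
        return f"db:{name}"

client = StubClient()
try:
    database = client.get_default_database("friend-lite")
except ConfigurationError:
    database = client["friend-lite"]  # fallback is reachable: no re-raise
print(database)  # db:friend-lite
```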

Comment on lines +10 to +11
from fastapi import APIRouter, Depends, Query, Body
from pydantic import BaseModel
🛠️ Refactor suggestion | 🟠 Major

Remove unused Body import.

The Body import is not used in this file. Only BaseModel from pydantic is needed.

-from fastapi import APIRouter, Depends, Query, Body
+from fastapi import APIRouter, Depends, Query
 from pydantic import BaseModel
📝 Committable suggestion

Suggested change
-from fastapi import APIRouter, Depends, Query, Body
+from fastapi import APIRouter, Depends, Query
 from pydantic import BaseModel
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
around lines 10 to 11, the import line includes an unused Body symbol from
fastapi; remove Body from the import list so only APIRouter, Depends, and Query
are imported, and keep BaseModel from pydantic as-is to eliminate the unused
import warning.

Comment on lines +29 to +31
# Add base tag for path-based routing (if X-Forwarded-Prefix header is present)
sub_filter '<head>' '<head><base href="$http_x_forwarded_prefix/">';
sub_filter_once on;
⚠️ Potential issue | 🔴 Critical

Critical: XSS vulnerability from unsanitized header injection.

The X-Forwarded-Prefix header value is injected directly into HTML without sanitization or validation, creating an XSS vulnerability. Additionally, the base tag is injected unconditionally even when the header is absent, which may cause unexpected SPA behavior.

Attack vector example:

X-Forwarded-Prefix: "><script>alert('XSS')</script><a href="

Results in:

<head><base href=""><script>alert('XSS')</script><a href="/">

Apply this diff to add validation and conditional injection:

+    # Validate and set base path only when X-Forwarded-Prefix is present
+    map $http_x_forwarded_prefix $validated_base_path {
+        default "";
+        "~^/[a-zA-Z0-9/_-]+$" "$http_x_forwarded_prefix/";
+    }
+
     # Add base tag for path-based routing (if X-Forwarded-Prefix header is present)
-    sub_filter '<head>' '<head><base href="$http_x_forwarded_prefix/">';
+    sub_filter '<head>' '<head>$validated_base_path';
     sub_filter_once on;
+    sub_filter_last_modified off;
+    sub_filter_types text/html;

Then update the map block to conditionally inject the base tag:

     # Validate and set base path only when X-Forwarded-Prefix is present
     map $http_x_forwarded_prefix $validated_base_path {
         default "";
-        "~^/[a-zA-Z0-9/_-]+$" "$http_x_forwarded_prefix/";
+        "~^/[a-zA-Z0-9/_-]+$" '<base href="$http_x_forwarded_prefix/">';
     }

This ensures:

  1. Only alphanumeric paths with /, _, - are allowed (prevents XSS)
  2. Base tag is only injected when header is present and valid
  3. Empty/missing headers don't inject unnecessary base tags
🤖 Prompt for AI Agents
In backends/advanced/webui/nginx.conf around lines 29-31, the X-Forwarded-Prefix
header is injected directly into HTML causing an XSS risk and the base tag is
added even when the header is absent; fix by adding an nginx map that validates
the header against a whitelist regex allowing only alphanumerics, slashes,
underscores and hyphens (and normalizing to include a leading and trailing slash
as needed), set the mapped variable to empty when the header is missing or
invalid, and change the sub_filter to inject the base tag only when the mapped
variable is non-empty so that unsanitized/invalid headers are never inserted and
empty/missing headers do not add a base tag.

"axios": "^1.6.2",
"clsx": "^2.0.0",
"d3": "^7.9.0",
"frappe-gantt": "^1.0.4",
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Search for imports/usage of both Gantt libraries
echo "=== frappe-gantt usage ==="
rg -n 'from.*frappe-gantt|import.*frappe-gantt|require.*frappe-gantt' backends/advanced/webui/

echo ""
echo "=== react-gantt-timeline usage ==="
rg -n 'from.*react-gantt-timeline|import.*react-gantt-timeline|require.*react-gantt-timeline' backends/advanced/webui/

echo ""
echo "=== Check package.json for both dependencies ==="
cat backends/advanced/webui/package.json | grep -A 2 -B 2 "gantt"

Repository: chronicler-ai/chronicle

Length of output: 190


🏁 Script executed:

# Check npm registry for frappe-gantt package
npm view frappe-gantt versions --json 2>&1 | head -50
npm view frappe-gantt time.modified 2>&1
npm view frappe-gantt dist-tags 2>&1
npm view frappe-gantt maintainers 2>&1

Repository: chronicler-ai/chronicle

Length of output: 190


🌐 Web query:

frappe-gantt npm package version 1.0.4 maintenance status

💡 Result:

Short answer: version 1.0.4 exists but not on the original unscoped frappe-gantt package — it’s published under at least one scoped/forked package (@rickeless/frappe-gantt). Security/health trackers list 1.0.4 and mark the project as actively maintained/healthy.

Sources:

  • npm (original frappe-gantt showing v1.0.3). [1]
  • npm (@rickeless/frappe-gantt showing v1.0.4 published 2 months ago). [2]
  • Snyk package report (lists latest 1.0.4; maintenance rated “Healthy”). [3]
  • Frappe blog post about Gantt v1 release (Jan 9, 2025). [4]

If you want, I can: (a) check the upstream GitHub repo for recent commits/releases and who published 1.0.4, or (b) show exact publish dates and changelog for 1.0.4. Which do you prefer?


🌐 Web query:

frappe-gantt npm registry active maintained

💡 Result:

Short answer: Yes — frappe-gantt is actively maintained and published on npm.

Evidence:

  • npm package page shows recent releases and weekly downloads. [1]
  • Official GitHub repo (frappe/gantt) is active (recent commits, open/closed PRs and issues). [2]
  • Frappe announced Gantt v1 and ongoing work in Jan 9, 2025, indicating active maintenance by the Frappe team. [3]

Sources:
[1] npm — frappe-gantt package.
[2] GitHub — frappe/gantt repository (commits, PRs, issues).
[3] Frappe blog — “Gantt v1 is out!” (Jan 9, 2025).


Fix frappe-gantt version specification — 1.0.4 does not exist on the unscoped package.

The version 1.0.4 is not available on the frappe-gantt npm package (latest is 1.0.3). Version 1.0.4 exists only under the scoped package @rickeless/frappe-gantt. Either use "^1.0.3" for the original package or verify the intended package name. The frappe-gantt library itself is actively maintained by the Frappe team as of January 2025.

Additionally, verify that both frappe-gantt and react-gantt-timeline serve distinct purposes in the application. If they have overlapping functionality, consider consolidating to a single library to reduce dependency complexity.

🤖 Prompt for AI Agents
In backends/advanced/webui/package.json around line 17, the dependency
"frappe-gantt": "^1.0.4" references a non-existent unscoped version; change it
to the correct existing version (e.g., "^1.0.3") or replace the package name
with the intended scoped package "@rickeless/frappe-gantt": "^1.0.4" if that was
deliberate, then run npm install and update lockfile; also audit whether
"frappe-gantt" and "react-gantt-timeline" overlap and, if so, consolidate to one
library to avoid redundant dependencies.

Comment on lines +57 to +75
  useEffect(() => {
    if (!svgRef.current) return

    const svg = d3.select(svgRef.current)
    const node = svg.node()

    if (node) {
      node.style.touchAction = 'none'
      node.style.webkitUserSelect = 'none'
      node.style.userSelect = 'none'
    }

    svg.call(zoomBehavior as any)
    svg.property('__zoom', transform)

    return () => {
      svg.on('.zoom', null)
    }
  }, [zoomBehavior, transform])
⚠️ Potential issue | 🟠 Major

transform in dependency array causes effect to re-run on every zoom event.

Including transform in the dependencies means this effect re-runs every time the user zooms, re-applying styles and re-attaching the zoom behavior. D3 already manages the __zoom property internally during zoom operations, so line 70 becomes redundant after initialization and may interfere with D3's state.

Consider separating initialization from transform syncing:

   useEffect(() => {
     if (!svgRef.current) return

     const svg = d3.select(svgRef.current)
     const node = svg.node()

     if (node) {
       node.style.touchAction = 'none'
       node.style.webkitUserSelect = 'none'
       node.style.userSelect = 'none'
     }

     svg.call(zoomBehavior as any)
-    svg.property('__zoom', transform)

     return () => {
       svg.on('.zoom', null)
     }
-  }, [zoomBehavior, transform])
+  }, [zoomBehavior])

If you need to programmatically set an initial transform, consider a separate one-time initialization or use a ref to track whether it's the first mount.

📝 Committable suggestion


Suggested change
   useEffect(() => {
     if (!svgRef.current) return
     const svg = d3.select(svgRef.current)
     const node = svg.node()
     if (node) {
       node.style.touchAction = 'none'
       node.style.webkitUserSelect = 'none'
       node.style.userSelect = 'none'
     }
     svg.call(zoomBehavior as any)
-    svg.property('__zoom', transform)
     return () => {
       svg.on('.zoom', null)
     }
-  }, [zoomBehavior, transform])
+  }, [zoomBehavior])

Comment on lines +264 to +275
    return `
      <div class="popup-wrapper" style="background: white; padding: 12px; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); min-width: 250px;">
        <h3 style="margin: 0 0 8px 0; font-size: 14px; font-weight: 600;">${task.name}</h3>
        <p style="margin: 4px 0; font-size: 12px; color: #666;">
          <strong>Start:</strong> ${startDate.toLocaleDateString('en-US', formatOptions)}
        </p>
        <p style="margin: 4px 0; font-size: 12px; color: #666;">
          <strong>End:</strong> ${endDate.toLocaleDateString('en-US', formatOptions)}
        </p>
        ${memory?.content ? `<p style="margin: 8px 0 0 0; font-size: 12px; color: #333; border-top: 1px solid #eee; padding-top: 8px;">${memory.content}</p>` : ''}
      </div>
    `
⚠️ Potential issue | 🟠 Major

Potential XSS vulnerability in popup HTML.

User-controlled content (task.name and memory.content) is interpolated directly into HTML without sanitization. A malicious memory name or content could inject scripts.

Sanitize user content before inserting into HTML:

const escapeHtml = (str: string) => {
  const div = document.createElement('div')
  div.textContent = str
  return div.innerHTML
}

// Then use it:
custom_popup_html: (task: any) => {
  const memory = displayMemories.find(m => task.id.startsWith(m.id))
  const safeName = escapeHtml(task.name)
  const safeContent = memory?.content ? escapeHtml(memory.content) : ''
  // ... use safeName and safeContent in the template
}
🤖 Prompt for AI Agents
backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx lines 264-275: the
popup template interpolates user-controlled task.name and memory.content
directly into HTML causing XSS risk; implement a small HTML-escaping helper
(e.g., createElement('div') and set textContent to get escaped HTML or a utility
that replaces &,<,>,",') and use the escaped values (safeName and safeContent)
instead of raw task.name/memory.content when building the template; ensure
memory.content defaults to an empty string when absent and avoid using innerHTML
with untrusted input.

Comment on lines +169 to +175
              <div className="mt-4 p-3 bg-blue-50 dark:bg-blue-900/20 border border-blue-200 dark:border-blue-800 rounded-md">
                <p className="text-sm text-blue-800 dark:text-blue-300">
                  <strong>MCP Server URL:</strong> http://your-server:8000/mcp/conversations/sse
                  <br />
                  <strong>Authorization Header:</strong> Bearer {apiKey}
                </p>
              </div>
🛠️ Refactor suggestion | 🟠 Major

Replace hardcoded server URL with dynamic value.

Line 171 displays http://your-server:8000/mcp/conversations/sse, which is a placeholder. Users will need to manually replace this, creating friction. The actual server URL should be derived from the current window location or configuration.

               <div className="mt-4 p-3 bg-blue-50 dark:bg-blue-900/20 border border-blue-200 dark:border-blue-800 rounded-md">
                 <p className="text-sm text-blue-800 dark:text-blue-300">
-                  <strong>MCP Server URL:</strong> http://your-server:8000/mcp/conversations/sse
+                  <strong>MCP Server URL:</strong> {window.location.protocol}//{window.location.hostname}:8000/mcp/conversations/sse
                   <br />
                   <strong>Authorization Header:</strong> Bearer {apiKey}
                 </p>
               </div>

@thestumonkey thestumonkey deleted the mcp branch December 10, 2025 15:54