Plugin UI v2 & other fixes#278

Closed
AnkushMalaker wants to merge 12 commits into train-who-spoke from plugin-ui-v2
Conversation

AnkushMalaker (Collaborator) commented Jan 27, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added always-persist audio setting to automatically create conversations for all sessions
    • Introduced structured plugin configuration UI with test connection functionality
    • Implemented transcription fallback to batch mode when streaming is unavailable
    • Added processing status tracking for conversations
    • Added version number tracking for conversation transcripts and memory versions
  • UI Enhancements

    • New miscellaneous settings panel in system configuration
    • Plugin settings form with orchestration, configuration, and secrets management
  • Bug Fixes

    • Improved error visibility in job status API and transcription failures
  • Testing

    • Added mock servers for LLM and STT to support testing without external APIs


futureisAJASU and others added 5 commits January 14, 2026 10:27
This commit introduces a comprehensive plugin configuration UI with the following enhancements:
- Add PluginSettingsForm component for plugin configuration
- Create modular plugin configuration components:
  * EnvVarsSection - manage plugin environment variables
  * FormField - reusable form field component
  * OrchestrationSection - configure plugin orchestration settings
  * PluginConfigPanel - main plugin configuration panel
  * PluginListSidebar - plugin list and navigation
- Update plugin service to support new configuration endpoints
- Enhance system controller and routes for plugin management
- Update Plugins page with new UI components
- Enhance API service with plugin configuration methods
…ist feature

- Updated Docker Compose configuration to include mock streaming services for testing.
- Introduced `always_persist` flag in audio stream and conversation management, ensuring audio is saved even if transcription fails.
- Enhanced conversation model to track processing status and persist audio data, improving reliability in audio handling.
- Added integration tests to verify the functionality of the always_persist feature, ensuring audio is correctly stored in various scenarios.
- Improved logging for audio processing and conversation state transitions to facilitate debugging and monitoring.
coderabbitai bot (Contributor) commented Jan 27, 2026

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

This PR introduces comprehensive audio persistence and plugin management features, including always-persist audio recording, structured plugin configuration with metadata discovery, OpenMemory MCP integration with metadata-based isolation, transcription fallback to batch mode, conversation versioning with numeric indices, new web UI components for plugin and miscellaneous settings management, and extensive test infrastructure including mock LLM/STT servers and integration test suite.

Changes

Cohort / File(s) Summary
Configuration & Models
backends/advanced/src/advanced_omi_backend/config.py, config/defaults.yml, backends/advanced/src/advanced_omi_backend/models/conversation.py
Added get_misc_settings() and save_misc_settings() functions for managing audio and transcription settings. Added always_persist_enabled and use_provider_segments config options. Extended MemoryVersion with processing_status and always_persist fields; added active_transcript_version_number and active_memory_version_number computed properties to Conversation.
Audio Streaming & Persistence
backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py, backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py, tests/libs/audio_stream_library.py
Added always_persist parameter to audio streaming methods (send_audio_start, stream_wav_file, stream_audio_file, start_stream). Modified audio job to create placeholder conversations when always_persist_enabled is true; stores conversation IDs in Redis with TTL.
System & Plugin Management
backends/advanced/src/advanced_omi_backend/controllers/system_controller.py, backends/advanced/src/advanced_omi_backend/routers/modules/system_routes.py, backends/advanced/src/advanced_omi_backend/services/plugin_service.py
Added comprehensive plugin metadata discovery and structured configuration endpoints: get_plugins_metadata(), update_plugin_config_structured(), test_plugin_connection(). Introduced schema inference helpers (extract_env_var_name(), infer_field_type(), load_schema_yml(), infer_schema_from_config(), mask_secrets_in_config(), get_plugin_metadata()).
Security & Logging Utilities
backends/advanced/src/advanced_omi_backend/utils/logging_utils.py
New module providing comprehensive secret masking utilities: mask_dict(), mask_string(), safe_log_config(), mask_connection_string(), create_masked_repr(), and is_secret_field() for secure configuration logging.
Plugin Implementations
backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/plugin.py, backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/email_service.py, backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py
Added plugin metadata (name, description) and test_connection() method to EmailSummarizerPlugin and HomeAssistantPlugin. Enhanced SMTP error handling in email service with specific exception types and detailed diagnostic messages.
Memory Providers
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py, backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py
Updated add_memories() signature to accept optional metadata. Modified OpenMemory provider to use metadata-based user isolation instead of client context manipulation; search_memories() and get_all_memories() now filter results by chronicle_user_id.
Transcription & Websocket
backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py, backends/advanced/src/advanced_omi_backend/services/transcription/mock_provider.py, backends/advanced/src/advanced_omi_backend/services/transcription/streaming_consumer.py, backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py
Added mock transcription provider support with get_mock_transcription_provider(). Extended MockTranscriptionProvider with fail_mode parameter for testing failures. Reduced discovery loop delay in streaming consumer. Introduced rolling batch processing in websocket controller when accumulating 30 minutes of audio; added streaming mode validation with structured WebSocket error responses; stores user context for batch tracking; adds _process_rolling_batch() helper.
Conversation & Job Processing
backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py, backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py, backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
Modified conversation flow to reuse placeholder conversations from Redis for streaming sessions; updates processing_status field through title generation. Reworked memory job to conditionally create memory versions only when memories exist; maps provider using enum mapping. Introduced transcription fallback with create_audio_only_conversation() and transcription_fallback_check_job() to handle unavailable streaming transcription.
Application Startup
backends/advanced/src/advanced_omi_backend/app_factory.py
Added initialize_openmemory_user() function to register OpenMemory users on startup with connectivity testing and fallback to auto-creation.
Controllers & API
backends/advanced/src/advanced_omi_backend/controllers/conversation_controller.py, backends/advanced/src/advanced_omi_backend/controllers/queue_controller.py, backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py
Exposed processing_status and always_persist fields in conversation responses. Added documentation for always_persist setting in queue controller. Enhanced get_job_status to include error details (error_message, exc_info) when job status is failed.
Frontend - Plugin Components
backends/advanced/webui/src/components/PluginSettingsForm.tsx, backends/advanced/webui/src/components/plugins/FormField.tsx, backends/advanced/webui/src/components/plugins/EnvVarsSection.tsx, backends/advanced/webui/src/components/plugins/OrchestrationSection.tsx, backends/advanced/webui/src/components/plugins/PluginConfigPanel.tsx, backends/advanced/webui/src/components/plugins/PluginListSidebar.tsx
Comprehensive new plugin UI components: PluginSettingsForm with plugin selection and configuration editing; FormField with multi-type support (string, number, boolean, password, enum); EnvVarsSection for secrets and environment variables; OrchestrationSection for event-based plugin triggering; PluginConfigPanel with tabbed interface and test connection; PluginListSidebar with status indicators and enable/disable toggles.
Frontend - Pages & Services
backends/advanced/webui/src/pages/Conversations.tsx, backends/advanced/webui/src/pages/System.tsx, backends/advanced/webui/src/pages/Plugins.tsx, backends/advanced/webui/src/components/ConversationVersionDropdown.tsx, backends/advanced/webui/src/components/ConversationVersionHeader.tsx, backends/advanced/webui/src/services/api.ts, backends/advanced/webui/src/hooks/useSimpleAudioRecording.ts
Extended Conversation interface with numeric version fields. Added miscellaneous settings panel to System page with toggles for always_persist_enabled and use_provider_segments. Added form/YAML toggle to Plugins page. Updated version components to use numeric indices. Added new API endpoints for misc settings and structured plugin config. Enhanced audio recording hook to handle JSON error messages from WebSocket.
Docker & Test Infrastructure
backends/advanced/docker-compose-test.yml, tests/Makefile, tests/bin/start-containers.sh, tests/bin/start-rebuild-containers.sh
Added mock-streaming-stt and mock-llm services to docker-compose. Updated speaker-service-test to use CUDA cu12.6. Extended Makefile with test-no-api and test-with-api-keys targets; added config-driven container management and show-config target. Added .env.test loading to container startup scripts.
Test Configuration
tests/configs/deepgram-openai.yml, tests/configs/mock-services.yml, tests/configs/mock-transcription-failure.yml
Enabled always_persist_enabled and speaker recognition in deepgram-openai config. Added mock STT/STT-stream services and updated LLM/embedding URLs to mock server in mock-services.yml. Created new mock-transcription-failure.yml config with invalid API keys for failure testing.
Test Mock Servers
tests/libs/mock_llm_server.py, tests/libs/mock_streaming_stt_server.py
New mock LLM HTTP server (port 11435) with OpenAI-compatible endpoints: /v1/chat/completions, /v1/embeddings, /v1/models. Implements fact extraction and memory update response modes. New Deepgram-compatible streaming STT WebSocket server (port 9999) with interim/final results and word-level metadata.
Test Suites & Keywords
tests/integration/always_persist_audio_tests.robot, tests/resources/conversation_keywords.robot, tests/resources/redis_keywords.robot, tests/resources/system_keywords.robot, tests/resources/websocket_keywords.robot, tests/infrastructure/infra_tests.robot
Added comprehensive integration test suite for always-persist feature with scenarios for placeholder conversations, transcription failures, concurrent sessions. Added keywords: Get Conversations By Client ID, Wait For Conversation By Client ID, Verify Conversation Processing Status, Verify Conversation Always Persist Flag, Verify Placeholder Conversation Title, Verify Redis Key Exists, Verify Conversation Current Key, Start/Stop Mock Transcription Server, Set/Get Always Persist Enabled, Open/Stream Audio Stream With Always Persist.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Audio Client
    participant WS as WebSocket Controller
    participant Audio as Audio Job
    participant Conv as Conversation Job
    participant Redis as Redis
    participant DB as Database
    participant TJ as Transcription Job

    Client->>WS: audio_start (always_persist=true)
    WS->>Audio: enqueue audio_persistence_job
    WS->>WS: initialize_streaming_session()
    
    Audio->>Redis: check conversation:current:session_id
    Redis-->>Audio: null (first time)
    
    alt always_persist_enabled = true
        Audio->>DB: create placeholder conversation
        DB-->>Audio: conversation_id
        Audio->>Redis: SET conversation:current:session_id → conversation_id (TTL: 1hr)
        Audio->>Audio: log placeholder creation
    else always_persist_enabled = false
        Audio->>Audio: log waiting for speech detection
    end

    Client->>WS: audio chunks
    WS->>Audio: accumulate chunks
    
    loop Every 30 minutes of audio
        WS->>WS: _process_rolling_batch()
        WS->>Conv: create batch conversation
        WS->>TJ: enqueue batch transcription job
    end

    WS->>TJ: stream_speech_detection_job
    TJ->>TJ: detect speech
    
    alt speech detected
        TJ->>Conv: process with conversation_id from Redis
        Conv->>DB: update placeholder status to 'pending_transcription'
    else no speech detected
        TJ->>Conv: mark placeholder as 'transcription_failed'
        TJ->>TJ: enqueue transcription_fallback_check_job
    end

    TJ->>TJ: transcription_fallback_check_job
    TJ->>TJ: wait for batch transcription
    TJ->>Conv: finalize conversation
sequenceDiagram
    participant Admin as Admin User
    participant UI as Plugin UI
    participant API as System API
    participant PS as Plugin Service
    participant PC as Plugin Component
    participant Disk as File System

    Admin->>UI: navigate to Plugins
    UI->>API: GET /admin/plugins/metadata
    API->>PS: get_plugin_metadata(plugin_id)
    
    PS->>Disk: load orchestration.yml
    PS->>Disk: load plugin config.yml
    PS->>Disk: load schema.yml (if exists)
    PS->>PS: infer_schema_from_config()
    PS->>PS: mask_secrets_in_config()
    PS-->>API: plugin metadata with schema
    API-->>UI: metadata payload

    UI->>UI: render PluginSettingsForm
    UI->>UI: load plugin in PluginConfigPanel
    
    Admin->>UI: edit config values
    Admin->>UI: click Test Connection
    UI->>API: POST /admin/plugins/test-connection/plugin_id
    API->>PC: test_connection(config)
    PC->>PC: validate required fields
    PC->>PC: attempt connection
    PC-->>API: success/failure response
    API-->>UI: test result

    Admin->>UI: click Save
    UI->>API: POST /admin/plugins/config/structured/plugin_id
    API->>PS: update_plugin_config_structured()
    PS->>Disk: backup existing config
    PS->>Disk: update plugins.yml
    PS->>Disk: update plugin/config.yml
    PS->>Disk: update .env file
    PS-->>API: success response with restart indicator
    API-->>UI: save confirmation

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

Failed checks (1 inconclusive):
  • Title check (❓ Inconclusive): The title 'Plugin UI v2 & other fixes' is vague and does not clearly describe the actual scope of changes, which span plugin configuration UI, audio persistence, transcription fallback handling, and multiple backend/frontend enhancements. Resolution: consider a more descriptive title, such as 'Add Plugin UI v2, always_persist audio feature, and miscellaneous settings management', or split into focused PRs if the scope is too broad.

Passed checks (2):
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage (✅ Passed): Docstring coverage is 87.60%, which meets the required 80.00% threshold.




github-actions bot commented:
⚠️ Robot Framework Test Results (No API Keys)

Status: ❌ Some tests failed

ℹ️ Note: This run excludes tests requiring external API keys (Deepgram, OpenAI).
Tests tagged with requires-api-keys will run on dev/main branches.

✅ Passed: 80
❌ Failed: 12
📊 Total: 92


- Introduced a new YAML configuration file to simulate transcription failures using invalid API keys for Deepgram services.
- Configured both standard and streaming speech-to-text models with invalid credentials to facilitate testing of error handling in audio processing.
- Enhanced the testing framework by providing mock models for LLM and embeddings, ensuring comprehensive coverage of failure scenarios.
- Updated logging levels for transcription errors to use error severity, providing clearer insights into issues.
- Distinguish between transcription service failures and legitimate no speech scenarios in session termination logs.
- Enhanced session failure messages to guide users in checking transcription service configurations.
- Introduced functions to retrieve and save miscellaneous settings, including `always_persist_enabled` and `use_provider_segments`, using OmegaConf.
- Updated the system controller and routes to handle new endpoints for managing miscellaneous settings, ensuring admin access control.
- Refactored audio processing jobs to read the `always_persist_enabled` setting from global configuration, improving audio persistence behavior.
- Enhanced the web UI to allow administrators to view and modify miscellaneous settings, providing better control over audio processing features.
- Added integration tests to verify the functionality of the new settings management, ensuring robust handling of audio persistence scenarios.
- Updated the Makefile to introduce new test commands for running tests with and without API keys, improving CI integration.
- Refactored integration tests to replace static sleep calls with polling mechanisms for conversation creation, enhancing reliability and reducing flakiness.
- Added a new keyword to wait for conversations by client ID, streamlining test logic and improving readability.
- Updated documentation in the Makefile to reflect changes in test commands and configurations.
…nality

- Added an asynchronous function to initialize and register an OpenMemory user if the OpenMemory MCP provider is configured, improving user management.
- Enhanced the MCPClient to accept custom metadata when adding memories, allowing for better tracking and filtering of memories by user.
- Updated the OpenMemoryMCPService to utilize the configured OpenMemory user for memory operations, ensuring accurate user context in memory processing.
- Modified integration tests to use shorter device names for consistency and to avoid truncation issues, improving test reliability.
AnkushMalaker (Collaborator, Author) commented:

@coderabbitai review

coderabbitai bot (Contributor) commented Jan 28, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

- Created Dockerfile for a mock LLM server, including dependencies and configuration for running the server on port 11435.
- Created Dockerfile for a mock streaming STT server, including dependencies and configuration for running the server on port 9999.
- Both Dockerfiles streamline the setup process for testing related functionalities.
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 20

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In `@backends/advanced/src/advanced_omi_backend/app_factory.py`:
- Around line 45-88: initialize_openmemory_user performs blocking network operations during startup. Instead of awaiting it synchronously, schedule it in the background: locate the startup code that calls it and replace the direct await with asyncio.create_task(initialize_openmemory_user()) (or loop.create_task) so the app doesn't block. Alternatively, time-bound the work by wrapping the MCPClient calls inside initialize_openmemory_user (test_connection, add_memories, delete_memory in the async with block) in asyncio.wait_for with a short per-call timeout. Keep the build_memory_config_from_env and MemoryProvider checks unchanged, and preserve the logging behavior when running as a background task.

In `@backends/advanced/src/advanced_omi_backend/controllers/system_controller.py`:
- Around line 1243-1306: In test_plugin_connection, env vars from the passed-in config are only added to test_config separately and are never used to expand placeholders inside settings. Build test_config from config['settings'] first, then merge the provided env_vars into that settings dict (reading os.getenv when a value is the mask '••••••••••••', and lower-casing keys) before calling expand_env_vars(test_config), so any ${ENV_VAR} placeholders in settings resolve against the supplied env_vars. Keep calling plugin_class.test_connection and preserve the logging/exception behavior.

In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`:
- Around line 646-692: The test_connection block creates a HAMCPClient
(mcp_client) but never closes its internal httpx.AsyncClient; wrap the usage of
mcp_client in a try/finally (or async with) around the calls to _render_template
and get_all_entities, and ensure you call await mcp_client.close() in the
finally so the client is always closed even on errors; update the function
(test_connection) to reference mcp_client, _render_template, get_all_entities,
and close as the cleanup point.
- Around line 667-672: The code calls a non-existent method
mcp_client.get_all_entities() which will raise at runtime; change the call to
mcp_client.discover_entities() and compute entity_count from the returned
dictionary (e.g., entity_count = len(entities_dict)) while keeping the existing
try/except behavior; update references to the symbols mcp_client,
get_all_entities(), discover_entities(), and entity_count so the plugin uses
discover_entities()'s dict result instead of the removed API.

In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py`:
- Around line 75-84: The current response exposes job.exc_info (a full stack trace) whenever status == "failed". For regular users, return only a short, safe summary (e.g. an "error_summary" field containing the exception message or the first line of job.exc_info), and include the full exc_info only when the requester is an admin (check a known flag such as current_user.is_admin or request.user.is_admin before adding it). Update the block that builds the response (response = {"job_id": job.id, "status": status} and the failed-handling code that follows) to produce the sanitized "error_message"/"error_summary" and conditionally attach the full "exc_info" for admins.
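The sanitized payload might be shaped like this; the error_summary field name and the RQ-like job attributes (id, get_status(), exc_info) are assumptions for illustration:

```python
def build_job_status_response(job, is_admin: bool) -> dict:
    """Sketch of the sanitized failed-job payload suggested in the review."""
    status = job.get_status()
    response = {"job_id": job.id, "status": status}
    if status == "failed":
        exc_info = job.exc_info or ""
        # Short, safe summary for everyone: the last non-empty line of the
        # traceback, which is normally "ExceptionType: message".
        lines = [ln for ln in exc_info.splitlines() if ln.strip()]
        response["error_summary"] = lines[-1] if lines else "job failed"
        if is_admin:
            # Full traceback only for admins.
            response["exc_info"] = exc_info
    return response
```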

In `@backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py`:
- Around line 121-131: The info log prints the full payload, including sensitive fields such as the memory text and user metadata. Redact PII: change the memory_logger.info call that references payload to log only safe fields (self.user_id, self.client_name, server_url), the text length (len(text)), and the metadata keys rather than their values, and move any full or redacted payload dump to memory_logger.debug so it is only emitted when operators opt in.
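One possible shape for the redacted logging; the function name and exact log fields are illustrative assumptions:

```python
import logging

memory_logger = logging.getLogger("memory")


def log_add_memories(user_id: str, client_name: str, text: str, metadata: dict) -> None:
    """Log a memory-creation call without leaking the text or metadata values."""
    memory_logger.info(
        "add_memories user_id=%s client=%s text_len=%d metadata_keys=%s",
        user_id, client_name, len(text), sorted(metadata),
    )
    # Full payload only at DEBUG, where operators opt in explicitly.
    memory_logger.debug("add_memories payload=%r", {"text": text, **metadata})
```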

In `@backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py`:
- Around line 204-223: Several methods still change MCP user context or perform
operations by the configured OpenMemory user instead of consistently using
metadata-based scoping; update get_memory, update_memory,
delete_all_user_memories (and any helper that switches MCP user) to perform
operations under the shared OpenMemory user but enforce chronicle_user_id
metadata checks: for get_memory, fetch the memory via MCP and verify
result["metadata"].get("chronicle_user_id")==user_id before returning; for
update_memory, first retrieve and validate metadata ownership, then apply
updates only if ownership matches; for delete_all_user_memories, search for
memories with metadata["chronicle_user_id"]==user_id (use the same search/filter
pattern and _mcp_result_to_memory_entry) and delete each matched memory by id
rather than switching user context or bulk-deleting across the configured user.
Ensure every path that touches memories verifies metadata ownership by
chronicle_user_id.
- Around line 145-159: The memory_logger.info call currently includes raw
user_email PII; remove or redact the email from the log message and instead log
non-PII identifiers (e.g., user_id, source_id, client_id) only, while still
passing user_email in the metadata for self.mcp_client.add_memories if needed;
update the memory_logger.info invocation in openmemory_mcp.py (the block that
builds enriched_transcript, metadata and calls self.mcp_client.add_memories) to
exclude or mask user_email.

In `@backends/advanced/src/advanced_omi_backend/services/plugin_service.py`:
- Around line 344-436: mask_secrets_in_config only masks values that still contain a ${ENV_VAR} placeholder (via extract_env_var_name), so once load_plugin_config has expanded placeholders to real secrets, those values leak. In addition to the placeholder-based masking, iterate the secret env names from config_schema['env_vars'] and, for each one, compare its os.environ value against each string config value; replace exact matches with the mask (and use an empty string when the env var is unset). References: mask_secrets_in_config, extract_env_var_name, get_plugin_metadata (which calls load_plugin_config -> mask_secrets_in_config), and schema['env_vars'] for locating the secret env var names.
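The value-matching pass could be sketched like this; the MASK sentinel and the flat secret_env_vars list are assumptions, and the existing placeholder-based masking is omitted for brevity:

```python
import os

MASK = "••••••••••••"  # assumed mask sentinel


def mask_secrets_in_config(config: dict, secret_env_vars: list) -> dict:
    """Mask secrets even after ${VAR} placeholders were expanded.

    Compares every string value in the config against the current value of
    each secret env var and masks exact matches, recursing into dicts/lists.
    """
    secret_values = {
        os.environ[name] for name in secret_env_vars if os.environ.get(name)
    }

    def mask(value):
        if isinstance(value, str):
            return MASK if value in secret_values else value
        if isinstance(value, dict):
            return {k: mask(v) for k, v in value.items()}
        if isinstance(value, list):
            return [mask(v) for v in value]
        return value

    return mask(config)
```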

In `@backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py`:
- Around line 105-110: The Redis TTL on the conversation key (set via redis_client.set(conversation_key, conversation.conversation_id, ex=3600)) is too short for long-running jobs. Either extend the expiry to the job's maximum runtime (e.g. a MAX_JOB_RUNTIME_SECONDS constant) or refresh the key periodically while the job is active, calling redis_client.expire(conversation_key, new_ttl) (or re-issuing the set with the same value) from within the audio job that owns the key, so it never expires mid-session.
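The refresh-loop variant might look like this; the half-TTL re-arm interval and the async expire signature on the Redis client are assumptions:

```python
import asyncio

CONVERSATION_KEY_TTL_SECONDS = 3600


async def keep_conversation_key_alive(redis_client, conversation_key: str,
                                      stop_event: asyncio.Event) -> None:
    """Refresh the placeholder-conversation key while the audio job runs.

    Re-arms the TTL at half its lifetime so the key never expires mid-session;
    `redis_client` is assumed to expose an async `expire(key, seconds)`.
    """
    while not stop_event.is_set():
        await redis_client.expire(conversation_key, CONVERSATION_KEY_TTL_SECONDS)
        try:
            # Wake up early if the job signals completion, else refresh again.
            await asyncio.wait_for(stop_event.wait(),
                                   CONVERSATION_KEY_TTL_SECONDS / 2)
        except asyncio.TimeoutError:
            continue
```

The owning job sets `stop_event` when it finishes, at which point the key ages out naturally under its final TTL.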

In `@backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py`:
- Around line 777-791: The polling loop masks failed RQ jobs because it checks batch_job.is_finished before batch_job.is_failed. Check batch_job.is_failed first and immediately raise an Exception that includes batch_job.exc_info when it is true; otherwise handle the is_finished success path as before. Keep the refresh/sleep/timeout logic (batch_job.refresh(), timeout_seconds, waited, max_wait) unchanged.
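A sketch of the corrected ordering, assuming RQ-style job attributes (refresh(), is_failed, is_finished, exc_info); the function name and timeout shape are illustrative:

```python
import time


def wait_for_batch_job(batch_job, timeout_seconds: float, poll_interval: float = 1.0):
    """Poll an RQ-style job, surfacing failures instead of masking them.

    Checking is_failed *before* is_finished matters: a failed job must raise
    with its traceback rather than be silently treated as still running.
    """
    waited = 0.0
    while waited < timeout_seconds:
        batch_job.refresh()
        if batch_job.is_failed:
            raise RuntimeError(f"batch transcription failed: {batch_job.exc_info}")
        if batch_job.is_finished:
            return batch_job
        time.sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError("batch transcription did not finish in time")
```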

In `@backends/advanced/webui/src/components/plugins/FormField.tsx`:
- Around line 4-15: FieldSchema declares type 'array' but renderField() lacks a
case for 'array', causing arrays to be coerced to strings via the default
onChange(e.target.value); either implement an explicit array renderer in
renderField() that renders a proper list input (e.g., add a case 'array' that
parses/updates arrays via JSON or multiple inputs and calls onChange with an
actual array) or remove 'array' from the FieldSchema union so the type cannot be
produced; update any callers that rely on plugin_service.py's array schemas to
match the chosen approach.
- Around line 89-156: The switch case for 'password' contains a lexical
declaration (const displayValue) which violates the noSwitchDeclarations rule;
wrap the entire case 'password' contents in a block (add { ... } after case
'password':) so displayValue and related declarations (isMaskedValue, isEditing,
setIsEditing, onChange, showPassword, etc.) are block-scoped, preserving
behavior of the input, onFocus handler, show/hide button, and conditional help
text.

In `@backends/advanced/webui/src/components/plugins/OrchestrationSection.tsx`:
- Around line 71-101: The toggle is currently a non-focusable clickable div (in
OrchestrationSection) and the label's htmlFor points to no input; replace or
refactor the toggle into an actual accessible control: render a hidden
checkbox/input (id referenced by the label) or use a <button> with type="button"
that maps to handleEnabledChange, add keyboard handlers (onKeyDown to toggle on
Space/Enter) and proper ARIA state (aria-checked reflecting config.enabled,
role="switch" or role="checkbox"), and ensure the element uses disabled to set
aria-disabled and visually indicates focus (focus-visible styles) so screen
readers and keyboard users can operate the toggle; update references to
config.enabled, disabled, and handleEnabledChange accordingly.

In `@backends/advanced/webui/src/components/plugins/PluginListSidebar.tsx`:
- Around line 132-155: The toggle in PluginListSidebar is implemented as a
non-focusable <label> with an onClick, so screen readers and keyboard users
cannot properly interact with it; replace it with a real focusable control
(e.g., a button or an input[type="checkbox"]) that calls
onToggleEnabled(plugin.plugin_id, !plugin.enabled), exposes aria-checked (or
checked) and role if needed, includes accessible text/aria-label reflecting
Enabled/Disabled, and keeps the same visual classes so behavior and styling
remain intact; update any onClick handlers on the wrapper to use
onKeyDown/keyboard activation patterns if you keep a custom element.
- Around line 99-110: The plugin row rendered as a clickable <div> (keyed by plugin.plugin_id and invoking onSelectPlugin) is not keyboard-accessible. Make it a semantic interactive control: use a <button> (or add role="button" and tabIndex=0), activate on Enter/Space via onKeyDown, and add ARIA state (aria-pressed or aria-selected when isSelected) plus an accessible label. Preserve the existing onClick/onSelectPlugin behavior and visual focus styles.

In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx`:
- Around line 98-119: The current initialization in PluginSettingsForm creates a fresh config that overwrites any existing plugin configuration. Hydrate from existing values instead: initialize orchestration from plugin.orchestration or plugin.config?.orchestration (preserving enabled, events, and condition), and populate each settings/env_vars key from plugin.config?.settings[key] (or plugin.config?.env_vars[key]) before falling back to schema.value or schema.default. Only apply defaults when no current value exists, and keep existing events/condition intact (see the config object, orchestration, settings, env_vars, and plugin.config_schema for the relevant fields).
- Around line 166-171: The connection test sends masked secret values from currentConfig.env_vars, which causes false failures. Update handleTestConnection to reuse the filtering already done in handleSave (using selectedPluginId, currentConfig.env_vars, and originalPluginConfig.env_vars): build a filtered env_vars object that skips keys whose value is the mask (e.g., '••••' or a string starting with masked dots) or is unchanged from originalPluginConfig.env_vars, then pass the filtered env_vars to systemApi.testPluginConnection so masked secrets never reach the backend.

In `@backends/advanced/webui/src/hooks/useSimpleAudioRecording.ts`:
- Around line 231-262: The server-error handler calls cleanup() and flips state, but startRecording can still continue. Add an abort ref (e.g., abortedRef or startAbortRef) that the error handler sets to true when a server error arrives, then have startRecording check the ref after each await point and before calling setCurrentStep or setIsRecording, bailing out if aborted. Ensure cleanup() still runs exactly once, and add the same guard to the second startRecording path referenced in the comment so both flows respect the server-error abort.
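The guard described above is a general async pattern rather than anything React-specific; here is a minimal Python asyncio sketch (all names are hypothetical) of re-checking an abort flag after every await point:

```python
import asyncio

class Recorder:
    """Minimal sketch of the abort-guard pattern (hypothetical names)."""

    def __init__(self):
        self.aborted = False   # analogous to abortedRef in the hook
        self.recording = False

    def on_server_error(self):
        # Error handler: mark the flow as aborted and reset state once.
        self.aborted = True
        self.recording = False

    async def start_recording(self):
        await asyncio.sleep(0)   # first await point (e.g. acquiring the mic)
        if self.aborted:         # re-check after every await
            return False
        await asyncio.sleep(0)   # second await point (e.g. opening the socket)
        if self.aborted:
            return False
        self.recording = True
        return True

async def demo():
    r = Recorder()
    task = asyncio.create_task(r.start_recording())
    r.on_server_error()          # server error arrives mid-start
    started = await task
    return started, r.recording

started, recording = asyncio.run(demo())
```

The key invariant is that the flag is checked after each await, since the error handler can only run while the start flow is suspended at one of those points.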

In `@tests/configs/mock-services.yml`:
- Around line 49-55: The mock STT service entry named "mock-stt" sets model_url to "http://localhost:9999", which is inconsistent with the other mock services and unreachable from inside Docker. Update the model_url in the YAML block with name: mock-stt and model_type: stt to "http://host.docker.internal:9999" so containerized tests can reach the host-based mock STT batch service.
🟡 Minor comments (7)
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py-753-760 (1)

753-760: Add defensive checks before accessing client_state attributes in _handle_batch_mode_audio.

Lines 751-753 access client_state.user_id, client_state.user_email, and client_state.client_id without validation. While these are set during the audio-start message handler (lines 1342-1344) and the state machine enforces audio-start before audio-chunk processing can occur, defensive checks using hasattr or getattr would guard against protocol violations or future refactoring that might bypass the current ordering guarantee.
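A sketch of the defensive access this comment suggests, using getattr with a None default; the class and function here are illustrative stand-ins, not the backend's actual types:

```python
class ClientState:
    """Stand-in for the websocket client state (illustrative only)."""
    def __init__(self, user_id=None, user_email=None, client_id=None):
        if user_id is not None:
            self.user_id = user_id
        if user_email is not None:
            self.user_email = user_email
        if client_id is not None:
            self.client_id = client_id

def extract_identity(client_state):
    """Read identity fields defensively: a missing attribute yields None
    instead of raising AttributeError on a protocol violation."""
    user_id = getattr(client_state, "user_id", None)
    user_email = getattr(client_state, "user_email", None)
    client_id = getattr(client_state, "client_id", None)
    if user_id is None or client_id is None:
        raise ValueError("audio-start must be received before audio chunks")
    return user_id, user_email, client_id

# A fully initialized state behaves exactly as before:
ok = extract_identity(ClientState("u1", "u@example.com", "c1"))
```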

tests/resources/redis_keywords.robot-123-155 (1)

123-155: Make key selection deterministic when multiple keys match.
KEYS output order isn't guaranteed; picking the first match can lead to flaky assertions when multiple suffix keys exist. Sort the keys and, when expected_conversation_id is provided, prefer the key that matches the expected value instead of failing on the first key's mismatch.

💡 Suggested refinement (deterministic selection + expected value match)
@@
-    @{keys}=    Split String    ${result.stdout}    \n
-    ${keys_list}=    Evaluate    [k for k in ${keys} if k.strip()]
-    ${num_keys}=    Get Length    ${keys_list}
+    @{keys}=    Split String    ${result.stdout}    \n
+    ${keys_list}=    Evaluate    [k for k in ${keys} if k.strip()]
+    Sort List    ${keys_list}
+    ${num_keys}=    Get Length    ${keys_list}
@@
-    ${redis_key}=    Get From List    ${keys_list}    0
-    Log    Found Redis key: ${redis_key}
-
-    # Get the conversation_id value
-    ${conversation_id}=    Redis Command    GET    ${redis_key}
-
-    # Optionally verify it matches expected value
-    IF    '${expected_conversation_id}' != '${None}'
-        Should Be Equal As Strings    ${conversation_id}    ${expected_conversation_id}
-        ...    Redis key value mismatch: expected ${expected_conversation_id}, got ${conversation_id}
-    END
+    ${redis_key}=    Get From List    ${keys_list}    -1
+    ${conversation_id}=    Redis Command    GET    ${redis_key}
+
+    # If expected value provided, prefer the key that matches it
+    IF    '${expected_conversation_id}' != '${None}'
+        FOR    ${candidate}    IN    @{keys_list}
+            ${candidate_value}=    Redis Command    GET    ${candidate}
+            IF    '${candidate_value}' == '${expected_conversation_id}'
+                ${redis_key}=    Set Variable    ${candidate}
+                ${conversation_id}=    Set Variable    ${candidate_value}
+                Exit For Loop
+            END
+        END
+        Should Be Equal As Strings    ${conversation_id}    ${expected_conversation_id}
+        ...    Redis key value mismatch: expected ${expected_conversation_id}, got ${conversation_id}
+    END
+    Log    Found Redis key: ${redis_key}
backends/advanced/webui/src/pages/System.tsx-178-189 (1)

178-189: Avoid misc-settings calls for non-admin users.
/api/misc-settings is admin-only; calling it when isAdmin is false will just generate 403s and console noise.

✅ Minimal guard
   const loadMiscSettings = async () => {
+    if (!isAdmin) return
     try {
       setMiscLoading(true)
       const response = await systemApi.getMiscSettings()
backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py-122-124 (1)

122-124: Remove unused f-string prefix.
Ruff F541: no interpolation is used here.

🔧 Quick fix
-        logger.info(f"🔍 always_persist=False - will wait for speech detection to create conversation")
+        logger.info("🔍 always_persist=False - will wait for speech detection to create conversation")
backends/advanced/webui/src/components/ConversationVersionDropdown.tsx-140-143 (1)

140-143: Guard against findIndex returning -1 (shows v0).

If the active id isn’t found, findIndex yields -1, resulting in a misleading v0. Add a guard (and consider a nullish check for numeric fallback).

🛠️ Suggested fix
+  const formatActiveVersionLabel = (
+    versions: Array<TranscriptVersion | MemoryVersion>,
+    activeId: string,
+    fallbackNumber?: number
+  ) => {
+    const idx = versions.findIndex(v => v.version_id === activeId)
+    if (idx >= 0) return `v${idx + 1}`
+    return fallbackNumber != null ? `v${fallbackNumber}` : '-'
+  }

...
-              Transcript: {versionHistory ?
-                `v${versionHistory.transcript_versions.findIndex(v => v.version_id === versionHistory.active_transcript_version) + 1}` :
-                (versionInfo?.active_transcript_version_number ? `v${versionInfo.active_transcript_version_number}` : '-')
-              }
+              Transcript: {versionHistory
+                ? formatActiveVersionLabel(
+                    versionHistory.transcript_versions,
+                    versionHistory.active_transcript_version,
+                    versionInfo?.active_transcript_version_number
+                  )
+                : (versionInfo?.active_transcript_version_number != null ? `v${versionInfo.active_transcript_version_number}` : '-')
+              }

...
-              Memory: {versionHistory ?
-                `v${versionHistory.memory_versions.findIndex(v => v.version_id === versionHistory.active_memory_version) + 1}` :
-                (versionInfo?.active_memory_version_number ? `v${versionInfo.active_memory_version_number}` : '-')
-              }
+              Memory: {versionHistory
+                ? formatActiveVersionLabel(
+                    versionHistory.memory_versions,
+                    versionHistory.active_memory_version,
+                    versionInfo?.active_memory_version_number
+                  )
+                : (versionInfo?.active_memory_version_number != null ? `v${versionInfo.active_memory_version_number}` : '-')
+              }

Also applies to: 202-205

backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py-64-64 (1)

64-64: Make metadata explicitly optional.

Implicit Optional is flagged by type checkers/ruff. Use Optional[Dict[str, Any]] (or Dict[str, Any] | None) to satisfy PEP 484. Optional is already imported in the file.

♻️ Suggested fix
-    async def add_memories(self, text: str, metadata: Dict[str, Any] = None) -> List[str]:
+    async def add_memories(self, text: str, metadata: Optional[Dict[str, Any]] = None) -> List[str]:
tests/Makefile-244-259 (1)

244-259: Ensure API-key test target starts containers with the real config.

Line 244-259 runs tests without ensuring containers are started with deepgram-openai.yml. If test-no-api was run previously, this target will still hit mock services. Consider restarting containers with the real config in this target.

🛠️ Suggested adjustment
 test-with-api-keys:
	@echo "🧪 Running tests that require API keys..."
	@if [ -z "$$DEEPGRAM_API_KEY" ] || [ -z "$$OPENAI_API_KEY" ]; then \
 		echo "❌ Error: DEEPGRAM_API_KEY and OPENAI_API_KEY must be set"; \
 		echo "   export DEEPGRAM_API_KEY='your-key-here'"; \
 		echo "   export OPENAI_API_KEY='your-key-here'"; \
 		exit 1; \
 	fi
+	@$(MAKE) containers-stop
+	@TEST_CONFIG_FILE=/app/test-configs/deepgram-openai.yml $(MAKE) containers-start
 	CREATE_FIXTURE=true uv run --with-requirements test-requirements.txt robot --outputdir $(OUTPUTDIR) \
 		--name "API Key Tests" \
 		--console verbose \
 		--loglevel INFO:INFO \
 		--include requires-api-keys \
 		$(TEST_DIR)
🧹 Nitpick comments (21)
backends/advanced/src/advanced_omi_backend/services/transcription/mock_provider.py (1)

54-56: Unused audio_duration variable.

The audio_duration is calculated but never used. Consider either removing it or using it to dynamically scale word timestamps based on actual audio length.

🧹 Suggested fix
-        # Calculate audio duration from bytes (assuming 16-bit PCM)
-        audio_duration = len(audio_data) / (sample_rate * 2)  # 2 bytes per sample
-
         # Return a mock transcript with word-level timestamps

Alternatively, if you want dynamic timing based on audio duration:

         # Calculate audio duration from bytes (assuming 16-bit PCM)
         audio_duration = len(audio_data) / (sample_rate * 2)  # 2 bytes per sample
+        # Scale factor to spread words across actual audio duration
+        scale = audio_duration / 6.5 if audio_duration > 0 else 1.0
backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py (1)

73-77: Hardcoded fail_mode=False limits test flexibility.

The mock provider path always creates MockTranscriptionProvider(fail_mode=False), which means tests using RegistryBatchTranscriptionProvider with model_provider="mock" cannot trigger failure scenarios. The fail_mode parameter is only accessible through get_mock_transcription_provider().

If failure testing through the registry path is needed, consider reading fail_mode from configuration or environment.

💡 Optional: Support fail_mode through config or environment
         # Special handling for mock provider (no HTTP server needed)
         if self.model.model_provider == "mock":
             from .mock_provider import MockTranscriptionProvider
-            mock = MockTranscriptionProvider(fail_mode=False)
+            # Allow fail_mode to be configured for testing scenarios
+            import os
+            fail_mode = os.getenv("MOCK_TRANSCRIPTION_FAIL_MODE", "false").lower() == "true"
+            mock = MockTranscriptionProvider(fail_mode=fail_mode)
             return await mock.transcribe(audio_data, sample_rate, diarize)
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py (3)

844-888: Improve exception handling in streaming validation.

Several exception handling issues:

  1. Line 873: raise ValueError(error_msg) inside try block is caught by the except on line 877
  2. Line 878: Use logging.exception instead of logging.error to capture stack trace
  3. Line 880: Missing from err or from None to distinguish from exception handling errors

The control flow with catching ValueError to re-raise it is convoluted.

♻️ Suggested refactor for cleaner error handling
                     await websocket.send_json(error_response)
                     application_logger.info(f"📤 Sent streaming error to WebUI client {client_id}")

                     # Close the websocket connection after sending error
                     await websocket.close(code=1008, reason="Streaming transcription not configured")
                     application_logger.info(f"🔌 Closed WebSocket connection for {client_id} due to streaming config error")

-                    # Raise ValueError to exit the handler completely
-                    raise ValueError(error_msg)
-                except ValueError:
-                    # Re-raise ValueError to exit handler
-                    raise
+                    # Raise to exit the handler; suppress context since the error was already sent
+                    raise ValueError(error_msg) from None
                 except Exception as e:
-                    application_logger.error(f"Failed to send error to client: {e}")
-                    # Still raise ValueError to exit handler
-                    raise ValueError(error_msg)
+                    application_logger.exception(f"Failed to send error to client: {e}")
+                    raise ValueError(error_msg) from e

937-958: Unused parameter user_email and f-string without placeholders.

The user_email parameter is never used in _process_rolling_batch. Additionally, line 957 has an f-string without placeholders.

🧹 Suggested fix
 async def _process_rolling_batch(
     client_state,
     user_id: str,
-    user_email: str,
     client_id: str,
     batch_number: int
 ) -> None:
     """
     Process accumulated batch audio as a rolling segment.

     Creates conversation titled "Recording Part {batch_number}" and enqueues transcription.

     Args:
         client_state: Client state with batch_audio_chunks
         user_id: User ID
-        user_email: User email
         client_id: Client ID
         batch_number: Sequential batch number (1, 2, 3...)
     """
     if not hasattr(client_state, 'batch_audio_chunks') or not client_state.batch_audio_chunks:
-        application_logger.warning(f"⚠️ No audio chunks to process for rolling batch")
+        application_logger.warning("⚠️ No audio chunks to process for rolling batch")
         return

And update the call site at line 754-760:

         await _process_rolling_batch(
             client_state,
             user_id=client_state.user_id,
-            user_email=client_state.user_email,
             client_id=client_state.client_id,
             batch_number=client_state.batch_chunks_processed + 1
         )

986-994: Unused num_chunks variable.

The result of convert_audio_to_chunks is assigned but never used.

🧹 Suggested fix
         # Convert to MongoDB chunks
-        num_chunks = await convert_audio_to_chunks(
+        await convert_audio_to_chunks(
             conversation_id=conversation_id,
             audio_data=complete_audio,
             sample_rate=sample_rate,
             channels=channels,
             sample_width=width
         )
backends/advanced/docker-compose-test.yml (2)

141-141: Consider making CUDA version overrideable for non‑GPU setups.
An env‑level override keeps the profile usable on machines without compatible drivers.

♻️ Possible tweak
     args:
-        PYTORCH_CUDA_VERSION: cu12.6
+        PYTORCH_CUDA_VERSION: ${PYTORCH_CUDA_VERSION:-cu12.6}

168-193: Optional: gate mock services behind a profile.
If these mocks are only needed for specific suites, a profile can keep default runs lighter.

♻️ Example
   mock-streaming-stt:
     build:
       context: ../..
       dockerfile: tests/Dockerfile.mock-streaming-stt
@@
     restart: unless-stopped
+    profiles:
+      - mocks
@@
   mock-llm:
     build:
       context: ../..
       dockerfile: tests/Dockerfile.mock-llm
@@
     restart: unless-stopped
+    profiles:
+      - mocks
tests/resources/system_keywords.robot (1)

44-81: Avoid fixed sleep for mock server readiness.
A hard Sleep 2s can be flaky or wasteful; prefer polling for port readiness or a log line before proceeding.
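One way to replace the fixed Sleep is to poll for TCP readiness; a small Python helper (the name and its use as a Robot keyword are assumptions) could look like:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0,
                  interval: float = 0.1) -> bool:
    """Poll until a TCP port accepts connections; return False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

A Robot suite could import this module as a library and call it (as Wait For Port) before exercising the mock server, instead of sleeping a fixed two seconds.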

backends/advanced/webui/src/components/plugins/EnvVarsSection.tsx (1)

66-77: Minor: Verify the env_var display renders correctly.

The template literal ${fieldSchema.env_var} on line 69 will render as the literal string (e.g., $DEEPGRAM_API_KEY). If the intent is to show just the variable name without the $ prefix, consider removing it. If displaying the shell-style variable reference is intentional, this is fine as-is.

tests/resources/conversation_keywords.robot (1)

36-48: Naming: Consider renaming to Validate Conversation Count By Client ID.

The keyword name suggests it waits/polls for a conversation, but it actually performs a single check and asserts. The actual waiting is delegated to Wait Until Keyword Succeeds in the calling tests. While this pattern works, the name could be clearer:

  • Current: Wait For Conversation By Client ID (implies polling)
  • Suggested: Validate Conversation Count By Client ID or Assert Conversation Exists By Client ID

This is a minor documentation concern since the docstring does mention "Polls until..." but the keyword itself doesn't implement polling.

tests/integration/always_persist_audio_tests.robot (1)

41-45: Consider: Fixed sleep in Test Cleanup may cause flakiness or slow tests.

The 2-second sleep allows the backend to finalize processing, but this could be either too short (causing flakiness if processing takes longer) or unnecessarily slow. Consider if there's a status to poll for, or if this delay is acceptable given test reliability requirements.

tests/libs/mock_streaming_stt_server.py (2)

152-153: Use logging.exception for better error context.

When catching exceptions, logging.exception automatically includes the stack trace, providing better debugging information.

Proposed fix
                 except json.JSONDecodeError:
-                    logger.error(f"Invalid JSON from {client_id}: {message}")
+                    logger.exception(f"Invalid JSON from {client_id}: {message}")

167-169: Remove extraneous f-string prefixes.

These strings have no placeholders, so the f prefix is unnecessary.

Proposed fix
     logger.info(f"Starting Mock Streaming STT Server on {host}:{port}")
-    logger.info(f"Deepgram-compatible nested response format")
-    logger.info(f"Speech detection: >2.0s duration, >5 words")
+    logger.info("Deepgram-compatible nested response format")
+    logger.info("Speech detection: >2.0s duration, >5 words")
backends/advanced/src/advanced_omi_backend/clients/audio_stream_client.py (2)

155-159: Remove debug print statements.

These print() statements duplicate the logger.info() calls and appear to be debug artifacts. Using only the logger is more appropriate for production code as it respects log levels and configuration.

Proposed fix
-        print(f"🔵 CLIENT: Sending audio-start message: {header}")
         logger.info(f"🔵 CLIENT: Sending audio-start message: {header}")
         await self.ws.send(json.dumps(header) + "\n")
-        print(f"✅ CLIENT: Sent audio-start with mode={recording_mode}, always_persist={always_persist}")
         logger.info(f"✅ CLIENT: Sent audio-start with mode={recording_mode}, always_persist={always_persist}")

464-466: Use logging.exception for better error context.

When logging in an exception handler, logging.exception automatically includes the stack trace, which aids debugging.

Proposed fix
             except Exception as e:
                 session.error = str(e)
-                logger.error(f"❌ CLIENT: Stream {stream_id} failed to start: {e}")
+                logger.exception(f"❌ CLIENT: Stream {stream_id} failed to start: {e}")
backends/advanced/src/advanced_omi_backend/utils/logging_utils.py (2)

94-97: Consider masking empty string secrets.

The condition if is_secret and value won't mask falsy values like empty strings (""), 0, or False. While this might be intentional (no need to mask empty values), it could leak information that a secret field is empty vs having a value.

If you want to mask all secret fields regardless of value:

♻️ Suggested change
-        if is_secret and value:
+        if is_secret and value is not None:

254-256: Inconsistent case sensitivity for secret attribute matching.

create_masked_repr uses case-sensitive matching (key in secret_attrs), while mask_dict uses case-insensitive matching (key.lower() in secret_fields_lower). This inconsistency could lead to secrets not being masked if the casing doesn't match exactly.

♻️ Suggested fix for consistency
+    secret_attrs_lower = {attr.lower() for attr in secret_attrs}
+
     for key in dir(obj):
         # Skip private/magic attributes and methods
         if key.startswith('_') or callable(getattr(obj, key)):
             continue
         
         value = getattr(obj, key)
         
         # Mask secret attributes
-        if key in secret_attrs:
+        if key.lower() in secret_attrs_lower:
             value_repr = f"'{mask}'"
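For illustration, the case-insensitive behavior suggested above can be captured in a standalone function; this is a simplified stand-in, not the module's actual mask_dict signature:

```python
def mask_dict(data: dict,
              secret_fields=("password", "api_key", "token"),
              mask="••••••••") -> dict:
    """Case-insensitive masking sketch: PASSWORD and password are both
    masked, while None is left as-is so 'unset' stays distinguishable
    from 'set' (empty strings, however, are masked)."""
    secret_fields_lower = {f.lower() for f in secret_fields}
    out = {}
    for key, value in data.items():
        if key.lower() in secret_fields_lower and value is not None:
            out[key] = mask
        else:
            out[key] = value
    return out

masked = mask_dict({"PASSWORD": "hunter2",
                    "host": "smtp.example.com",
                    "token": None})
```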
backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/email_service.py (1)

173-183: Add exception chaining to preserve stack trace.

The detailed error messages are helpful, but without exception chaining (from e), the original traceback is lost, making debugging harder. This is flagged by Ruff B904.

♻️ Suggested fix
         except smtplib.SMTPAuthenticationError as e:
-            raise Exception(f"SMTP Authentication failed for {self.username}. Check credentials. For Gmail, use an App Password instead of your regular password. Error: {str(e)}")
+            raise Exception(f"SMTP Authentication failed for {self.username}. Check credentials. For Gmail, use an App Password instead of your regular password. Error: {e!s}") from e
         except smtplib.SMTPConnectError as e:
-            raise Exception(f"Failed to connect to SMTP server {self.host}:{self.port}. Check host and port. Error: {str(e)}")
+            raise Exception(f"Failed to connect to SMTP server {self.host}:{self.port}. Check host and port. Error: {e!s}") from e
         except smtplib.SMTPServerDisconnected as e:
-            raise Exception(f"SMTP server disconnected unexpectedly. Check TLS settings (port 587 needs TLS, port 465 needs SSL). Error: {str(e)}")
+            raise Exception(f"SMTP server disconnected unexpectedly. Check TLS settings (port 587 needs TLS, port 465 needs SSL). Error: {e!s}") from e
         except TimeoutError as e:
-            raise Exception(f"Connection to {self.host}:{self.port} timed out. Check firewall/network settings. Error: {str(e)}")
+            raise Exception(f"Connection to {self.host}:{self.port} timed out. Check firewall/network settings. Error: {e!s}") from e
         except Exception as e:
-            raise Exception(f"SMTP connection test failed: {type(e).__name__}: {str(e)}")
+            raise Exception(f"SMTP connection test failed: {type(e).__name__}: {e!s}") from e
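What `from e` buys is that the original exception stays attached as `__cause__`, so the full chain appears in tracebacks; a minimal sketch with hypothetical wrapper names:

```python
import smtplib

def connect_or_explain(connect):
    """Wrap a low-level SMTP failure in a friendlier message while keeping
    the original traceback reachable via __cause__ (the effect of 'from e')."""
    try:
        connect()
    except smtplib.SMTPAuthenticationError as e:
        raise RuntimeError(
            "SMTP auth failed; for Gmail, use an App Password"
        ) from e

def bad_connect():
    # Simulated provider rejection (code and message invented for the sketch)
    raise smtplib.SMTPAuthenticationError(535, b"5.7.8 Bad credentials")

try:
    connect_or_explain(bad_connect)
except RuntimeError as err:
    cause = err.__cause__   # the original SMTPAuthenticationError
```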
tests/libs/mock_llm_server.py (2)

302-308: Remove extraneous f-string prefixes.

Lines 303-308 use f-strings but contain no placeholders. Per Ruff F541, these should be plain strings.

♻️ Suggested fix
     logger.info(f"Starting Mock LLM Server on {host}:{port}")
-    logger.info(f"OpenAI-compatible endpoints:")
-    logger.info(f"  - POST /v1/chat/completions")
-    logger.info(f"  - POST /v1/embeddings")
-    logger.info(f"  - GET /v1/models")
-    logger.info(f"  - GET /health")
-    logger.info(f"Deterministic embeddings: 1536 dimensions")
+    logger.info("OpenAI-compatible endpoints:")
+    logger.info("  - POST /v1/chat/completions")
+    logger.info("  - POST /v1/embeddings")
+    logger.info("  - GET /v1/models")
+    logger.info("  - GET /health")
+    logger.info("Deterministic embeddings: 1536 dimensions")

256-282: Consider prefixing unused request parameter.

The request parameter is required by aiohttp's handler signature but unused in handle_models and handle_health. Prefixing with _ indicates intentional non-use and silences the linter.

♻️ Suggested fix
-async def handle_models(request: web.Request) -> web.Response:
+async def handle_models(_request: web.Request) -> web.Response:
     """Handle /v1/models endpoint."""
     ...

-async def handle_health(request: web.Request) -> web.Response:
+async def handle_health(_request: web.Request) -> web.Response:
     """Handle health check endpoint."""
     ...
backends/advanced/src/advanced_omi_backend/config.py (1)

207-231: Avoid partial writes when saving misc settings.

If one section saves and the other fails, the call returns False but state is already partially updated. Consider a single atomic save or returning per-section status/rollback details.
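One way to avoid the partial-write window is the standard write-then-rename pattern; this sketch assumes the settings serialize to a single JSON file, which may not match the real storage layer:

```python
import json
import os
import tempfile

def save_settings_atomically(path: str, settings: dict) -> None:
    """Write all sections in one pass: serialize to a temp file in the
    same directory, then os.replace() so readers observe either the old
    file or the complete new one, never a half-written mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f, indent=2)
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```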

Comment on lines +45 to +88
async def initialize_openmemory_user() -> None:
    """Initialize and register OpenMemory user if using OpenMemory MCP provider.

    This function:
    - Checks if OpenMemory MCP is configured as the memory provider
    - Registers the configured user with OpenMemory server
    - Creates a test memory and deletes it to trigger user creation
    - Logs success or warning if OpenMemory is not reachable
    """
    from advanced_omi_backend.services.memory.config import build_memory_config_from_env, MemoryProvider

    memory_provider_config = build_memory_config_from_env()

    if memory_provider_config.memory_provider != MemoryProvider.OPENMEMORY_MCP:
        return

    try:
        from advanced_omi_backend.services.memory.providers.mcp_client import MCPClient

        # Get configured user_id and server_url
        openmemory_config = memory_provider_config.openmemory_config
        user_id = openmemory_config.get("user_id", "openmemory") if openmemory_config else "openmemory"
        server_url = openmemory_config.get("server_url", "http://host.docker.internal:8765") if openmemory_config else "http://host.docker.internal:8765"
        client_name = openmemory_config.get("client_name", "chronicle") if openmemory_config else "chronicle"

        application_logger.info(f"Registering OpenMemory user: {user_id} at {server_url}")

        # Make a lightweight registration call (create and delete dummy memory)
        async with MCPClient(server_url=server_url, client_name=client_name, user_id=user_id) as client:
            # Test connection first
            is_connected = await client.test_connection()
            if is_connected:
                # Create and immediately delete a dummy memory to trigger user creation
                memory_ids = await client.add_memories("Chronicle initialization - user registration test")
                if memory_ids:
                    # Delete the test memory
                    await client.delete_memory(memory_ids[0])
                application_logger.info(f"✅ Registered OpenMemory user: {user_id}")
            else:
                application_logger.warning(f"⚠️ OpenMemory MCP not reachable at {server_url}")
                application_logger.info("User will be auto-created on first memory operation")
    except Exception as e:
        application_logger.warning(f"⚠️ Could not register OpenMemory user: {e}")
        application_logger.info("User will be auto-created on first memory operation")

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check MCPClient for timeout and test_connection behavior
rg -n "class MCPClient|def test_connection|timeout" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 5

Repository: SimpleOpenSoftware/chronicle

Length of output: 1689


🏁 Script executed:

# Check where initialize_openmemory_user is called
rg -n "initialize_openmemory_user" backends/advanced/src/advanced_omi_backend/app_factory.py -B 5 -A 5

Repository: SimpleOpenSoftware/chronicle

Length of output: 1052


🏁 Script executed:

# Check if there's any background task or await pattern
grep -n "await initialize_openmemory_user\|asyncio.create_task\|asyncio.ensure_future" backends/advanced/src/advanced_omi_backend/app_factory.py

Repository: SimpleOpenSoftware/chronicle

Length of output: 114


🏁 Script executed:

# Get test_connection full implementation
rg -n "async def test_connection" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 15

Repository: SimpleOpenSoftware/chronicle

Length of output: 851


🏁 Script executed:

# Check if MCPClient calls are sequential or concurrent
rg -n "test_connection\|add_memories\|delete_memory" backends/advanced/src/advanced_omi_backend/app_factory.py -B 2 -A 2

Repository: SimpleOpenSoftware/chronicle

Length of output: 54


🏁 Script executed:

# Get full test_connection implementation
rg -n "async def test_connection" backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py -A 25

Repository: SimpleOpenSoftware/chronicle

Length of output: 1283


🏁 Script executed:

# Check the initialization sequence in app_factory
sed -n '45,95p' backends/advanced/src/advanced_omi_backend/app_factory.py

Repository: SimpleOpenSoftware/chronicle

Length of output: 2689


Move OpenMemory user registration to background task to prevent startup blocking.

This function is directly awaited during startup and makes sequential network calls with 30-second timeouts (up to 90 seconds if OpenMemory is unreachable). Use asyncio.create_task() or equivalent to defer this initialization, or implement a shorter timeout. This contradicts the stated intent in the preceding comment about avoiding startup blocking.

🧰 Tools
🪛 Ruff (0.14.14)

86-86: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/app_factory.py` around lines 45-88: initialize_openmemory_user performs blocking network operations during startup. Locate the startup code that awaits it and schedule it instead via asyncio.create_task(initialize_openmemory_user()) (or loop.create_task) so the app doesn't block. Alternatively, time-bound the calls by wrapping MCPClient.test_connection / add_memories in asyncio.wait_for inside initialize_openmemory_user, limiting how long the client (used in the async with block and its test_connection, add_memories, and delete_memory methods) can hang. Keep the build_memory_config_from_env and MemoryProvider checks unchanged, and preserve logging behavior when running as a background task.
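Both options mentioned above (fire-and-forget via create_task, or time-bounding with wait_for) can be sketched with a stub initializer; the stub and its timings are illustrative only:

```python
import asyncio

async def initialize_openmemory_user_stub():
    """Stand-in for the real initializer (illustrative only)."""
    await asyncio.sleep(0.05)   # pretend network round-trip
    return "registered"

async def startup():
    # Option A: don't block startup at all; fire and forget.
    task = asyncio.create_task(initialize_openmemory_user_stub())

    # Option B: time-bound the call instead, failing soft on timeout.
    try:
        result = await asyncio.wait_for(initialize_openmemory_user_stub(),
                                        timeout=2.0)
    except asyncio.TimeoutError:
        result = None   # log a warning and continue startup

    await task   # in real code the task would keep running in the background
    return result

result = asyncio.run(startup())
```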

Comment on lines +1243 to +1306
async def test_plugin_connection(plugin_id: str, config: dict) -> dict:
    """Test plugin connection/configuration without saving.

    Calls the plugin's test_connection method if available to validate
    configuration (e.g., SMTP connection, Home Assistant API).

    Args:
        plugin_id: Plugin identifier
        config: Configuration to test (same structure as update_plugin_config_structured)

    Returns:
        Test result with success status and details
    """
    try:
        from advanced_omi_backend.services.plugin_service import discover_plugins, expand_env_vars

        # Validate plugin exists
        discovered_plugins = discover_plugins()
        if plugin_id not in discovered_plugins:
            raise ValueError(f"Plugin '{plugin_id}' not found")

        plugin_class = discovered_plugins[plugin_id]

        # Check if plugin supports testing
        if not hasattr(plugin_class, 'test_connection'):
            return {
                "success": False,
                "message": f"Plugin '{plugin_id}' does not support connection testing",
                "status": "unsupported"
            }

        # Build complete config from provided data
        test_config = {}

        # Merge settings
        if 'settings' in config:
            test_config.update(config['settings'])

        # Add env vars (expand any ${ENV_VAR} references with test values)
        if 'env_vars' in config:
            for key, value in config['env_vars'].items():
                # Skip masked values
                if value == '••••••••••••':
                    # Use actual env var value
                    value = os.getenv(key, '')
                test_config[key.lower()] = value

        # Expand any remaining env var references
        test_config = expand_env_vars(test_config)

        # Call plugin's test_connection static method
        result = await plugin_class.test_connection(test_config)

        logger.info(f"Test connection for '{plugin_id}': {result.get('message', 'No message')}")

        return result

    except Exception as e:
        logger.exception(f"Error testing connection for plugin '{plugin_id}'")
        return {
            "success": False,
            "message": f"Connection test failed: {str(e)}",
            "status": "error"
        }

⚠️ Potential issue | 🟠 Major

Provided env vars won’t actually populate settings during test.

env_vars are only lower‑cased into test_config, so ${ENV_VAR} placeholders in settings won’t resolve unless the env var is already in the process environment. That can cause false negatives for newly supplied secrets (e.g., an HA_TOKEN → home_assistant_token mismatch). Consider expanding settings using the provided env values instead.

🛠️ Proposed fix (expand placeholders with provided env vars)
-        # Build complete config from provided data
-        test_config = {}
-
-        # Merge settings
-        if 'settings' in config:
-            test_config.update(config['settings'])
-
-        # Add env vars (expand any ${ENV_VAR} references with test values)
-        if 'env_vars' in config:
-            for key, value in config['env_vars'].items():
-                # Skip masked values
-                if value == '••••••••••••':
-                    # Use actual env var value
-                    value = os.getenv(key, '')
-                test_config[key.lower()] = value
-
-        # Expand any remaining env var references
-        test_config = expand_env_vars(test_config)
+        # Build complete config from provided data
+        test_config = {}
+
+        # Merge settings
+        if 'settings' in config:
+            test_config.update(config['settings'])
+
+        # Expand settings using provided env var overrides (without persisting them)
+        original_env = {}
+        for key, value in config.get('env_vars', {}).items():
+            if value == '••••••••••••':
+                continue
+            original_env[key] = os.getenv(key)
+            os.environ[key] = value
+        try:
+            test_config = expand_env_vars(test_config)
+        finally:
+            for key, old in original_env.items():
+                if old is None:
+                    os.environ.pop(key, None)
+                else:
+                    os.environ[key] = old
🧰 Tools
🪛 Ruff (0.14.14)

1262-1262: Abstract raise to an inner function

(TRY301)


1262-1262: Avoid specifying long messages outside the exception class

(TRY003)


1298-1298: Consider moving this statement to an else block

(TRY300)


1304-1304: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/controllers/system_controller.py`
around lines 1243 - 1306, In test_plugin_connection, env vars from the passed-in
config are only added to test_config separately and not used to expand
placeholders inside settings; update test_plugin_connection so you first build
test_config from config['settings'], then merge provided env_vars (handling
masked values by reading os.getenv when value == '••••••••••••' and lower-casing
keys) into that settings dict before calling expand_env_vars(test_config), so
any ${ENV_VAR} placeholders in settings resolve using the supplied env_vars;
keep using plugin_class.test_connection for the call and preserve
logging/exception behavior.
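A rough sketch of the intended expansion order, with `expand_with_overrides` as a hypothetical stand-in for the backend's `expand_env_vars` helper (the mask constant mirrors the '••••••••••••' sentinel above):

```python
import os
import re

MASK = '•' * 12  # masked-secret sentinel used by the UI

def expand_with_overrides(settings: dict, env_vars: dict) -> dict:
    """Expand ${VAR} placeholders, preferring caller-supplied env values.

    Masked values fall back to the process environment, so unchanged
    secrets still resolve while newly supplied ones take precedence.
    """
    overrides = {
        k: (os.getenv(k, '') if v == MASK else v)
        for k, v in env_vars.items()
    }

    def resolve(value):
        if not isinstance(value, str):
            return value
        return re.sub(
            r'\$\{(\w+)\}',
            lambda m: overrides.get(m.group(1), os.getenv(m.group(1), '')),
            value,
        )

    return {k: resolve(v) for k, v in settings.items()}

print(expand_with_overrides({'token': '${HA_TOKEN}'}, {'HA_TOKEN': 'abc123'}))
```

Expanding against the supplied overrides first is what lets a freshly entered secret pass the connection test before it has ever been persisted to the environment.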

Comment on lines +646 to +692
# Create temporary MCP client
mcp_client = HAMCPClient(
base_url=ha_url,
token=ha_token,
timeout=timeout
)

# Test API connectivity with Template API
logger.info(f"Testing Home Assistant API connection to {ha_url}...")
start_time = time.time()

test_result = await mcp_client._render_template("{{ 1 + 1 }}")
connection_time_ms = int((time.time() - start_time) * 1000)

if str(test_result).strip() != "2":
return {
"success": False,
"message": f"Unexpected template result: {test_result}",
"status": "error"
}

# Try to fetch entities count for additional info
try:
entities = await mcp_client.get_all_entities()
entity_count = len(entities)
except Exception:
entity_count = None

return {
"success": True,
"message": f"Successfully connected to Home Assistant at {ha_url}",
"status": "success",
"details": {
"ha_url": ha_url,
"connection_time_ms": connection_time_ms,
"entity_count": entity_count,
"api_test": "Template rendering successful"
}
}

except Exception as e:
logger.error(f"Home Assistant connection test failed: {e}", exc_info=True)
return {
"success": False,
"message": f"Connection test failed: {str(e)}",
"status": "error"
}

⚠️ Potential issue | 🟠 Major

Missing resource cleanup: MCP client not closed after test.

The HAMCPClient creates an internal httpx.AsyncClient that should be closed after use. The test_connection method creates a client but never calls await mcp_client.close(), which could lead to resource leaks if this method is called repeatedly.

♻️ Suggested fix using try/finally
             # Create temporary MCP client
             mcp_client = HAMCPClient(
                 base_url=ha_url,
                 token=ha_token,
                 timeout=timeout
             )

-            # Test API connectivity with Template API
-            logger.info(f"Testing Home Assistant API connection to {ha_url}...")
-            start_time = time.time()
-
-            test_result = await mcp_client._render_template("{{ 1 + 1 }}")
-            connection_time_ms = int((time.time() - start_time) * 1000)
-
-            if str(test_result).strip() != "2":
-                return {
-                    "success": False,
-                    "message": f"Unexpected template result: {test_result}",
-                    "status": "error"
-                }
-
-            # Try to fetch entities count for additional info
             try:
-                entities = await mcp_client.get_all_entities()
-                entity_count = len(entities)
-            except Exception:
-                entity_count = None
-
-            return {
-                "success": True,
-                "message": f"Successfully connected to Home Assistant at {ha_url}",
-                "status": "success",
-                "details": {
-                    "ha_url": ha_url,
-                    "connection_time_ms": connection_time_ms,
-                    "entity_count": entity_count,
-                    "api_test": "Template rendering successful"
+                # Test API connectivity with Template API
+                logger.info(f"Testing Home Assistant API connection to {ha_url}...")
+                start_time = time.time()
+
+                test_result = await mcp_client._render_template("{{ 1 + 1 }}")
+                connection_time_ms = int((time.time() - start_time) * 1000)
+
+                if str(test_result).strip() != "2":
+                    return {
+                        "success": False,
+                        "message": f"Unexpected template result: {test_result}",
+                        "status": "error"
+                    }
+
+                # Try to fetch entities count for additional info
+                try:
+                entities = await mcp_client.discover_entities()
+                    entity_count = len(entities)
+                except Exception:
+                    entity_count = None
+
+                return {
+                    "success": True,
+                    "message": f"Successfully connected to Home Assistant at {ha_url}",
+                    "status": "success",
+                    "details": {
+                        "ha_url": ha_url,
+                        "connection_time_ms": connection_time_ms,
+                        "entity_count": entity_count,
+                        "api_test": "Template rendering successful"
+                    }
                 }
-            }
+            finally:
+                await mcp_client.close()
🧰 Tools
🪛 Ruff (0.14.14)

671-671: Do not catch blind exception: Exception

(BLE001)


674-684: Consider moving this statement to an else block

(TRY300)


690-690: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`
around lines 646 - 692, The test_connection block creates a HAMCPClient
(mcp_client) but never closes its internal httpx.AsyncClient; wrap the usage of
mcp_client in a try/finally (or async with) around the calls to _render_template
and get_all_entities, and ensure you call await mcp_client.close() in the
finally so the client is always closed even on errors; update the function
(test_connection) to reference mcp_client, _render_template, get_all_entities,
and close as the cleanup point.
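A minimal sketch of the try/finally cleanup shape, using a hypothetical `MockClient` in place of `HAMCPClient`:

```python
import asyncio

class MockClient:
    """Hypothetical stand-in for HAMCPClient with an internal async resource."""

    def __init__(self):
        self.closed = False

    async def render(self, template: str) -> str:
        return "2"

    async def close(self) -> None:
        # In the real client this would close the internal httpx.AsyncClient.
        self.closed = True

async def test_connection() -> dict:
    client = MockClient()
    try:
        result = await client.render("{{ 1 + 1 }}")
        return {"success": result.strip() == "2"}
    finally:
        # Runs on success and on error alike, so the HTTP client never leaks.
        await client.close()

outcome = asyncio.run(test_connection())
print(outcome["success"])
```

If `HAMCPClient` implements `__aenter__`/`__aexit__`, `async with` achieves the same guarantee with less nesting.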

Comment on lines +667 to +672
# Try to fetch entities count for additional info
try:
entities = await mcp_client.get_all_entities()
entity_count = len(entities)
except Exception:
entity_count = None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for get_all_entities method definition
rg -n "def get_all_entities|async def get_all_entities" --type py

# Also search for where HAMCPClient is defined to understand its structure
rg -n "class HAMCPClient" --type py -A 20

Repository: SimpleOpenSoftware/chronicle

Length of output: 2448


🏁 Script executed:

#!/bin/bash
# Get the full HAMCPClient class definition to see all available methods
rg -n "class HAMCPClient" --type py -A 200 backends/advanced/src/advanced_omi_backend/plugins/homeassistant/mcp_client.py | head -250

# Also specifically search for any method containing "entities"
rg -n "def.*entities|async def.*entities" --type py

Repository: SimpleOpenSoftware/chronicle

Length of output: 7781


🏁 Script executed:

#!/bin/bash
# Get full view of HAMCPClient methods, especially discover_entities
rg -n "async def discover_entities|async def fetch_area_entities" --type py -A 10 backends/advanced/src/advanced_omi_backend/plugins/homeassistant/mcp_client.py

Repository: SimpleOpenSoftware/chronicle

Length of output: 764


Fix method call on line 669: get_all_entities() does not exist.

mcp_client.get_all_entities() will fail at runtime because the method doesn't exist on HAMCPClient. The exception handler silently catches the AttributeError and sets entity_count to None, so the entity count is never populated.

Replace with discover_entities(), which returns a dictionary of available entities:

♻️ Suggested fix
            try:
-                entities = await mcp_client.get_all_entities()
+                entities = await mcp_client.discover_entities()
                 entity_count = len(entities)
            except Exception:
                 entity_count = None
🧰 Tools
🪛 Ruff (0.14.14)

671-671: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/plugins/homeassistant/plugin.py`
around lines 667 - 672, The code calls a non-existent method
mcp_client.get_all_entities() which will raise at runtime; change the call to
mcp_client.discover_entities() and compute entity_count from the returned
dictionary (e.g., entity_count = len(entities_dict)) while keeping the existing
try/except behavior; update references to the symbols mcp_client,
get_all_entities(), discover_entities(), and entity_count so the plugin uses
discover_entities()'s dict result instead of the removed API.

Comment on lines +75 to +84
response = {
"job_id": job.id,
"status": status
}

# Include error information for failed jobs
if status == "failed" and job.exc_info:
response["error_message"] = str(job.exc_info)
response["exc_info"] = str(job.exc_info)


⚠️ Potential issue | 🟠 Major

Avoid exposing full stack traces to non‑admins.
exc_info can leak internal paths or secrets. Consider returning a short error summary for regular users and reserving full traces for admins.

🔒 Safer response pattern
         response = {
             "job_id": job.id,
             "status": status
         }

         # Include error information for failed jobs
         if status == "failed" and job.exc_info:
-            response["error_message"] = str(job.exc_info)
-            response["exc_info"] = str(job.exc_info)
+            error_text = str(job.exc_info)
+            # Provide a short summary to non-admins
+            response["error_message"] = (error_text.splitlines()[-1] if error_text else "Job failed")[:500]
+            # Full trace only for admins
+            if current_user.is_superuser:
+                response["exc_info"] = error_text
🤖 Prompt for AI Agents
In `@backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py`
around lines 75 - 84, The current response exposes job.exc_info (full stack
trace) when status == "failed"; replace this with a short safe summary (e.g.,
"error_summary" containing the exception message or the first line of
job.exc_info) for regular users and only include the full exc_info when the
requester is an admin—check a known admin flag (e.g., current_user.is_admin or
request.user.is_admin) before adding "exc_info" to the response; update the
block that builds response (where response = {"job_id": job.id, "status":
status} and the following failed-handling code) to produce a sanitized
"error_message"/"error_summary" and conditionally add full "exc_info" for admins
only.
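A small sketch of the sanitization logic described above (the function name, field names, and sample traceback are illustrative):

```python
def summarize_exc_info(exc_info: str, is_admin: bool) -> dict:
    """Return a short error summary; include the full traceback only for admins."""
    # The last line of a Python traceback is the exception type and message.
    summary = (exc_info.splitlines()[-1] if exc_info else "Job failed")[:500]
    response = {"error_message": summary}
    if is_admin:
        response["exc_info"] = exc_info
    return response

# Hypothetical failed-job traceback for demonstration.
trace = (
    "Traceback (most recent call last):\n"
    '  File "/srv/app/worker.py", line 10\n'
    "ValueError: bad audio chunk"
)
print(summarize_exc_info(trace, is_admin=False))
```

Regular users see only the final `ValueError: bad audio chunk` line; the internal file path stays admin-only.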

Comment on lines +132 to +155
<label
className="flex items-center space-x-2 cursor-pointer"
onClick={(e) => {
e.stopPropagation()
onToggleEnabled(plugin.plugin_id, !plugin.enabled)
}}
>
<span className="text-xs text-gray-600 dark:text-gray-400">
{plugin.enabled ? 'Enabled' : 'Disabled'}
</span>
<div
className={`
relative inline-flex h-5 w-9 items-center rounded-full transition-colors
${plugin.enabled ? 'bg-blue-600' : 'bg-gray-300 dark:bg-gray-600'}
`}
>
<span
className={`
inline-block h-4 w-4 transform rounded-full bg-white transition-transform
${plugin.enabled ? 'translate-x-5' : 'translate-x-0.5'}
`}
/>
</div>
</label>

⚠️ Potential issue | 🟠 Major

Use a real, focusable toggle control.

The current toggle is a <label> with an onClick handler but no associated control, so it isn’t focusable or announced correctly to assistive tech.

♿ Proposed fix
-              <label
-                className="flex items-center space-x-2 cursor-pointer"
-                onClick={(e) => {
-                  e.stopPropagation()
-                  onToggleEnabled(plugin.plugin_id, !plugin.enabled)
-                }}
-              >
+              <button
+                type="button"
+                role="switch"
+                aria-checked={plugin.enabled}
+                className="flex items-center space-x-2 cursor-pointer"
+                onClick={(e) => {
+                  e.stopPropagation()
+                  onToggleEnabled(plugin.plugin_id, !plugin.enabled)
+                }}
+              >
                 <span className="text-xs text-gray-600 dark:text-gray-400">
                   {plugin.enabled ? 'Enabled' : 'Disabled'}
                 </span>
                 <div
                   className={`
                     relative inline-flex h-5 w-9 items-center rounded-full transition-colors
                     ${plugin.enabled ? 'bg-blue-600' : 'bg-gray-300 dark:bg-gray-600'}
                   `}
                 >
                   <span
                     className={`
                       inline-block h-4 w-4 transform rounded-full bg-white transition-transform
                       ${plugin.enabled ? 'translate-x-5' : 'translate-x-0.5'}
                     `}
                   />
                 </div>
-              </label>
+              </button>
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/plugins/PluginListSidebar.tsx` around
lines 132 - 155, The toggle in PluginListSidebar is implemented as a
non-focusable <label> with an onClick, so screen readers and keyboard users
cannot properly interact with it; replace it with a real focusable control
(e.g., a button or an input[type="checkbox"]) that calls
onToggleEnabled(plugin.plugin_id, !plugin.enabled), exposes aria-checked (or
checked) and role if needed, includes accessible text/aria-label reflecting
Enabled/Disabled, and keeps the same visual classes so behavior and styling
remain intact; update any onClick handlers on the wrapper to use
onKeyDown/keyboard activation patterns if you keep a custom element.

Comment on lines +98 to +119
// Extract current configuration from plugin metadata
const config: PluginConfig = {
orchestration: {
enabled: plugin.enabled || false,
events: [],
condition: { type: 'always' }
},
settings: {},
env_vars: {}
}

// Load settings with defaults
Object.keys(plugin.config_schema.settings || {}).forEach((key) => {
const schema = plugin.config_schema.settings[key]
config.settings[key] = schema.default ?? ''
})

// Load env vars (will be masked values from backend)
Object.keys(plugin.config_schema.env_vars || {}).forEach((key) => {
const schema = plugin.config_schema.env_vars[key]
config.env_vars[key] = schema.value ?? ''
})

⚠️ Potential issue | 🟠 Major

Avoid overwriting existing plugin configuration with defaults on load.

Lines 99-113 initialize orchestration, events, condition, and settings from defaults only. If the metadata already includes current values, saving from this UI will reset user changes. Prefer hydrating from existing values (e.g., schema.value or the orchestration values), or fetch the current config before editing.

🛠️ Suggested adjustment
-    const config: PluginConfig = {
-      orchestration: {
-        enabled: plugin.enabled || false,
-        events: [],
-        condition: { type: 'always' }
-      },
-      settings: {},
-      env_vars: {}
-    }
+    const orchestration = plugin.config_schema.orchestration || {}
+    const config: PluginConfig = {
+      orchestration: {
+        enabled: plugin.enabled || false,
+        events: orchestration.events ?? [],
+        condition: orchestration.condition ?? { type: 'always' }
+      },
+      settings: {},
+      env_vars: {}
+    }
...
-      config.settings[key] = schema.default ?? ''
+      config.settings[key] = schema.value ?? schema.default ?? ''
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx` around lines
98 - 119, The current initialization in PluginSettingsForm creates a fresh
config that overwrites any existing plugin configuration; instead hydrate from
existing values: initialize orchestration using plugin.orchestration or
plugin.config?.orchestration (preserving enabled, events, condition) and for
settings/env_vars populate each key by first checking
plugin.config?.settings[key] (or plugin.config?.env_vars[key]) then falling back
to schema.value or schema.default; ensure you only use defaults when no current
value exists and keep existing events/condition intact (refer to the config
object, orchestration, settings, env_vars, and plugin.config_schema to locate
fields).

Comment on lines +166 to +171
try {
const response = await systemApi.testPluginConnection(selectedPluginId, {
orchestration: currentConfig.orchestration,
settings: currentConfig.settings,
env_vars: currentConfig.env_vars
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f "PluginSettingsForm.tsx" backends/advanced/webui/src/components/

Repository: SimpleOpenSoftware/chronicle

Length of output: 133


🏁 Script executed:

cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '160,180p'

Repository: SimpleOpenSoftware/chronicle

Length of output: 851


🏁 Script executed:

cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '1,50p'

Repository: SimpleOpenSoftware/chronicle

Length of output: 2021


🏁 Script executed:

wc -l "backends/advanced/webui/src/components/PluginSettingsForm.tsx"

Repository: SimpleOpenSoftware/chronicle

Length of output: 137


🏁 Script executed:

cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | grep -i -E "save|mask|••••" -A 3 -B 3

Repository: SimpleOpenSoftware/chronicle

Length of output: 2020


🏁 Script executed:

cat -n "backends/advanced/webui/src/components/PluginSettingsForm.tsx" | sed -n '100,200p'

Repository: SimpleOpenSoftware/chronicle

Length of output: 3671


Filter masked secrets before connection tests.

The handleTestConnection method (lines 166-171) sends env_vars without filtering masked values, but the handleSave method already implements the correct pattern (lines 200-208) that skips sending unchanged secrets. When env vars contain masked values (••••), sending them verbatim to the backend will cause the connection test to fail unexpectedly. Reuse the same filtering logic from handleSave:

🛠️ Suggested adjustment
     try {
+      const envVarsToSend: Record<string, any> = {}
+      Object.keys(currentConfig.env_vars).forEach((key) => {
+        const value = currentConfig.env_vars[key]
+        if (typeof value !== 'string' || !value.includes('••••')) {
+          envVarsToSend[key] = value
+        }
+      })
+
       const response = await systemApi.testPluginConnection(selectedPluginId, {
         orchestration: currentConfig.orchestration,
         settings: currentConfig.settings,
-        env_vars: currentConfig.env_vars
+        env_vars: Object.keys(envVarsToSend).length > 0 ? envVarsToSend : undefined
       })
🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/components/PluginSettingsForm.tsx` around lines
166 - 171, The connection test is sending masked secret values from
currentConfig.env_vars which causes false failures; update handleTestConnection
to reuse the same filtering used in handleSave (use selectedPluginId,
currentConfig.env_vars and originalPluginConfig.env_vars) — build a filtered
env_vars object that skips keys whose value is the mask (e.g., '••••' or
startsWith masked dots) or that are unchanged from
originalPluginConfig.env_vars, then pass that filtered env_vars to
systemApi.testPluginConnection so masked secrets are not sent to the backend.

Comment on lines +231 to +262

// Parse server messages
try {
const message = JSON.parse(event.data)

// Handle error messages from backend
if (message.type === 'error') {
const errorMsg = message.message || 'Unknown error from server'
console.error('❌ Server error:', errorMsg)

setError(errorMsg)
setCurrentStep('error')
setDebugStats(prev => ({
...prev,
lastError: errorMsg,
lastErrorTime: new Date()
}))

// Stop recording and cleanup
cleanup()
setIsRecording(false)
}

// Handle other message types (interim_transcript, etc.)
else if (message.type === 'interim_transcript') {
console.log('📝 Received interim transcript:', message.data)
}

} catch (e) {
// Not JSON, ignore
console.log('📨 Non-JSON message:', event.data)
}

⚠️ Potential issue | 🟠 Major

Abort the startRecording flow when a server error arrives.

The new server-error handler calls cleanup() and flips state, but startRecording can still proceed and set currentStep/isRecording afterward. That can leave the UI in “recording” while resources are already torn down. Add an abort ref and check it between awaited steps.

✅ Suggested fix (guard start flow on server error)
@@
   const processorRef = useRef<ScriptProcessorNode | null>(null)
   const durationIntervalRef = useRef<ReturnType<typeof setInterval>>()
   const keepAliveIntervalRef = useRef<ReturnType<typeof setInterval>>()
   const chunkCountRef = useRef(0)
   const audioProcessingStartedRef = useRef(false)
+  const serverErrorRef = useRef<string | null>(null)
@@
           if (message.type === 'error') {
             const errorMsg = message.message || 'Unknown error from server'
             console.error('❌ Server error:', errorMsg)
+            serverErrorRef.current = errorMsg
@@
   const startRecording = useCallback(async () => {
     try {
+      serverErrorRef.current = null
       setError(null)
       setCurrentStep('mic')
@@
       const stream = await getMicrophoneAccess()
+      if (serverErrorRef.current) return
@@
       const ws = await connectWebSocket()
+      if (serverErrorRef.current) return
@@
       await sendAudioStartMessage(ws)
+      if (serverErrorRef.current) return
@@
       await startAudioStreaming(stream, ws)
+      if (serverErrorRef.current) return

Also applies to: 412-435

🤖 Prompt for AI Agents
In `@backends/advanced/webui/src/hooks/useSimpleAudioRecording.ts` around lines
231 - 262, The server-error handler calls cleanup() and flips state but
startRecording can still continue; add an abort ref (e.g., abortedRef or
startAbortRef) that the error handler sets to true when a server error is
received, then update startRecording to check that ref after any await points
and before setting setCurrentStep or setIsRecording so the flow bails out if
aborted; ensure cleanup() is still called once and the same abort guard is added
in the other startRecording path referenced in the comment (the second
startRecording block) so both flows respect the server-error abort.

Comment on lines +49 to +55
- api_family: mock
api_key: mock-key-not-used
description: Mock STT for testing (batch)
model_provider: mock
model_type: stt
model_url: http://localhost:9999
name: mock-stt

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's locate and read the file mentioned in the review
find . -name "mock-services.yml" -type f | head -5

Repository: SimpleOpenSoftware/chronicle

Length of output: 105


🏁 Script executed:

# Look at the structure and size of the file
if [ -f "tests/configs/mock-services.yml" ]; then
  wc -l tests/configs/mock-services.yml
fi

Repository: SimpleOpenSoftware/chronicle

Length of output: 106


🏁 Script executed:

# Read the entire mock-services.yml file to understand the context
if [ -f "tests/configs/mock-services.yml" ]; then
  cat -n tests/configs/mock-services.yml
fi

Repository: SimpleOpenSoftware/chronicle

Length of output: 3300


🏁 Script executed:

# Search for other references to mock-services or localhost in config/test files
rg "localhost.*9999|host\.docker\.internal" --type yaml -A 2 -B 2

Repository: SimpleOpenSoftware/chronicle

Length of output: 4131


🏁 Script executed:

# Check if there's a Docker Compose or similar setup file
fd -e "docker-compose" -e "Dockerfile" -e "compose.yml" -e "compose.yaml" | head -10

Repository: SimpleOpenSoftware/chronicle

Length of output: 54


🏁 Script executed:

# Look for any documentation or comments about mock service setup
rg "mock.*stt|localhost.*9999" --type-list | head -20

Repository: SimpleOpenSoftware/chronicle

Length of output: 461


🏁 Script executed:

# Search for where mock-services.yml is referenced or used
rg "mock-services\.yml" -A 3 -B 3

Repository: SimpleOpenSoftware/chronicle

Length of output: 3828


Fix inconsistent mock STT URL for Docker container networking.

Line 54 uses http://localhost:9999, but all other mock services in this file use host.docker.internal (see lines 27, 37, and 73 for mock-llm, mock-embed, and mock-stt-stream respectively). From inside containers, localhost refers to the container itself, not the host. Change to host.docker.internal to match the pattern and ensure the batch STT service is reachable during testing.

Fix
-  model_url: http://localhost:9999
+  model_url: http://host.docker.internal:9999
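
Note that `host.docker.internal` resolves automatically only on Docker Desktop (macOS/Windows); on Linux hosts it must be mapped explicitly. If the test containers run under Docker Compose, a sketch of that mapping would look like the following (service and image names here are illustrative, not taken from this repo):

```yaml
services:
  test-runner:              # illustrative service name
    image: chronicle-tests  # hypothetical test image
    extra_hosts:
      # Map host.docker.internal to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
```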
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

  api_family: mock
  api_key: mock-key-not-used
  description: Mock STT for testing (batch)
  model_provider: mock
  model_type: stt
- model_url: http://localhost:9999
+ model_url: http://host.docker.internal:9999
  name: mock-stt
🤖 Prompt for AI Agents
In `@tests/configs/mock-services.yml` around lines 49-55, the mock STT service
entry named "mock-stt" has model_url set to "http://localhost:9999", which is
inconsistent with other mock services and fails from inside Docker; update the
model_url value for the "mock-stt" service (the YAML block with name: mock-stt
and model_type: stt) to use "http://host.docker.internal:9999" so containerized
tests can reach the host-based mock STT batch service.
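
The localhost-vs-`host.docker.internal` mismatch flagged above is easy to catch mechanically. A minimal sketch of such a check (a hypothetical helper, not part of this repo) that flags `model_url` entries pointing at the container's own loopback:

```python
import re

def find_localhost_urls(config_text: str):
    """Return (line_number, line) pairs whose model_url points at localhost.

    From inside a container, localhost refers to the container itself, so
    host-based mock services should use host.docker.internal instead.
    """
    hits = []
    for lineno, line in enumerate(config_text.splitlines(), start=1):
        if re.search(r"model_url:\s*https?://(localhost|127\.0\.0\.1)\b", line):
            hits.append((lineno, line.strip()))
    return hits

config = """\
name: mock-stt
model_type: stt
model_url: http://localhost:9999
---
name: mock-llm
model_url: http://host.docker.internal:8888
"""

print(find_localhost_urls(config))
# → [(3, 'model_url: http://localhost:9999')]
```

Running a check like this in CI before the Robot Framework suite would surface the misconfiguration as a lint failure rather than a flaky connection error during testing.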

@github-actions

⚠️ Robot Framework Test Results (No API Keys)

Status: ❌ Some tests failed

ℹ️ Note: This run excludes tests requiring external API keys (Deepgram, OpenAI).
Tests tagged with requires-api-keys will run on dev/main branches.

| Metric | Count |
| --- | --- |
| ✅ Passed | 87 |
| ❌ Failed | 8 |
| 📊 Total | 95 |

📊 View Reports

View full workflow run

@github-actions

⚠️ Robot Framework Test Results (Label-Triggered Full Suite)

Status: ❌ Some tests failed

🏷️ Note: This run was triggered by the test-with-api-keys label.
All tests including API-dependent tests have been executed.

| Metric | Count |
| --- | --- |
| ✅ Passed | 119 |
| ❌ Failed | 5 |
| 📊 Total | 124 |

📊 View Reports

View full workflow run

@github-actions

⚠️ Robot Framework Test Results (No API Keys)

Status: ❌ Some tests failed

ℹ️ Note: This run excludes tests requiring external API keys (Deepgram, OpenAI).
Tests tagged with requires-api-keys will run on dev/main branches.

| Metric | Count |
| --- | --- |
| ✅ Passed | 87 |
| ❌ Failed | 8 |
| 📊 Total | 95 |

📊 View Reports

View full workflow run

@github-actions

⚠️ Robot Framework Test Results (Label-Triggered Full Suite)

Status: ❌ Some tests failed

🏷️ Note: This run was triggered by the test-with-api-keys label.
All tests including API-dependent tests have been executed.

| Metric | Count |
| --- | --- |
| ✅ Passed | 120 |
| ❌ Failed | 4 |
| 📊 Total | 124 |

📊 View Reports

View full workflow run

