Debug Kanban Memory & Add Sentry Monitoring #1380
base: develop
Conversation
…lization state
Added comprehensive debug logging to memory_manager.py to reveal if _ensure_initialized() is failing silently. This captures:
- PRE-INIT STATE: Memory instance details before any save attempt
  - is_enabled, is_initialized, group_id
  - Internal component states (client, queries, search)
  - Config validation state (is_valid, providers, database)
  - State object details (initialized, episode_count, errors)
- PRE-SAVE CHECK: Initialization state right before save method
  - Logs whether _ensure_initialized() will be called
- POST-SAVE CHECK: State after save method returns
  - Confirms if initialization actually happened
  - Shows save result and component states
This logging will reveal the root cause of kanban task memory save failures by showing exactly what state the memory system is in during the save flow.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… initialize() call
- Made get_graphiti_memory() an async function
- Added await memory.initialize() call following the GitHub pattern
- Updated save_to_graphiti_async() to await the async helper
- Added proper error handling for initialization failures
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…graphiti_memory
Updated both get_graphiti_context() and save_session_memory() to properly await the now-async get_graphiti_memory() helper, which initializes the GraphitiMemory instance internally before returning.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
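A minimal sketch of the caller-side change these commits describe, assuming the memory.graphiti_helpers import path referenced elsewhere in this PR; the signature and return handling are illustrative, not the project's exact code.

```python
from memory.graphiti_helpers import get_graphiti_memory  # project helper (path per this PR)


async def save_session_memory(spec_dir, session_num, insights) -> bool:
    """Illustrative caller: the helper is now async and must be awaited."""
    memory = await get_graphiti_memory(spec_dir)  # previously a plain synchronous call
    if memory is None or not memory.is_initialized:
        return False  # initialization failed or Graphiti is disabled
    return await memory.save_session_insights(session_num, insights)
```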
…mory class
- Import capture_exception from core.sentry
- Add Sentry tracking to initialize() method for initialization failures
- Add try/except with capture_exception to all save_* methods:
  - save_session_insights
  - save_codebase_discoveries
  - save_pattern
  - save_gotcha
  - save_task_outcome
  - save_structured_insights
- Add try/except with capture_exception to all get_* methods:
  - get_relevant_context
  - get_session_history
  - get_similar_task_outcomes
  - get_patterns_and_gotchas
- Include relevant context (component, operation, etc.) with each capture
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…eries class
Track all episode save failures with operation type and content summary:
- add_session_insight: tracks group_id, spec_id, session_number
- add_codebase_discoveries: tracks group_id, spec_id, discovery_count
- add_pattern: tracks group_id, spec_id, content_summary
- add_gotcha: tracks group_id, spec_id, content_summary
- add_task_outcome: tracks group_id, spec_id, task_id, success, content_summary
- add_structured_insights: tracks group_id, spec_id, content_summary (insight types)
…arch class
- Import capture_exception from core.sentry
- Add Sentry tracking to get_relevant_context with query_summary and group_id
- Add Sentry tracking to get_session_history with group_id context
- Add Sentry tracking to get_similar_task_outcomes with query_summary and group_id
- Add Sentry tracking to get_patterns_and_gotchas with query_summary and group_id
- All exception handlers now include operation name for better error grouping
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…elpers.py
Add Sentry error tracking to Graphiti helper functions:
- Import capture_exception from core.sentry
- Track get_graphiti_memory failures with spec_dir and project_dir context
- Track save_to_graphiti_async failures with spec_dir, session_num context
- Track connection close failures with context information
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Warning: Rate limit exceeded
⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the … command. We recommend that you space out your commits to avoid hitting the rate limit.
🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
📝 Walkthrough
This pull request adds comprehensive error handling and observability instrumentation across the Graphiti memory integration system. Changes include converting …
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Summary of Changes
Hello @AndyMik90, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses critical issues with the Kanban memory system's inability to save memories by implementing a two-pronged approach: robust error monitoring and enhanced debugging capabilities. The changes ensure that all failures within the Graphiti memory infrastructure are proactively captured via Sentry with rich contextual data, while also providing developers with detailed insights into the memory system's state through comprehensive logging. This significantly improves the reliability and diagnosability of the memory persistence mechanism.
Highlights
Code Review
This pull request effectively addresses a memory saving failure in the Kanban system by ensuring the Graphiti memory instance is properly initialized asynchronously. The addition of comprehensive Sentry error monitoring across the entire memory and Graphiti stack is a significant improvement for production stability and future debugging. The new test script is also a valuable addition for verifying the fix and preventing regressions. My review includes a couple of suggestions to improve the maintainability of the new code by addressing temporary debug logging and repeated error-handling patterns.
# ===== INVESTIGATION DEBUG LOGGING (subtask-1-1) =====
# Capture detailed initialization state BEFORE any save attempt
# This reveals if _ensure_initialized() might fail silently
if is_debug_enabled():
    debug_detailed(
        "memory",
        "GraphitiMemory instance created - PRE-INIT STATE",
        is_enabled=memory.is_enabled,
        is_initialized=memory.is_initialized,
        group_id=getattr(memory, "group_id", "unknown"),
        spec_dir=str(spec_dir),
        project_dir=str(project_dir),
        has_client=memory._client is not None,
        has_queries=memory._queries is not None,
        has_search=memory._search is not None,
        available_flag=memory._available,
    )

    # Log config details that affect initialization
    if hasattr(memory, 'config') and memory.config:
        debug_detailed(
            "memory",
            "GraphitiMemory config state",
            config_is_valid=memory.config.is_valid() if hasattr(memory.config, 'is_valid') else 'unknown',
            llm_provider=memory.config.llm_provider if hasattr(memory.config, 'llm_provider') else 'unknown',
            embedder_provider=memory.config.embedder_provider if hasattr(memory.config, 'embedder_provider') else 'unknown',
            database=memory.config.database if hasattr(memory.config, 'database') else 'unknown',
        )

    # Log state object details
    if memory.state:
        debug_detailed(
            "memory",
            "GraphitiMemory state object",
            state_initialized=memory.state.initialized if hasattr(memory.state, 'initialized') else 'unknown',
            episode_count=memory.state.episode_count if hasattr(memory.state, 'episode_count') else 'unknown',
            last_session=memory.state.last_session if hasattr(memory.state, 'last_session') else 'unknown',
            error_count=len(memory.state.error_log) if hasattr(memory.state, 'error_log') else 0,
        )
    else:
        debug("memory", "GraphitiMemory state object is None")
# ===== END INVESTIGATION DEBUG LOGGING =====
This block adds a lot of very detailed debug logging. While this is clearly useful for the current investigation (as noted by the comment INVESTIGATION DEBUG LOGGING), it adds significant noise to the debug output for general-purpose debugging. To improve maintainability and log clarity, consider removing this verbose logging once the issue is fully resolved, or placing it behind a more specific, higher-level debug flag (e.g., if get_debug_level() > 1:) to keep the standard debug logs cleaner.
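A self-contained sketch of the reviewer's "higher-level debug flag" idea; get_debug_level and the AC_DEBUG_LEVEL variable are hypothetical stand-ins, not existing project APIs.

```python
import os


def get_debug_level() -> int:
    """Hypothetical helper: read a numeric debug verbosity (0 = off)."""
    try:
        return int(os.environ.get("AC_DEBUG_LEVEL", "0"))
    except ValueError:
        return 0


# Emit the verbose PRE-INIT investigation dump only at level 2+,
# keeping standard debug output (level 1) readable.
if get_debug_level() > 1:
    print("PRE-INIT STATE dump would go here")  # stands in for the debug_detailed(...) calls above
```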
try:
    result = await self._queries.add_session_insight(session_num, insights)

    if result and self.state:
        self.state.last_session = session_num
        self.state.episode_count += 1
        self.state.save(self.spec_dir)

    return result
except Exception as e:
    logger.warning(f"Failed to save session insights: {e}")
    self._record_error(f"save_session_insights failed: {e}")
    capture_exception(
        e,
        component="graphiti",
        operation="save_session_insights",
        session_num=session_num,
    )
    return False
This try...except block for handling exceptions, logging, and reporting to Sentry is repeated in many methods throughout this file (e.g., save_codebase_discoveries, save_pattern, get_relevant_context, etc.). This creates a lot of boilerplate and makes the code harder to maintain. To improve this, consider refactoring the common error-handling logic into a decorator or a private helper method. This would reduce code duplication and centralize the error handling logic, making future changes easier.
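A hedged sketch of the decorator approach suggested here (the name follows the @graphiti_error_handler idea raised later in this thread); core.sentry.capture_exception is the helper this PR imports elsewhere, and the rest is illustrative rather than the project's actual API.

```python
import functools
import logging

from core.sentry import capture_exception  # helper imported throughout this PR

logger = logging.getLogger(__name__)


def graphiti_error_handler(operation: str, default_return=None):
    """Centralize the repeated log / record / report-to-Sentry handling."""

    def decorator(func):
        @functools.wraps(func)
        async def wrapper(self, *args, **kwargs):
            try:
                return await func(self, *args, **kwargs)
            except Exception as e:
                logger.warning(f"{operation} failed: {e}")
                if hasattr(self, "_record_error"):
                    self._record_error(f"{operation} failed: {e}")
                capture_exception(e, component="graphiti", operation=operation)
                return default_return

        return wrapper

    return decorator


# Usage (illustrative):
# @graphiti_error_handler(operation="save_session_insights", default_return=False)
# async def save_session_insights(self, session_num, insights): ...
```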
        context="verification_script",
    )
    print("[INFO] Exception captured to Sentry")
except Exception:
Check notice — Code scanning / CodeQL: Empty except (Note, test)
Copilot Autofix (AI, about 21 hours ago)
In general, to fix "empty except" issues you either (a) narrow the exception type and handle it explicitly (log, recover, re-raise, etc.), or (b) add at least logging or an explanatory comment if the exception is intentionally ignored. Here, the inner `except Exception:` around Sentry capture should not simply `pass`; instead, it should emit a clear message so users know that Sentry reporting failed, but it should still avoid raising and interfering with the outer error handling.
Best fix with minimal behavior change:
- Replace the `except Exception:` block at lines 283–284 with an `except Exception as sentry_err:` that prints or logs a short message mentioning that Sentry capture failed and includes the error.
- Optionally, also print a brief note that the script will continue, preserving the original control flow: the function still returns `False` from the outer `except` after attempting (and possibly failing) Sentry capture.
- This does not require new imports; the file already uses `print` for user-facing messages and imports `logging`, but given the surrounding code uses `print` in this function, keeping `print` is consistent.
Concretely, in apps/backend/scripts/test_memory_save.py, within test_memory_save_flow, change the two-line `except Exception: pass` block to a small handler that logs/prints the failure, e.g.:
    except Exception as sentry_err:
        print(f"[WARN] Failed to capture exception to Sentry: {sentry_err}")
No additional methods or definitions are required.
@@ -280,8 +280,8 @@
         context="verification_script",
     )
     print("[INFO] Exception captured to Sentry")
-except Exception:
-    pass
+except Exception as sentry_err:
+    print(f"[WARN] Failed to capture exception to Sentry: {sentry_err}")

 return False
AndyMik90
left a comment
🤖 Auto Claude PR Review
Merge Verdict: 🔴 BLOCKED
🔴 Blocked - 2 CI check(s) failing. Fix CI before merge.
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | High | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |
🚨 Blocking Issues (Must Fix)
- Branch Out of Date: PR branch is behind the base branch and needs to be updated
- CI Failed: Lint Complete
- CI Failed: Python (Ruff)
- Critical: [Potential] Missing await on async get_graphiti_memory() breaks codebase map (apps/backend/memory/codebase_map.py:67)
Findings Summary
- Critical: 1 issue(s)
- Medium: 2 issue(s)
- Low: 1 issue(s)
Generated by Auto Claude PR Review
Findings (4 selected of 4 total)
🔴 [b93018eb601c] [CRITICAL] [Potential] Missing await on async get_graphiti_memory() breaks codebase map
📁 apps/backend/memory/codebase_map.py:67
The function get_graphiti_memory() was changed from sync to async in this PR, but the caller at line 67 does NOT use await. This returns a coroutine object instead of a GraphitiMemory instance. The truthiness check if graphiti: will always be True (coroutines are truthy), and run_async(graphiti.save_codebase_discoveries()) will fail with AttributeError because graphiti is a coroutine, not a GraphitiMemory object.
Suggested fix:
The function `update_codebase_map` is synchronous, so either: (1) change `get_graphiti_memory` back to sync with explicit initialize(), or (2) use `graphiti = run_async(get_graphiti_memory(spec_dir))` to await in a sync context, or (3) make `update_codebase_map` async.
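A sketch of option (2) in a synchronous caller; run_async here is shown as a thin asyncio.run stand-in for the project's helper named in the finding, and the function body is illustrative.

```python
import asyncio

from memory.graphiti_helpers import get_graphiti_memory  # project helper (path per this PR)


def run_async(coro):
    """Stand-in for the project's run_async helper mentioned in the finding."""
    return asyncio.run(coro)


def update_codebase_map(spec_dir, discoveries) -> bool:
    """Illustrative sync caller bridging into the now-async get_graphiti_memory()."""
    graphiti = run_async(get_graphiti_memory(spec_dir))  # resolves the coroutine
    if graphiti:  # a real GraphitiMemory instance (or None), never a bare coroutine
        return run_async(graphiti.save_codebase_discoveries(discoveries))
    return False
```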
🟡 [6426ab78f883] [MEDIUM] [Potential] Repetitive try/except/Sentry pattern duplicated across 10 methods
📁 apps/backend/integrations/graphiti/queries_pkg/graphiti.py:223
The same error handling pattern (try/except with logger.warning, _record_error, capture_exception) is repeated in 10 methods: save_session_insights, save_codebase_discoveries, save_pattern, save_gotcha, save_task_outcome, save_structured_insights, get_relevant_context, get_session_history, get_similar_task_outcomes, get_patterns_and_gotchas. This creates ~200 lines of duplicated boilerplate.
Suggested fix:
Extract into a decorator or helper method to reduce duplication: `async def _with_error_handling(self, operation: str, fn: Callable, **context)`
🟡 [c6b0c9e6a792] [MEDIUM] [Potential] Inconsistent capture_exception keyword: 'function' vs 'operation'
📁 apps/backend/memory/graphiti_helpers.py:80
graphiti_helpers.py uses 'function=' keyword parameter for capture_exception (lines 80, 155, 172), but all other usages in the codebase use 'operation=' consistently (queries.py, search.py, graphiti.py, memory_manager.py).
Suggested fix:
Change 'function=' to 'operation=' in all three capture_exception calls in graphiti_helpers.py to match the established pattern.
🔵 [440d39ac8c04] [LOW] [Potential] Investigation debug logging should be removed after debugging
📁 apps/backend/agents/memory_manager.py:127
The PR adds verbose debug blocks marked with '===== INVESTIGATION DEBUG LOGGING (subtask-1-1) =====' comments. These are clearly temporary investigation aids (as noted in the comments) that should be removed after the Kanban memory bug is fixed. While properly guarded by is_debug_enabled(), they add visual noise and are not standard debug logging.
Suggested fix:
Add a TODO comment to remove these investigation blocks after the memory debugging is complete, or remove them now if the investigation is done.
This review was generated by Auto Claude.
…error-type-authentication
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/backend/memory/graphiti_helpers.py (1)
62-74: Return None when initialization fails.
`initialize()` can fail without raising; returning a non-initialized instance contradicts the docstring and can lead to downstream calls on an unavailable memory.
🔧 Proposed fix
-    # Initialize the memory instance (following GitHub pattern)
-    await memory.initialize()
-
-    return memory
+    # Initialize the memory instance (following GitHub pattern)
+    initialized = await memory.initialize()
+    if not initialized:
+        logger.warning("Graphiti memory initialization failed")
+        return None
+
+    return memory
apps/backend/integrations/graphiti/queries_pkg/graphiti.py (1)
11-24: Fix Ruff I001 import ordering to unblock CI. Ruff reports unsorted imports in this block; reorder first-party imports into a single group.
🧹 Suggested reorder
-from graphiti_config import GraphitiConfig, GraphitiState
-
-from core.sentry import capture_exception
+from core.sentry import capture_exception
+from graphiti_config import GraphitiConfig, GraphitiState
🤖 Fix all issues with AI agents
In `@apps/backend/scripts/test_memory_save.py`:
- Around lines 217-233: The prints preceding and following the call to save_session_memory use f-strings with no interpolations, which triggers Ruff F541. Update the print calls that reference subtask_id, session_num, success, and the "Memory Save Result" banner so that only the strings that actually interpolate variables (e.g., f"  Success: {result}") keep the f prefix, and the ones without placeholders become plain string literals; check the statements around the save_session_memory call and adjust them accordingly (see the short example after this list).
- Around lines 45-98: Ruff flags unsorted imports inside the try blocks. Alphabetically sort each multi-name from-import list: agents.memory_manager (debug_memory_system_status, get_graphiti_context, save_session_memory), memory.graphiti_helpers (get_graphiti_memory, is_graphiti_memory_enabled, save_to_graphiti_async), graphiti_config (get_graphiti_status, is_graphiti_enabled), core.sentry (capture_exception, capture_message, init_sentry, is_enabled as sentry_is_enabled), and integrations.graphiti.queries_pkg (GraphitiClient, GraphitiMemory, GraphitiQueries, GraphitiSearch). Apply the same ordering fix to any other import blocks in this file so all from-import name lists are alphabetically ordered.
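For reference, a tiny illustration of the F541 fix described in the first item above; the strings and variables are made up for the example.

```python
subtask_id = "subtask-1-1"
result = True

print("[INFO] Saving test session memory...")  # no placeholders: drop the f prefix
print(f"  Subtask: {subtask_id}")  # interpolates a variable: keep the f prefix
print(f"  Success: {result}")
```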
♻️ Duplicate comments (2)
apps/backend/scripts/test_memory_save.py (2)
115-133: Return success even when Graphiti is disabled.
Returning the `enabled` flag makes the test fail in valid configurations where Graphiti is intentionally off; this should report whether the status check ran successfully.
🔧 Proposed fix
-    return enabled
+    return True
258-268: Avoid empty except; log the Sentry capture failure.
A silent `pass` hides issues and triggers CodeQL's empty-except warning.
🔧 Proposed fix
-        except Exception:
-            pass
+        except Exception as sentry_error:
+            logger.debug(
+                "Failed to capture exception to Sentry",
+                exc_info=sentry_error,
+            )
…error-type-authentication
AndyMik90
left a comment
🤖 Auto Claude PR Review
Merge Verdict: 🔴 BLOCKED
🔴 Blocked - 2 CI check(s) failing. Fix CI before merge.
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | High | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |
🚨 Blocking Issues (Must Fix)
- CI Failed: Lint Complete
- CI Failed: Python (Ruff)
- Critical: [Potential] Missing await on async get_graphiti_memory() call (apps/backend/memory/codebase_map.py:67)
Findings Summary
- Critical: 1 issue(s)
- Medium: 2 issue(s)
- Low: 1 issue(s)
Generated by Auto Claude PR Review
Findings (4 selected of 4 total)
🔴 [3692b052a75e] [CRITICAL] [Potential] Missing await on async get_graphiti_memory() call
📁 apps/backend/memory/codebase_map.py:67
The PR changed get_graphiti_memory() from sync to async in graphiti_helpers.py, but this caller was not updated. Line 67 calls the async function without await: graphiti = get_graphiti_memory(spec_dir). This returns a coroutine object instead of a GraphitiMemory instance. The truthy check if graphiti: will always pass (coroutines are truthy), then line 69 will fail with AttributeError when calling .save_codebase_discoveries() on a coroutine.
Suggested fix:
The sync function update_codebase_map() cannot use await directly. Either: (1) Change the function to async and await the call, or (2) Use run_async() to wrap the get_graphiti_memory() call: `graphiti = run_async(get_graphiti_memory(spec_dir))`
🟡 [29a932bfcb6a] [MEDIUM] [Potential] Repetitive try/except/capture_exception pattern (20+ occurrences)
📁 apps/backend/integrations/graphiti/queries_pkg/graphiti.py:241
The same error handling pattern is repeated 10 times in graphiti.py alone and 20+ times across all files in the PR. Each follows identical structure: try/except with logger.warning, self._record_error, and capture_exception. While explicit, this violates DRY and makes maintenance difficult.
Suggested fix:
Create a decorator or async context manager for consistent error handling:
@graphiti_error_handler(operation='save_session_insights', default_return=False)
async def save_session_insights(self, ...):
...
🟡 [777c6b735dc3] [MEDIUM] [Potential] Investigation debug logging should be removed after fix
📁 apps/backend/agents/memory_manager.py:127
The file contains 80+ lines of verbose 'INVESTIGATION DEBUG LOGGING (subtask-1-1)' blocks (lines 127-138, 362-403, 407-416, 431-442). These log internal state like has_client, has_queries, PRE-INIT/POST-INIT states. Comments explicitly mark these for the current investigation and should be removed once the issue is resolved.
Suggested fix:
Remove or significantly reduce the investigation-specific logging blocks once the memory issue is confirmed fixed. Keep only essential state logging.
🔵 [5d510b9846e8] [LOW] [Potential] Empty except block silently swallows Sentry capture failure
📁 apps/backend/scripts/test_memory_save.py:267
Lines 267-268 contain except Exception: pass that silently ignores exceptions when capturing to Sentry. While intentional (to avoid masking the original error), this makes debugging harder when Sentry capture itself fails.
Suggested fix:
Add minimal logging: `except Exception as capture_error: print(f'[WARN] Sentry capture failed: {capture_error}')`
This review was generated by Auto Claude.
- Fix import block sorting (I001) in graphiti.py and test_memory_save.py
- Fix f-string without placeholders (F541) in test_memory_save.py
- Apply ruff formatting to 4 files
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…error-type-authentication
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
            spec_dir=str(spec_dir),
            project_dir=str(project_dir) if project_dir else None,
        )
        return None
Missing cleanup causes resource leak on initialization failure
Low Severity
When get_graphiti_memory catches an exception from initialize(), it returns None without closing the memory object. If initialize() partially succeeded (creating and initializing _client with open database connections) before throwing, those connections are leaked. The previous implementation didn't call initialize(), so no cleanup was needed. The new code creates resources that need cleanup on failure but lacks a finally block or cleanup in the exception handler.
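A sketch of the cleanup the bot is asking for, assuming the memory object exposes an async close() (the PR's helpers already track "connection close failures", but the exact method name here is an assumption).

```python
import logging

from integrations.graphiti.queries_pkg import GraphitiMemory  # import path per the test script above

logger = logging.getLogger(__name__)


async def get_graphiti_memory(spec_dir, project_dir=None):
    """Illustrative shape: return None on failed init, closing any partially opened resources."""
    memory = GraphitiMemory(spec_dir, project_dir)  # constructor arguments assumed
    try:
        await memory.initialize()
        return memory
    except Exception as e:
        logger.warning(f"Graphiti memory initialization failed: {e}")
        try:
            await memory.close()  # assumed cleanup method; releases any opened connections
        except Exception:
            logger.debug("Cleanup after failed initialization also failed", exc_info=True)
        return None
```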
            operation="save_session_insights",
            session_num=session_num,
        )
        return False
Exception handlers can throw via _record_error call
Low Severity
The new exception handlers call _record_error() before capture_exception(). Since _record_error internally calls self.state.save() which can throw (e.g., filesystem errors), if it fails, capture_exception won't execute and the method will propagate an exception instead of returning the expected error value (False or []). This defeats the purpose of the try/except blocks and prevents Sentry from capturing the original error.
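One way to address this, sketched against the handler quoted earlier: report to Sentry before anything that might itself raise, and guard the _record_error call. The reordering, the inner guard, and the method signature are suggestions for illustration, not the PR's actual code; the surrounding class context (self._queries, logger, capture_exception) is assumed from the diff above.

```python
async def save_session_insights(self, session_num, insights) -> bool:
    """Illustrative shape only; mirrors the handler quoted above."""
    try:
        return await self._queries.add_session_insight(session_num, insights)
    except Exception as e:
        logger.warning(f"Failed to save session insights: {e}")
        # Report the original error first so a failing state save cannot mask it
        capture_exception(
            e,
            component="graphiti",
            operation="save_session_insights",
            session_num=session_num,
        )
        try:
            self._record_error(f"save_session_insights failed: {e}")
        except Exception:
            logger.debug("Recording the error to local state failed", exc_info=True)
        return False
```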
Additional Locations (2)
67a743f to e83e445
…error-type-authentication
        await graphiti.save_gotcha(gotcha)

    return result
Bug: The async function get_graphiti_memory is called without await in multiple locations, which will cause a runtime AttributeError when methods are called on the resulting coroutine object.
Severity: HIGH
Suggested Fix
Prefix the calls to get_graphiti_memory with the await keyword in agents/tools_pkg/tools/memory.py, memory/patterns.py, and memory/codebase_map.py. For example, change memory = get_graphiti_memory(...) to memory = await get_graphiti_memory(...).
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: apps/backend/memory/graphiti_helpers.py#L152
Potential issue: The function `get_graphiti_memory` in `graphiti_helpers.py` was
converted to be `async`, but several call sites were not updated with `await`. For
example, in `_save_to_graphiti_async`, the call `memory = get_graphiti_memory(...)` will
assign a coroutine object to the `memory` variable, not a `GraphitiMemory` instance.
Subsequent attempts to call methods on this variable, such as
`memory.save_codebase_discoveries(...)`, will result in a runtime `AttributeError:
'coroutine' object has no attribute 'save_codebase_discoveries'`. This same error
pattern exists in `memory/patterns.py` and `memory/codebase_map.py`.
The memory system for kanban tasks (built on the Python backend using Claude Agent SDK runners) is failing to save memories, while the GitHub PR system successfully saves memories using the same underlying Graphiti infrastructure. This task involves debugging the memory save failure, fixing the root cause, and implementing comprehensive Sentry error monitoring across the memory system to catch future production issues.
Summary by CodeRabbit
Bug Fixes
Tests