Fix errors across whole Clairvoyance #402
Conversation
Walkthrough
This PR introduces comprehensive type annotations and defensive programming throughout the codebase. Changes include expanded type hints (Optional, Union, Dict, List), conversion of juspay analytics functions to async, public handler signature updates in websocket_bot.py, and widespread None-guarding across multiple modules to improve robustness.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes. Areas requiring extra attention:
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
app/agents/voice/automatic/services/fal/smart_turn_analyzer.py (2)
22-24: Return type annotation is imprecise. The return type specifies Optional[object] for the second element, but the method actually returns Optional[ClientSession] (line 40). Since this PR focuses on comprehensive type annotations, the return type should be more specific. Apply this diff to fix the return type annotation:

     async def create_analyzer(
         self,
-    ) -> Tuple[Optional[FalSmartTurnAnalyzer], Optional[object]]:
+    ) -> Tuple[Optional[FalSmartTurnAnalyzer], Optional[ClientSession]]:
         """
47-47: Add type annotation to the session parameter. The session parameter lacks a type annotation. For consistency with this PR's objective of comprehensive type annotations throughout the codebase, it should be annotated as Optional[ClientSession]. Apply this diff to add the type annotation:

-    async def cleanup(self, session=None) -> None:
+    async def cleanup(self, session: Optional[ClientSession] = None) -> None:
         """

app/services/redis/client.py (1)
257-265: Missing await causes health check to always return True. Without await, ping() is never executed and the method always returns True regardless of actual connection state. This defeats the purpose of the health check.

     async def ping(self) -> bool:
         """Test Redis connection"""
         try:
             client = await self.get_client()
-            client.ping()
+            await client.ping()
             return True
         except Exception as e:
             logger.error(f"Redis PING error: {e}")
             return False

app/agents/voice/automatic/tools/breeze/utils.py (1)
283-305: Overlap detection is incorrect for ranges with unlimited (None) upper bounds. The current overlap check misses real overlaps when either existing_max or new_max is None (unlimited). For example:

- Existing rule: minimumOrderValue=0, maximumOrderValue=10
- New rule: minimumOrderValue=5, maximumOrderValue=None (unlimited)

These clearly overlap, but the elif new_max is None branch only checks new_min <= existing_min, so this case incorrectly reports "no overlap". Similarly, when existing_max is None, a new finite rule that straddles existing_min but starts below it can also be missed. Because this function is used to validate surcharge rules, this can allow overlapping rules through while telling the caller everything is fine. That's a functional correctness issue, not just a style concern.

A more robust approach is to normalize bounds to numbers and treat None as "infinity" for the upper bound:

-        # Simple overlap check
-        overlap_detected = False
-        if existing_max is None:  # Existing rule is unlimited
-            if new_min >= existing_min:
-                overlap_detected = True
-        elif new_max is None:  # New rule is unlimited
-            if new_min <= existing_min:
-                overlap_detected = True
-        else:
-            # Both have limits - check if they overlap
-            if not (new_max < existing_min or new_min > existing_max):
-                overlap_detected = True
+        # Simple overlap check, handling unlimited (None) upper bounds safely
+        overlap_detected = False
+        try:
+            new_min_val = float(new_min)
+            existing_min_val = float(existing_min)
+            new_max_val = float("inf") if new_max is None else float(new_max)
+            existing_max_val = (
+                float("inf") if existing_max is None else float(existing_max)
+            )
+        except (TypeError, ValueError):
+            logger.warning(
+                "Invalid numeric boundaries while checking overlaps: "
+                f"new=({new_min}, {new_max}), "
+                f"existing=({existing_min}, {existing_max})"
+            )
+            continue
+
+        if not (new_max_val < existing_min_val or new_min_val > existing_max_val):
+            overlap_detected = True

This keeps the intent ("overlap if intervals intersect at all") while correctly handling unlimited ranges and mixed int/float/string numerics coming from JSON.
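To make that normalize-then-compare idea concrete, here is a small self-contained sketch using the example ranges from this comment; the ranges_overlap helper is named for illustration only and is not code from the PR:

from typing import Optional


def ranges_overlap(
    new_min: float,
    new_max: Optional[float],
    existing_min: float,
    existing_max: Optional[float],
) -> bool:
    # Treat a missing upper bound as +infinity, then apply the standard
    # interval-intersection test: the ranges overlap unless one ends
    # strictly before the other starts.
    new_max_val = float("inf") if new_max is None else float(new_max)
    existing_max_val = float("inf") if existing_max is None else float(existing_max)
    return not (new_max_val < existing_min or float(new_min) > existing_max_val)


# Existing rule 0-10 vs. a new unlimited rule starting at 5: overlap detected.
print(ranges_overlap(5, None, 0, 10))   # True
# Disjoint ranges 0-10 and 20-30: no overlap.
print(ranges_overlap(20, 30, 0, 10))    # False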
🧹 Nitpick comments (41)
app/services/slack/alert.py (1)
48-57: blocks annotation is correct; consider a stronger Slack block type (optional)
blocks: List[Dict[str, Any]] correctly reflects the heterogeneous Slack block payload. If you want stricter typing later, you could introduce a TypedDict or alias (e.g. SlackBlock = Dict[str, Any]) and use List[SlackBlock] for readability, but it's not required for this PR.

app/agents/voice/breeze_buddy/workflows/order_confirmation/utils.py (1)
133-137: Defensive None-check looks fine; consider adding more context to the log (optional). The new guard cleanly short-circuits and is consistent with the OutputAudioRawFrame | None return type. Given AudioSegment.raw_data is typically bytes, this is mostly a defensive check, which is okay. You might optionally include audio_path in the warning message to ease debugging when this ever triggers.

app/agents/voice/breeze_buddy/stt/__init__.py (1)
138-146: Good defensive Soniox → Google fallback; consider avoiding duplicate Google config. The new guard on SONIOX_API_KEY is a solid improvement and prevents misconfigured Soniox from breaking STT; returning a Google service with the same params as the default path keeps behavior predictable. One minor follow-up you might consider (non-blocking) is:

- Move the if not SONIOX_API_KEY check before constructing soniox_params / parsing context to avoid that work when we'll immediately fall back, and/or
- Factor out a small helper (e.g. _google_stt_for_breeze_buddy()) so the two GoogleSTTService constructions share a single definition.

app/agents/voice/automatic/features/charts/conversation.py (1)
260-267: Average duration calculation is fine, but the None-filtering can be simplified. completed_turns already filters out duration_ms is None, so the additional if turn.duration_ms is not None in valid_durations is redundant, and you're also evaluating the property twice per turn. You can collapse this into a single list:

-        # Calculate average turn duration
-        completed_turns = [turn for turn in self.turns if turn.duration_ms is not None]
-        valid_durations = [
-            turn.duration_ms for turn in completed_turns if turn.duration_ms is not None
-        ]
-        avg_turn_duration_ms = (
-            sum(valid_durations) / len(valid_durations) if valid_durations else None
-        )
+        # Calculate average turn duration
+        durations = [
+            turn.duration_ms for turn in self.turns if turn.duration_ms is not None
+        ]
+        avg_turn_duration_ms = (
+            sum(durations) / len(durations) if durations else None
+        )

app/database/__init__.py (1)
73-74: Good defensive guard to prevent silent failures. This explicit check ensures that if init_db_pool() returns without successfully creating the pool (e.g., missing env vars or KMS decryption failure), a clear RuntimeError is raised rather than allowing an AttributeError when attempting pool.acquire() on line 76.

Note: Static analysis flagged TRY003 (long exception message), but the message here is concise and appropriate. If you want to be extra pedantic, you could define a custom DatabasePoolNotInitializedError exception class, but this is entirely optional.
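If you did want the custom exception, a minimal self-contained sketch could look like the following; the require_pool helper and the module-level pool stand-in are assumptions for illustration, not code from the PR:

from typing import Optional


class DatabasePoolNotInitializedError(RuntimeError):
    """Raised when the database pool is used before init_db_pool() succeeded."""

    def __init__(self) -> None:
        super().__init__("Database pool is not initialized")


# Stand-in for the module-level pool that init_db_pool() would normally set.
pool: Optional[object] = None


def require_pool() -> object:
    # Fail fast with a typed error instead of an AttributeError on a None pool.
    if pool is None:
        raise DatabasePoolNotInitializedError()
    return pool


try:
    require_pool()
except DatabasePoolNotInitializedError as exc:
    print(f"caught: {exc}")  # caught: Database pool is not initialized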
app/agents/voice/automatic/tts/__init__.py (1)
50-52: ELEVENLABS API key guard is a good fail-fast; consider minor lint clean-up. The explicit check and early ValueError when ELEVENLABS_API_KEY is missing is a solid defensive guard and pairs well with the log entry.

Static analysis (TRY003) may still complain about the exception message; if you care about a clean lint run, you could either:

- shorten the message (e.g., "ELEVENLABS_API_KEY is required for RHEA voice") and rely on the log for the human-friendly text, or
- add a brief # noqa: TRY003 if you consider the current message acceptable.

Functionally, this block looks good as-is.
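Purely for illustration, the two options might look roughly like this; the logger wording and standalone functions are assumptions for the example, not the module's actual code:

import logging

logger = logging.getLogger(__name__)
ELEVENLABS_API_KEY = None  # stand-in for the real config value


def check_key_short_message() -> None:
    # Option 1: short exception text, with the human-friendly detail in the log.
    if not ELEVENLABS_API_KEY:
        logger.error("ELEVENLABS_API_KEY is not set; the RHEA voice cannot be configured")
        raise ValueError("ELEVENLABS_API_KEY is required for RHEA voice")


def check_key_keep_message() -> None:
    # Option 2: keep the longer message and silence the lint rule inline.
    if not ELEVENLABS_API_KEY:
        raise ValueError("ELEVENLABS_API_KEY is missing but is required for the RHEA ElevenLabs voice")  # noqa: TRY003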
app/agents/voice/automatic/tools/internet/search.py (1)
45-46: Clarify variable naming. The variable messages contains just the query string, not a collection of messages. Consider renaming it to content or keeping it as query to better reflect what it contains. Apply this diff to improve clarity:

-    # Use proper message format for Google API
-    messages = query
+    # Pass query directly as content for Google API
+    content = query
     config = GenerateContentConfig(
         tools=gemini_llm._tools,
         system_instruction=system_instruction,
     )
     response = await gemini_llm._client.aio.models.generate_content_stream(
         model=gemini_llm._model_name,
-        contents=messages,
+        contents=content,
         config=config,
     )

app/agents/voice/automatic/services/mem0/memory.py (3)
354-368: Simplify attribute access pattern.Per Ruff B009,
getattrwith a constant attribute provides no safety benefit over direct access. Additionally, thehasattr+getattrpattern can be simplified usinggetattrwith a default value.Apply this diff:
- # Add string parameters with proper type checking - if hasattr(self, "user_id") and getattr(self, "user_id"): - user_id_val = getattr(self, "user_id") - if isinstance(user_id_val, str): - add_kwargs["user_id"] = user_id_val - - if hasattr(self, "agent_id") and getattr(self, "agent_id"): - agent_id_val = getattr(self, "agent_id") - if isinstance(agent_id_val, str): - add_kwargs["agent_id"] = agent_id_val - - if hasattr(self, "run_id") and getattr(self, "run_id"): - run_id_val = getattr(self, "run_id") - if isinstance(run_id_val, str): - add_kwargs["run_id"] = run_id_val + # Add string parameters with proper type checking + user_id_val = getattr(self, "user_id", None) + if isinstance(user_id_val, str) and user_id_val: + add_kwargs["user_id"] = user_id_val + + agent_id_val = getattr(self, "agent_id", None) + if isinstance(agent_id_val, str) and agent_id_val: + add_kwargs["agent_id"] = agent_id_val + + run_id_val = getattr(self, "run_id", None) + if isinstance(run_id_val, str) and run_id_val: + add_kwargs["run_id"] = run_id_val
380-384: Useasyncio.get_running_loop()in async context.
asyncio.get_event_loop()is deprecated within running async functions (Python 3.10+). Since_store_messages_asyncis already an async method, useget_running_loop()which is guaranteed to return the currently running loop.# Run in an executor to prevent blocking - loop = asyncio.get_event_loop() + loop = asyncio.get_running_loop() await loop.run_in_executor( None, lambda: self.memory_client.add(**add_kwargs) )
370-378: Redundant import inside try block.
Memoryis already imported at the top of the file (line 24), and failure there raises an exception before this code runs. The local import and itsexcept ImportErrorfallback are unnecessary.- # Handle output_format for non-Memory clients - try: - from mem0 import Memory as Mem0Memory - - if not isinstance(self.memory_client, Mem0Memory): - add_kwargs["output_format"] = "v1.1" - except ImportError: - # If we can't import Memory, assume we need output_format - add_kwargs["output_format"] = "v1.1" + # Handle output_format for non-Memory clients + if not isinstance(self.memory_client, Memory): + add_kwargs["output_format"] = "v1.1"app/agents/voice/breeze_buddy/managers/calls.py (3)
114-119: Silent failure may cause channel tracking inconsistency.When
channelsis None, the function logs a warning but returns normally. The caller (process_backlog_leads) will proceed to use this number for a call without the channel being incremented. Although_get_available_numbershould filter out numbers with None channels, this silent failure pattern could cause issues if code paths change.Consider returning a boolean or raising an exception to indicate acquisition failure:
-async def _acquire_number(number: OutboundNumber): +async def _acquire_number(number: OutboundNumber) -> bool: """ Marks an outbound number as in use. + Returns True if successful, False otherwise. """ if number.provider == CallProvider.TWILIO: await update_outbound_number_status(number.id, OutboundNumberStatus.IN_USE) + return True elif number.provider == CallProvider.EXOTEL: if number.channels is not None: await update_outbound_number_channels(number.id, number.channels + 1) + return True else: logger.warning( f"Cannot update channels for number {number.id}: channels value is None" ) + return False + return False
129-137: Missing logging when outbound number is not found.The function handles the case when
channelsis None but silently ignores whenget_outbound_number_by_idreturns None. Consider adding a warning for this case:elif provider == CallProvider.EXOTEL: outbound_number = await get_outbound_number_by_id(number_id) if outbound_number and outbound_number.channels is not None: await update_outbound_number_channels( number_id, outbound_number.channels - 1 ) elif outbound_number: logger.warning( f"Cannot update channels for number {number_id}: channels value is None" ) + else: + logger.warning( + f"Cannot release number {number_id}: outbound number not found" + )
241-246: Consider logging when outbound number lookup fails in cleanup.The guard for
outbound_number_idis appropriate since stuck leads may not have an assigned number. However, ifoutbound_number_idexists butget_outbound_number_by_idreturns None, this fails silently which could make debugging harder.if locked_lead.outbound_number_id: outbound_number = await get_outbound_number_by_id( locked_lead.outbound_number_id ) if outbound_number: await _release_number(outbound_number.id, outbound_number.provider) + else: + logger.warning( + f"Could not find outbound number {locked_lead.outbound_number_id} for stuck lead cleanup" + )app/services/langfuse/tasks/score_monitor/score.py (1)
321-325: Same encapsulation concern as lines 112-116.This defensive guard has the same issue with accessing the private
_http_clientattribute. Consider using a public interface method from the client instead.app/helpers/automatic/process_pool.py (1)
303-316: Stdout guard is fine; consider treating EOF as termination to avoid tight loopThe new
stdoutNone-check with a warning +breakis consistent with other error paths in this monitor and will trigger the best-effort cleanup in thefinallyblock, which is fine.One follow-up improvement you might consider (non-blocking for this PR): when
readline()returnsb""(EOF), the current loop keeps callingreadline()and can spin. You can treat EOF as “monitor done” and exit the loop to avoid potential busy looping and rely on the existing cleanup logic:- line = await asyncio.wait_for( - voice_process.process.stdout.readline(), timeout=2.0 - ) - if line: - line = line.decode("utf-8", errors="replace").strip() - - if "SESSION_ENDED" in line: + line = await asyncio.wait_for( + voice_process.process.stdout.readline(), timeout=2.0 + ) + if not line: + # EOF: stop monitoring; finalizer below will handle cleanup checks. + break + + line = line.decode("utf-8", errors="replace").strip() + + if "SESSION_ENDED" in line: logger.info( f"Process {voice_process.process_id[:8]} session ended, returning to pool" ) await self._return_process_to_pool(voice_process) - elif line: - payload = (voice_process.process_id, line) + elif line: + payload = (voice_process.process_id, line)This keeps the new guard while making the monitor more robust under EOF conditions.
app/api/routers/systems.py (2)
57-71: Remove unnecessary defensive checks.The
isinstance(checks_dict, dict)guards on lines 61 and 69 are unnecessary sincehealth_status["checks"]is always a dict.Apply this diff:
- checks_dict = health_status["checks"] - if isinstance(checks_dict, dict): - checks_dict["set"] = { + health_status["checks"]["set"] = { "status": "ok", "latency_ms": round((time.time() - start) * 1000, 2), - } + }- checks_dict = health_status["checks"] - if isinstance(checks_dict, dict): - checks_dict["set"] = {"status": "failed", "error": str(e)} + health_status["checks"]["set"] = {"status": "failed", "error": str(e)}
74-101: Remove unnecessary defensive checks.The
isinstance(checks_dict, dict)guards on lines 80 and 99 are unnecessary sincehealth_status["checks"]is always a dict.Apply this diff:
- checks_dict = health_status["checks"] - if isinstance(checks_dict, dict): - if retrieved_value == test_value: - checks_dict["get"] = { - "status": "ok", - "latency_ms": latency, - "value_match": True, - } - else: - health_status["status"] = "degraded" - checks_dict["get"] = { - "status": "warning", - "latency_ms": latency, - "value_match": False, - "expected": test_value, - "actual": retrieved_value, - } + if retrieved_value == test_value: + health_status["checks"]["get"] = { + "status": "ok", + "latency_ms": latency, + "value_match": True, + } + else: + health_status["status"] = "degraded" + health_status["checks"]["get"] = { + "status": "warning", + "latency_ms": latency, + "value_match": False, + "expected": test_value, + "actual": retrieved_value, + }- checks_dict = health_status["checks"] - if isinstance(checks_dict, dict): - checks_dict["get"] = {"status": "failed", "error": str(e)} + health_status["checks"]["get"] = {"status": "failed", "error": str(e)}app/database/accessor/breeze_buddy/lead_call_tracker.py (1)
376-383: Repeated decode/filter pattern could be centralized for reuseThis function uses the same “iterate → decode → drop
None→ return list” pattern asget_leads_based_on_status_and_next_attemptandget_all_lead_call_trackers. To reduce duplication and keep behavior consistent, consider extracting a small helper, e.g.:def _decode_leads(result: List[asyncpg.Record]) -> List[LeadCallTracker]: return [ lead for row in result if (lead := decode_lead_call_tracker(row)) is not None ]and then:
if result: return _decode_leads(result) return []You could gradually adopt the same helper in other accessors in this file that currently don’t guard against a
Nonedecode, without changing their external signatures. Based on learnings, this can also be staged into a later cleanup PR if you don’t want to broaden the scope here.app/agents/voice/automatic/tools/dummy/analytics.py (1)
137-149: Consider using consistent naming with other analytics modules.Other files in this PR use
TIME_INPUT_PROPERTIESandTIME_INPUT_REQUIRED(uppercase constants), while this file usestime_propertiesandtime_required(lowercase). For consistency across the codebase, consider aligning the naming convention.-# Define the properties and required fields directly -time_properties = { +# Define the properties and required fields directly +TIME_INPUT_PROPERTIES = { "startTime": { "type": "string", "description": "Start time in ISO format (e.g., 2023-01-01T00:00:00Z)", }, "endTime": { "type": "string", "description": "End time in ISO format (e.g., 2023-01-01T01:00:00Z)", }, } -time_required = ["startTime", "endTime"] +TIME_INPUT_REQUIRED = ["startTime", "endTime"]app/agents/voice/automatic/tools/juspay/analytics.py (1)
601-602: Redundant validation -start_dateis already validated.
start_dateis extracted fromrequired_fieldsat line 519, and missing required fields are already checked at lines 490-496. If we reach line 601,start_dateshould already be validated as non-empty. This check is harmless but redundant.- if not start_date: - raise ValueError("Start date is required") start_date_ist = ist.localize( datetime.strptime(start_date, "%Y-%m-%d %H:%M:%S") )app/agents/voice/automatic/data/dummy/acme/breeze_parser.py (1)
17-19: Overly broad return type annotation.The return type
Optional[Union[Dict[str, Any], List[Any], str, bool, float]]is wider than what the function actually returns. Based onaggregate_complete_structureinaggregator.py(which returnsOptional[Dict[str, Any]]), this function only returnsDict[str, Any]orNone.Consider narrowing to match actual behavior:
def get_sales_breakdown( start_time: Optional[str] = None, end_time: Optional[str] = None -) -> Optional[Union[Dict[str, Any], List[Any], str, bool, float]]: +) -> Optional[Dict[str, Any]]:app/agents/voice/automatic/data/dummy/acme/juspay_parser.py (2)
44-53: Unnecessary index cast to int.The
indexvariable comes fromindiceswhich is returned byget_data_indices()and is alreadyList[int](seeaggregator.pylines 36-78). Theint(index)cast at line 45 is redundant.However, the defensive
isinstance(data, dict)check and numeric coercions (float(),int()) for dictionary values are good practices.for index in indices: - # Ensure index is an integer - index = int(index) if index < len(ACME_JUSPAY_DATA): data = ACME_JUSPAY_DATA[index]["overall_success_rate_data"] # Ensure data is a dictionary
58-67: Success rate averaging may be inaccurate for multi-day aggregation. The current calculation at line 59 uses a simple average of daily success rates. This doesn't account for volume weighting. For example, if day 1 has 100 attempts at 90% SR and day 2 has 1000 attempts at 80% SR, the simple average would be 85%, but the true aggregate SR should be ~80.9%.
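A quick arithmetic check of that example, written as a standalone sketch rather than the parser's actual code:

# Day 1: 100 attempts at 90% SR -> 90 successes
# Day 2: 1000 attempts at 80% SR -> 800 successes
attempts = [100, 1000]
rates = [90.0, 80.0]

simple_avg = sum(rates) / len(rates)
weighted_avg = sum(a * r for a, r in zip(attempts, rates)) / sum(attempts)

print(round(simple_avg, 2))    # 85.0
print(round(weighted_avg, 2))  # 80.91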
For dummy data this may be acceptable, but for accurate metrics consider:
return { - "success_rate": round(sum(success_rates) / len(success_rates), 2), + "success_rate": round( + (successful_transactions / total_attempts * 100) if total_attempts > 0 else 0, 2 + ), "total_attempts": total_attempts,app/agents/voice/automatic/tools/system/utils.py (1)
2-2: Redundant cast on a list that's already typed.The
cast(list, standard_tools_list)on line 40 is unnecessary becausestandard_tools_listis already explicitly a list (defined on line 38). The cast doesn't add type safety here and can be removed.Apply this diff to remove the redundant cast:
-from typing import cast - from datetime import datetime +from typing import cast-tools = ToolsSchema(standard_tools=cast(list, standard_tools_list)) +tools = ToolsSchema(standard_tools=standard_tools_list)Also applies to: 40-40
app/agents/voice/automatic/features/charts/chart_tools.py (1)
132-140: LGTM - Early session validation improves robustness.The early session_id validation pattern prevents errors from propagating and ensures clear error reporting when session context is missing. The fallback to
get_current_session_id()provides good recovery capability.The same validation pattern is repeated in all 4 chart generation functions (lines 132-140, 225-233, 315-323, 438-446). Consider extracting this to a helper function to reduce duplication:
def _validate_session_id(params: FunctionCallParams, function_name: str) -> Optional[str]: """Extract and validate session_id from params.""" session_id = params.arguments.get("session_id") or get_current_session_id() if not session_id or not isinstance(session_id, str): return None return session_idThis would simplify all 4 functions and ensure consistent validation behavior.
app/agents/voice/automatic/tools/breeze/configuration.py (1)
339-340: Redundant None filtering in headers.Lines 339-340 filter out None values from headers, but the Authorization header was already conditionally added only when
AWS_VAYU_WRITE_API_KEYis not None (see the pattern on lines 257-258, 468-469). This filtering is redundant since "Content-Type" is always a string.Consider removing the redundant filter:
headers = { "Content-Type": "application/json", "Authorization": AWS_VAYU_WRITE_API_KEY, } - # Filter out None values from headers - headers = {k: v for k, v in headers.items() if v is not None}And instead use conditional insertion like elsewhere:
headers = { "Content-Type": "application/json", } + if AWS_VAYU_WRITE_API_KEY is not None: + headers["Authorization"] = AWS_VAYU_WRITE_API_KEYapp/agents/voice/automatic/tools/breeze/analytics.py (1)
97-99: Defensive but redundant given early guard.The conditional check is consistent with the
sessionIdpattern below, but functionally redundant since line 29's guard ensuresbreeze_tokenis truthy before reaching this code.app/agents/voice/automatic/features/summarizer/context_summarizer.py (1)
108-117: Redundant type check for content.Line 110 already verifies
isinstance(msg.get("content"), str), making the nested check at line 114 unnecessary.Apply this diff to simplify:
for msg in self._messages: if ( isinstance(msg, dict) and msg.get("role") == "system" and isinstance(msg.get("content"), str) and "Previous conversation summary:" in msg.get("content", "") ): content = msg.get("content", "") - if isinstance(content, str): - previous_summary = content.replace( - "Previous conversation summary:", "" - ).strip() + previous_summary = content.replace( + "Previous conversation summary:", "" + ).strip() breakapp/agents/voice/breeze_buddy/workflows/order_confirmation/websocket_bot.py (6)
273-275: Consider validating ELEVENLABS_API_KEY earlier in initialization. While the None check is good defensive programming, it occurs after substantial work (database queries, order validation, transport setup). If this is a required configuration, validating it at the start of run() would prevent wasted resources on calls that will inevitably fail.

Based on learnings, error handling improvements can be deferred to future refactoring efforts if needed.
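A hedged sketch of what that fail-fast check might look like at the top of run(); the class name and config access here are assumptions for illustration, not the actual websocket_bot.py code:

import os
from typing import Optional


class OrderConfirmationBot:
    def __init__(self) -> None:
        self.elevenlabs_api_key: Optional[str] = os.getenv("ELEVENLABS_API_KEY")

    async def run(self) -> None:
        # Validate required configuration before any expensive work
        # (database queries, order validation, transport setup).
        if not self.elevenlabs_api_key:
            raise RuntimeError("ELEVENLABS_API_KEY is not configured; aborting call setup")
        # ... expensive setup and the rest of the call flow would follow here ...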
293-296: Type cast may be unnecessary if dict structure matches expected type.The explicit
cast()toList[ChatCompletionMessageParam]might be redundant if the dictionary structure already conforms to the type. Consider whether the cast is strictly required by the OpenAI library, or if the type checker can infer it correctly without explicit casting.
477-491: Good defensive checks, but consider more specific exception handling.The None checks for
self.transportandtransport.output()are excellent defensive programming. However, the bareExceptioncatch at line 493 is overly broad. Consider catching more specific exceptions (e.g.,RuntimeError,IOError) to avoid masking unexpected errors.
463-469: Unusedargsparameter in handlers that don't extract values.These handlers accept
args: dictfor signature consistency but don't use it. If this uniformity is required by the flow manager interface, consider using_argsor_to signal intentionally unused parameters, or add a type ignore comment to suppress linter warnings.Also applies to: 498-498
982-995: Thorough validation with room for refinement.The defensive checks prevent cryptic downstream errors, which is excellent. Consider these optional improvements:
- Use
TypeErrorinstead ofValueErrorfor type validation (lines 984, 989, 994)- Consider extracting error messages to constants or using shorter inline messages
These are minor stylistic improvements and don't affect correctness.
1002-1016: Type casts may be excessive given runtime checks.The
isinstance()checks already ensure correct types at runtime. The subsequentcast()calls are primarily for the type checker but add verbosity. Consider whether the type annotations onNodeConfigcan be relaxed (e.g., usingList[Any]instead of specific types) to avoid the need for explicit casting.app/api/routers/breeze_buddy.py (2)
116-131: String-normalizing call_sid and recording URL looks good; minor duplication you can optionally DRY up. Normalizing call_sid and provider_recording_url to str before enqueuing update_call_recording is a good defensive move and matches the update_call_recording(call_id: str, provider_recording_url: str, provider: str) signature.

If you want to reduce duplication between the GET and POST handlers, you could factor the shared logic into a tiny helper, e.g. something that takes the raw mapping and provider, extracts CallSid/recording URL, logs, and returns normalized strings (or None) for use in background_tasks.add_task. Not urgent, just a readability win.

Also applies to: 153-168
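One possible shape for that shared helper, sketched under the assumption that both handlers receive a CallSid/RecordingUrl-style mapping; the names below are illustrative, not the router's actual code:

import logging
from typing import Mapping, Optional, Tuple

logger = logging.getLogger(__name__)


def extract_recording_params(
    payload: Mapping[str, object], provider: str
) -> Optional[Tuple[str, str, str]]:
    # Normalize CallSid and the recording URL to str, or return None if either is missing.
    call_sid = payload.get("CallSid")
    recording_url = payload.get("RecordingUrl")
    if not call_sid or not recording_url:
        logger.warning("Missing CallSid or RecordingUrl in %s callback", provider)
        return None
    return str(call_sid), str(recording_url), provider


# Illustrative usage in a handler:
# params = extract_recording_params(dict(request.query_params), "TWILIO")
# if params:
#     background_tasks.add_task(update_call_recording, *params)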
417-429: Enum-based provider validation in WebSocket handler is solid; consider minor cleanups. Using CallProvider(service_provider.upper()) to validate service_provider and closing the WebSocket with code 1003 on invalid values is a clear improvement, and passing the enum into get_voice_provider matches its CallProvider-typed argument.

Two minor, optional tweaks you might consider:

- Move from app.schemas import CallProvider to the module top to avoid in-function imports and keep imports centralized.
- If handle_websocket only cares about a plain string provider name, you could pass call_provider_enum.value (or str(call_provider_enum)) instead of the enum instance for clarity, even though CallProvider is a str subclass and will work as-is.

Both are purely cosmetic; current behavior looks correct.
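The "str subclass" point is easy to verify with a tiny standalone enum modeled on the CallProvider described in this review (illustrative only, not app/schemas.py itself):

from enum import Enum


class CallProvider(str, Enum):
    TWILIO = "TWILIO"
    EXOTEL = "EXOTEL"


provider = CallProvider("twilio".upper())  # validates the raw value, raises ValueError if unknown
print(isinstance(provider, str))           # True: the enum member already behaves as a str
print(provider.value)                      # "TWILIO": the plain string, if you prefer passing that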
app/main.py (3)
144-146: Dynamic callback wiring with type: ignore is reasonable; consider future typing improvements. Hooking room_cleanup_callback and session_cleanup_callback onto the pool at startup cleanly solves the circular-import concern and matches the expected async callable shape, so behavior looks correct. If you control the VoiceAgentPool type, consider adding these attributes (or a set_callbacks(...) method / Protocol) so you can drop the # type: ignore and keep static typing stricter in the long run. Based on learnings, this can stay as-is for now and be tightened later if desired.
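If you do tighten this later, one hedged option is a small Protocol describing the callback attributes; this is only a sketch of the idea, not the real VoiceAgentPool interface:

from typing import Awaitable, Callable, Optional, Protocol

CleanupCallback = Callable[[str], Awaitable[None]]


class SupportsCleanupCallbacks(Protocol):
    room_cleanup_callback: Optional[CleanupCallback]
    session_cleanup_callback: Optional[CleanupCallback]


def wire_callbacks(
    pool: SupportsCleanupCallbacks,
    room_cb: CleanupCallback,
    session_cb: CleanupCallback,
) -> None:
    # With the attributes declared on the pool's type, no type: ignore is needed here.
    pool.room_cleanup_callback = room_cb
    pool.session_cleanup_callback = session_cb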
317-321:stdinguard is good; prefer a more specific exception and possibly discard bad workersThe explicit
stdincheck avoids aNonedereference and correctly drives the code into the fallback path when configuration fails. To make this failure mode easier to distinguish (and to align with TRY002/TRY003), consider raising a more specific exception instead of a bareException, e.g.:- else: - raise Exception("Process stdin is not available") + else: + raise RuntimeError("Voice process stdin is not available")You may also want to treat “no stdin” as a permanently bad process and have the pool discard/terminate that worker instead of returning it, so the same broken instance isn’t repeatedly checked out.
386-395: Command argument construction for list values looks solid; Ruff-suggested style tweak is optionalFiltering out
Noneentries and skipping empty lists makes the CLI args cleaner and safer without changing semantics. If you want to satisfy Ruff’s RUF005 suggestion and slightly simplify the extend call, you can do:- if filtered_value: - cmd.extend([arg_name] + filtered_value) + if filtered_value: + cmd.extend([arg_name, *filtered_value])This is purely stylistic; behavior remains the same.
app/agents/voice/automatic/services/smart_turn/local_smart_turn.py (1)
19-25: Clarify the intent of thecurrent_sessioncommentThe comment suggests “storing the session reference without type annotation to avoid override issues”, but the code only reads
_sessioninto a local variable and logs if it isNone. To avoid confusion for future readers, consider tightening the comment to match the actual behavior.For example:
- # Store the session reference without type annotation to avoid override issues - # This allows us to handle None values safely - current_session = getattr(self, "_session", None) + # Inspect the base-class session without assuming a concrete type; log if missing. + current_session = getattr(self, "_session", None)app/agents/voice/automatic/tools/breeze/utils.py (1)
405-452: Numeric guards in STEP 1/2 improve robustness; consider tightening a couple of edge casesThe added float conversion and try/except in STEP 1 and STEP 2 are good improvements, but there are two edge behaviors worth revisiting:
STEP 1 only guards
next_min, STEP 0 can still blow up on non‑numeric minsRight now:
first_rule_min = sorted_rules[0].get("minimumOrderValue", 0) if first_rule_min > 0: ...If
minimumOrderValuearrives as a string (e.g."500"from JSON),first_rule_min > 0will raise aTypeError. You already normalizenext_minwithfloat(next_min)in STEP 1; mirroring that here would make the behavior consistent:
- first_rule_min = sorted_rules[0].get("minimumOrderValue", 0)
- if first_rule_min > 0:
- first_rule_min = sorted_rules[0].get("minimumOrderValue", 0)
- try:
first_rule_min_val = float(first_rule_min)- except (TypeError, ValueError):
logger.warning(f"Invalid first_rule_min value: {first_rule_min}, treating as 0")first_rule_min_val = 0.0- if first_rule_min_val > 0:
# Create no-surcharge rule from ₹0 to just before first rule starts
no_surcharge_rule = surcharge_rule_template(
payment_type,
0,
first_rule_min - 0.01,
first_rule_min_val - 0.01,
STEP 2 treats an unparseable
original_maxas0.0, which can create odd rangesIn STEP 2, if
float(original_max)fails you log a warning but then:stored_max = 0.0 last_rule["maximumOrderValue"] = stored_max - 0.01 # -0.01 ... unlimited_rule = surcharge_rule_template(payment_type, stored_max, None, ...)This yields a last rule with
maximumOrderValue = -0.01and an unlimited rule starting at0.0, which is almost certainly not what the user intended and can introduce a negative range.A safer behavior in this failure case is to skip adjusting the last rule and not create the unlimited rule, effectively falling back to the user-specified boundaries:
if original_max is not None:# Store original max for creating the unlimited rule, ensuring it's a floattry:stored_max = float(original_max)except (ValueError, TypeError):logger.warning(f"Invalid original_max value: {original_max}, treating as 0")stored_max = 0.0# Adjust the last rule maxlast_rule["maximumOrderValue"] = stored_max - 0.01# Create unlimited rule for remaining range using same payment method settingsunlimited_rule = surcharge_rule_template(payment_type,stored_max,None,0,"AMOUNT",payment_method_type,)result.append(unlimited_rule)
if original_max is not None:# Store original max for creating the unlimited rule, ensuring it's a floattry:stored_max = float(original_max)except (ValueError, TypeError):logger.warning(f"Invalid original_max value: {original_max}, ""skipping auto-adjustment of the last rule")stored_max = Noneif stored_max is not None:# Adjust the last rule maxlast_rule["maximumOrderValue"] = stored_max - 0.01# Create unlimited rule for remaining range using same payment method settingsunlimited_rule = surcharge_rule_template(payment_type,stored_max,None,0,"AMOUNT",payment_method_type,)result.append(unlimited_rule)These changes keep the new defensive behavior but avoid surprising negative ranges or crashes if the incoming rule data is slightly malformed.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (43)
app/agents/voice/automatic/__init__.py(7 hunks)app/agents/voice/automatic/data/dummy/acme/breeze_parser.py(5 hunks)app/agents/voice/automatic/data/dummy/acme/juspay_parser.py(10 hunks)app/agents/voice/automatic/features/charts/chart_tools.py(4 hunks)app/agents/voice/automatic/features/charts/conversation.py(1 hunks)app/agents/voice/automatic/features/hitl/hitl.py(4 hunks)app/agents/voice/automatic/features/llm_wrapper.py(2 hunks)app/agents/voice/automatic/features/summarizer/context_summarizer.py(7 hunks)app/agents/voice/automatic/services/fal/smart_turn_analyzer.py(2 hunks)app/agents/voice/automatic/services/filters/krisp/noise.py(7 hunks)app/agents/voice/automatic/services/mem0/memory.py(6 hunks)app/agents/voice/automatic/services/smart_turn/local_smart_turn.py(1 hunks)app/agents/voice/automatic/stt/__init__.py(2 hunks)app/agents/voice/automatic/tools/__init__.py(2 hunks)app/agents/voice/automatic/tools/breeze/analytics.py(3 hunks)app/agents/voice/automatic/tools/breeze/configuration.py(10 hunks)app/agents/voice/automatic/tools/breeze/utils.py(3 hunks)app/agents/voice/automatic/tools/dummy/acme_analytics.py(2 hunks)app/agents/voice/automatic/tools/dummy/analytics.py(1 hunks)app/agents/voice/automatic/tools/internet/search.py(2 hunks)app/agents/voice/automatic/tools/juspay/analytics.py(11 hunks)app/agents/voice/automatic/tools/system/utils.py(2 hunks)app/agents/voice/automatic/tts/__init__.py(2 hunks)app/agents/voice/automatic/utils/conversation_manager.py(2 hunks)app/agents/voice/breeze_buddy/managers/calls.py(11 hunks)app/agents/voice/breeze_buddy/services/telephony/base_provider.py(2 hunks)app/agents/voice/breeze_buddy/stt/__init__.py(1 hunks)app/agents/voice/breeze_buddy/workflows/order_confirmation/utils.py(1 hunks)app/agents/voice/breeze_buddy/workflows/order_confirmation/websocket_bot.py(22 hunks)app/api/routers/breeze_buddy.py(7 hunks)app/api/routers/systems.py(2 hunks)app/core/logger/__init__.py(2 hunks)app/core/security/jwt.py(3 hunks)app/database/__init__.py(1 hunks)app/database/accessor/breeze_buddy/lead_call_tracker.py(5 hunks)app/database/queries/__init__.py(1 hunks)app/helpers/automatic/daily_room_pool.py(1 hunks)app/helpers/automatic/process_pool.py(2 hunks)app/main.py(3 hunks)app/services/aws/utils.py(2 hunks)app/services/langfuse/tasks/score_monitor/score.py(3 hunks)app/services/redis/client.py(6 hunks)app/services/slack/alert.py(3 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: Devansh-1218
Repo: juspay/clairvoyance PR: 255
File: app/core/config.py:218-224
Timestamp: 2025-09-26T06:08:27.011Z
Learning: In the clairvoyance project, Devansh-1218 prefers to defer error handling and fail-fast patterns to future refactoring efforts rather than implementing them immediately, focusing first on core functionality integration.
📚 Learning: 2025-09-18T07:34:19.608Z
Learnt from: YasmeenOgo
Repo: juspay/clairvoyance PR: 229
File: app/agents/voice/automatic/features/llm_wrapper.py:36-91
Timestamp: 2025-09-18T07:34:19.608Z
Learning: The HITL (Human-in-the-Loop) implementation in app/agents/voice/automatic/features/llm_wrapper.py is working correctly and doesn't need modifications to the callback handling or error message formatting.
Applied to files:
app/agents/voice/automatic/features/llm_wrapper.py
app/agents/voice/automatic/features/hitl/hitl.py
🧬 Code graph analysis (14)
app/agents/voice/automatic/features/llm_wrapper.py (1)
app/agents/voice/automatic/features/summarizer/context_summarizer.py (1)
ContextSummarizer(15-217)
app/agents/voice/breeze_buddy/services/telephony/base_provider.py (2)
app/agents/voice/breeze_buddy/services/telephony/twilio/twilio.py (1)
make_call(81-108)app/agents/voice/breeze_buddy/services/telephony/exotel/exotel.py (1)
make_call(53-125)
app/agents/voice/automatic/utils/conversation_manager.py (1)
app/agents/voice/automatic/features/charts/conversation.py (1)
current_turn(202-204)
app/api/routers/systems.py (1)
app/services/redis/client.py (2)
ping(257-265)delete(237-245)
app/api/routers/breeze_buddy.py (3)
app/agents/voice/breeze_buddy/managers/calls.py (1)
update_call_recording(502-585)app/schemas.py (1)
CallProvider(16-18)app/agents/voice/breeze_buddy/services/telephony/utils.py (1)
get_voice_provider(13-20)
app/main.py (1)
app/helpers/automatic/session_manager.py (1)
session_cleanup_callback(59-79)
app/database/accessor/breeze_buddy/lead_call_tracker.py (1)
app/database/decoder/breeze_buddy/lead_call_tracker.py (1)
decode_lead_call_tracker(18-45)
app/agents/voice/automatic/data/dummy/acme/breeze_parser.py (1)
app/agents/voice/automatic/data/dummy/acme/aggregator.py (3)
aggregate_complete_structure(169-225)get_data_indices(37-79)get_nested_value(143-166)
app/agents/voice/automatic/data/dummy/acme/juspay_parser.py (1)
app/agents/voice/automatic/data/dummy/acme/aggregator.py (1)
get_data_indices(37-79)
app/agents/voice/breeze_buddy/managers/calls.py (6)
app/database/accessor/breeze_buddy/outbound_number.py (2)
update_outbound_number_channels(129-161)get_outbound_number_by_id(73-95)app/database/accessor/breeze_buddy/lead_call_tracker.py (1)
release_lock_on_lead_by_id(142-164)app/agents/voice/breeze_buddy/services/telephony/utils.py (1)
get_voice_provider(13-20)app/agents/voice/breeze_buddy/services/telephony/base_provider.py (1)
make_call(26-31)app/agents/voice/breeze_buddy/services/telephony/twilio/twilio.py (1)
make_call(81-108)app/agents/voice/breeze_buddy/services/telephony/exotel/exotel.py (1)
make_call(53-125)
app/agents/voice/automatic/__init__.py (1)
app/core/config/dynamic.py (1)
ENABLE_FAL_SMART_TURN(8-10)
app/agents/voice/automatic/tools/breeze/configuration.py (2)
app/agents/voice/automatic/tools/utils.py (1)
_paisa_to_rupees(49-53)app/agents/voice/automatic/tools/breeze/utils.py (2)
format_announcement_html(199-221)patch_shop_config(147-196)
app/agents/voice/breeze_buddy/workflows/order_confirmation/websocket_bot.py (3)
app/agents/voice/breeze_buddy/workflows/order_confirmation/utils.py (1)
load_audio(112-153)app/services/redis/client.py (1)
get(189-197)app/agents/voice/breeze_buddy/analytics/tracing_setup.py (1)
auto_trace(57-106)
app/agents/voice/automatic/tools/juspay/analytics.py (2)
app/agents/voice/automatic/tools/dummy/analytics.py (1)
get_sr_success_rate_by_time(22-24)app/services/redis/client.py (1)
get(189-197)
🪛 Ruff (0.14.7)
app/database/__init__.py
74-74: Avoid specifying long messages outside the exception class
(TRY003)
app/agents/voice/automatic/services/smart_turn/local_smart_turn.py
33-33: Do not call setattr with a constant attribute value. It is not any safer than normal property access.
Replace setattr with assignment
(B010)
app/agents/voice/automatic/utils/conversation_manager.py
499-499: Avoid specifying long messages outside the exception class
(TRY003)
501-501: Avoid specifying long messages outside the exception class
(TRY003)
app/helpers/automatic/process_pool.py
263-265: Avoid specifying long messages outside the exception class
(TRY003)
app/api/routers/systems.py
114-114: Do not catch blind exception: Exception
(BLE001)
app/agents/voice/automatic/stt/__init__.py
251-251: Avoid specifying long messages outside the exception class
(TRY003)
app/api/routers/breeze_buddy.py
246-246: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
app/services/redis/client.py
58-58: Avoid specifying long messages outside the exception class
(TRY003)
app/main.py
321-321: Abstract raise to an inner function
(TRY301)
321-321: Create your own exception
(TRY002)
321-321: Avoid specifying long messages outside the exception class
(TRY003)
393-393: Consider [arg_name, *filtered_value] instead of concatenation
Replace with [arg_name, *filtered_value]
(RUF005)
app/agents/voice/automatic/tts/__init__.py
52-52: Avoid specifying long messages outside the exception class
(TRY003)
app/agents/voice/automatic/services/mem0/memory.py
355-355: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
356-356: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
360-360: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
361-361: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
365-365: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
366-366: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
app/agents/voice/automatic/features/hitl/hitl.py
206-206: Consider moving this statement to an else block
(TRY300)
app/agents/voice/automatic/services/filters/krisp/noise.py
43-43: Do not catch blind exception: Exception
(BLE001)
app/agents/voice/breeze_buddy/workflows/order_confirmation/websocket_bot.py
463-463: Unused method argument: args
(ARG002)
469-469: Unused method argument: args
(ARG002)
475-475: Unused method argument: flow_manager
(ARG002)
475-475: Unused method argument: args
(ARG002)
493-493: Do not catch blind exception: Exception
(BLE001)
498-498: Unused method argument: args
(ARG002)
984-984: Avoid specifying long messages outside the exception class
(TRY003)
989-989: Prefer TypeError exception for invalid type
(TRY004)
989-989: Avoid specifying long messages outside the exception class
(TRY003)
994-994: Prefer TypeError exception for invalid type
(TRY004)
994-994: Avoid specifying long messages outside the exception class
(TRY003)
1019-1019: Unused method argument: args
(ARG002)
1026-1026: Unused method argument: args
(ARG002)
1057-1057: Unused method argument: args
(ARG002)
1063-1063: Unused method argument: args
(ARG002)
1071-1071: Unused method argument: args
(ARG002)
1173-1173: Unused method argument: args
(ARG002)
app/agents/voice/automatic/tools/juspay/analytics.py
602-602: Abstract raise to an inner function
(TRY301)
602-602: Avoid specifying long messages outside the exception class
(TRY003)
| f"SMART_TURN: Failed to initialize LocalSmartTurnAnalyzer: {e}" | ||
| ) | ||
| elif dynamic.ENABLE_FAL_SMART_TURN: | ||
| elif dynamic.ENABLE_FAL_SMART_TURN(): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Critical: Missing await on async function call.
The code calls dynamic.ENABLE_FAL_SMART_TURN() without await, but according to the context snippet from app/core/config/dynamic.py (lines 7-9), this is an async function that returns bool. This will cause the condition to always evaluate to a truthy coroutine object rather than the actual boolean value.
Apply this diff to fix the missing await:
- elif dynamic.ENABLE_FAL_SMART_TURN():
+ elif await dynamic.ENABLE_FAL_SMART_TURN():

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-    elif dynamic.ENABLE_FAL_SMART_TURN():
+    elif await dynamic.ENABLE_FAL_SMART_TURN():
🤖 Prompt for AI Agents
In app/agents/voice/automatic/__init__.py around line 323, the condition calls
the async function dynamic.ENABLE_FAL_SMART_TURN() without awaiting it, causing
a coroutine to be evaluated instead of a boolean; update the condition to await
dynamic.ENABLE_FAL_SMART_TURN() so it resolves to the actual bool, and if this
file’s containing function is not async, make that function async (or otherwise
ensure the call site supports awaiting).
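A tiny standalone demonstration of why the un-awaited call misbehaves (generic example, not the project's actual config helper):

import asyncio


async def enable_feature() -> bool:
    return False


async def main() -> None:
    coro = enable_feature()
    print(bool(coro))   # True: a coroutine object is always truthy, even though it would return False
    coro.close()        # close it so Python does not warn about a never-awaited coroutine

    print(await enable_feature())  # False: awaiting yields the real boolean


asyncio.run(main())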
        # Convert Dict messages to proper ChatCompletionMessageParam if needed
        converted_messages: List[ChatCompletionMessageParam] = []
        if messages:
            for msg in messages:
                converted_messages.append(msg)
Misleading comment: no actual conversion is performed.
The comment states "Convert Dict messages to proper ChatCompletionMessageParam if needed," but the loop simply appends messages without transformation. This is just a copy operation with type annotation changes.
Consider either:
- Removing the misleading comment, or
- Updating it to reflect reality: "Collect messages into typed list"
- # Convert Dict messages to proper ChatCompletionMessageParam if needed
+ # Collect messages into typed list
converted_messages: List[ChatCompletionMessageParam] = []
if messages:
for msg in messages:
converted_messages.append(msg)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-        # Convert Dict messages to proper ChatCompletionMessageParam if needed
+        # Collect messages into typed list
         converted_messages: List[ChatCompletionMessageParam] = []
         if messages:
             for msg in messages:
                 converted_messages.append(msg)
🤖 Prompt for AI Agents
In app/agents/voice/automatic/features/summarizer/context_summarizer.py around
lines 30-35, the comment claims messages are being converted to
ChatCompletionMessageParam but the code only appends them unchanged; update the
comment to accurately state the behavior (e.g., "Collect messages into typed
list") or remove the misleading comment, or if real conversion was intended
implement the proper dict-to-ChatCompletionMessageParam transformation before
appending so the list contains correctly typed/converted message objects.
        # Handle tools parameter properly
        tools_param = NotGiven()
        if tools is not None:
            if isinstance(tools, list):
                tools_param = (
                    ToolsSchema(standard_tools=[]) if not tools else NotGiven()
                )
            else:
                tools_param = tools
Tools parameter handling logic is inverted and will not pass tools to the API.
Lines 40-42 create an empty ToolsSchema when the tools list is empty, but use NotGiven() when the list contains tools. This prevents non-empty tools from being sent to client.messages.create() on line 46.
The correct behavior: when tools is a non-empty list, it should be passed as ToolsSchema(standard_tools=tools); when empty, use NotGiven() to omit the parameter entirely.
Apply this fix:
# Handle tools parameter properly
tools_param = NotGiven()
if tools is not None:
if isinstance(tools, list):
- tools_param = (
- ToolsSchema(standard_tools=[]) if not tools else NotGiven()
- )
+ tools_param = (
+ ToolsSchema(standard_tools=tools) if tools else NotGiven()
+ )
else:
tools_param = tools

🤖 Prompt for AI Agents
In app/agents/voice/automatic/features/summarizer/context_summarizer.py around
lines 36 to 45, the tools parameter handling is inverted: the current logic
creates a ToolsSchema when the incoming list is empty and uses NotGiven when the
list contains tools, causing non-empty tools to be omitted from
client.messages.create(). Fix it by making tools_param =
ToolsSchema(standard_tools=tools) when tools is a non-empty list, tools_param =
NotGiven() when tools is an empty list, and keep passing through tools unchanged
when it is not a list.
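The inversion is easy to see with plain lists standing in for the pipecat types; this is a generic illustration, not the wrapper's real code:

def pick_inverted(tools: list) -> object:
    # Buggy pattern: an empty list gets a schema, a non-empty list gets "not given".
    return {"standard_tools": []} if not tools else None


def pick_fixed(tools: list) -> object:
    # Fixed pattern: a non-empty list is forwarded, an empty list omits the parameter.
    return {"standard_tools": tools} if tools else None


print(pick_inverted(["search"]))  # None -> the provided tools are silently dropped
print(pick_fixed(["search"]))     # {'standard_tools': ['search']}
print(pick_fixed([]))             # None -> parameter omitted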
    if KRISP_AVAILABLE and ENABLE_KRISP_FILTER and krisp_audio is not None:
        try:
            krisp_audio.globalInit("", log_callback, krisp_audio.LogLevel.Off)
            SDK_VERSION = krisp_audio.getVersion()
            logger.info(
                f"Krisp Audio Python SDK Version: {SDK_VERSION.major}."
                f"{SDK_VERSION.minor}.{SDK_VERSION.patch}"
            )
            SAMPLE_RATES = {
                8000: krisp_audio.SamplingRate.Sr8000Hz,
                16000: krisp_audio.SamplingRate.Sr16000Hz,
                24000: krisp_audio.SamplingRate.Sr24000Hz,
                32000: krisp_audio.SamplingRate.Sr32000Hz,
                44100: krisp_audio.SamplingRate.Sr44100Hz,
                48000: krisp_audio.SamplingRate.Sr48000Hz,
            }
        except Exception as e:
            logger.warning(f"Failed to initialize Krisp: {e}")
            SAMPLE_RATES = {
                8000: 8000,
                16000: 16000,
                24000: 24000,
                32000: 32000,
                44100: 44100,
                48000: 48000,
            }
Track initialization state to prevent using uninitialized SDK.
Two concerns with the current implementation:

- Broad exception catch (line 43): While catching Exception provides graceful degradation, it may mask specific initialization failures. Consider catching more specific exceptions if the Krisp SDK documentation identifies them.
- Missing initialization state tracking: If globalInit() fails but the import succeeded, the code will still attempt to create Krisp sessions in start() (line 97). This occurs because _krisp_functional (line 88) only checks KRISP_AVAILABLE and ENABLE_KRISP_FILTER, not whether initialization actually succeeded.

Consider adding a class-level flag to track successful initialization:
class NoiseFilterFromKrisp(BaseAudioFilter):
# Initialize krisp-specific constants only if available and enabled
+ KRISP_INITIALIZED = False
if KRISP_AVAILABLE and ENABLE_KRISP_FILTER and krisp_audio is not None:
try:
krisp_audio.globalInit("", log_callback, krisp_audio.LogLevel.Off)
SDK_VERSION = krisp_audio.getVersion()
logger.info(
f"Krisp Audio Python SDK Version: {SDK_VERSION.major}."
f"{SDK_VERSION.minor}.{SDK_VERSION.patch}"
)
+ KRISP_INITIALIZED = True
SAMPLE_RATES = {

Then update __init__:
- self._krisp_functional = KRISP_AVAILABLE and ENABLE_KRISP_FILTER
+ self._krisp_functional = KRISP_AVAILABLE and ENABLE_KRISP_FILTER and self.KRISP_INITIALIZED

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.7)
43-43: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In app/agents/voice/automatic/services/filters/krisp/noise.py around lines
27-52, the Krisp init block needs an initialization-state flag so other code
knows whether globalInit() actually succeeded; add a module- or class-level
boolean (e.g. _krisp_initialized = False), set it to True after successful
globalInit() and SDKVersion logging, and set it to False inside the except block
(and keep SAMPLE_RATES fallback there). Then update the _krisp_functional check
(around line 88) to also require _krisp_initialized before allowing session
creation. Also, if the Krisp SDK documents specific init exceptions, catch those
instead of a bare Exception (or narrow the exception if possible) and ensure the
logger includes the exception details.
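The module-level flag idea, reduced to a generic self-contained sketch (the fake SDK call is purely illustrative; the real Krisp API may raise different errors):

import logging

logger = logging.getLogger(__name__)

SDK_AVAILABLE = True       # stand-in for "the import succeeded"
_sdk_initialized = False   # flipped to True only when global init really succeeds


def _global_init() -> None:
    # Stand-in for krisp_audio.globalInit(); imagine it can fail at runtime.
    raise RuntimeError("license check failed")


if SDK_AVAILABLE:
    try:
        _global_init()
        _sdk_initialized = True
    except Exception as exc:
        logger.warning("Failed to initialize SDK: %s", exc)


def filter_is_functional() -> bool:
    # Mirrors the _krisp_functional check: require availability and successful init.
    return SDK_AVAILABLE and _sdk_initialized


print(filter_is_functional())  # False here, because global init failed above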
        # Initialize params to None to avoid unbound variable error
        params = None
Bug: Error logging references unused params variable.
The code initializes params = None but never assigns to it—actual parameters are in add_kwargs. The error logging will always output "Failed params were: None", providing no debugging value.
Apply this diff to log the correct variable:
- # Initialize params to None to avoid unbound variable error
- params = None
-
try:
# Create properly typed parameters for Memory.add()
add_kwargs: Dict[str, Any] = {}

- try:
- logger.error(f"Failed params were: {params}")
- except NameError:
- logger.error("Failed params were: params not created")
+ logger.error(f"Failed add_kwargs were: {add_kwargs}")Also applies to: 402-405
🤖 Prompt for AI Agents
In app/agents/voice/automatic/services/mem0/memory.py around lines 335-336 (and
similarly at 402-405), the error log references an unassigned params variable
(initialized to None) instead of the actual parameters held in add_kwargs;
update the logging calls to use add_kwargs (or the correct variable that
contains the request parameters) so the error message shows useful data, and
remove the unused params assignment if it is no longer needed.
     try:
         start = time.time()
         deleted_count = await client.delete(test_key)
-        health_status["checks"]["delete"] = {
-            "status": "ok",
-            "latency_ms": round((time.time() - start) * 1000, 2),
-            "keys_deleted": deleted_count,
-        }
+        checks_dict = health_status["checks"]
+        if isinstance(checks_dict, dict):
+            checks_dict["delete"] = {
+                "status": "ok",
+                "latency_ms": round((time.time() - start) * 1000, 2),
+                "keys_deleted": deleted_count,
+            }
     except Exception as e:
         health_status["status"] = "unhealthy"
-        health_status["checks"]["delete"] = {"status": "failed", "error": str(e)}
+        checks_dict = health_status["checks"]
+        if isinstance(checks_dict, dict):
+            checks_dict["delete"] = {"status": "failed", "error": str(e)}
Add missing raise statement for consistent error handling.
The DELETE operation error handler (lines 115-118) does not raise the exception, unlike the PING (line 54), SET (line 71), and GET (line 101) operations. This inconsistency means that if the DELETE operation fails, the health check will still return a 200 status code instead of 503, incorrectly indicating the system is healthy.
Additionally, the isinstance(checks_dict, dict) guards on lines 108 and 117 are unnecessary since health_status["checks"] is always a dict.
Apply this diff to fix the error handling:
     except Exception as e:
         health_status["status"] = "unhealthy"
-        checks_dict = health_status["checks"]
-        if isinstance(checks_dict, dict):
-            checks_dict["delete"] = {"status": "failed", "error": str(e)}
+        health_status["checks"]["delete"] = {"status": "failed", "error": str(e)}
+        raise

Also simplify the success case:

-            checks_dict = health_status["checks"]
-            if isinstance(checks_dict, dict):
-                checks_dict["delete"] = {
+            health_status["checks"]["delete"] = {
                 "status": "ok",
                 "latency_ms": round((time.time() - start) * 1000, 2),
                 "keys_deleted": deleted_count,
-                }
+            }

🧰 Tools
🪛 Ruff (0.14.7)
114-114: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In app/api/routers/systems.py around lines 104-118, the DELETE block fails to
re-raise exceptions (unlike other operations) and has redundant isinstance
guards; update the error handler to set health_status["status"] = "unhealthy",
record the failure in health_status["checks"]["delete"] with the error string,
and then re-raise the exception so the request returns 503; also remove the
unnecessary isinstance(...) checks and simplify the success case to directly
assign health_status["checks"]["delete"] = {"status":"ok","latency_ms":...,
"keys_deleted": deleted_count}.
     if result and get_row_count(result) > 0:
         decoded_result = decode_lead_call_tracker(result[0])
-        logger.info(f"Lock acquired successfully for lead: {decoded_result.id}")
-        return decoded_result
+        if decoded_result is not None:
+            logger.info(f"Lock acquired successfully for lead: {decoded_result.id}")
+            return decoded_result
+        else:
+            logger.error("Failed to decode lead call tracker result")

     logger.info(f"Lead {lead_id} is already locked or does not exist")
     return None
Avoid misleading “already locked or does not exist” log on decode failure
If decode_lead_call_tracker(result[0]) returns None, you log an error and then still log Lead {lead_id} is already locked or does not exist, which misattributes the cause in logs/alerts. Restructuring with an else avoids this confusion:
-    if result and get_row_count(result) > 0:
-        decoded_result = decode_lead_call_tracker(result[0])
-        if decoded_result is not None:
-            logger.info(f"Lock acquired successfully for lead: {decoded_result.id}")
-            return decoded_result
-        else:
-            logger.error("Failed to decode lead call tracker result")
-
-    logger.info(f"Lead {lead_id} is already locked or does not exist")
-    return None
+    if result and get_row_count(result) > 0:
+        decoded_result = decode_lead_call_tracker(result[0])
+        if decoded_result is not None:
+            logger.info(f"Lock acquired successfully for lead: {decoded_result.id}")
+            return decoded_result
+        logger.error("Failed to decode lead call tracker result")
+    else:
+        logger.info(f"Lead {lead_id} is already locked or does not exist")
+
+    return None

🧰 Tools
🪛 Ruff (0.14.7)
135-135: Consider moving this statement to an else block
(TRY300)
🤖 Prompt for AI Agents
In app/database/accessor/breeze_buddy/lead_call_tracker.py around lines 126 to
135, the current flow logs an error when decode_lead_call_tracker(result[0])
returns None but then falls through to log "Lead {lead_id} is already locked or
does not exist", which is misleading; change the control flow so that if
decode_lead_call_tracker returns None you log the error and immediately return
None (or raise), and only log "Lead {lead_id} is already locked or does not
exist" in the branch where no result was found (i.e., outside the
result-and-row-count block); ensure there is no fall-through that produces both
messages for a decode failure.
     if result and get_row_count(result) > 0:
         decoded_result = decode_lead_call_tracker(result[0])
-        logger.info(f"Lock released successfully for lead: {decoded_result.id}")
-        return decoded_result
+        if decoded_result is not None:
+            logger.info(f"Lock released successfully for lead: {decoded_result.id}")
+            return decoded_result
+        else:
+            logger.error("Failed to decode lead call tracker result")

     logger.warning(f"Lead {lead_id} not found for lock release")
     return None
Mirror the logging fix on lock release path
release_lock_on_lead_by_id has the same pattern: a decode failure logs an error and then falls through to Lead {lead_id} not found for lock release, which is inaccurate for that case. Suggest the same restructuring:
-    if result and get_row_count(result) > 0:
-        decoded_result = decode_lead_call_tracker(result[0])
-        if decoded_result is not None:
-            logger.info(f"Lock released successfully for lead: {decoded_result.id}")
-            return decoded_result
-        else:
-            logger.error("Failed to decode lead call tracker result")
-
-    logger.warning(f"Lead {lead_id} not found for lock release")
-    return None
+    if result and get_row_count(result) > 0:
+        decoded_result = decode_lead_call_tracker(result[0])
+        if decoded_result is not None:
+            logger.info(f"Lock released successfully for lead: {decoded_result.id}")
+            return decoded_result
+        logger.error("Failed to decode lead call tracker result")
+    else:
+        logger.warning(f"Lead {lead_id} not found for lock release")
+
+    return None

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
     if result and get_row_count(result) > 0:
         decoded_result = decode_lead_call_tracker(result[0])
-        logger.info(f"Lock released successfully for lead: {decoded_result.id}")
-        return decoded_result
+        if decoded_result is not None:
+            logger.info(f"Lock released successfully for lead: {decoded_result.id}")
+            return decoded_result
+        else:
+            logger.error("Failed to decode lead call tracker result")
     logger.warning(f"Lead {lead_id} not found for lock release")
     return None

     if result and get_row_count(result) > 0:
         decoded_result = decode_lead_call_tracker(result[0])
         if decoded_result is not None:
             logger.info(f"Lock released successfully for lead: {decoded_result.id}")
             return decoded_result
         logger.error("Failed to decode lead call tracker result")
     else:
         logger.warning(f"Lead {lead_id} not found for lock release")
     return None
🧰 Tools
🪛 Ruff (0.14.7)
160-160: Consider moving this statement to an else block
(TRY300)
🤖 Prompt for AI Agents
In app/database/accessor/breeze_buddy/lead_call_tracker.py around lines 151 to
160, the release_lock_on_lead_by_id path currently logs an error when
decode_lead_call_tracker returns None but then falls through to a warning saying
the lead was not found, which is misleading; change the control flow so that
after logging the decode failure you immediately return None (and do not emit
the "not found" warning for that case), mirroring the fix applied on the other
lock-release path so that decode errors produce an error log and an immediate
return.
         # Test connection
-        await self._cluster_client.ping()
+        self._cluster_client.ping()
Missing await breaks connection verification.
RedisCluster.ping() from redis.asyncio.cluster is an async method. Without await, the coroutine is created but never executed, so the connection test is silently skipped and failures won't be detected at startup.
         # Test connection
-        self._cluster_client.ping()
+        await self._cluster_client.ping()
         logger.info("Redis cluster connection established successfully")

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app/services/redis/client.py around line 87, the call to
self._cluster_client.ping() is missing an await which means the async ping
coroutine is never executed; make the call await self._cluster_client.ping() so
the ping runs, and if the surrounding function is not async, convert it to async
(or explicitly run the coroutine via asyncio.run/loop.run_until_complete) so the
await is valid and connection verification actually occurs.
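To make the failure mode concrete, a small self-contained illustration of why the un-awaited call is a silent no-op; FakeClient stands in for the Redis client and is not part of this codebase:

    import asyncio

    class FakeClient:
        async def ping(self) -> bool:
            print("ping actually executed")
            return True

    async def main() -> None:
        client = FakeClient()
        client.ping()             # only creates a coroutine object; Python later warns
                                  # "coroutine 'FakeClient.ping' was never awaited"
        ok = await client.ping()  # runs the method and returns True
        print(f"connection verified: {ok}")

    asyncio.run(main())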
         # Test connection
-        await self._single_client.ping()
+        self._single_client.ping()
Missing await breaks connection verification.
Same issue as the cluster path: Redis.ping() from redis.asyncio is async. The connection test is silently skipped without await.
         # Test connection
-        self._single_client.ping()
+        await self._single_client.ping()
         logger.info("Redis single-node connection established successfully")

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-        self._single_client.ping()
+        await self._single_client.ping()
🤖 Prompt for AI Agents
In app/services/redis/client.py around line 104, the call to
self._single_client.ping() is missing an await so the async ping is never
executed; change it to await self._single_client.ping() (and keep or add
appropriate exception handling/logging as used elsewhere so connection failures
are caught and handled consistently).
Summary by CodeRabbit
Bug Fixes
Refactor
✏️ Tip: You can customize this high-level summary in your review settings.