
@aarav-shukla07
Contributor

Description

This PR resolves the "Output validation failed" warnings that appear when running the online_research_agent in mock mode.

Previously, the MockLLMProvider returned a generic JSON response that lacked keys required by downstream nodes (e.g., fetch-content requires source_urls, and write-report requires citations). The agent still completed execution, but the logs were cluttered with validation errors and "Cleaning failed" messages.

This change updates the MockLLMProvider to return a comprehensive "super-schema" that satisfies the input requirements of all nodes in the agent graph.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)

Related Issues

Fixes #1223

Changes Made

  • Updated MockLLMProvider.complete in agent.py to return a fully populated JSON object.
  • Added missing keys required by the graph schema:
    • research_focus
    • source_urls & fetched_sources
    • key_aspects & source_analysis
    • ranked_sources
    • key_findings & themes
    • report_content, source_citations, & references
    • final_report
  • Ensured the LLMResponse object wraps the JSON string correctly.
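The updated provider can be sketched roughly as follows. This is a minimal illustration only: the field values, the LLMResponse stand-in, and the exact shapes of the list entries are assumptions for the sketch, not the PR's actual code.

```python
import json
from dataclasses import dataclass


# Hypothetical stand-in for the project's LLMResponse; the real class
# lives elsewhere in the codebase and may carry additional fields.
@dataclass
class LLMResponse:
    content: str


class MockLLMProvider:
    """Mock provider returning one "super-schema" payload that carries
    every key any node in the agent graph validates against."""

    def complete(self, prompt: str) -> LLMResponse:
        payload = {
            "research_focus": "Mock research focus",
            "source_urls": ["https://example.com/source-1"],
            "fetched_sources": [
                {"url": "https://example.com/source-1",
                 "content": "Mock page content"},
            ],
            "key_aspects": ["mock aspect"],
            "source_analysis": "Mock analysis of the fetched sources",
            "ranked_sources": ["https://example.com/source-1"],
            "key_findings": ["mock finding"],
            "themes": ["mock theme"],
            "report_content": "Mock report body",
            "source_citations": ["[1] https://example.com/source-1"],
            "references": ["https://example.com/source-1"],
            "final_report": "Mock final report",
        }
        # Wrap the JSON *string*, matching how real providers return text.
        return LLMResponse(content=json.dumps(payload))
```

Because every node reads from the same response object, one superset payload is simpler to maintain than per-node mock responses.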

Testing

  • Manual testing performed

Manual Verification: Ran the agent in mock mode:

PYTHONPATH=core:exports python3 -m online_research_agent run --topic "AI" --mock

Result

  • Execution completed with "success": true.

  • Logs are clean of "Output validation failed" warnings.

  • "Haiku formatting failed" warnings may still appear (expected without API key), but schema validation now passes.
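Why the super-schema silences the warnings can be shown with a toy validator. This is entirely illustrative; the agent's real per-node validation logic is not part of this PR, and the node names and required keys below only mirror the examples in the description.

```python
import json

# Per-node required keys, mirroring the description
# (fetch-content needs source_urls, write-report needs citations, ...).
REQUIRED_KEYS = {
    "fetch-content": ["source_urls"],
    "write-report": ["source_citations", "references"],
}

def missing_keys(node: str, response_json: str) -> list:
    """Return the keys a node requires but the response lacks."""
    data = json.loads(response_json)
    return [k for k in REQUIRED_KEYS[node] if k not in data]

# A generic mock response is missing node-specific keys...
generic = json.dumps({"result": "ok"})
# ...while a superset payload satisfies every node at once.
super_schema = json.dumps({
    "source_urls": [],
    "source_citations": [],
    "references": [],
})
```

Running `missing_keys` over each node with the generic payload reproduces the old warnings; with the superset payload, every node validates cleanly.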




Development

Successfully merging this pull request may close these issues.

[Bug]: MockLLMProvider returns invalid schema causing validation warnings in Online Research Agent
