An advanced AI-powered research system combining multi-agent architecture with cutting-edge reasoning techniques
Features • Architecture • Installation • Usage • Deployment • Screenshots
- Overview
 - Features
 - System Architecture
 - Technology Stack
 - Installation
 - Configuration
 - Usage
 - API Documentation
 - Screenshots
 - Deployment
 - Project Structure
 - Contributing
 - License
 
Deep Research AI is a sophisticated research automation system that leverages advanced AI reasoning techniques to conduct comprehensive, multi-dimensional research on any topic. Built with a multi-agent architecture, it performs parallel web searches, cross-validates sources, and generates detailed, citation-rich reports.
- Multi-Agent Architecture: Deploys 3+ specialized agents working in parallel
- Advanced Reasoning: Implements Chain of Thought, Self-Consistency, and Tree of Thoughts
- Source Verification: Cross-validates information across multiple sources
- Comprehensive Reports: Generates executive summaries with full citations
- Beautiful UI: Modern, responsive interface with real-time progress tracking
- Production Ready: Fully functional and deployable system
 
- Intelligent Query Decomposition
  - Chain of Thought reasoning for query analysis
  - Automatic breakdown into research dimensions
  - Strategic search query generation
- Multi-Agent Parallel Execution (see the asyncio sketch below)
  - 3 specialized agents per research task
  - Asynchronous parallel web searches
  - Real-time progress tracking
- Self-Consistency Validation
  - Cross-validation of sources
  - Confidence scoring based on repeated mentions
  - Deduplication and aggregation
- Advanced Report Generation
  - AI-powered executive summaries
  - Detailed sectional analysis
  - Proper citation formatting
  - Export to Markdown and JSON
- Source Verification
  - High-confidence source identification
  - Cross-reference tracking
  - Verification badges
 
 
- FastAPI backend with async support
 - React-based responsive frontend
 - RESTful API architecture
 - Real-time progress updates
 - Comprehensive error handling
 - Configurable research depth (3-10 tasks)
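The parallel execution described in the Multi-Agent Parallel Execution feature boils down to a fan-out/fan-in pattern. Below is a minimal asyncio sketch; the function names and the placeholder search call are assumptions for illustration, not the actual `multi_agent_executor.py` code:

```python
from __future__ import annotations
import asyncio

async def run_web_search(agent: str, query: str) -> dict:
    """Stand-in for one SerpAPI request issued by an agent."""
    await asyncio.sleep(0)  # the real code would await an aiohttp request here
    return {"agent": agent, "query": query, "results": []}

async def run_agent(agent: str, sub_queries: list[str]) -> list[dict]:
    """Each agent fires its searches concurrently and collects the results."""
    return await asyncio.gather(*(run_web_search(agent, q) for q in sub_queries))

async def run_all_agents(plan: dict[str, list[str]]) -> dict[str, list[dict]]:
    """All agents run in parallel; results are keyed by agent name."""
    per_agent = await asyncio.gather(*(run_agent(a, qs) for a, qs in plan.items()))
    return dict(zip(plan.keys(), per_agent))

# Example: 3 agents x 3 searches, matching the default configuration.
plan = {
    "market_analysis": ["q1", "q2", "q3"],
    "competition_research": ["q4", "q5", "q6"],
    "technical_analysis": ["q7", "q8", "q9"],
}
results = asyncio.run(run_all_agents(plan))
```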
 
```mermaid
%%{init: {
  "theme": "base",
  "themeVariables": {
    "primaryColor": "#ffffff",
    "primaryTextColor": "#1f2937",
    "lineColor": "#9ca3af",
    "edgeLabelBackground":"#ffffff",
    "fontSize": "14px",
    "fontFamily": "Inter, sans-serif",
    "tertiaryColor": "#f3f4f6"
  }
}}%%
graph TB
    Start(["User Research Query"]) --> Step1
    %% STEP 1
    subgraph Step1["Step 1: Query Analysis & Planning"]
        A["User Query Input"] --> B["Chain of Thought<br/>Decomposition"]
        B --> C["Query Analysis"]
        C --> D["Generate Research Dimensions"]
        D --> E["Create Sub-Queries (Tree of Thoughts)"]
    end
    Step1 --> Step2
    %% STEP 2
    subgraph Step2["Step 2: Multi-Agent Execution"]
        E --> F1["Agent 1<br/>Market Analysis"]
        E --> F2["Agent 2<br/>Competition Research"]
        E --> F3["Agent 3<br/>Technical Analysis"]

        F1 --> G1["Web Search 1"]
        F1 --> G2["Web Search 2"]
        F1 --> G3["Web Search 3"]

        F2 --> H1["Web Search 1"]
        F2 --> H2["Web Search 2"]
        F2 --> H3["Web Search 3"]

        F3 --> I1["Web Search 1"]
        F3 --> I2["Web Search 2"]
        F3 --> I3["Web Search 3"]

        G1 --> J1["Aggregated Results (Agent 1)"]
        H1 --> J2["Aggregated Results (Agent 2)"]
        I1 --> J3["Aggregated Results (Agent 3)"]
    end
    Step2 --> Step3
    %% STEP 3
    subgraph Step3["Step 3: Synthesis & Validation"]
        J1 --> K["Aggregate All Results"]
        J2 --> K
        J3 --> K
        K --> L["Self-Consistency Validation"]
        L --> M["Deduplicate Sources"]
        M --> N["Confidence Scoring"]
        N --> O["Cross-Validation"]
        O --> P["Extract Key Insights"]
        P --> Q["LLM Synthesis (Sequential Revision)"]
        Q --> R["Generate Executive Summary"]
        R --> S["Create Detailed Sections"]
    end
    Step3 --> Step4
    %% STEP 4
    subgraph Step4["Step 4: Final Output"]
        S --> T["Format Citations"]
        T --> U["Verification Report"]
        U --> V["Complete Research Report"]
        V --> W1["Download Markdown"]
        V --> W2["Download JSON"]
        V --> W3["View in Browser"]
    end
    W3 --> End(["User Receives Results"])
    %% STYLE DEFINITIONS
    style Start fill:#e0f2fe,stroke:#0284c7,stroke-width:2px,color:#0c4a6e
    style End fill:#dcfce7,stroke:#16a34a,stroke-width:2px,color:#064e3b
    style Step1 fill:#e0f2fe,stroke:#0284c7,stroke-width:2px,color:#0c4a6e
    style Step2 fill:#ede9fe,stroke:#7c3aed,stroke-width:2px,color:#312e81
    style Step3 fill:#f0fdf4,stroke:#22c55e,stroke-width:2px,color:#064e3b
    style Step4 fill:#fff7ed,stroke:#f97316,stroke-width:2px,color:#7c2d12
```

The system implements multiple state-of-the-art AI reasoning approaches:
- Chain of Thought (CoT) - Step-by-step problem decomposition
 - Tree of Thoughts (ToT) - Exploration of multiple reasoning paths
 - Self-Consistency - Multiple sampling with majority voting
 - Sequential Revision - Iterative refinement of outputs
 - Search & Verify - Solution space exploration with verification
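To make the Self-Consistency idea concrete: sources returned by the agents can be deduplicated by URL and scored by how often they recur, with anything seen at least `CONFIDENCE_THRESHOLD` times (2 by default, per the Configuration section) flagged as verified. The following is a minimal sketch with assumed field names, not the actual `synthesizer.py` logic:

```python
# Illustrative confidence scoring by repeated mentions (field names assumed).
from collections import Counter

def score_sources(search_results, confidence_threshold=2):
    """Deduplicate sources by URL and flag repeatedly mentioned ones as verified."""
    mentions = Counter(result["url"] for result in search_results)
    deduped = {result["url"]: result for result in search_results}
    scored = [
        {**source,
         "confidence": mentions[url],
         "verified": mentions[url] >= confidence_threshold}
        for url, source in deduped.items()
    ]
    # Highest-confidence sources first
    return sorted(scored, key=lambda s: s["confidence"], reverse=True)
```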
 

Backend:

- Framework: FastAPI 0.104+
- Language: Python 3.8+
- Async: asyncio, aiohttp
- API Integration:
  - OpenRouter API (LLM)
  - SerpAPI (Web Search)

Frontend:

- Library: React 18
 - Styling: Tailwind CSS + Custom CSS
 - Build: Babel (in-browser compilation)
 - HTTP: Fetch API
 
External APIs:

- OpenRouter: GPT-4o-mini, Claude 3.5 Sonnet, Llama 3.1
 - SerpAPI: Google search results
 
- Python 3.8 or higher
 - pip (Python package manager)
 - OpenRouter API key (Get here)
 - SerpAPI key (Get here)
 
```bash
# Clone the repository
git clone https://github.com/AdiityaRaj/deep-research-ai.git
cd deep-research-ai

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
cd backend
pip install -r requirements.txt

# Create .env file
cp .env.example .env
# Edit .env and add your API keys
```

```bash
# Navigate to frontend directory
cd ../frontend

# No build required! Just open index.html
# Or use a local server:
python -m http.server 8080
# or
python -m http.server 8080 --bind 127.0.0.1
```

Create a `.env` file in the `backend/` directory:
```env
# OpenRouter Configuration
OPENROUTER_API_KEY=sk-or-v1-your-key-here
MODEL_NAME=openai/gpt-4o-mini

# SerpAPI Configuration
SERPAPI_KEY=your-serpapi-key-here

# Optional: Advanced Settings
MAX_PARALLEL_AGENTS=3
SEARCHES_PER_AGENT=3
CONFIDENCE_THRESHOLD=2
```

Edit `frontend/js/utils.js` to configure the API endpoint:

```javascript
const API_CONFIG = {
    baseUrl: 'http://localhost:8000'  // Change for production
};
```

1. Start Backend:

```bash
cd backend
uvicorn main:app --reload
```

Backend runs at: http://localhost:8000
2. Start Frontend:
```bash
cd frontend
python -m http.server 8080
```

Frontend runs at: http://localhost:8080
Via Web Interface:
- Open http://localhost:8080 in browser
- Enter research query or select an example
 - Adjust research depth (3-10 tasks)
 - Click "Start Research"
 - Wait 30-90 seconds for results
 - Download report (Markdown or JSON)
 
Via API:
```bash
curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Analyze AI agent trends in 2024",
    "max_tasks": 5
  }'
```

Try these sample queries:
- "Analyze the competitive landscape of AI agents in 2024"
 - "Latest developments in quantum computing"
 - "Electric vehicle market trends 2024"
 - "Impact of generative AI on software development"
 - "Future of renewable energy technologies"
 
Health check endpoint

```json
{
  "status": "operational",
  "version": "4.0",
  "features": [...]
}
```

Detailed system status
```json
{
  "status": "healthy",
  "openrouter_configured": true,
  "serpapi_configured": true,
  "model": "openai/gpt-4o-mini"
}
```

Execute deep research
Request:

```json
{
  "query": "Your research question",
  "max_tasks": 5
}
```

Response:

```json
{
  "query": "...",
  "planning": {
    "dimensions": [...],
    "total_searches": 9
  },
  "executive_summary": "...",
  "sections": [...],
  "all_sources": [...],
  "verification": [...],
  "metadata": {
    "total_sources": 42,
    "high_confidence_sources": 5,
    "techniques_used": [...]
  }
}
```

Once backend is running, visit:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
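If you prefer Python over curl, a minimal client along these lines should work; the endpoint and field names come from the request/response examples above, and `requests` is an assumed extra dependency:

```python
import requests

BASE_URL = "http://localhost:8000"  # change for production deployments

def run_research(query: str, max_tasks: int = 5) -> dict:
    """Submit a research query to /run and return the parsed report."""
    response = requests.post(
        f"{BASE_URL}/run",
        json={"query": query, "max_tasks": max_tasks},
        timeout=300,  # research typically takes 30-90 seconds
    )
    response.raise_for_status()
    return response.json()

report = run_research("Analyze AI agent trends in 2024")
print(report["executive_summary"])
print(f"Sources analyzed: {report['metadata']['total_sources']}")
```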
Beautiful landing page with example queries and animated gradient background
Real-time progress tracking with stage updates and animated progress bar
Comprehensive results with metrics, agent strategy, and sections
AI-generated comprehensive summary of research findings
Expandable sections with key findings and source citations
Sources with confidence scores and verification badges
Multi-agent deployment showing parallel research dimensions
Full web page view of Deep Research AI
Railway (Recommended):
```bash
cd backend
railway login
railway init
railway up
```

Render:

- Connect GitHub repository
- Select `backend/` as root directory
- Build command: `pip install -r requirements.txt`
- Start command: `uvicorn main:app --host 0.0.0.0`
Fly.io:
```bash
cd backend
fly launch
fly deploy
```

Vercel (Recommended):

```bash
cd frontend
vercel
```

Netlify:
- Drag and drop `frontend/` folder to Netlify
- Or connect GitHub repository
 
GitHub Pages:
```bash
# Run from the repository root
git subtree push --prefix frontend origin gh-pages
```

Update `.env` in backend and `js/utils.js` in frontend with production URLs.
```
Deep-Research-AI/
├── backend/
│   ├── main.py                    # FastAPI application
│   ├── research_agent.py          # Main orchestrator
│   ├── query_decomposer.py        # Chain of Thought
│   ├── multi_agent_executor.py    # Multi-agent execution
│   ├── synthesizer.py             # Self-Consistency & synthesis
│   ├── citation_manager.py        # Citation formatting
│   ├── config.py                  # Configuration
│   ├── requirements.txt           # Python dependencies
│   └── .env                       # Environment variables
│
├── frontend/
│   ├── index.html                 # Main HTML
│   ├── css/
│   │   └── styles.css             # Custom styles
│   ├── js/
│   │   ├── utils.js               # Utilities
│   │   ├── components.js          # React components
│   │   └── app.js                 # Main app
│   └── assets/                    # Images/icons
│
├── docs/
│   ├── screenshots/               # UI screenshots
│   └── architecture.md            # Architecture docs
│
├── .gitignore
├── LICENSE
└── README.md                      # This file
```
```bash
cd backend
python test_system.py
```

Expected output:

```
✅ Health check passed
✅ Configuration check passed
✅ Research completed in 85.44s
✅ ALL TESTS PASSED!
```
- Backend starts without errors
 - Frontend loads with animations
 - Can submit research query
 - Progress bar animates
 - Results display correctly
 - Sources show confidence scores
 - Download functionality works
 - Mobile responsive
 
Backend won't start:
- Check Python version (3.8+)
 - Verify all dependencies installed
- Check `.env` file exists with valid API keys
Frontend shows blank page:
- Check browser console for errors
 - Verify all 5 files created correctly
 - Check backend is running
 
API connection failed:
- Verify backend URL in `js/utils.js`
- Check CORS is enabled in backend (see the snippet below)
- Ensure backend is running on the correct port
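If the frontend cannot reach the API, confirm CORS is enabled in the FastAPI app. The repository's `main.py` may already do this; the snippet below is a minimal sketch, with the allowed origin assumed for local development:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8080"],  # frontend dev server
    allow_methods=["*"],
    allow_headers=["*"],
)
```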
 
Search results empty:
- Verify SerpAPI key is valid
 - Check API quota/limits
 - Try different query
 
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create feature branch (`git checkout -b feature/amazing-feature`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open Pull Request
 
- Follow PEP 8 for Python code
 - Use meaningful variable names
 - Add comments for complex logic
 - Test before submitting PR
 - Update documentation as needed
 
- Query Processing: 30-90 seconds average
- Parallel Searches: 9 searches per query (3 agents × 3 searches)
 - Source Analysis: 40+ sources per research
 - API Calls: ~12 total (9 SerpAPI + 3 OpenRouter)
 
- Reduce `max_tasks` for faster results
- Use caching for repeated queries (see the sketch below)
- Adjust `CONFIDENCE_THRESHOLD` for stricter validation
- Configure `MAX_PARALLEL_AGENTS` based on API limits
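Caching is not built into the repository; a minimal in-memory sketch (names are illustrative) that avoids re-running identical queries:

```python
import hashlib

_report_cache = {}  # maps query hash -> previously generated report

def cached_research(query, max_tasks, run_research):
    """Return a cached report when the same query and depth were run before."""
    key = hashlib.sha256(f"{query}:{max_tasks}".encode()).hexdigest()
    if key not in _report_cache:
        _report_cache[key] = run_research(query, max_tasks)
    return _report_cache[key]
```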
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenRouter - LLM API access
 - SerpAPI - Web search functionality
 - FastAPI - Backend framework
 - React - Frontend library
 - Tailwind CSS - Styling framework
 
- Issues: GitHub Issues
 - Discussions: GitHub Discussions
 - Email: rajaditya2424@gmail.com
 
- WebSocket support for real-time updates
 - User authentication and query history
 - Custom agent configuration
 - PDF export functionality
 - Batch research processing
 
- Local LLM support (Ollama)
 - Advanced visualization charts
 - Collaborative research features
 - Mobile app (React Native)
 - Plugin system for extensibility
 
If you find this project useful, please consider giving it a star!
Built with ❤️ by Aditya Raj







