
Google ADK Conversational Research System

A sophisticated conversational AI system built with Google's Agent Development Kit (ADK) featuring a Brain → Planner → Conversationalist sequential architecture for intelligent, context-aware knowledge extraction interviews.

🏗️ Architecture

This system implements a three-stage sequential pipeline that processes each user input through specialized agents:

  1. 🧠 Brain Agent: Analyzes user input, logs structured insights, and maps them to research topics
  2. 📋 Planner Agent: Reviews insights, manages research goals systematically, and creates strategic guidance
  3. 💬 Conversationalist Agent: Conducts targeted interviews based on strategic guidance
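The three stages can be sketched as plain Python functions. This is a simplified illustration only: the real system wires `LlmAgent` instances together through ADK, and the function names and state keys below are assumptions, not the actual implementation.

```python
def brain(state: dict, user_input: str) -> dict:
    """Stage 1: log a structured insight and flag the current topic."""
    insight = {"type": "context_clue", "details": user_input,
               "topic": "Copilot Adoption Barriers"}
    state.setdefault("insights", []).append(insight)
    state["current_turn_topic"] = insight["topic"]
    return state

def planner(state: dict) -> dict:
    """Stage 2: move the flagged topic's goal forward and issue guidance."""
    for goal in state["research_goals"]:
        if goal["topic"] == state["current_turn_topic"]:
            goal["status"] = "in_progress"
    state["guidance"] = {"type": "clarification"}
    return state

def conversationalist(state: dict) -> str:
    """Stage 3: ask the next question based on the Planner's guidance."""
    return (f"({state['guidance']['type']}) Tell me more about "
            f"{state['current_turn_topic']}.")

state = {"research_goals": [{"topic": "Copilot Adoption Barriers",
                             "status": "pending"}]}
reply = conversationalist(planner(brain(state, "They're worried about accuracy")))
```

Each stage reads what the previous stage wrote into shared state, which is the same communication pattern the real agents use via session state.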

✨ Key Features

  • Dynamic Goal Detection: Automatically sets research objectives based on conversation context
  • Stateful Agent Memory: Each agent maintains persistent context within conversations
  • Configurable Research Areas: JSON-based goal configuration for different domains
  • Topic Coordination: Brain-Planner communication through shared topic identification
  • Sequential Processing: Each agent builds on the previous agent's work
  • Structured Insights: Brain agent categorizes findings and maps to research goals
  • Goal-Driven Conversations: Planner systematically works through research objectives
  • Shared State Communication: Agents communicate through session state
  • Function Tools: Structured data operations for reliable agent coordination
  • Modular Prompts: Separate prompt files for easy customization
  • Enhanced Debug Mode: Complete visibility into goal progression and agent decisions

📁 Project Structure

conversational-agent/
├── main.py                    # Main application with agent pipeline
├── test_system.py             # Test script for automated testing
├── requirements.txt           # Python dependencies
├── .env                      # API credentials (create this)
├── config/                   # Configuration management
│   ├── __init__.py           # Configuration exports
│   ├── goal_detection.py     # Context detection and goal loading
│   └── research_goals.json   # Research goal definitions by domain
├── prompts/                  # Modular prompt engineering
│   ├── __init__.py           # Prompt exports
│   ├── brain_prompt.py       # Brain agent instructions
│   ├── planner_prompt.py     # Planner agent instructions
│   └── conversationalist_prompt.py  # Conversationalist instructions
├── context7-context.md       # ADK patterns reference
├── README.md                 # This file
└── venv/                     # Virtual environment (created during setup)

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Google AI API key

Installation

  1. Clone and navigate to the project:
     cd conversational-agent
  2. Create a virtual environment:
     python3 -m venv venv
     source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:
     pip install -r requirements.txt
  4. Set up API credentials by creating a .env file:
     echo "GOOGLE_API_KEY=your_api_key_here" > .env
  5. Run the system:
     python main.py

Quick Test

python test_system.py

💬 Usage

Interactive Mode

python main.py
  • Type your questions or requests
  • The system processes through Brain → Planner → Conversationalist
  • Add --debug to your input to see internal processing details
  • Type quit to exit

Example Conversation

👤 You: Hi lets get to the bottom of my clients copilot usage at the moment.

[CONFIG] Detected context: copilot_usage, loaded 6 research goals

🤖 Assistant: I'd be happy to help you understand your clients' Copilot usage. 
To begin our deep dive, can you describe a typical workday for your client 
involving Copilot? Let's focus on a specific task or project...

👤 You: They're scared of the output being wrong or their data landing somewhere --debug

🧠 Brain Output: {"insights_found": [{"type": "pain_point", "details": "Concerns about accuracy and data security", "topic": "Copilot Adoption Barriers"}], "summary": "Identified trust and security concerns as primary adoption barriers."}
📋 Research Goals:
   🔄 Copilot Adoption Barriers [in_progress]
   ⏳ AI Trust and Security Concerns [pending]
🎯 Current Guidance: {'type': 'clarification', 'recommendations': 'Ask for specific examples of accuracy concerns'}

🤖 Assistant: Those are valid concerns that many organizations have. Can you share 
a specific example where your clients worried about Copilot's accuracy? What exactly 
led them to believe the output might be incorrect?

🔧 Context & Memory

Dynamic Goal Detection

The system automatically detects conversation context and loads appropriate research goals:

  • Copilot Keywords: "copilot", "ai", "artificial intelligence" → copilot_usage goals
  • Teams Keywords: "teams", "channels", "collaboration" → teams_collaboration goals
  • Default: General productivity and tool adoption goals
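A hedged sketch of the keyword rules above; the real implementation lives in config/goal_detection.py and may use different keywords or matching logic.

```python
def detect_context(user_input: str) -> str:
    """Map user input to a research-goal domain via keyword matching."""
    text = user_input.lower()
    words = set(text.split())
    if "copilot" in words or "ai" in words or "artificial intelligence" in text:
        return "copilot_usage"
    if words & {"teams", "channels", "collaboration"}:
        return "teams_collaboration"
    return "default"
```

Word-level matching (rather than raw substring search) avoids false positives like "ai" inside "maintain".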

How Context Builds

  • Session Persistence: Conversation history maintained across turns
  • Goal-Driven Progress: Planner systematically works through research objectives
  • Insight Accumulation: Brain insights build up and map to specific topics
  • Topic Coordination: Brain signals current topic to Planner via current_turn_topic
  • Strategic Continuity: Planner tracks goal status and strategic decisions

Enhanced State Management

session.state = {
    'current_turn_topic': 'Copilot Adoption Barriers',
    'agent_states': {
        'brain': {
            'cumulative_insights': [
                {'type': 'pain_point', 'details': 'Trust concerns', 'topic': 'Copilot Adoption Barriers'},
                {'type': 'opportunity', 'details': 'Training needs', 'topic': 'Copilot Training'}
            ]
        },
        'planner': {
            'research_goals': [
                {'topic': 'Copilot Adoption Barriers', 'status': 'in_progress'},
                {'topic': 'AI Trust and Security Concerns', 'status': 'pending'}
            ],
            'strategic_history': [
                {'guidance_type': 'clarification', 'turn': 1}
            ]
        },
        'conversationalist': {
            'topics_discussed': ['copilot_concerns', 'data_security']
        }
    }
}

🎨 Configuration & Customization

Research Goals Configuration (config/research_goals.json)

Define research objectives for different domains:

{
  "copilot_usage": [
    {"topic": "Copilot Adoption Barriers", "status": "pending"},
    {"topic": "AI Trust and Security Concerns", "status": "pending"}
  ],
  "teams_collaboration": [
    {"topic": "Initial User Onboarding Experience", "status": "pending"},
    {"topic": "Collaboration Patterns within Teams Channels", "status": "pending"}
  ]
}
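A hypothetical loader for the JSON layout above. The function name `load_goals` and the use of a `"default"` key as the fallback set are illustrative assumptions; the actual loading logic lives in config/goal_detection.py.

```python
import json

# Inline stand-in for config/research_goals.json, using the layout shown above.
GOALS_JSON = """
{
  "copilot_usage": [
    {"topic": "Copilot Adoption Barriers", "status": "pending"}
  ],
  "default": [
    {"topic": "General Tool Adoption", "status": "pending"}
  ]
}
"""

def load_goals(context: str) -> list:
    """Return the goal list for a detected context, falling back to default."""
    all_goals = json.loads(GOALS_JSON)
    return all_goals.get(context, all_goals["default"])
```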

Context Detection (config/goal_detection.py)

Customize how the system detects conversation context:

  • Keywords: Add new domain-specific keywords
  • Logic: Modify detection algorithms
  • Fallbacks: Set default goal sets

Prompt Engineering

Each agent's behavior is defined in separate, modular prompt files:

Brain Agent (prompts/brain_prompt.py)

  • Maps insights to research topics
  • Categories: pain_point, opportunity, technical_requirement, research_gap, context_clue, success_story
  • JSON output format for structured analysis

Planner Agent (prompts/planner_prompt.py)

  • Manages research goal progression
  • Coordinates with Brain via current_turn_topic
  • Guidance types: clarification, deep_dive, solution_oriented, comparative, educational
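The goal transitions the Planner drives (e.g. pending → in_progress) can be sketched as a tool function. This is an assumption about shape only: the real tool in main.py also receives a ToolContext, its exact signature may differ, and the "completed" status is an illustrative addition.

```python
VALID_STATUSES = ("pending", "in_progress", "completed")

def update_goal_status(goals: list, topic: str, status: str) -> dict:
    """Transition one research goal, returning a structured result dict."""
    if status not in VALID_STATUSES:
        return {"status": "error", "message": f"unknown status: {status}"}
    for goal in goals:
        if goal["topic"] == topic:
            goal["status"] = status
            return {"status": "success", "topic": topic}
    return {"status": "error", "message": f"no goal for topic: {topic}"}
```

Returning a dict rather than raising lets the calling agent see and reason about failures as structured data.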

Conversationalist Agent (prompts/conversationalist_prompt.py)

  • Conducts targeted research interviews
  • References goal progress and agent insights
  • Maintains conversational flow while pursuing objectives

Making Changes

  1. Goals: Edit config/research_goals.json to add new research domains
  2. Context Detection: Modify config/goal_detection.py for new keywords/logic
  3. Prompts: Edit relevant prompt files for behavior changes
  4. Test: Run python test_system.py to validate changes

🛠️ Design Patterns

Based on Google ADK documentation patterns:

  • Hierarchical Agents: Multi-agent system with specialized roles
  • Sequential Agent Pipeline: Ordered processing chain
  • Agent Specialization: Single responsibility per agent
  • Shared State Communication: Indirect agent communication via state
  • Function Tools: Structured operations for reliable data exchange
  • ToolContext Injection: Proper state access in tools
  • Async Session Management: Non-blocking conversation handling

📊 System Flow

User Input: "Hi lets get to the bottom of my clients copilot usage"
                                    ↓
                         [CONFIG] Context Detection
                    "copilot" keyword → copilot_usage goals
                                    ↓
┌─────────────┐    ┌──────────────┐    ┌─────────────────────┐
│ Brain Agent │ →  │ Planner Agent│ →  │ Conversationalist   │
└─────────────┘    └──────────────┘    └─────────────────────┘
      │                     │                       │
      ▼                     ▼                       ▼
   Analyzes &           Reviews goals &         Conducts targeted
   maps insights        manages progress       research interview
   to topics                                        
      │                     │                       │
      ▼                     ▼                       ▼
current_turn_topic:    update_goal_status:      "I'd be happy to help
"Copilot Usage"       "pending → in_progress"   understand your clients'
                                                Copilot usage. Can you
                                                describe a typical..."

🐛 Troubleshooting

Common Issues

ImportError: cannot import name 'LlmAgent'

  • Check import structure in main.py
  • Ensure google-adk is properly installed

ValueError: Missing key inputs argument

  • Verify .env file exists with GOOGLE_API_KEY
  • Check API key is valid

400 INVALID_ARGUMENT (function_declarations)

  • Ensure function parameters don't use list types
  • Use str parameters and parse within functions
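A sketch of that workaround: accept a comma-separated string instead of a list-typed parameter, and parse it inside the tool. `log_topics` is a hypothetical tool name used only for illustration.

```python
def log_topics(topics_csv: str) -> dict:
    """Accept 'a, b, c' in place of a list parameter and split it internally."""
    topics = [t.strip() for t in topics_csv.split(",") if t.strip()]
    return {"status": "success", "topics": topics}
```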

EOFError when running main.py

  • This is expected when using timeout or Ctrl+C
  • Use python test_system.py for automated testing

Debug Mode

Add --debug to any input to see:

  • Brain analysis output
  • Planner strategy details
  • Complete insight history
  • Guidance decisions
  • Topic coverage status
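One way the --debug suffix could be detected and stripped before the text reaches the agents; the actual handling in main.py may differ, and `parse_input` is an illustrative name.

```python
def parse_input(raw: str):
    """Return (cleaned_text, debug_enabled) for a raw user input line."""
    debug = "--debug" in raw
    return raw.replace("--debug", "").strip(), debug
```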

🧪 Testing

Automated Testing

python test_system.py

Manual Testing

python main.py
# Then try various inputs:
# - Technical questions
# - Business problems
# - Multi-turn conversations
# - Debug mode with --debug

📚 Advanced Configuration

Model Selection

Modify in main.py:

brain_agent = LlmAgent(
    name="BrainAgent",
    model="gemini-1.5-pro-latest",  # Change model here
    instruction=BRAIN_INSTRUCTION,
    # ...
)

Custom Tools

Add new FunctionTools to agents:

def custom_tool(param: str, tool_context: ToolContext) -> dict:
    # Your custom logic
    return {"status": "success"}

brain_agent = LlmAgent(
    # ...
    tools=[FunctionTool(log_insight), FunctionTool(custom_tool)]
)

Session Configuration

Modify session initialization in main.py:

session = await session_service.create_session(
    app_name=app_name,
    user_id=user_id,
    session_id=session_id,
    state={
        'brain_insights': [],
        'topic_coverage': {},
        'custom_state': {}  # Add custom state
    }
)

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make changes to prompts or core logic
  4. Test with python test_system.py
  5. Submit a pull request

📄 License

MIT License - See LICENSE file for details


Built with ❤️ using Google Agent Development Kit
