A sophisticated conversational AI system built with Google's Agent Development Kit (ADK) featuring a Brain → Planner → Conversationalist sequential architecture for intelligent, context-aware knowledge extraction interviews.
This system implements a three-stage sequential pipeline that processes each user input through specialized agents:
- 🧠 Brain Agent: Analyzes user input, logs structured insights, and maps them to research topics
- 📋 Planner Agent: Reviews insights, manages research goals systematically, and creates strategic guidance
- 💬 Conversationalist Agent: Conducts targeted interviews based on strategic guidance
- Dynamic Goal Detection: Automatically sets research objectives based on conversation context
- Stateful Agent Memory: Each agent maintains persistent context within conversations
- Configurable Research Areas: JSON-based goal configuration for different domains
- Topic Coordination: Brain-Planner communication through shared topic identification
- Sequential Processing: Each agent builds on the previous agent's work
- Structured Insights: Brain agent categorizes findings and maps to research goals
- Goal-Driven Conversations: Planner systematically works through research objectives
- Shared State Communication: Agents communicate through session state
- Function Tools: Structured data operations for reliable agent coordination
- Modular Prompts: Separate prompt files for easy customization
- Enhanced Debug Mode: Complete visibility into goal progression and agent decisions
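The shared-state pattern behind the pipeline can be sketched without ADK at all: each stage reads what the previous stage wrote into a common session dict. The stage functions and flat state keys below are simplified stand-ins, not the actual agent implementations.

```python
# Simplified sketch of shared-state communication between pipeline stages.
# The stage functions are stand-ins for the real ADK agents.

def brain_stage(state: dict, user_input: str) -> None:
    # Brain logs a structured insight and signals the topic to the Planner.
    insight = {"type": "pain_point", "details": user_input,
               "topic": "Copilot Adoption Barriers"}
    state.setdefault("brain_insights", []).append(insight)
    state["current_turn_topic"] = insight["topic"]

def planner_stage(state: dict) -> dict:
    # Planner reads the Brain's topic and moves the matching goal forward.
    topic = state.get("current_turn_topic")
    for goal in state.get("research_goals", []):
        if goal["topic"] == topic and goal["status"] == "pending":
            goal["status"] = "in_progress"
    return {"type": "clarification", "topic": topic}

state = {"research_goals": [
    {"topic": "Copilot Adoption Barriers", "status": "pending"}]}
brain_stage(state, "Concerns about accuracy and data security")
guidance = planner_stage(state)
```

Because both stages touch the same `state` dict, the Planner never needs a direct reference to the Brain, which is the point of the shared-state design.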
```
conversational-agent/
├── main.py                        # Main application with agent pipeline
├── test_system.py                 # Test script for automated testing
├── requirements.txt               # Python dependencies
├── .env                           # API credentials (create this)
├── config/                        # Configuration management
│   ├── __init__.py                # Configuration exports
│   ├── goal_detection.py          # Context detection and goal loading
│   └── research_goals.json        # Research goal definitions by domain
├── prompts/                       # Modular prompt engineering
│   ├── __init__.py                # Prompt exports
│   ├── brain_prompt.py            # Brain agent instructions
│   ├── planner_prompt.py          # Planner agent instructions
│   └── conversationalist_prompt.py  # Conversationalist instructions
├── context7-context.md            # ADK patterns reference
├── README.md                      # This file
└── venv/                          # Virtual environment (created during setup)
```
- Python 3.8+
- Google AI API key
- Clone and navigate to the project:

  ```bash
  cd conversational-agent
  ```

- Create a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up API credentials by creating a `.env` file:

  ```bash
  echo "GOOGLE_API_KEY=your_api_key_here" > .env
  ```

- Run the system (or `python test_system.py` for automated testing):

  ```bash
  python main.py
  ```

- Type your questions or requests
- The system processes each turn through Brain → Planner → Conversationalist
- Add `--debug` to your input to see internal processing details
- Type `quit` to exit
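Handling the `--debug` suffix can be sketched as a small pre-processing step; the helper below is hypothetical and not necessarily how main.py implements it.

```python
# Hypothetical helper: strip a trailing "--debug" flag from user input
# before it reaches the agent pipeline.

def parse_user_input(raw: str):
    debug = "--debug" in raw
    text = raw.replace("--debug", "").strip()
    return text, debug

text, debug = parse_user_input("Why is this failing? --debug")
# text == "Why is this failing?", debug == True
```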
```
👤 You: Hi lets get to the bottom of my clients copilot usage at the moment.

[CONFIG] Detected context: copilot_usage, loaded 6 research goals

🤖 Assistant: I'd be happy to help you understand your clients' Copilot usage.
To begin our deep dive, can you describe a typical workday for your client
involving Copilot? Let's focus on a specific task or project...

👤 You: They're scared of the output being wrong or their data landing somewhere --debug

🧠 Brain Output: {"insights_found": [{"type": "pain_point", "details": "Concerns about accuracy and data security", "topic": "Copilot Adoption Barriers"}], "summary": "Identified trust and security concerns as primary adoption barriers."}

📋 Research Goals:
   🔄 Copilot Adoption Barriers [in_progress]
   ⏳ AI Trust and Security Concerns [pending]

🎯 Current Guidance: {'type': 'clarification', 'recommendations': 'Ask for specific examples of accuracy concerns'}

🤖 Assistant: Those are valid concerns that many organizations have. Can you share
a specific example where your clients worried about Copilot's accuracy? What exactly
led them to believe the output might be incorrect?
```
The system automatically detects conversation context and loads the appropriate research goals:
- Copilot keywords: "copilot", "ai", "artificial intelligence" → `copilot_usage` goals
- Teams keywords: "teams", "channels", "collaboration" → `teams_collaboration` goals
- Default: general productivity and tool adoption goals
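A minimal sketch of this keyword-based detection follows; the real logic lives in `config/goal_detection.py`, so the function name and naive substring matching here are illustrative only.

```python
# Illustrative keyword-based context detection. Matching is naive substring
# search; the actual config/goal_detection.py may be more sophisticated.

KEYWORD_MAP = {
    "copilot_usage": ["copilot", "ai", "artificial intelligence"],
    "teams_collaboration": ["teams", "channels", "collaboration"],
}

def detect_context(user_input: str, default: str = "general_productivity") -> str:
    lowered = user_input.lower()
    for context, keywords in KEYWORD_MAP.items():
        if any(kw in lowered for kw in keywords):
            return context
    return default

detect_context("lets get to the bottom of my clients copilot usage")
# -> "copilot_usage"
```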
- Session Persistence: Conversation history maintained across turns
- Goal-Driven Progress: Planner systematically works through research objectives
- Insight Accumulation: Brain insights build up and map to specific topics
- Topic Coordination: Brain signals the current topic to the Planner via `current_turn_topic`
- Strategic Continuity: Planner tracks goal status and strategic decisions
```python
session.state = {
    'current_turn_topic': 'Copilot Adoption Barriers',
    'agent_states': {
        'brain': {
            'cumulative_insights': [
                {'type': 'pain_point', 'details': 'Trust concerns', 'topic': 'Copilot Adoption Barriers'},
                {'type': 'opportunity', 'details': 'Training needs', 'topic': 'Copilot Training'}
            ]
        },
        'planner': {
            'research_goals': [
                {'topic': 'Copilot Adoption Barriers', 'status': 'in_progress'},
                {'topic': 'AI Trust and Security Concerns', 'status': 'pending'}
            ],
            'strategic_history': [
                {'guidance_type': 'clarification', 'turn': 1}
            ]
        },
        'conversationalist': {
            'topics_discussed': ['copilot_concerns', 'data_security']
        }
    }
}
```

Define research objectives for different domains:
```json
{
  "copilot_usage": [
    {"topic": "Copilot Adoption Barriers", "status": "pending"},
    {"topic": "AI Trust and Security Concerns", "status": "pending"}
  ],
  "teams_collaboration": [
    {"topic": "Initial User Onboarding Experience", "status": "pending"},
    {"topic": "Collaboration Patterns within Teams Channels", "status": "pending"}
  ]
}
```

Customize how the system detects conversation context:
- Keywords: Add new domain-specific keywords
- Logic: Modify detection algorithms
- Fallbacks: Set default goal sets
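Selecting a domain's goals from the parsed `research_goals.json`, with a default fallback, might look like the sketch below; the helper name and the fallback goal are assumptions, not the actual `goal_detection.py` code.

```python
import json

# Illustrative goal selection with a default fallback. GOALS_JSON stands in
# for the contents of config/research_goals.json.

GOALS_JSON = """
{
  "copilot_usage": [
    {"topic": "Copilot Adoption Barriers", "status": "pending"},
    {"topic": "AI Trust and Security Concerns", "status": "pending"}
  ]
}
"""

DEFAULT_GOALS = [{"topic": "General Tool Adoption", "status": "pending"}]

def goals_for_context(goals_by_domain: dict, context: str) -> list:
    # Copy each goal so status updates in session state never mutate the config.
    return [dict(g) for g in goals_by_domain.get(context, DEFAULT_GOALS)]

goals = goals_for_context(json.loads(GOALS_JSON), "copilot_usage")
```

Copying the goal dicts matters: the Planner mutates `status` in session state, and a shared reference would silently rewrite the loaded configuration.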
Each agent's behavior is defined in separate, modular prompt files:
Brain Agent (`prompts/brain_prompt.py`)
- Maps insights to research topics
- Categories: pain_point, opportunity, technical_requirement, research_gap, context_clue, success_story
- JSON output format for structured analysis
Planner Agent (`prompts/planner_prompt.py`)
- Manages research goal progression
- Coordinates with the Brain via `current_turn_topic`
- Guidance types: clarification, deep_dive, solution_oriented, comparative, educational
Conversationalist Agent (`prompts/conversationalist_prompt.py`)
- Conducts targeted research interviews
- References goal progress and agent insights
- Maintains conversational flow while pursuing objectives
- Goals: Edit `config/research_goals.json` to add new research domains
- Context Detection: Modify `config/goal_detection.py` for new keywords/logic
- Prompts: Edit the relevant prompt files for behavior changes
- Test: Run `python test_system.py` to validate changes
Based on Google ADK documentation patterns:
- Hierarchical Agents: Multi-agent system with specialized roles
- Sequential Agent Pipeline: Ordered processing chain
- Agent Specialization: Single responsibility per agent
- Shared State Communication: Indirect agent communication via state
- Function Tools: Structured operations for reliable data exchange
- ToolContext Injection: Proper state access in tools
- Async Session Management: Non-blocking conversation handling
```
User Input: "Hi lets get to the bottom of my clients copilot usage"
        ↓
[CONFIG] Context Detection
"copilot" keyword → copilot_usage goals
        ↓
┌─────────────┐    ┌──────────────┐    ┌─────────────────────┐
│ Brain Agent │ →  │ Planner Agent│ →  │  Conversationalist  │
└─────────────┘    └──────────────┘    └─────────────────────┘
      │                   │                       │
      ▼                   ▼                       ▼
 Analyzes &          Reviews goals &       Conducts targeted
 maps insights       manages progress      research interview
 to topics
      │                   │                       │
      ▼                   ▼                       ▼
current_turn_topic:  update_goal_status:   "I'd be happy to help
"Copilot Usage"      "pending → in_progress"  understand your clients'
                                           Copilot usage. Can you
                                           describe a typical..."
```
`ImportError: cannot import name 'LlmAgent'`
- Check the import structure in main.py
- Ensure `google-adk` is properly installed

`ValueError: Missing key inputs argument`
- Verify the `.env` file exists with `GOOGLE_API_KEY`
- Check that the API key is valid

`400 INVALID_ARGUMENT (function_declarations)`
- Ensure function parameters don't use `list` types
- Use `str` parameters and parse within the function

`EOFError` when running main.py
- This is expected when using timeout or Ctrl+C
- Use `python test_system.py` for automated testing
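The `str`-parameter workaround for the function-declaration error above can be sketched as follows; the function name and return shape are illustrative, not the project's actual tools.

```python
# Illustrative workaround: accept a comma-separated string and parse it inside
# the tool, instead of declaring a list parameter (which the Gemini function
# declaration rejects with 400 INVALID_ARGUMENT).

def mark_topics_discussed(topics_csv: str) -> dict:
    topics = [t.strip() for t in topics_csv.split(",") if t.strip()]
    return {"status": "success", "topics": topics}

mark_topics_discussed("copilot_concerns, data_security")
# -> {"status": "success", "topics": ["copilot_concerns", "data_security"]}
```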
Add `--debug` to any input to see:
- Brain analysis output
- Planner strategy details
- Complete insight history
- Guidance decisions
- Topic coverage status
Run the automated tests:

```bash
python test_system.py
```

Or test interactively:

```bash
python main.py
# Then try various inputs:
# - Technical questions
# - Business problems
# - Multi-turn conversations
# - Debug mode with --debug
```

Modify in main.py:
```python
brain_agent = LlmAgent(
    name="BrainAgent",
    model="gemini-1.5-pro-latest",  # Change model here
    instruction=BRAIN_INSTRUCTION,
    # ...
)
```

Add new FunctionTools to agents:
```python
def custom_tool(param: str, tool_context: ToolContext) -> dict:
    # Your custom logic
    return {"status": "success"}

brain_agent = LlmAgent(
    # ...
    tools=[FunctionTool(log_insight), FunctionTool(custom_tool)]
)
```

Modify session initialization in main.py:
```python
session = await session_service.create_session(
    app_name=app_name,
    user_id=user_id,
    session_id=session_id,
    state={
        'brain_insights': [],
        'topic_coverage': {},
        'custom_state': {}  # Add custom state
    }
)
```

- Fork the repository
- Create a feature branch
- Make changes to prompts or core logic
- Test with `python test_system.py`
- Submit a pull request
- ADK Documentation: Google Agent Development Kit
- Context Reference: `context7-context.md` (comprehensive ADK patterns)
- Gemini API: Google AI for Developers
MIT License - See LICENSE file for details
Built with ❤️ using Google Agent Development Kit