AI-powered Slack bot that classifies inbound leads from HubSpot and researches promising ones.
When HubSpot posts a lead to your Slack channel, Leads Agent:
- Parses contact info (name, email, company)
- Does a fast go/no-go triage (promising vs ignore)
- If promising, researches the company/contact and produces a 1–5 score plus a recommended action
- Posts a threaded reply with the decision and context
Quick start:

```bash
git clone https://github.com/yourusername/leads-agent.git
cd leads-agent
uv venv && source .venv/bin/activate
uv pip install -e .
leads-agent init   # Interactive setup
leads-agent run    # Start server
```

Configuration can be set interactively with `leads-agent init`, or via environment variables:

```bash
# Slack (Socket Mode)
export SLACK_BOT_TOKEN="xoxb-..." # Bot User OAuth Token
export SLACK_APP_TOKEN="xapp-..." # App-Level Token (connections:write)
export SLACK_CHANNEL_ID="C..." # Production channel
export SLACK_TEST_CHANNEL_ID="C..." # Optional: for test mode
# LLM (OpenAI by default)
export OPENAI_API_KEY="sk-..."
export LLM_MODEL_NAME="gpt-5-nano" # Optional
# Behavior
export DRY_RUN="true"              # Set to "false" to post replies
```

Verify the result with `leads-agent config`.

To create the Slack app:

- Go to api.slack.com/apps
- Click Create New App → From an app manifest
- Paste `slack-app-manifest.yml`
- Install to workspace
Get credentials:
| Credential | Location |
|---|---|
| `SLACK_BOT_TOKEN` | OAuth & Permissions → Bot User OAuth Token |
| `SLACK_APP_TOKEN` | Basic Information → App-Level Tokens → Generate (scope: `connections:write`) |
| `SLACK_CHANNEL_ID` | Right-click channel → View details → Copy ID |
Invite the bot:
```
/invite @Leads Agent
```
The bot uses Socket Mode (outbound WebSocket), so no public URL or HTTPS setup is required.
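To illustrate why no inbound endpoint is needed, here is a minimal Socket Mode listener sketched with `slack_bolt`; this is illustrative only and may not match the project's internals:

```python
# Minimal Socket Mode sketch (illustrative; not the project's actual code).
# The app-level token opens an outbound WebSocket, so no inbound HTTP
# endpoint or public URL is required.
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("message")
def handle_message(event, say):
    # A real handler would parse the HubSpot lead here and reply in-thread.
    say(text="Lead received", thread_ts=event["ts"])

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```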
CLI reference:

```bash
leads-agent init                  # Setup wizard (includes prompt config)
leads-agent config                # Show configuration
leads-agent prompts               # Show prompt configuration
leads-agent prompts --full        # Show rendered prompts
leads-agent run                   # Start bot (Socket Mode)

# Classification
leads-agent classify "message"    # Triage; if promising, auto research + score

# Event Collection & Testing
leads-agent collect --keep 20     # Collect raw Socket Mode events
leads-agent backtest events.json  # Test classifier on collected events
leads-agent test                  # Listen via Socket Mode, post to test channel

# Debugging
leads-agent pull-history --limit 10 --print
```

| Command | Description | Output |
|---|---|---|
| `run` | Production mode - reply in threads | Production (threads) |
| `test` | Test mode - post to test channel | Test channel (main) |
| `backtest` | Offline testing from collected events | Console only |
- Collect events: `leads-agent collect --keep 20` captures raw Socket Mode events
- Backtest offline: `leads-agent backtest collected_events.json` tests the classifier
- Test live: `leads-agent test` listens for real events and posts to the test channel
- Go live: `leads-agent run` runs production mode with thread replies
| Option | Description |
|---|---|
| `--limit`, `-n` | Number of leads/events to process |
| `--max-searches` | Limit web searches per lead (default: 4) |
| `--dry-run` / `--live` | Override `DRY_RUN` config |
| `--debug`, `-d` | Show agent steps and token usage |
| `--verbose`, `-v` | Show full message history |
Examples:

```bash
# Collect events for testing
leads-agent collect --keep 10 --output hubspot_events.json

# Backtest on collected events
leads-agent backtest hubspot_events.json --debug

# Test mode - live events to test channel
leads-agent test --channel C0TEST123

# Production mode (thread replies)
leads-agent run
```

OpenAI (default):

```bash
export OPENAI_API_KEY="sk-..."
export LLM_MODEL_NAME="gpt-5.2"   # optional, defaults to gpt-5-nano
```

Ollama (local):

```bash
ollama serve
ollama pull llama3.1:8b
export LLM_BASE_URL="http://localhost:11434/v1"
export LLM_MODEL_NAME="llama3.1:8b"
```

Any OpenAI-compatible API works: set `LLM_BASE_URL`, `LLM_MODEL_NAME`, and `OPENAI_API_KEY`.
Customize the classification behavior for your deployment without modifying code. Configure:
- Company context – your company name and services
- Ideal Client Profile (ICP) – target industries, company sizes, roles
- Qualifying questions – custom criteria for lead evaluation
- Research focus areas – what to look for when enriching leads
Create `prompt_config.json` in your project root (copy from `prompt_config.example.json`):

```bash
cp prompt_config.example.json prompt_config.json
# Edit with your company's ICP, questions, etc.
```

Or use `leads-agent init` to create it interactively.

The file is auto-discovered from the current directory. To use a different location:

```bash
export PROMPT_CONFIG_PATH=/path/to/my-config.json
```

Example configuration:

```json
{
"company_name": "Acme Consulting",
"services_description": "AI/ML consulting and custom software development",
"icp": {
"description": "Mid-market B2B SaaS companies",
"target_industries": ["SaaS", "FinTech", "HealthTech"],
"target_company_sizes": ["SMB", "Mid-Market"],
"target_roles": ["CTO", "VP Engineering", "Head of Data"]
},
"qualifying_questions": [
"Does this look like a real business need?",
"Is there budget indication or enterprise context?"
]
}
```

Inspect the prompt configuration:

```bash
leads-agent prompts          # Show configuration summary
leads-agent prompts --full   # Show full rendered prompts
leads-agent prompts --json   # Output as JSON
```

Troubleshooting:

| Issue | Solution |
|---|---|
| "Missing SLACK_APP_TOKEN" | Generate App-Level Token with connections:write scope |
| No classifications happening | Verify bot is invited to channel; check HubSpot is posting |
| Backtest shows no leads | Run pull-history --print to verify HubSpot messages exist |
| LLM errors | Check OPENAI_API_KEY; for Ollama ensure server is running |
Lead processing is wrapped in a single Logfire span (`lead.process`) so the triage, research, and scoring agent traces are grouped under one lead. In Slack-driven flows, the span uses the Slack `thread_ts` as the `lead_id` for easy correlation.
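The pattern looks roughly like this (a sketch of the span wrapping, not the exact implementation):

```python
# Sketch of the tracing pattern: one parent span per lead; the triage,
# research, and scoring agent runs nest under it.
import logfire

logfire.configure()

def process_lead(message_text: str, thread_ts: str) -> None:
    # thread_ts doubles as the lead_id so traces can be matched to Slack threads
    with logfire.span("lead.process", lead_id=thread_ts):
        ...  # run triage, research, and scoring here
```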
Further documentation:

- Architecture Guide – data flow, Slack manifest, classification system
- Deployment Guide – deployment instructions

License: MIT (see LICENSE).