
Conversation


@Devasy (Contributor) commented Jan 4, 2026

Summary

Implements per-agent token tracking to provide granular visibility into token consumption by each CrewAI agent in the workflow.

Closes #39

Changes Made

1. Updated crew.py

  • After crew.kickoff() completes, capture token usage from each agent's LLM using agent.llm.get_token_usage_summary()
  • Added per-agent metrics collection for all 4 agents:
    • Step Planner Agent
    • Element Identifier Agent
    • Code Assembler Agent
    • Code Validator Agent
  • Return a per_agent_metrics dictionary alongside the existing return values (see the sketch after this list)
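
A minimal sketch of this change, assuming a dict mapping agent names to the four Agent instances; the PR confirms crew.kickoff() and agent.llm.get_token_usage_summary(), but the summary field names below are assumptions:

```python
def run_crew(crew, agents_by_name):
    """Run the workflow, then collect per-agent token usage."""
    result = crew.kickoff()

    per_agent_metrics = {}
    for name, agent in agents_by_name.items():
        # Each agent owns its LLM instance, so these counters are per-agent.
        summary = agent.llm.get_token_usage_summary()
        per_agent_metrics[name] = {
            "prompt_tokens": summary.prompt_tokens,          # assumed field name
            "completion_tokens": summary.completion_tokens,  # assumed field name
            "total_tokens": summary.total_tokens,            # assumed field name
        }

    return result, per_agent_metrics
```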

2. Updated workflow_service.py

  • Capture the new per_agent_metrics return value from run_crew()
  • Log per-agent breakdown for debugging visibility
  • Populate the token_usage field in WorkflowMetrics for historical persistence (see the sketch after this list)
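
A sketch of the service side under the same assumptions; the logger setup and the WorkflowMetrics constructor call are illustrative, not taken from the PR:

```python
import logging

logger = logging.getLogger(__name__)

result, per_agent_metrics = run_crew(crew, agents_by_name)

# Debug visibility: produces the breakdown shown under "Example Output" below.
logger.info("📊 Per-agent token breakdown:")
for name, usage in per_agent_metrics.items():
    logger.info(
        "   • %s: %d tokens (prompt: %d, completion: %d)",
        name,
        usage["total_tokens"],
        usage["prompt_tokens"],
        usage["completion_tokens"],
    )

# Historical persistence via the existing token_usage field.
metrics = WorkflowMetrics(token_usage=per_agent_metrics)
```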

3. Added Documentation

  • Created comprehensive implementation guide (PER_AGENT_TOKEN_TRACKING_GUIDE.md)

Benefits

  • Cost Attribution: Know exactly which agent consumes the most tokens
  • Optimization Opportunities: Identify agents that need context pruning
  • Debugging: Track token usage anomalies per agent
  • Analytics: Historical trends for each agent's efficiency

Example Output

After this implementation, logs will show:

📊 Per-agent token breakdown:
   • step_planner: 645 tokens (prompt: 512, completion: 133)
   • element_identifier: 823 tokens (prompt: 598, completion: 225)
   • code_assembler: 754 tokens (prompt: 465, completion: 289)
   • code_validator: 325 tokens (prompt: 248, completion: 77)

Technical Approach

This implementation follows the same pattern used in CrewAI PR #4132:

  • Each agent has its own LLM instance (created via get_llm() in agents.py; see the sketch after this list)
  • The LLM instance maintains cumulative token counters via get_token_usage_summary()
  • By querying this after workflow completion, we get each agent's total consumption
  • Metrics are stored in the existing WorkflowMetrics.token_usage field
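
The property the approach relies on is isolation: get_llm() hands each agent its own LLM object, so cumulative counters never mix across agents. A minimal sketch, assuming CrewAI's LLM class and a placeholder model name:

```python
from crewai import LLM

def get_llm():
    # A fresh instance per call: a single shared LLM object would merge all
    # four agents' token counters into one cumulative summary.
    return LLM(model="gemini/gemini-2.0-flash")  # model name is a placeholder

assert get_llm() is not get_llm()  # distinct instances → per-agent counters
```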

Testing

  • No syntax errors in modified files
  • Existing functionality not impacted
  • Per-agent metrics properly captured and logged
  • Metrics persisted in WorkflowMetrics model

Breaking Changes

None - This is a purely additive feature that enhances existing metrics tracking.

coderabbitai bot commented Jan 4, 2026

Important: Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



sonarqubecloud bot commented Jan 4, 2026


@Devasy (Contributor, Author) commented Jan 4, 2026

@monkscode, this PR will start working as expected once this bug is fixed: crewAIInc/crewAI#4172

@monkscode monkscode deleted the branch monkscode:develop January 19, 2026 12:17
@monkscode monkscode closed this Jan 19, 2026