[mcp-analysis] MCP Structural Analysis - February 2, 2026 #13261
This discussion was automatically closed because it expired on 2026-02-09T11:22:23.210Z.
This analysis evaluates GitHub MCP tool responses for both quantitative metrics (token size/context efficiency) and qualitative assessment (structural usefulness for autonomous agents). After 30 days of tracking across 239 data points, we've identified patterns that help agents optimize their tool selection.
## Key Findings

### Executive Summary

**What's Working Well:**

**Critical Issues:**
- `get_me` fails with 403 permission errors, making it completely unusable for workflow automation.
- `list_pull_requests` responses remain large even with `perPage=1` due to deeply nested repository objects in head/base branches.

## Full Structural Analysis Report
## Response Size Analysis

### Average Token Count by Toolset

**Efficiency Champions (Green Zone: <1K tokens):**

**Moderate Size (Orange Zone: 1K-10K tokens):**

**Bloat Zone (Red Zone: >10K tokens):**
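The three zones above can be expressed as a simple classifier. A minimal sketch (the thresholds are the ones stated above; the function name is ours):

```python
def size_zone(tokens: int) -> str:
    """Classify a tool response by token count into the report's size zones."""
    if tokens < 1_000:
        return "green"   # Efficiency Champions: <1K tokens
    if tokens <= 10_000:
        return "orange"  # Moderate Size: 1K-10K tokens
    return "red"         # Bloat Zone: >10K tokens

# Token counts reported elsewhere in this analysis
print(size_zone(30))      # get_label
print(size_zone(5_800))   # list_workflows
print(size_zone(17_600))  # list_code_scanning_alerts
```

An agent can use such a check to decide whether to log a tool's response size as a regression.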
## Usefulness Rating for Agentic Work

## 30-Day Trend Analysis

**Key Observations:**
- `list_releases` returning 1.1M chars for a single release query - catastrophic bloat
- … (`perPage=1`)

## Token Efficiency vs Usefulness
### Quadrant Analysis

**🌟 Sweet Spot (Low tokens, High value):**
- `get_label` (30 tokens, 5/5): Perfect minimal response
- `list_branches` (50 tokens, 5/5): Essential git data only
- `list_tags` (90 tokens, 5/5): Clean release references
- `get_file_contents` (150 tokens, 5/5): Efficient file operations
- `list_discussions` (280 tokens, 5/5): Well-structured community data
- `search_repositories` (320 tokens, 4/5): Good discovery tool
- `list_commits` (420 tokens, 4/5): Balanced commit history

**📦 Data Rich (High tokens, High value):**
- `list_workflows` (5.8K tokens, 5/5): Comprehensive but worth it for 30 workflows
- `list_issues` (5.2K tokens, 4/5): Rich data, can be verbose
- `list_pull_requests` (13.5K tokens, 3/5): Useful but bloated

**🚫 Bloat Zone (High tokens, Low value):**
- `list_code_scanning_alerts` (17.6K tokens, 2/5): Severe bloat, needs optimization
- `get_me` (40 tokens, 1/5): Fails with permission error
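The quadrant placement can be approximated from the two numbers alone. An illustrative sketch (the cutoffs are our assumptions, not part of the tracked data; note the report also files the unusable `get_me` under the bloat zone despite its small size):

```python
def quadrant(tokens: int, rating: int) -> str:
    """Place a tool on the token-cost vs usefulness grid used in this report.

    Illustrative thresholds: 'low tokens' is the green zone (<1K);
    'high value' is a usefulness rating of 3/5 or better.
    """
    low_tokens = tokens < 1_000
    high_value = rating >= 3
    if low_tokens and high_value:
        return "sweet spot"
    if high_value:
        return "data rich"
    if not low_tokens:
        return "bloat zone"
    return "low cost, low value"

print(quadrant(30, 5))       # get_label
print(quadrant(13_500, 3))   # list_pull_requests
print(quadrant(17_600, 2))   # list_code_scanning_alerts
```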
## Schema Structure Analysis

### Excellent Structures (Rating 5/5)

- `get_file_contents` (repos): `content`, `sha`
- `get_label` (labels): `name`, `color`, `description`, `id`
- `list_branches` (repos): `name`, `sha`, `protected`
- `list_workflows` (actions): `total_count`, `workflows[]`
- `list_discussions` (discussions): `discussions[]`, `pageInfo`, `totalCount`, `category`

### Good Structures (Rating 4/5)
- `list_issues` (issues): `issues[]`, `pageInfo`, `totalCount`, nested `user`, `labels`
- `list_commits` (repos): `sha`, `commit`, `author`, `committer`, `html_url`
- `search_repositories` (search): `total_count`, `items[]`, `incomplete_results`

### Adequate Structures (Rating 3/5)
- `list_pull_requests` (pull_requests): `number`, `title`, `state`, `body`, `user`, `head`, `base`; full repository objects nested in the `head` and `base` branches mean the response even at `perPage=1` is excessive

### Limited Structures (Rating 2/5)
- `list_code_scanning_alerts` (code_security): `number`, `rule`, `state`, `tool`, `most_recent_instance`

### Poor Structures (Rating 1/5)

- `get_me` (context)

## Tool-by-Tool Analysis (Latest Data)
## Recommendations

### For Agent Developers

#### 1. Prefer High-Efficiency Tools
When you have a choice, use these sweet-spot tools:
- `get_label`, `list_branches`, `list_tags` for git operations (30-90 tokens)
- `list_discussions`, `search_repositories` for discovery (280-320 tokens)
- `list_commits` for history (420 tokens)

#### 2. Use Pagination Parameters Wisely
- `list_pull_requests`: Always use `perPage=1` or minimal values, as responses are extremely nested
- `list_issues`: Be aware that issue bodies can be very large (20KB+ for team status issues)
- `list_workflows`: Reasonable to use higher limits, well-optimized
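On the wire, an MCP tool invocation is a JSON-RPC 2.0 `tools/call` request, so constraining pagination is just a matter of what goes in the `arguments` object. A hedged sketch (the helper is ours; the owner/repo values are placeholders, and the tool/argument names follow the GitHub MCP server as described in this report):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Keep list_pull_requests responses small: request a single entry per page.
body = make_tool_call(1, "list_pull_requests",
                      {"owner": "octocat", "repo": "hello-world", "perPage": 1})
print(body)
```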
#### 3. Avoid These Tools

- `get_me`: Fails with 403 errors, unusable
- `list_code_scanning_alerts` for list queries: Use only when you need detailed rule documentation
- `list_pull_requests`: Use sparingly, consider getting just PR numbers first

### For MCP Server Developers
#### 1. Add Summary/Detail Modes
Tools with severe bloat should support two modes:
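One way a server could expose this is a `detail` flag on the handler that defaults to a trimmed projection. A hypothetical sketch (the `detail` parameter and the chosen summary fields are our assumptions, not the server's actual API):

```python
def pull_request_view(pr: dict, detail: bool = False) -> dict:
    """Return a summary projection by default; the full object only on request."""
    if detail:
        return pr
    # Summary mode keeps only the fields agents usually need first.
    return {k: pr[k] for k in ("number", "title", "state") if k in pr}

pr = {"number": 42, "title": "Fix bug", "state": "open",
      "body": "very long markdown body...", "head": {"repo": {"id": 1}}}
print(pull_request_view(pr))               # small summary
print(pull_request_view(pr, detail=True))  # full object
```

Summary mode would let agents scan cheaply and fetch the full record only for the one item they act on.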
#### 2. Reduce Redundant Nesting
`list_pull_requests` should use repository references instead of full objects:
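For illustration, replacing the repository objects nested under `head` and `base` with `owner/name` reference strings might look like this (a sketch, assuming the usual GitHub API shape where `head.repo.full_name` exists):

```python
def slim_pull_request(pr: dict) -> dict:
    """Swap nested repository objects for 'owner/name' reference strings."""
    slim = dict(pr)
    for side in ("head", "base"):
        branch = dict(pr.get(side, {}))
        repo = branch.get("repo")
        if isinstance(repo, dict):
            # Keep only a reference; drop the duplicated repository payload.
            branch["repo"] = repo.get("full_name")
        slim[side] = branch
    return slim

pr = {"number": 7,
      "head": {"ref": "feature", "repo": {"full_name": "octo/app", "id": 1}},
      "base": {"ref": "main", "repo": {"full_name": "octo/app", "id": 1}}}
print(slim_pull_request(pr)["head"]["repo"])  # 'octo/app'
```

Since the same repository object is usually duplicated in both branches, this alone removes most of the nesting that inflates each list entry.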
#### 3. Fix Permission Documentation

`get_me` consistently fails with 403 errors. Either:

### For Optimal Context Usage
**Context Budget Guidelines:**

**Sample Agentic Workflow Context Budget:**
For a typical 200K context window:
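As a rough planning aid, a budget can be allocated as fractions of the window. The split below is purely illustrative, not taken from the tracked data:

```python
def context_budget(window: int, allocations: dict) -> dict:
    """Split a context window into per-purpose token budgets."""
    assert sum(allocations.values()) <= 1.0 + 1e-9, "fractions exceed the window"
    return {name: int(window * frac) for name, frac in allocations.items()}

# Hypothetical split for a 200K-token window.
budget = context_budget(200_000, {
    "system_and_instructions": 0.05,
    "tool_responses": 0.40,
    "working_notes": 0.15,
    "reserve": 0.40,
})
print(budget["tool_responses"])  # 80000
```

With a 40% tool-response budget, a single 17.6K-token `list_code_scanning_alerts` call would consume over a fifth of it, which is why the zone classification above matters.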
## Historical Patterns (30-Day Window)

**Data Coverage:**

**Notable Events:**
- `list_releases` catastrophic bloat (282K tokens for a single release with 14 binary assets)
- … (`perPage=1`)

**Trend Direction:**
## Next Steps

- Avoid `get_me` and `list_code_scanning_alerts` in agentic workflows

This analysis will continue tracking MCP structural efficiency daily. Historical data is maintained in a 30-day rolling window at `/tmp/gh-aw/cache-memory/mcp_analysis.jsonl`.
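The rolling window itself is simple to maintain: append one JSON record per run, then drop records older than 30 days. A minimal sketch (the record fields and timestamps are illustrative; only the JSONL approach comes from this report):

```python
WINDOW_SECONDS = 30 * 24 * 3600  # 30-day rolling window

def prune_window(records: list[dict], now: float) -> list[dict]:
    """Keep only records whose 'ts' (Unix seconds) falls inside the window."""
    return [r for r in records if now - r["ts"] <= WINDOW_SECONDS]

records = [
    {"ts": 0, "tool": "list_releases", "tokens": 282_000},   # stale, dropped
    {"ts": 86_400 * 40, "tool": "get_label", "tokens": 30},  # recent, kept
]
print(len(prune_window(records, now=86_400 * 41)))  # 1
```

Pruning on every append keeps the file bounded regardless of how many daily data points accumulate.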