Pivot proposal: Focus on human-in-the-loop constraint curation for agent.md #3

@SscSPs

Description

Hey! Thanks for building this.

I’ve been diving into how best to structure agent.md files and came across a bunch of recent research that dropped this week. Thought I'd share it, since it could be a really cool direction for this tool.

TL;DR: It turns out that auto-generated codebase summaries actually hurt agent performance and drive up costs. The models get distracted by stuff they could easily grep for on the fly.

The consensus from these papers/threads is that agents do way better with short, human-curated lists of strict constraints (basically telling the AI what not to do, or flagging weird repo quirks) rather than explanations of the whole architecture.

Thought it might be an interesting pivot for your repo! Instead of summarizing the codebase, maybe the tool could focus on helping devs easily curate these specific rules, warn them when the agent.md gets too long, or even pull "lessons learned" from recent failed PRs.
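To make the length-warning idea concrete, here's a minimal sketch of what such a check might look like. Everything here is hypothetical: the `check_agent_md` name, the 150-line soft budget, and the warning wording are illustrative assumptions, not anything from the papers or your tool.

```python
import sys
from pathlib import Path

# Illustrative soft budget; the research suggests "short", not a specific number.
MAX_LINES = 150

def check_agent_md(path="agent.md"):
    """Warn if the agent.md exceeds a soft line budget. Returns True if within budget."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    if len(lines) > MAX_LINES:
        print(
            f"warning: {path} has {len(lines)} lines (> {MAX_LINES}); "
            "consider trimming it down to short, strict constraints",
            file=sys.stderr,
        )
        return False
    return True
```

Something like this could run as a pre-commit hook or a CI step, nudging the file back toward a small curated rule list.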

Would love to hear your thoughts! Here are the links I was looking at:
Theo's breakdown: https://youtu.be/GcNu6wrLTJc?si=ludpsYT8810LJ5yT
HN Discussion: https://news.ycombinator.com/item?id=47034087
The data/papers: https://arxiv.org/abs/2602.11988 & https://arxiv.org/pdf/2602.12670

Cheers,
Sahil Soni
