A common failure mode in autonomous agents: the LLM decides to call the same tool with the same arguments over and over. It gets a result, doesn't know what to do with it, and tries again. And again. And again.
This is different from a retry — the agent genuinely thinks each call is a new idea. The model has no memory that it already tried this.
## What a loop looks like
Your agent is supposed to research a topic. Instead:
- Calls `web_search("AI safety research papers")` → gets results
- Calls `web_search("AI safety research papers")` → same results
- Calls `web_search("AI safety research papers")` → same results
- ... burns through your API budget
## Catching it at runtime
AgentGuard's LoopGuard watches for this pattern and kills the agent before it compounds:
```python
from agentguard import LoopGuard, LoopDetected

# Trigger after 3 identical calls in a window of 6
guard = LoopGuard(max_repeats=3, window=6)

# In your agent loop, check each tool call:
for tool_name, tool_args in agent_tool_calls:
    try:
        guard.check(tool_name, tool_args)
    except LoopDetected as e:
        print(f"Loop detected: {e}")
        break

    # ... execute the tool call
```
The sliding window means the agent can call the same tool multiple times with different arguments (that's productive work). It only triggers when the exact same (tool, args) pair repeats.
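To make the mechanism concrete, here is a minimal re-implementation of that sliding-window check in plain Python — a sketch of the idea, not AgentGuard's actual source:

```python
from collections import deque
import json


class LoopDetected(Exception):
    """Raised when the same (tool, args) pair repeats too often."""


class LoopGuard:
    def __init__(self, max_repeats=3, window=6):
        self.max_repeats = max_repeats
        # deque(maxlen=...) gives us the sliding window for free:
        # old calls fall off the back automatically
        self.calls = deque(maxlen=window)

    def check(self, tool_name, tool_args):
        # Canonicalize args so dict key order doesn't matter
        key = (tool_name, json.dumps(tool_args, sort_keys=True))
        self.calls.append(key)
        if self.calls.count(key) >= self.max_repeats:
            raise LoopDetected(
                f"{tool_name} called {self.max_repeats}x with identical args"
            )
```

Canonicalizing the args with `json.dumps(sort_keys=True)` matters because LLMs don't emit arguments in a stable key order, and `{"q": "x", "n": 5}` should count as the same call as `{"n": 5, "q": "x"}`.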
## Fuzzy loops
Sometimes the agent varies the arguments slightly — `search("weather NYC")` then `search("NYC weather")`. For this, FuzzyLoopGuard tracks tool-level repetition regardless of arguments:
```python
from agentguard import FuzzyLoopGuard

# Stop if same tool called 5+ times, or A-B-A-B pattern detected
guard = FuzzyLoopGuard(max_tool_repeats=5, max_alternations=3, window=10)
```
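As a sketch of what the fuzzy check could look like internally (again my own re-implementation of the idea, not the library's code):

```python
from collections import deque


class LoopDetected(Exception):
    """Raised when a tool-level repetition or alternation loop is found."""


class FuzzyLoopGuard:
    def __init__(self, max_tool_repeats=5, max_alternations=3, window=10):
        self.max_tool_repeats = max_tool_repeats
        self.max_alternations = max_alternations
        # Track tool names only -- arguments are deliberately ignored
        self.tools = deque(maxlen=window)

    def check(self, tool_name, tool_args=None):
        self.tools.append(tool_name)

        # Same tool too often in the window, regardless of args
        if self.tools.count(tool_name) >= self.max_tool_repeats:
            raise LoopDetected(
                f"{tool_name} called {self.max_tool_repeats}x in window"
            )

        # A-B-A-B alternation: the last 2*max_alternations calls flip
        # back and forth between exactly two tools
        n = 2 * self.max_alternations
        recent = list(self.tools)[-n:]
        if (
            len(recent) == n
            and len(set(recent)) == 2
            and all(recent[i] != recent[i + 1] for i in range(n - 1))
        ):
            raise LoopDetected(f"alternation loop: {recent[-2]} <-> {recent[-1]}")
```

Tracking only tool names trades precision for recall — it will flag a burst of genuinely distinct searches too — which is why a wider window and higher threshold than the exact guard makes sense here.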
## Using with LangChain
If you're on LangChain, the callback handler does this automatically.
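As a rough sketch of the wiring — the class name and AgentGuard's actual LangChain integration are assumptions on my part, so check the repo README for the shipped handler — a callback handler would run the loop check from `on_tool_start` (in real code it would subclass LangChain's `BaseCallbackHandler`):

```python
from collections import deque


class LoopDetected(Exception):
    """Raised when the same tool call repeats too often."""


class LoopGuardHandler:
    """Hypothetical handler sketch. In a real integration this would
    subclass langchain_core.callbacks.BaseCallbackHandler, so the
    agent invokes it on every tool call with no changes to the loop."""

    def __init__(self, max_repeats=3, window=6):
        self.max_repeats = max_repeats
        self.calls = deque(maxlen=window)

    # Mirrors the (serialized, input_str) shape of
    # BaseCallbackHandler.on_tool_start
    def on_tool_start(self, serialized, input_str, **kwargs):
        key = (serialized.get("name"), input_str)
        self.calls.append(key)
        if self.calls.count(key) >= self.max_repeats:
            raise LoopDetected(
                f"loop: {key[0]} repeated {self.max_repeats}x"
            )
```

The appeal of the callback route is that the guard rides along with the framework's own instrumentation instead of requiring you to restructure your agent loop.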
Zero dependencies, MIT licensed: https://github.com/bmdhodl/agent47
What loop patterns have you seen in your agents? I've seen exact repeats, A-B-A-B alternation, and slow drift. Curious what others encounter.