How to Detect Tool Call Loops in AI Agents (Python) #201
bmdhodl
started this conversation in
Show and tell
AI agents that call tools can get stuck in loops: calling the same tool with the same arguments repeatedly, burning through tokens while doing nothing useful.
This happens with every LLM provider (OpenAI, Anthropic, Google) and every framework. The root cause is usually that the model doesn't know how to proceed, so it retries the same action.
Three types of loops
1. Exact repeats: Same tool, same arguments, over and over.
2. Fuzzy repeats: Same tool, slightly different arguments.
3. Alternation (A-B-A-B): Two tools calling each other in a cycle.
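The three patterns are easiest to see in a raw trace. Here is a hypothetical `(tool, args)` trace for each (the tool names and arguments are made up for illustration):

```python
# Hypothetical (tool, args) traces illustrating each loop type.

# 1. Exact repeat: identical tool and arguments every time.
exact = [("search", {"q": "rust"})] * 3

# 2. Fuzzy repeat: same tool, paraphrased arguments.
fuzzy = [
    ("search", {"q": "rust"}),
    ("search", {"q": "rust lang"}),
    ("search", {"q": "the rust language"}),
]

# 3. Alternation: two tools calling each other back and forth.
alternation = [
    ("read_file", {"path": "a.py"}),
    ("write_file", {"path": "a.py"}),
] * 2
```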
Detecting exact loops
`LoopGuard` uses a sliding window: it tracks the last N tool calls and triggers when the same `(tool, args_hash)` pair appears more than `max_repeats` times.
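A minimal sketch of such a sliding-window check (illustrative only; agent47's actual class names and signatures may differ):

```python
import hashlib
import json
from collections import deque


class LoopGuard:
    """Sketch of a sliding-window loop detector (not agent47's real API)."""

    def __init__(self, window: int = 10, max_repeats: int = 3):
        self.window = deque(maxlen=window)  # last N (tool, args_hash) pairs
        self.max_repeats = max_repeats

    def check(self, tool: str, args: dict) -> bool:
        """Record a tool call; return True if it looks like a loop."""
        # Hash a canonical JSON form so dicts with equal content compare equal.
        args_hash = hashlib.sha256(
            json.dumps(args, sort_keys=True).encode()
        ).hexdigest()
        key = (tool, args_hash)
        self.window.append(key)
        return self.window.count(key) > self.max_repeats
```

Because the window has a fixed length, old calls age out and a tool that is legitimately reused later won't trip the guard.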
Detecting fuzzy loops and alternation
`FuzzyLoopGuard` ignores arguments and looks only at tool names. This catches the case where the model paraphrases its input on each retry.
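A sketch of the name-only variant (again illustrative, not the library's actual code). Counting names instead of `(tool, args)` pairs also catches A-B-A-B alternation, since each of the two alternating names repeats until it crosses the threshold:

```python
from collections import deque


class FuzzyLoopGuard:
    """Sketch of a name-only loop detector (not agent47's real API)."""

    def __init__(self, window: int = 10, max_repeats: int = 3):
        self.names = deque(maxlen=window)  # last N tool names
        self.max_repeats = max_repeats

    def check(self, tool: str) -> bool:
        """Record a tool name; return True once it repeats too often."""
        self.names.append(tool)
        return self.names.count(tool) > self.max_repeats
```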
Automatic detection with LangChain
If you're using LangChain, the callback handler does the checking for you:
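A sketch of how such a handler can plug in. LangChain fires `on_tool_start` before each tool invocation, so raising there aborts a looping call. The class and guard below are hypothetical stand-ins (shown framework-free so the snippet runs without LangChain installed); agent47's real handler name and behavior may differ:

```python
from collections import deque


class SimpleGuard:
    """Minimal stand-in loop detector (hypothetical, not agent47's API)."""

    def __init__(self, window: int = 10, max_repeats: int = 3):
        self.calls = deque(maxlen=window)
        self.max_repeats = max_repeats

    def check(self, tool: str, input_str: str) -> bool:
        self.calls.append((tool, input_str))
        return self.calls.count((tool, input_str)) > self.max_repeats


class LoopGuardCallback:
    """Mimics LangChain's BaseCallbackHandler.on_tool_start hook:
    the agent's tool loop is interrupted by raising when a loop is detected."""

    def __init__(self, guard: SimpleGuard):
        self.guard = guard

    def on_tool_start(self, serialized: dict, input_str: str, **kwargs) -> None:
        tool = serialized.get("name", "unknown")
        if self.guard.check(tool, input_str):
            raise RuntimeError(f"Tool loop detected: {tool!r} repeated too often")
```

In real LangChain usage you would subclass `BaseCallbackHandler` and pass the handler via `callbacks=[...]` when invoking the agent.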
Install
Zero dependencies, MIT licensed, Python 3.9+.
GitHub: https://github.com/bmdhodl/agent47
What loop patterns have you seen in your agents? I'm collecting failure modes to improve detection — drop a comment with your experience.