🎯 Goal
Reduce false positives in code reviews by improving the system prompt with specific guidelines.
📊 Complexity
Quick Win (1-2 hours)
🔍 Problem
The LLM is generating false positives like:
- Reporting `${{ secrets.X }}` as a hardcoded secret (it's correct GitHub Actions syntax)
- Flagging `os.chmod` on config files as a performance issue (it's a security best practice)
- Reporting existing try-except blocks as "missing error handling"
- Suggesting premature optimizations for small lists (< 10 items)
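For instance, the `os.chmod` case above looks like this in practice (the temporary config file here is created only for illustration):

```python
import os
import stat
import tempfile

# Code the reviewer wrongly flags: tightening permissions on a config
# file is a security best practice, not a performance issue.
fd, path = tempfile.mkstemp(suffix=".conf")
os.close(fd)

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600: owner read/write only

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```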
✅ Solution
Add specific guidelines to the system prompt in `iara/prompt.py`:
IMPORTANT GUIDELINES:
- GitHub Actions: `${{ secrets.X }}` is CORRECT, NOT hardcoded
- `os.chmod` on config files: security best practice, NOT a performance issue
- Existing try-except blocks: NOT missing error handling
- Small lists (< 10 items): O(n) vs O(1) is negligible, NOT worth optimizing
- Only report REAL problems that would cause bugs, security issues, or significant performance degradation
- When in doubt, DON'T report
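A minimal sketch of how the guidelines could be wired into `iara/prompt.py` — the module's actual structure is unknown, so the constant and function names below are hypothetical:

```python
# Hypothetical sketch; real names in iara/prompt.py may differ.
FALSE_POSITIVE_GUIDELINES = """\
IMPORTANT GUIDELINES:
- GitHub Actions: ${{ secrets.X }} is CORRECT, NOT hardcoded
- os.chmod on config files: security best practice, NOT a performance issue
- Existing try-except blocks: NOT missing error handling
- Small lists (< 10 items): O(n) vs O(1) is negligible, NOT worth optimizing
- Only report REAL problems that would cause bugs, security issues,
  or significant performance degradation
- When in doubt, DON'T report
"""


def build_system_prompt(base_prompt: str) -> str:
    """Append the false-positive guidelines to an existing system prompt."""
    return f"{base_prompt}\n\n{FALSE_POSITIVE_GUIDELINES}"


prompt = build_system_prompt("You are a careful code reviewer.")
print("IMPORTANT GUIDELINES" in prompt)  # True
```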
📝 Implementation Steps
- Read the current system prompt in `iara/prompt.py`
- Add new section with false positive guidelines
- Test with known false positive cases
- Verify reduction in false positive rate
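Steps 3–4 could be checked with a small regression harness like the sketch below; `review` stands in for whatever callable iara exposes (the real API may differ), and the snippets are the known false-positive cases from the Problem section:

```python
# Known-clean snippets that previously triggered false positives.
KNOWN_FALSE_POSITIVES = [
    "password: ${{ secrets.DB_PASSWORD }}",  # valid GitHub Actions syntax
    "os.chmod('settings.conf', 0o600)",      # security best practice
]


def count_false_positives(review, cases):
    """Count how many known-clean snippets the reviewer still flags.

    `review` is assumed to return a (possibly empty) list of findings.
    """
    return sum(1 for snippet in cases if review(snippet))


# With a dummy reviewer that flags nothing, the count is zero:
print(count_false_positives(lambda snippet: [], KNOWN_FALSE_POSITIVES))  # 0
```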
🎁 Expected Impact
- 50-70% reduction in false positives
- More focused and actionable reviews
- Better user trust in the tool
Related to Groq provider integration (#67) where false positives were identified.