An AI-powered developer assistant that analyzes code repositories, detects quality issues, generates actionable reports, visualizes trends, and even answers natural-language questions about your codebase.
It goes beyond simple linting by combining:
- AST-based static analysis
- Large Language Model (Gemini) for semantic reasoning
- Retrieval-Augmented Generation (RAG) for handling large repos
- Developer-friendly visualizations (charts, dependency graphs, trends)
- 🔍 Code Quality Analysis – Scan Python & JavaScript repos for quality issues
- 📑 Reports – Summaries with severity scoring & actionable fixes
- 🕸️ Dependency Visualization – Graphs showing file/module relationships
- 📊 Trend Tracking – Track regressions or improvements over time
- 💬 Interactive Chat – Natural-language Q&A powered by RAG + LLM
- 🖥️ CLI Interface – Easy commands: `analyze`, `report`, `trend`, `chat`
- 🌐 REST API (FastAPI) – Endpoints: `/analyze`, `/report`, `/trend`, `/chat`
- ☁️ Cloud Deployment – Hosted on Railway with Swagger UI
The agent follows a hybrid pipeline:
- Loader → Reads files (Python/JS) from local path or GitHub repo
- Analyzer → AST parsing + static analysis + Gemini LLM evaluation
- Reporter → Generates summaries, charts, dependency graphs, and trend logs
- Trend Tracker → Compares previous runs for progress tracking
- Chat (RAG) → Conversational Q&A over repo chunks
- FastAPI Server → Exposes CLI features as REST endpoints
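The pipeline above can be sketched in a few lines. This is an illustrative outline only — the function and class names here are hypothetical, not the project's actual API, and the static rule is a toy stand-in for the real AST + Gemini checks:

```python
# Minimal sketch of the Loader → Analyzer → Reporter pipeline.
# Names and the length-based rule are illustrative, not the project's API.
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Issue:
    file: str
    message: str
    severity: str  # e.g. "low", "medium", "high"

@dataclass
class Report:
    issues: list[Issue] = field(default_factory=list)

def load_files(repo: str) -> dict[str, str]:
    """Loader: read Python/JS sources from a local path."""
    root = Path(repo)
    exts = {".py", ".js"}
    return {str(p): p.read_text() for p in root.rglob("*") if p.suffix in exts}

def analyze(files: dict[str, str]) -> Report:
    """Analyzer: static checks (LLM evaluation would plug in here)."""
    report = Report()
    for name, src in files.items():
        for lineno, line in enumerate(src.splitlines(), 1):
            if len(line) > 100:  # toy static rule
                report.issues.append(Issue(name, f"line {lineno} over 100 chars", "low"))
    return report

def summarize(report: Report) -> str:
    """Reporter: condense issues into a summary."""
    return f"{len(report.issues)} issue(s) found"
```

The real agent wires these stages through Typer (CLI) and FastAPI (REST), with the Trend Tracker and RAG chat sitting on top of the Reporter's output.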
Clone the repo:

```bash
git clone https://github.com/Anas255-exe/solid-meme
cd code-quality-agent
```

Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Add your Gemini API key:

```bash
echo "GEMINI_API_KEY=your-key" > .env
```

Run the CLI:

```bash
# Analyze a repo
python main.py analyze <path-to-code>

# Generate reports
python main.py report

# Compare trends
python main.py trend

# Ask questions
python main.py chat
```

Run the API locally:

```bash
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```

Swagger UI: http://localhost:8000/docs
Deployed on Railway: https://solid-meme-production.up.railway.app/docs
Endpoints:
- `POST /analyze` – Analyze repo (local/GitHub)
- `GET /report` – Get summarized report
- `GET /trend` – Compare last two runs
- `POST /chat` – Ask natural-language questions
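A quick sketch of calling the hosted API from Python. The request/response shapes here are assumptions — check the Swagger UI at `/docs` for the actual schemas:

```python
# Build a POST /analyze request for the hosted API.
# The {"repo": ...} payload shape is an assumption; see /docs for the real schema.
import json
from urllib import request

BASE = "https://solid-meme-production.up.railway.app"

def build_analyze_request(repo_url: str) -> request.Request:
    """Prepare a POST /analyze request for a GitHub repo."""
    payload = json.dumps({"repo": repo_url}).encode()
    return request.Request(
        f"{BASE}/analyze",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_analyze_request("https://github.com/Anas255-exe/solid-meme")
# request.urlopen(req) then returns the JSON analysis (network required).
```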
- Hybrid AST + LLM → AST for structural precision, LLM for semantic reasoning
- RAG for large repos → prevents token overflow, keeps analysis scalable
- Visualizations → `matplotlib` + `networkx` for graphs and charts
- Deployment on Railway → lightweight FastAPI + Uvicorn for cloud access
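To illustrate the AST half of the hybrid approach, here is a minimal example of the kind of structural check it enables — a toy rule (missing docstrings, over-long functions), not the project's actual rule set:

```python
# Toy AST check: flag functions missing docstrings or exceeding a line budget.
import ast

def check_functions(source: str, max_lines: int = 30) -> list[str]:
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                findings.append(f"{node.name}: missing docstring")
            length = node.end_lineno - node.lineno + 1  # available in Python 3.8+
            if length > max_lines:
                findings.append(f"{node.name}: {length} lines (> {max_lines})")
    return findings

code = "def f(x):\n    return x * 2\n"
print(check_functions(code))  # ['f: missing docstring']
```

Checks like this are exact and fast; the LLM layer then handles what the AST cannot see, such as misleading names or semantically dead code.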
- Large repo handling → solved with chunking + RAG retrieval
- Free-tier constraints → optimized Gemini API calls, lightweight outputs
- Balancing speed & accuracy → static analysis (fast) + LLM reasoning (deep)
- Cloud limits → tuned FastAPI for Railway resource caps
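The chunking idea behind the RAG retrieval can be shown with a hand-rolled splitter. The project uses LangChain's splitters; this sketch only illustrates keeping each chunk under a size budget with a small overlap for context:

```python
# Hand-rolled sketch of source chunking for RAG retrieval.
# The real project uses LangChain; this just shows the size-budget idea.
def chunk_source(text: str, max_chars: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into size-bounded chunks on line boundaries, with overlap."""
    lines = text.splitlines(keepends=True)
    chunks, current = [], ""
    for line in lines:
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = current[-overlap:]  # keep a tail of context in the next chunk
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each chunk is embedded and indexed; at question time only the most relevant chunks are retrieved and sent to Gemini, which is what keeps large repos under the token limit.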
- The live Railway API is available for 30 days or until the $5 free credits run out.
- Gemini API usage is rate-limited; heavy queries may cause bottlenecks.
- Performs best on small-to-medium repos; very large repos may be slow.
- If the live API is unavailable, you can run the agent locally with the CLI.
- Python 3.10+
- LLM: Google Gemini API (`google-generativeai`)
- Frameworks: FastAPI, Typer, LangChain (RAG)
- Visualization: matplotlib, networkx, pandas
- Parsing/Analysis: Python `ast`, `radon`, `pylint`; JS `eslint`
- Deployment: Railway (FastAPI + Uvicorn)