An AI-powered code analysis tool that provides instant explanations, security analysis, and improvement suggestions for your code. Built with Next.js and powered by OpenAI and Groq.
- Instant Explanations: Get detailed, AI-powered explanations of any code snippet
- Security Analysis: Automatic detection of vulnerabilities (SQL injection, XSS, CSRF, etc.)
- Performance Metrics: Identify bottlenecks and optimization opportunities
- Quality Assessment: Code maintainability, readability, and best practices evaluation
- Conditional Suggestions: Only shows improvement button when issues are detected
- Detailed Recommendations: Get specific, actionable improvement suggestions
- Before/After Comparison: See improved code with explanations of changes
- Persistent Caching: File-based cache system that works across all users
- Rate Limiting: Built-in rate limiting for API calls using Bottleneck
- Idempotent Operations: Prevents duplicate API calls for identical code
- 24-Hour Cache TTL: Reduces API costs while maintaining freshness
- Syntax Highlighting: Beautiful code editor with Prism.js
- Responsive Design: Works seamlessly on desktop, tablet, and mobile
- Auto-Scroll: Automatically scrolls to results after analysis
- Real-time Feedback: Loading states and progress indicators
Supports 20+ programming languages:
- JavaScript, TypeScript, Python, Java, Go
- Solidity, SQL, C, C++, Rust
- And many more...
- Node.js 18+
- npm or yarn
- OpenAI API key or Groq API key
- Clone the repository

```bash
git clone https://github.com/yourusername/codeexplainer.git
cd codeexplainer
```
- Install dependencies

```bash
npm install
```
- Set up environment variables

Create a `.env.local` file in the root directory:

```env
# AI Provider Configuration (choose: openai or groq)
NEXT_PUBLIC_DEFAULT_AI_PROVIDER=groq

# API Keys (get from respective providers)
OPENAI_API_KEY=your_openai_api_key_here
GROQ_API_KEY=your_groq_api_key_here
```
- Run the development server

```bash
npm run dev
```

- Open your browser and navigate to http://localhost:3000
- Framework: Next.js 15.5.4 (Pages Router)
- Language: TypeScript
- Styling: Tailwind CSS v4
- AI Integration:
- OpenAI API (GPT-4)
- Groq API (Llama 3.3 70B)
- Code Editor: react-simple-code-editor
- Syntax Highlighting: Prism.js
- Rate Limiting: Bottleneck
```
codeexplainer/
├── src/
│   ├── pages/
│   │   ├── api/
│   │   │   ├── analyze.ts      # Code analysis endpoint
│   │   │   ├── explain.ts      # Code explanation endpoint
│   │   │   └── improve.ts      # Code improvement endpoint
│   │   ├── _app.tsx            # App wrapper
│   │   ├── _document.tsx       # HTML document structure
│   │   └── index.tsx           # Main application page
│   ├── styles/
│   │   └── globals.css         # Global styles
│   ├── utils/
│   │   ├── cache.ts            # Persistent caching system
│   │   └── rateLimiter.ts      # API rate limiting
│   └── types/
│       └── prismjs.d.ts        # TypeScript declarations
├── public/
│   ├── favicon.svg             # App icon
│   └── favicon.ico             # Fallback icon
├── .cache/                     # Persistent cache directory
├── .gitignore
├── package.json
├── tsconfig.json
├── tailwind.config.ts
└── next.config.ts
```
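The routes under `src/pages/api/` accept code for analysis; a minimal client-side call might look like the sketch below. The request body fields (`code`, `language`) are an assumption about the API shape, not taken from the project itself.

```typescript
// Hypothetical request builder for the /api/analyze route.
// The body fields (code, language) are assumptions about the API shape.
export function buildAnalyzeRequest(code: string, language: string) {
  return {
    url: "/api/analyze",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ code, language }),
    },
  };
}

// Usage in a page or component:
// const { url, init } = buildAnalyzeRequest("const x = 1;", "typescript");
// const result = await fetch(url, init).then((r) => r.json());
```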
- Get your API key from the OpenAI Platform
- Add to `.env.local`: `OPENAI_API_KEY=sk-...`
- Set the provider: `NEXT_PUBLIC_DEFAULT_AI_PROVIDER=openai`
- Get your API key from the Groq Console
- Add to `.env.local`: `GROQ_API_KEY=gsk_...`
- Set the provider: `NEXT_PUBLIC_DEFAULT_AI_PROVIDER=groq`
Groq (Llama 3.3 70B):
- 30 requests per minute (RPM)
- 1,000 requests per day (RPD)
- 12,000 tokens per minute (TPM)
- 100,000 tokens per day (TPD)
OpenAI (GPT-4):
- 100 requests per minute (RPM)
- Varies by tier
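Groq's 30 RPM limit works out to one request every 2000 ms, which is the kind of spacing Bottleneck's `minTime` option enforces. The sketch below illustrates the idea with a minimal stand-in limiter; it is not the project's actual `rateLimiter.ts`, and the hypothetical `callGroqApi` helper is only for illustration.

```typescript
// Minimal stand-in for Bottleneck's minTime behavior: each scheduled call
// starts at least intervalMs after the previous one began.
class SimpleLimiter {
  private nextFreeAt = 0;

  constructor(private readonly intervalMs: number) {}

  async schedule<T>(fn: () => T | Promise<T>): Promise<T> {
    const now = Date.now();
    const wait = Math.max(0, this.nextFreeAt - now);
    // Reserve the next slot before waiting, so concurrent callers queue up.
    this.nextFreeAt = Math.max(now, this.nextFreeAt) + this.intervalMs;
    if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
    return fn();
  }
}

// For Groq's 30 requests/minute: 60_000 ms / 30 = 2000 ms between calls.
const groqLimiter = new SimpleLimiter(2000);
// groqLimiter.schedule(() => callGroqApi(prompt)); // callGroqApi is hypothetical
```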
- Cross-User Benefits: All users benefit from cached results
- File-Based Storage: Survives server restarts
- SHA256 Hashing: Secure, collision-resistant cache keys
- Automatic Cleanup: Removes expired and excess cache files
- Cost Optimization: Dramatically reduces API calls
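The caching scheme above can be sketched as follows. The exact fields hashed into the key are an assumption (the project's `cache.ts` may combine inputs differently), but the core idea holds: identical input always yields the same SHA-256 key, so repeated requests hit the same cache file until the 24-hour TTL expires.

```typescript
import { createHash } from "node:crypto";

// Assumed key layout: provider + task + code. Identical inputs always map to
// the same key, which is what makes repeated API calls idempotent.
export function cacheKey(provider: string, task: string, code: string): string {
  return createHash("sha256").update(`${provider}\n${task}\n${code}`).digest("hex");
}

// Entries older than the TTL (24 hours by default) are treated as expired.
export function isFresh(createdAtMs: number, ttlMs = 24 * 60 * 60 * 1000): boolean {
  return Date.now() - createdAtMs < ttlMs;
}
```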
- Security Scoring: 0-100 score with grade (A-F)
- Performance Metrics: Identifies bottlenecks and inefficiencies
- Quality Assessment: Evaluates maintainability and readability
- Risk Classification: Low, medium, high risk indicators
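A result combining these metrics might be shaped like the sketch below. The interface and the letter-band thresholds are assumptions for illustration, not taken from the project.

```typescript
// Hypothetical result shape; field names are assumptions.
type Risk = "low" | "medium" | "high";

interface AnalysisResult {
  securityScore: number; // 0-100
  grade: "A" | "B" | "C" | "D" | "F";
  risk: Risk;
  issues: string[];
}

// One plausible score-to-grade mapping (thresholds are an assumption).
function toGrade(score: number): AnalysisResult["grade"] {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}
```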
- Improvement button only appears when issues are detected
- Clear visual indicators for code quality
- Color-coded metrics (green, yellow, red)
- Push to GitHub

```bash
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/yourusername/codeexplainer.git
git push -u origin main
```
- Deploy on Vercel
  - Go to vercel.com
  - Import your GitHub repository
  - Add environment variables
  - Deploy!

Or deploy with the Vercel CLI:

```bash
npm i -g vercel
vercel
vercel --prod
```
```javascript
// Paste this into CodeExplainer
function calculateTotal(items) {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].price;
  }
  return total;
}
```
You'll get:
- Detailed explanation of the code
- Security analysis (no issues)
- Performance suggestions (use reduce)
- Improved version with modern syntax
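The improved version suggested by the tool might look something like this (a sketch with the `reduce` refactor applied and types added, not the tool's exact output):

```typescript
interface Item {
  price: number;
}

// Same behavior as the loop version, expressed with reduce and typed input.
function calculateTotal(items: Item[]): number {
  return items.reduce((total, item) => total + item.price, 0);
}
```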
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Next.js - The React Framework
- OpenAI - GPT-4 API
- Groq - Fast LLM inference
- Tailwind CSS - Utility-first CSS
- Prism.js - Syntax highlighting
For questions or feedback, please open an issue on GitHub.
Built with ❤️ using Next.js and AI