This application provides an intelligent, context-aware chatbot for legal document analysis using Retrieval Augmented Generation (RAG) with Pinecone vector storage and Google Gemini AI.
- 📄 Upload and process legal documents
- 🔍 Semantic search through document content
- 💬 AI-powered query answering
- 🧠 Contextual understanding of legal documents
- Next.js
- TypeScript
- Pinecone Vector Database
- Google Gemini AI
- Hugging Face Inference
- Vector Embedding
- Node.js (v18+)
- Pinecone Account
- Google AI Studio Account
- Hugging Face Account
- Copy `.env.example` to `.env`
- Fill in the values with your actual credentials
- Do not commit your `.env` file
- `PINECONE_API_KEY=your_pinecone_api_key`
- `HF_TOKEN=your_huggingface_token`
- `GEMINI_API_KEY=your_google_ai_studio_key`
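A minimal sketch of validating these variables at startup is shown below. The module placement and error message are illustrative only, not this repository's actual layout:

```typescript
// Minimal sketch: fail fast if a required environment variable is missing.
// Where this lives (e.g. a lib/env.ts module) is an assumption.
const required = ["PINECONE_API_KEY", "HF_TOKEN", "GEMINI_API_KEY"] as const;

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

export const env = {
  pineconeApiKey: process.env.PINECONE_API_KEY!,
  hfToken: process.env.HF_TOKEN!,
  geminiApiKey: process.env.GEMINI_API_KEY!,
};
```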
- Clone the repository: `git clone https://your-repo-url.git`
- Install dependencies: `npm install`
- Run the development server: `npm run dev`
- Handles embedding generation
- Queries Pinecone vector store
- Retrieves contextually relevant document sections
- Processes user queries
- Integrates document context
- Streams AI-generated responses
Uses `mixedbread-ai/mxbai-embed-large-v1` for high-quality semantic embeddings.
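As a rough illustration of how these pieces fit together, here is a hedged sketch of the retrieval-and-answer flow. The package choices (`@huggingface/inference`, `@pinecone-database/pinecone`, `@google/generative-ai`), the index name `legal-docs`, the `topK` value, and the Gemini model id are assumptions for illustration, not taken from this repository:

```typescript
// Hedged sketch of the RAG flow: embed the query, retrieve context from
// Pinecone, then stream a grounded answer from Gemini.
import { HfInference } from "@huggingface/inference";
import { Pinecone } from "@pinecone-database/pinecone";
import { GoogleGenerativeAI } from "@google/generative-ai";

const hf = new HfInference(process.env.HF_TOKEN);
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

export async function* answerQuery(question: string) {
  // 1. Embed the user query with the same model used at indexing time.
  const embedding = (await hf.featureExtraction({
    model: "mixedbread-ai/mxbai-embed-large-v1",
    inputs: question,
  })) as number[];

  // 2. Retrieve the most relevant document chunks from Pinecone.
  const index = pinecone.index("legal-docs"); // hypothetical index name
  const results = await index.query({
    vector: embedding,
    topK: 5,
    includeMetadata: true,
  });
  const context = results.matches
    .map((m) => String(m.metadata?.text ?? ""))
    .join("\n---\n");

  // 3. Stream an answer from Gemini, grounded in the retrieved context.
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
  const result = await model.generateContentStream(
    `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`
  );
  for await (const chunk of result.stream) {
    yield chunk.text();
  }
}
```

In a Next.js route handler, the chunks yielded here could be written into a `ReadableStream` and returned as a streaming `Response` to the client.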
- Ensure the vector index is pre-populated
- Configure the required environment variables
- Use a serverless/edge-runtime-compatible deployment target (see the sketch below)
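For the last point, a Next.js App Router route handler can opt into the Edge runtime with a single export. The route path and handler body below are placeholders, not this project's code:

```typescript
// app/api/chat/route.ts (hypothetical path)
// Opt this route into the Edge runtime so it can run on serverless/edge platforms.
export const runtime = "edge";

export async function POST(req: Request) {
  const { query } = await req.json();
  // ...run the retrieval/answer flow sketched above and stream the result
  return new Response(`Received: ${query}`);
}
```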
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.