Gemini Pro LLM and Pinecone Vector Database for fast and performant Retrieval Augmented Generation (RAG) with LlamaIndex
Updated Jul 23, 2024 - Jupyter Notebook
A real-time, AI-powered interview assistant that connects with platforms like Google Meet and Zoom to transcribe live interviews using Deepgram’s STT API (~100 ms latency). It extracts questions, retrieves context from user-uploaded resumes or documents (via Gemini + Pinecone), and generates accurate responses using Claude.
A search engine for research papers, built by students for students
🛫 Journey is a chatbot for travelers
A FastAPI-based Retrieval-Augmented Generation (RAG) service that combines local file ingestion (Qdrant vectorstore) and web search (Tavily API) for context retrieval.
Click below to check out the web application
'SpeechGrade' is an innovative educational platform designed to streamline speech assessment and enhance teaching by integrating the MERN stack, machine learning models, large language models (LLMs), and Google's Generative AI.
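The projects listed above share the same retrieve-then-generate (RAG) loop: embed the query, fetch the most similar documents from a vector store, and pass them to an LLM as context. A minimal, dependency-free sketch of that loop, with bag-of-words vectors standing in for Gemini embeddings and a plain list standing in for a Pinecone index (none of the real APIs are called here):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for an embedding model: a term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Stand-in for a vector-store query: rank documents by similarity.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str, corpus: list[str]) -> str:
    # Stand-in for the LLM call: prepend retrieved context to the prompt.
    context = retrieve(query, corpus)[0]
    return f"Context: {context} | Question: {query}"

docs = [
    "Pinecone is a managed vector database for similarity search.",
    "Gemini Pro is a large language model from Google.",
]
print(answer("what is a vector database", docs))
```

In a real deployment, `embed` becomes a call to an embedding model, `retrieve` becomes a vector-index query, and `answer` sends the assembled prompt to the LLM; the control flow stays the same.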