Generate & Ship UI with minimal effort - Open Source Generative UI with natural language
Build a RAG preprocessing pipeline
The quickest way to a production-grade RAG UI.
Production-ready Chainlit RAG application with a Pinecone pipeline, offering all Groq and OpenAI models, for chatting with your documents.
Search for a holiday and get destination advice from an LLM. Observability by Dynatrace.
This repo is for advanced RAG systems; each branch represents a RAG-based project.
AI-driven prompt generation and evaluation system designed to optimize the use of Large Language Models (LLMs) across various industries. The project consists of frontend and backend components, facilitating prompt generation, automatic evaluation-data generation, and prompt testing.
Demo LLM (RAG pipeline) web app running locally using docker-compose. LLM and embedding models are consumed as services from OpenAI.
Learn Retrieval-Augmented Generation (RAG) from scratch using LLMs from Hugging Face, with LangChain or plain Python.
RAG enhances LLMs by retrieving relevant external knowledge before generating responses, improving accuracy and reducing hallucinations.
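To make the retrieve-then-generate idea concrete, here is a minimal, self-contained sketch of that loop. The bag-of-words embedder and the `call_llm` stub are toy placeholders for illustration, not any specific repository's implementation; a real pipeline would swap in an embedding model and an LLM API.

```python
# Minimal sketch of the retrieve-then-generate loop behind RAG.
# The embedder and LLM call below are toy placeholders.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedder: hash words into a fixed-size bag-of-words vector.
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call (OpenAI, Groq, Ollama, ...).
    return f"[LLM answer grounded in]\n{prompt}"

def answer(query: str, docs: list[str]) -> str:
    # Prepend retrieved context so the model answers from external knowledge,
    # which is what reduces hallucinations.
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```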
Using MLflow to deploy your RAG pipeline, built with LlamaIndex, LangChain, and Ollama / Hugging Face LLMs / Groq.
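A hedged sketch of how a RAG pipeline might be packaged for MLflow serving via the generic pyfunc flavor; the retrieval and generation internals are stubbed, and the input column name and artifact path are illustrative assumptions rather than anything fixed by the repo above.

```python
# Sketch: wrap a RAG pipeline as an MLflow pyfunc model so it can be logged and served.
import mlflow
import pandas as pd

class RAGPipeline(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input: pd.DataFrame) -> list[str]:
        # Assumption: one question per row in a "question" column.
        answers = []
        for question in model_input["question"]:
            # Stub: a real pipeline would retrieve chunks (LlamaIndex/LangChain)
            # and call Ollama, Hugging Face, or Groq here.
            answers.append(f"stub answer for: {question}")
        return answers

with mlflow.start_run():
    mlflow.pyfunc.log_model(artifact_path="rag_pipeline",
                            python_model=RAGPipeline())
```

The logged model can then be served with the standard CLI, e.g. `mlflow models serve -m runs:/<run_id>/rag_pipeline`.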
Powerful framework for building applications with Large Language Models (LLMs), enabling seamless integration with memory, agents, and external data sources.
Chat-with-Your-Documents is an AI-powered document chatbot using RAG, FastAPI, and React.js for local PDF question answering.
A GenAI-based search system that scans numerous fashion product descriptions to recommend suitable options based on user queries.
An AI chatbot based on a RAG pipeline for answering queries related to Sitare University.
A production-ready application using a RAG-based language model.
Git Your Code implements a cutting-edge Retrieval-Augmented Generation (RAG) architecture designed for deep semantic analysis of GitHub repositories. The system leverages vector embeddings, natural language processing, and machine learning to provide intelligent code comprehension and query capabilities.
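The indexing side of such repo-level semantic analysis can be sketched as: walk a locally cloned repository, chunk its source files, and embed each chunk. The OpenAI embedding model, the `.py`-only filter, and the chunk size below are assumptions for illustration; the actual project may use a different stack.

```python
# Sketch: chunk a cloned repository's source files and embed the chunks
# so they can be stored in a vector index for code-aware queries.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk_file(path: Path, size: int = 1500) -> list[str]:
    text = path.read_text(errors="ignore")
    return [text[i:i + size] for i in range(0, len(text), size)]

def index_repo(repo_dir: str) -> list[tuple[str, list[float]]]:
    index = []
    for path in Path(repo_dir).rglob("*.py"):  # Python files only, for brevity
        for chunk in chunk_file(path):
            emb = client.embeddings.create(model="text-embedding-3-small",
                                           input=chunk).data[0].embedding
            index.append((str(path), emb))
    return index
```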
A Retrieval-Augmented Generation (RAG) model for a Question Answering (QA) bot that interacts with financial data, specifically Profit & Loss (P&L) tables extracted from PDF documents.
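The extraction step for such a bot might look like the sketch below, which uses pdfplumber (a swapped-in choice for illustration; the project may use a different parser) to pull tables out of a P&L PDF and flatten each row into a text line that can be embedded and retrieved like any other chunk.

```python
# Sketch: extract P&L table rows from a PDF and flatten them for a RAG index.
import pdfplumber

def pnl_rows_as_text(pdf_path: str) -> list[str]:
    rows = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            for table in page.extract_tables():
                for row in table:
                    cells = [c.strip() for c in row if c]
                    if cells:
                        rows.append(" | ".join(cells))
    return rows

# Each line (e.g. "Revenue | FY2023 | 12,400") then becomes one retrievable chunk.
```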
WebScraperAI is a powerful tool that enables users to perform question-answering on website content using web scraping and retrieval-augmented generation (RAG) with LlamaIndex. It supports multiple LLMs, including OpenAI GPT-3.5, GPT-4, Gemini Pro, Gemini Ultra, and DeepSeek.
This project implements document ingestion, embedding generation, and retrieval-augmented generation (RAG). If you are looking for a small project to understand a basic RAG implementation, this is a good place to start.
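For readers who want the ingestion and retrieval steps spelled out, here is a minimal sketch using ChromaDB as the vector store; ChromaDB and the naive fixed-size chunking are illustrative choices, not necessarily what this project uses.

```python
# Sketch: ingest documents into a ChromaDB collection and retrieve top-k chunks.
import chromadb

def ingest(texts: list[str], chunk_size: int = 500):
    # Ephemeral in-memory client; ChromaDB applies its default embedding model.
    collection = chromadb.Client().create_collection("docs")
    chunk_id = 0
    for text in texts:
        for i in range(0, len(text), chunk_size):
            collection.add(ids=[str(chunk_id)],
                           documents=[text[i:i + chunk_size]])
            chunk_id += 1
    return collection

def retrieve(collection, query: str, k: int = 3) -> list[str]:
    # Query is embedded the same way; returns the k most similar chunks.
    result = collection.query(query_texts=[query], n_results=k)
    return result["documents"][0]

# The retrieved chunks are then inserted into the prompt for the generation step.
```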