
☕ Layl Chatbot – AI-Powered Cafe Assistant

Welcome to the Layl Cafe Chatbot — an AI-driven customer support and conversational commerce platform. Built using an MVC architecture, it combines powerful LLMs, a multi-agent system, and Retrieval-Augmented Generation (RAG) to deliver a smart, interactive, and personalized user experience on a sleek, responsive React frontend.

The chatbot can handle complex conversational orders, answer nuanced menu questions, and provide intelligent product recommendations.


✨ Key Features

  • Conversational Commerce: Users can place, modify, and inquire about their orders in natural language.
  • Contextual RAG Q&A: Fetches accurate, up-to-date information from a Pinecone vector DB to answer specific questions about the cafe and its products.
  • Smart Recommendations: Utilizes market basket analysis to suggest relevant, complementary items based on cart contents.
  • Multi-Agent Workflow: A robust agent-based system intelligently routes user requests to the appropriate specialized tool.
  • Full Admin Panel: A FastAPI-based interface allowing full CRUD (Create, Read, Update, Delete) operations for products, orders, and chat history.
  • Persistent & Unified Sessions: Manages user state across the entire site (shopping cart, chatbot) using a centralized, cookie-based session system with MongoDB for data persistence.
  • Advanced Text Chunking: Strategically groups related information for more effective document retrieval in the RAG pipeline.
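The chunking strategy above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: the `chunk_by_section` helper and the sample menu text are assumptions, showing only the idea of grouping related lines under their heading so each retrieved chunk carries full context.

```python
def chunk_by_section(lines):
    """Group lines under their nearest '## ' heading so each chunk
    keeps its heading as context for embedding and retrieval."""
    chunks, current = [], []
    for line in lines:
        # A new heading closes the previous chunk (if any)
        if line.startswith("## ") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = [
    "## Espresso Drinks",
    "Latte - steamed milk over a double shot.",
    "Cappuccino - equal parts espresso, milk, and foam.",
    "## Pastries",
    "Croissant - baked fresh daily.",
]
for chunk in chunk_by_section(doc):
    print(chunk, end="\n---\n")
```

Each chunk now answers a question like "what espresso drinks do you have?" on its own, without the retriever having to stitch fragments back together.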

🧠 Agent-Based Architecture

The chatbot uses modular agents, each with a dedicated role, collaborating in a pipeline:

Guard → Classification → Specialized Agents (Order / Details / Recommendation / Specialist)

  • Guard Agent – Ensures user messages are on-topic and safe, blocking irrelevant or harmful queries.
  • Classification Agent – Acts as the central router, detecting user intent (e.g., ordering, asking a question) to activate the correct downstream agent.
  • Order Agent – Manages the entire conversational order workflow using chain-of-thought logic to collect and structure order details.
  • Details Agent – Analyzes the query to search the most relevant collection in the Pinecone VectorDB, using the retrieved chunks to generate a factual answer.
  • Recommendation Agent – Suggests complementary products based on the user's current context and order history.
  • Specialist Agent – Handles general conversational messages, such as greetings, goodbyes, and thank yous, for a more natural interaction.
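The guard → classify → dispatch flow can be sketched in a few lines. This is a toy illustration under stated assumptions: the real system classifies intent with an LLM, whereas the keyword rules, agent replies, and `AgentResponse` type here are invented for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class AgentResponse:
    agent: str
    reply: str

def guard(message: str) -> bool:
    """Block obviously off-topic requests (stand-in for the Guard Agent)."""
    off_topic = ("politics", "hack")
    return not any(word in message.lower() for word in off_topic)

def classify(message: str) -> str:
    """Toy intent detection (the real Classification Agent uses an LLM)."""
    text = message.lower()
    if any(w in text for w in ("order", "buy")):
        return "order"
    if any(w in text for w in ("recommend", "suggest")):
        return "recommendation"
    if "?" in text:
        return "details"
    return "specialist"

def route(message: str) -> AgentResponse:
    """Run the Guard -> Classification -> specialized-agent pipeline."""
    if not guard(message):
        return AgentResponse("guard", "Sorry, I can only help with cafe topics.")
    intent = classify(message)
    handlers = {
        "order": "Order agent: adding that to your order.",
        "recommendation": "Recommendation agent: a croissant pairs well with that.",
        "details": "Details agent: searching the menu knowledge base.",
        "specialist": "Specialist agent: hello! How can I help?",
    }
    return AgentResponse(intent, handlers[intent])

print(route("I'd like to order a latte").agent)
```

The key design point is that each agent stays small and single-purpose; the router is the only component that knows they all exist.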

🚀 Getting Started

Follow these instructions to set up and run the project locally.

Prerequisites

  • Python 3.9+
  • Node.js 18.x and npm/yarn
  • Docker and Docker Compose
  • Git

1. Clone the Repository

git clone https://github.com/Mando-03/Mini-RAG-App.git
cd Mini-RAG-App

2. Environment Setup

The backend requires API keys and a database connection string.

  1. Navigate to the backend directory: cd backend
  2. Create a new file named .env by copying the example file:
    cp .env.example .env
  3. Open the .env file and add your credentials:
    MONGO_URI="your_mongodb_connection_string"
    PINECONE_API_KEY="your_pinecone_api_key"
    GEMINI_API_KEY="your_google_gemini_api_key"
    
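One way the backend might consume these variables is a small fail-fast settings object. This is a sketch, not the project's actual configuration code: the `Settings` class below is a hypothetical example reading the three variables named above with the standard library only.

```python
import os

class Settings:
    """Load required configuration from the environment, failing fast
    at startup if any variable is missing or empty."""

    def __init__(self):
        self.mongo_uri = self._require("MONGO_URI")
        self.pinecone_api_key = self._require("PINECONE_API_KEY")
        self.gemini_api_key = self._require("GEMINI_API_KEY")

    @staticmethod
    def _require(name: str) -> str:
        value = os.getenv(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value
```

Failing at import time, rather than on the first database call, makes a misconfigured `.env` obvious immediately.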

3. Backend Setup

# From the backend/ directory
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run the FastAPI server
uvicorn main:app --reload --host 0.0.0.0 --port 5000

4. Frontend Setup

# Open a new terminal and navigate to the frontend/ directory
cd frontend

# Install dependencies
npm install

# Start the React development server
npm run dev

The application should now be running, with the frontend at http://localhost:3200 and the backend at http://localhost:5000.


🐳 Docker Deployment

For a production-like setup, you can build and run the entire application using Docker Compose.

  1. Make sure your .env file is correctly configured in the backend/ directory.
  2. From the project's root directory, run:
    docker-compose up --build

This will build the images for both the frontend and backend services and run them together.
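A compose file for this layout might look roughly like the sketch below. The service names, build contexts, and port mappings are assumptions inferred from the setup steps above, not the repository's actual `docker-compose.yml`.

```yaml
# Illustrative sketch only - check the repository's docker-compose.yml
services:
  backend:
    build: ./backend
    env_file: ./backend/.env
    ports:
      - "5000:5000"
  frontend:
    build: ./frontend
    ports:
      - "3200:3200"
    depends_on:
      - backend
```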


🖥️ Frontend – React Cafe Website

A modern, user-friendly web interface that integrates directly with the chatbot.

⚙️ Built With

  • React 18 – Fast, responsive UI
  • Vite – Ultra-fast dev server & build tool
  • TailwindCSS – Utility-first styling
  • Redux Toolkit – State management
  • React Router v6 – Routing

🧰 Backend & AI Stack

  • FastAPI, Uvicorn – Asynchronous Backend API
  • MongoDB + Motor – Async NoSQL Database
  • Pinecone – Vector Database for RAG
  • SentenceTransformers, Gemini, LangChain – NLP & LLMs
  • Docker – Containerization and Deployment
