This project is a real-time chatbot built with React and powered by Ollama running the DeepSeek LLM locally. The entire application, model included, runs on your machine, giving you full control over data and performance.
- 💬 Real-time Chat with AI responses
- 🤖 DeepSeek LLM for intelligent conversations
- 🧠 Thinking & Typing Indicators for better UX
- 🌟 Smooth Animations using Framer Motion
- 🎨 Bootstrap UI for a modern chat interface
- 🏠 Runs Fully Locally – No external API calls required!
To run the chatbot locally, you'll need Ollama, a lightweight LLM server.
Download Ollama from the official website: 🔗 https://ollama.com/
Follow the installation steps based on your OS:
- Windows: Run the installer
- Mac: Install via Homebrew:

  ```bash
  brew install ollama
  ```
- Linux: Follow the Linux-specific installation guide
We are using DeepSeek-r1:1.5b. Pull the model and start the server with:

```bash
ollama pull deepseek-r1:1.5b
ollama serve
```
Once running, Ollama's chat endpoint will be available at http://127.0.0.1:11434/api/chat.
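As a rough sketch of how the chat endpoint is called from the browser: the request is a POST with a JSON body naming the model and the message history. `buildChatRequest` below is a hypothetical helper for illustration, not an identifier from this repo's `ChatApp.js`.

```javascript
// Build the fetch options for Ollama's /api/chat endpoint.
// Assumes Ollama is serving locally on the default port 11434.
function buildChatRequest(messages) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:1.5b",
      messages, // e.g. [{ role: "user", content: "Hello!" }]
      stream: false, // set true for token-by-token streaming responses
    }),
  };
}

// Usage (inside an async function, with Ollama running):
// const res = await fetch(
//   "http://127.0.0.1:11434/api/chat",
//   buildChatRequest([{ role: "user", content: "Hello!" }])
// );
// const data = await res.json();
// console.log(data.message.content);
```

With `stream: false` the server returns a single JSON object; with streaming enabled, the response body is a sequence of JSON lines you would read incrementally.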
Ensure you have Node.js and npm installed. Then clone the repo and install dependencies:

```bash
git clone <repo-url>
cd <project-directory>
npm install
npm start
```
Your chatbot should now be running at http://localhost:3000/!
- If you're using a different model, update the `model` parameter in the `fetch` requests inside `ChatApp.js`.
- Modify UI elements using Bootstrap classes and Framer Motion for animations.
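One way to make the model swap a one-line change is to pull the model name out into a constant used by every request body. `OLLAMA_MODEL` and `chatBody` are illustrative names for this sketch, not identifiers from `ChatApp.js`.

```javascript
// Centralize the model name so switching models is a single edit.
// The value must match a model you have pulled, e.g. `ollama pull llama3`.
const OLLAMA_MODEL = "deepseek-r1:1.5b";

// Build the JSON body for a chat request using the configured model.
function chatBody(messages) {
  return JSON.stringify({
    model: OLLAMA_MODEL,
    messages,
    stream: false,
  });
}
```

Every `fetch` to the chat endpoint would then pass `chatBody(...)`, so changing `OLLAMA_MODEL` updates all requests at once.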