A professional, clean web-based chatbot frontend for Ollama endpoints with thinking models support
Deploy your own: Follow the GitHub Deployment Guide to get your chatbot live in minutes!
Configure with your own Ollama endpoint to start chatting!
- Professional UI: Clean, modern interface with intuitive design
- Ollama Integration: Direct connection to any Ollama endpoint
- Thinking Models: Special support for reasoning models (o1, etc.)
- Custom Prompts: Personalized system prompts for different use cases
- Save Conversations: Export chat history to text files
- Dark/Light Theme: Automatic theme detection
- Responsive Design: Works well on desktop and mobile
- Automatic Detection: Recognizes `<think>` tags in responses
- Collapsible Interface: Thinking process hidden by default
- One-Click Expand: View AI reasoning on demand
- Clean Display: Main response shown prominently
- Zero Dependencies: Pure vanilla HTML, CSS, and JavaScript
- Local Storage: Automatic configuration persistence
- Real-time Status: Connection monitoring with visual feedback
- Error Handling: Comprehensive error management
- Accessibility: Screen reader friendly with ARIA support
- Cross-browser: Works on all modern browsers
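The configuration persistence mentioned above can be sketched as follows. This is a minimal illustration, not the app's actual code: the storage key, default values, and function name are assumptions.

```javascript
// Sketch of settings persistence. The key name and defaults below are
// illustrative assumptions; the real app may use different identifiers.
const DEFAULT_SETTINGS = {
  endpoint: "http://localhost:11434",
  model: "llama3.2",
  systemPrompt: "",
};

// Merge stored JSON (possibly partial, missing, or corrupt) over the defaults.
function loadSettings(storedJson) {
  try {
    const parsed = JSON.parse(storedJson);
    return { ...DEFAULT_SETTINGS, ...parsed };
  } catch {
    return { ...DEFAULT_SETTINGS };
  }
}

// In the browser this would be wired to localStorage, e.g.:
//   const settings = loadSettings(localStorage.getItem("cr-chatbot-settings"));
//   localStorage.setItem("cr-chatbot-settings", JSON.stringify(settings));
```

Merging over defaults means new settings added in later versions get sensible values even when an older saved configuration is loaded.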
```bash
# Clone the repository
git clone https://github.com/3eekeeper/cr-chatbot.git
cd cr-chatbot

# Start a local server
npm run dev
# or
python3 -m http.server 9000

# Open http://localhost:9000 in your browser
```

Using docker-compose (easiest):
```bash
git clone https://github.com/3eekeeper/cr-chatbot.git
cd cr-chatbot
docker-compose up -d

# Access at http://localhost:8080
```

Or using Docker directly:
```bash
# Build and run
docker build -t cr-chatbot .
docker run -d -p 8080:80 --name cr-chatbot cr-chatbot

# Access at http://localhost:8080
```

Manual install:

- Download the latest release
- Extract the archive and serve the folder with any HTTP server
- Open `index.html`, configure your Ollama endpoint, and start chatting
Requirements:

- Ollama: A running instance (default: `http://localhost:11434`)
- Model: At least one model installed (`ollama pull llama3.2`)
- Browser: Any modern web browser
- Open Settings: Click the ⚙️ gear icon
- Set Endpoint: Enter your Ollama URL (e.g., `http://localhost:11434`)
- Choose Model: Specify a model name (e.g., `llama3.2`, `mistral`)
- System Prompt: Customize the AI's behavior (optional)
- Test Connection: Verify the setup works
- Start Chatting: Send your first message!
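Endpoint URLs entered in step two often arrive with trailing slashes or without a scheme. A small normalization step, sketched below, avoids malformed request URLs later; the function name and exact rules are illustrative assumptions, not the app's actual behavior.

```javascript
// Sketch: normalize a user-entered Ollama endpoint before saving it.
// Default scheme and slash handling here are illustrative assumptions.
function normalizeEndpoint(input) {
  let url = input.trim();
  if (url === "") return "";
  // Assume http:// when no scheme is given (local Ollama is usually plain HTTP).
  if (!/^https?:\/\//i.test(url)) {
    url = "http://" + url;
  }
  // Drop trailing slashes so paths like "/api/chat" can be appended cleanly.
  return url.replace(/\/+$/, "");
}

// normalizeEndpoint("localhost:11434/") → "http://localhost:11434"
```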
User: Hello! How are you today?
AI: Hello! I'm doing well, thank you for asking. How can I assist you today?
[Thinking process] ← Click to expand
AI: Based on your question, here's my response...
The thinking process is automatically detected and hidden in a collapsible section, keeping the interface clean while providing access to the AI's reasoning when desired.
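The detection described above can be sketched with a small parser that splits a response into its hidden reasoning and the visible answer. Function and field names here are illustrative assumptions, not the app's actual code.

```javascript
// Sketch: split a model response into its <think> reasoning and the
// visible answer shown to the user.
function parseThinking(response) {
  const match = response.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    // No thinking block: the whole response is the answer.
    return { thinking: null, answer: response.trim() };
  }
  return {
    thinking: match[1].trim(),
    answer: response.replace(match[0], "").trim(),
  };
}
```

The UI would then render `answer` prominently and tuck `thinking` into the collapsible section.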
- Bold text: `**bold**` → **bold**
- Italic text: `*italic*` → *italic*
- Inline code: `` `code` `` → `code`
- Code blocks: triple-backtick fences → formatted blocks
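The inline rules above can be approximated with a few regex substitutions, as in this minimal sketch. A real renderer must also escape HTML and handle nesting and edge cases; the function name is an assumption.

```javascript
// Minimal sketch of the inline formatting rules. Replacement order matters:
// inline code first, then bold (**) before italic (*).
function renderInline(text) {
  return text
    .replace(/`([^`]+)`/g, "<code>$1</code>")           // `code`
    .replace(/\*\*([^*]+)\*\*/g, "<strong>$1</strong>") // **bold**
    .replace(/\*([^*]+)\*/g, "<em>$1</em>");            // *italic*
}
```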
```bash
# Clone repository
git clone https://github.com/3eekeeper/cr-chatbot.git
cd cr-chatbot

# Start with docker-compose
docker-compose up -d

# Access at http://localhost:8080
```

Or build the image directly:

```bash
# Build image
docker build -t cr-chatbot .

# Run container
docker run -d -p 8080:80 --name cr-chatbot cr-chatbot

# Access at http://localhost:8080
```

To run both CR Chatbot and Ollama together:
```bash
# Uncomment the ollama service in docker-compose.yml, then run:
docker-compose up -d

# CR Chatbot: http://localhost:8080
# Ollama API: http://localhost:11434
```

For production, build and run with a restart policy:

```bash
# Build for production
docker build -t cr-chatbot:prod .

# Run with restart policy
docker run -d \
  --name cr-chatbot-prod \
  --restart unless-stopped \
  -p 80:80 \
  cr-chatbot:prod
```

Project structure:

```
cr-chatbot/
├── index.html                  # Main application file
├── styles.css                  # Complete styling and theming
├── script.js                   # Core JavaScript functionality
├── README.md                   # Detailed documentation
├── chatbot/                    # Saved conversations directory
├── THINKING_MODELS_FEATURE.md  # Thinking models documentation
└── PROJECT_COMPLETION.md       # Development completion summary
```
| Browser | Version | Status |
|---|---|---|
| Chrome | 80+ | ✅ Fully Supported |
| Firefox | 75+ | ✅ Fully Supported |
| Safari | 13+ | ✅ Fully Supported |
| Edge | 80+ | ✅ Fully Supported |
| Mobile | Modern | ✅ Responsive Design |
The chatbot communicates with Ollama using the standard API:
```js
// POST /api/chat
{
  "model": "llama3.2",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false
}
```

For models that support reasoning, the response embeds the thinking process in `<think>` tags:

```
<think>
This is the AI's internal reasoning process...
</think>
This is the final response shown to the user.
```
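Putting the request format above into code, a non-streaming call could look like this sketch. The helper name is an assumption; the URL path and payload shape follow the standard Ollama `/api/chat` format.

```javascript
// Sketch: build a non-streaming chat request for the Ollama API.
function buildChatRequest(endpoint, model, systemPrompt, userMessage) {
  const messages = [];
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  messages.push({ role: "user", content: userMessage });
  return {
    url: endpoint + "/api/chat",
    payload: { model, messages, stream: false },
  };
}

// Browser usage (not run here):
//   const { url, payload } = buildChatRequest("http://localhost:11434",
//     "llama3.2", "You are a helpful assistant.", "Hello!");
//   const res = await fetch(url, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(payload),
//   });
//   const data = await res.json();
//   console.log(data.message.content);
```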
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and test thoroughly
- Commit: `git commit -m 'Add amazing feature'`
- Push: `git push origin feature/amazing-feature`
- Open a Pull Request
```bash
# Clone your fork
git clone https://github.com/yourusername/cr-chatbot.git
cd cr-chatbot

# Start development server
npm run dev

# Make changes and test at http://localhost:9000
```

Planned features:

- Conversation history persistence
- Multiple model support in single interface
- Message export to different formats (JSON, Markdown)
- Keyboard shortcut customization
- Plugin system for extensions
- Voice input/output support
- Multi-language UI
- Advanced model parameter controls
- Conversation templates
- Integration with cloud AI services
"Connection Failed"
- Ensure Ollama is running:
ollama serve - Check endpoint URL in settings
- Verify firewall allows connections
"Model Not Found"
- List available models:
ollama list - Pull required model:
ollama pull llama3.2
Interface Not Loading
- Use HTTP server (not file:// protocol)
- Check browser console for errors
- Try different browser or incognito mode
- Documentation
- Issue Tracker
- Discussions
- Memory Usage: ~2-5MB typical
- Storage: Configuration < 1KB
- Load Time: < 1 second on modern connections
- Responsiveness: 60fps animations
- Local Communication: All data stays between browser and Ollama
- No External Services: No third-party data collection
- Local Storage: Configuration stored locally only
- HTTPS Ready: Works with secure connections
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama Team: For the excellent local AI platform
- Contributors: Everyone who helps improve CR Chatbot
- Community: Users who provide feedback and suggestions
Made with ❤️ by 3eekeeper
A simple, professional chatbot interface that just works.