Create a fully functional, local chatbot using Flowise and Ollama. This project provides a low-code, privacy-friendly way to build intelligent conversational bots. 🚀 It was made as part of a blog post that you can find here.
- Low-Code Workflow: Build chatbots visually without heavy coding.
- Local Hosting: Keeps your data private and secure.
- Customisable: Fully adjustable to your needs.
- Powered by Open-Source Models: Utilizes Ollama's LLMs for AI capabilities.
- ChatOllama: Provides AI responses using Ollama's models.
- Buffer Memory: Retains chat history for continuity.
- Conversation Chain: Integrates ChatOllama and Buffer Memory to enable interactive conversations.
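Under the hood, the Conversation Chain replays the buffered history to the model on every turn; the Buffer Memory node is what keeps that history between turns. As a rough sketch, here is the equivalent raw request against Ollama's chat API (the model tag is an assumption and depends on which model you pulled or imported):

```bash
# Each turn resends the accumulated history; this is what Buffer Memory automates.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {"role": "user", "content": "My name is Dwain."},
    {"role": "assistant", "content": "Nice to meet you, Dwain!"},
    {"role": "user", "content": "What is my name?"}
  ],
  "stream": false
}'
```

With the history included, the model can answer the final question; drop the earlier messages and it cannot, which is exactly the gap the Buffer Memory node fills.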
- Docker (for running Ollama locally)
- Node.js and npm (for Flowise)
- A compatible LLM model (e.g., `SARA-llama3.2`)
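Note that Flowise can only select models that Ollama already has available locally. A minimal sketch for fetching one, assuming the container is named `ollama` as in the setup below and substituting a stock tag (if `SARA-llama3.2` is not on the public registry, it needs importing via its own Modelfile):

```bash
# Pull a model into the running Ollama container (tag is an assumption)
docker exec -it ollama ollama pull llama3.2

# Confirm which models are available to Flowise
docker exec -it ollama ollama list
```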
- Clone the repository:

  ```bash
  git clone https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise.git
  cd local-low-code-chatbot-ollama-flowise
  ```
- Start Ollama:

  ```bash
  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  ```
- Install and start Flowise:

  ```bash
  npm install -g flowise
  flowise start
  ```
- Import the provided Flowise JSON (`basic-local-chatbot-flowise.json`) into Flowise.
- Access the Flowise editor at `http://localhost:3000`.
- Load and activate the workflow.
- Start chatting with the bot in the interface.
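Beyond the chat interface, Flowise exposes every chatflow as a REST prediction endpoint, which makes a quick end-to-end check easy. A sketch; replace the placeholder with the actual chatflow ID shown in the Flowise editor:

```bash
# <your-chatflow-id> is a placeholder; copy the real ID from the Flowise editor
curl http://localhost:3000/api/v1/prediction/<your-chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello! What can you do?"}'
```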