Think of a ChatGPT you can use without the Internet! Yes, you heard that right: your very own ChatGPT on your local machine. That's KnowFlow, a modern, feature-rich AI assistant platform built with FastAPI, Gemini, and Tavily, offering multiple interaction modes and advanced capabilities.
- Powered by Tavily Search API - optimized for AI agents
- Advanced content extraction from multiple sources
- Real-time information gathering
- Automatic summarization and content processing
- Clean, relevant results without ads or clutter
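Under the hood, a Tavily query can be sketched as a small helper that builds the request body and posts it to Tavily's public REST endpoint. This is a minimal illustration, not the project's actual wrapper; the endpoint URL and parameter names follow Tavily's published API and may differ from how KnowFlow calls it.

```python
import os

def build_search_payload(query: str, max_results: int = 5) -> dict:
    """Build a request body for Tavily's /search endpoint.

    Parameter names follow Tavily's public REST API; the project's own
    wrapper may differ.
    """
    return {
        "query": query,
        "search_depth": "advanced",  # deeper crawl for richer extraction
        "include_answer": True,      # ask Tavily for a short summary
        "max_results": max_results,
    }

# Only hit the network when a key is configured (illustrative usage)
if os.environ.get("TAVILY_API_KEY"):
    import requests
    payload = build_search_payload("latest FastAPI release")
    payload["api_key"] = os.environ["TAVILY_API_KEY"]
    resp = requests.post("https://api.tavily.com/search", json=payload, timeout=30)
    for result in resp.json().get("results", []):
        print(result["title"], result["url"])
```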
- Real-time conversation with Google's Gemini AI
- Context-aware responses
- Streaming text output
- Markdown support with syntax highlighting
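A streaming Gemini chat can be sketched with the `google-generativeai` SDK, where `stream=True` yields chunks as they arrive. The history-conversion helper and the example prompts are assumptions for illustration; the model name matches the `gemini-2.0-flash` listed in the tech stack below.

```python
import os

def to_gemini_history(turns: list[tuple[str, str]]) -> list[dict]:
    """Convert (speaker, text) turns into the role/parts structure the
    Gemini SDK expects ('user' and 'model' roles)."""
    role_map = {"user": "user", "assistant": "model"}
    return [{"role": role_map[speaker], "parts": [text]} for speaker, text in turns]

# Illustrative usage; runs only when an API key is configured
if os.environ.get("GEMINI_API_KEY"):
    import google.generativeai as genai  # pip install google-generativeai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")
    chat = model.start_chat(
        history=to_gemini_history([("user", "Hi!"), ("assistant", "Hello!")])
    )
    # stream=True enables real-time chunked output
    for chunk in chat.send_message("Explain FastAPI briefly.", stream=True):
        print(chunk.text, end="", flush=True)
```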
- Integration with Ollama for local model inference
- Meta's Llama 3.2 3B model for offline processing
- Lightweight yet powerful responses
- Privacy-focused local computation
- Markdown formatting and code highlighting
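Local inference via Ollama can be sketched against its streaming HTTP API, which returns newline-delimited JSON chunks. The endpoint, field names, and the `llama3.2:3b` model tag follow Ollama's public conventions and are assumptions about how this project invokes it.

```python
import json

def collect_stream(ndjson_lines) -> str:
    """Join the 'response' fragments from Ollama's streaming NDJSON output."""
    parts = []
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

def ask_ollama(prompt: str, model: str = "llama3.2:3b") -> str:
    """Query a locally running Ollama server (default port 11434)."""
    import requests
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    )
    return collect_stream(resp.iter_lines(decode_unicode=True))
```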
- Generate images from text descriptions
- High-quality image output (1024x1024)
- Powered by FLUX.1-schnell model
- Download generated images
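Image generation with FLUX.1-schnell can be sketched against the HuggingFace Inference API, which returns raw image bytes for a text prompt. The URL pattern and request shape follow HuggingFace's public inference conventions; the output filename is an arbitrary example.

```python
import os

HF_MODEL = "black-forest-labs/FLUX.1-schnell"

def build_request(prompt: str, token: str) -> tuple[str, dict, dict]:
    """Build URL, headers, and body for a HuggingFace Inference API call."""
    url = f"https://api-inference.huggingface.co/models/{HF_MODEL}"
    headers = {"Authorization": f"Bearer {token}"}
    body = {"inputs": prompt}  # the API returns image bytes on success
    return url, headers, body

# Illustrative usage; runs only when a key is configured
if os.environ.get("HUGGINGFACE_API_KEY"):
    import requests
    url, headers, body = build_request(
        "a watercolor fox", os.environ["HUGGINGFACE_API_KEY"]
    )
    resp = requests.post(url, headers=headers, json=body, timeout=120)
    with open("generated.png", "wb") as f:
        f.write(resp.content)
```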
- Upload and process documents for context-aware responses
- Supports multiple file formats (PDF, DOCX, TXT, etc.)
- Semantic search using ChromaDB
- Context-aware responses based on uploaded documents
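The RAG flow can be sketched in two parts: a plain chunker that splits uploaded text into overlapping windows, and a ChromaDB collection that embeds and retrieves those chunks. The chunk sizes, collection name, and file path are illustrative assumptions, not the project's actual settings.

```python
import os

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks for embedding.

    Overlap keeps sentences that straddle a boundary retrievable from
    both neighbouring chunks.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

try:
    import chromadb  # optional; pip install chromadb
except ImportError:
    chromadb = None

# Illustrative usage: index a file and run a semantic query
if chromadb and os.path.exists("notes.txt"):
    client = chromadb.Client()  # in-memory; a real app may persist to disk
    col = client.create_collection("docs")
    chunks = chunk_text(open("notes.txt").read())
    col.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])
    hits = col.query(query_texts=["what is KnowFlow?"], n_results=3)
    print(hits["documents"])
```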
- Interactive drawing canvas
- Real-time AI assistance
- Color picker and drawing tools
- Screen capture integration
- Voice interaction with audio feedback
- Ask questions about images using voice
- Multilingual support (including Hindi and Spanish)
- Real-time camera integration
- Voice responses using Google Text-to-Speech
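The multilingual voice replies can be sketched with a small language-code lookup feeding Google Cloud Text-to-Speech. The language subset, voice settings, and output filename are illustrative assumptions; the client calls follow the `google-cloud-texttospeech` library's documented API.

```python
import os

# BCP-47 codes for languages mentioned above (illustrative subset)
LANGUAGE_CODES = {"english": "en-US", "hindi": "hi-IN", "spanish": "es-ES"}

def resolve_language(name: str) -> str:
    """Map a human language name to a BCP-47 code, defaulting to English."""
    return LANGUAGE_CODES.get(name.strip().lower(), "en-US")

# Illustrative usage; runs only when Google Cloud credentials are configured
if os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"):
    from google.cloud import texttospeech  # pip install google-cloud-texttospeech
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text="Namaste!"),
        voice=texttospeech.VoiceSelectionParams(
            language_code=resolve_language("hindi")
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open("reply.mp3", "wb") as f:
        f.write(response.audio_content)
```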
- Backend: FastAPI, Python 3.11
- AI Models:
- Google Gemini 2.0 Flash for text generation
- FLUX.1-schnell for image generation
- Tavily API for web search and content extraction
- Meta's Llama 3.2 3B via Ollama for local processing
- Database: ChromaDB for vector storage
- Frontend: JavaScript, TailwindCSS
- Tavily Search API: Advanced web search and content extraction
- Google Gemini API: State-of-the-art language model
- Google Cloud TTS: Text-to-speech capabilities
- HuggingFace: Embedding models and transformers
- Ollama: Local model inference
- Git
- For Windows: PowerShell with administrator privileges
- For macOS/Linux: Terminal with sudo access
Create a `.env` file with:

```env
GEMINI_API_KEY=your_gemini_api_key
# Get from Google AI Studio: https://makersuite.google.com/app/apikey

TAVILY_API_KEY=your_tavily_api_key
# Get from Tavily AI Dashboard: https://tavily.com/dashboard

HUGGINGFACE_API_KEY=your_huggingface_api_key
# Get from HuggingFace Settings: https://huggingface.co/settings/tokens

LLAMA_CLOUD_API_KEY=your_llama_api_key
# Get from LlamaCloud Dashboard: https://cloud.llamaindex.ai/

GOOGLE_APPLICATION_CREDENTIALS=credentials/google-cloud-credentials.json
# (don't change this path)
# Get from Google Cloud Console: https://console.cloud.google.com/apis/credentials
# Download the JSON key and paste it into the existing credentials/ folder
```

- Clone the repository:
```bash
git clone [repository-url]
cd knowflow
```

- Choose the appropriate setup method for your operating system:
```powershell
# Run PowerShell as Administrator
.\setup.ps1
```

The Windows setup script will automatically:
- Install Python 3.11 (skipped if already installed)
- Create a Python virtual environment
- Activate the virtual environment
- Install uv package manager in the virtual environment
- Install project requirements using uv
- Install and configure Ollama
- Pull the Llama 3.2 model
Note: The script checks for existing installations and skips steps that aren't needed. If you already have Python 3.11 installed, it will skip the installation step and proceed with creating the virtual environment.
```bash
# Make the setup script executable
chmod +x setup.sh

# Run the setup script
./setup.sh
```

The macOS/Linux setup script will automatically:
- Install Homebrew (macOS only, skipped if already installed)
- Install Python 3.11 (skipped if already installed)
- Create a Python virtual environment
- Activate the virtual environment
- Install uv package manager in the virtual environment
- Install project requirements using uv
- Install and configure Ollama (skipped if already installed)
- Pull the Llama 3.2 model
Note: The script checks for existing installations and skips steps that aren't needed. For example:
- On macOS: If you already have Homebrew and Python 3.11 installed, it will skip those steps
- On Linux: If Python 3.11 is already installed via your package manager (apt, dnf, etc.), it will skip that step
- For both: If Ollama is already installed, it will skip the installation step
Important Note: After running the setup script, you need to activate the virtual environment in a new terminal session before starting the application.
- First, activate the virtual environment:
Windows (PowerShell):

```powershell
.\venv\Scripts\Activate.ps1
```

macOS/Linux:

```bash
source venv/bin/activate
```

You should see `(venv)` appear at the beginning of your terminal prompt, indicating the virtual environment is active.
- Start the FastAPI server:
```bash
uvicorn main:app --reload
```

- Access the web interface at:
http://localhost:8000
Every time you open a new terminal to work on the project:
- Navigate to the project directory
- Activate the virtual environment (using the commands above)
- Then you can run the server or other commands
To stop the server:
- Press `Ctrl+C` in the PowerShell window (recommended)
- If the server doesn't stop:

```powershell
# Kill the process
taskkill /F /IM python.exe
```
Note: The kill command forcefully terminates processes. Only use it if Ctrl+C doesn't work.
- Press `Ctrl+C` in the terminal (recommended)
- If the server doesn't stop:

```bash
# First suspend the process
Ctrl+Z

# Then find and kill the process on port 8000
lsof -ti :8000 | xargs kill -9
```
Note: The kill commands forcefully terminate processes. Only use them if Ctrl+C doesn't work.
- Type your question in the chat input
- The system will:
- Search the web using Tavily API
- Extract relevant content from top sources
- Generate a comprehensive answer using Gemini
- Stream the response in real-time
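The steps above can be sketched as a small pipeline that chains search, extraction, and generation, with each stage passed in as a function. The function names, prompt template, and the "top 3 sources" cutoff are illustrative assumptions, not the project's actual implementation.

```python
from typing import Callable, Iterable

def answer_query(
    query: str,
    search: Callable[[str], list[str]],        # step 1: web search (e.g. Tavily)
    extract: Callable[[str], str],             # step 2: pull text from one URL
    generate: Callable[[str], Iterable[str]],  # steps 3-4: LLM streaming chunks
) -> Iterable[str]:
    """Chain search -> extraction -> generation, yielding streamed chunks."""
    urls = search(query)
    context = "\n\n".join(extract(u) for u in urls[:3])  # top sources only
    prompt = f"Using the sources below, answer: {query}\n\nSources:\n{context}"
    yield from generate(prompt)
```

Injecting the stages as callables keeps each one independently testable and swappable, e.g. replacing the web search with the local model path.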
- Click the microchip icon to enable local model mode
- Type your question
- Get responses from the Llama 3.2 3B model running locally
- Enjoy privacy-focused, offline processing
- Click the gallery icon
- Enter your image description
- Wait for the image to be generated
- Download using the download button
- Toggle RAG mode
- Upload documents
- Ask questions about the documents
- Click the pencil icon
- Use drawing tools
- Speak to get AI assistance
- Click the video icon
- Allow camera and microphone access
- Ask questions about what you see
- Get voice responses