- Autonomous research mode that can search and read websites
- Real-time communication via WebSocket
- Cool cyberpunk UI
- Python 3.8+ (Python 3.13 has issues with multiprocessing; use 3.8-3.11 for best compatibility)
- Access to an LLM API endpoint such as kluster.ai
- Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Create a `.env` file in the project root:
```env
llm_base_url=your_llm_api_endpoint
llm_api_key=your_api_key
host=0.0.0.0
port=8000
debug=false
```

- Run the application:
For development (with auto-reload):

```bash
python -m src.backend.api.app
```

For production:
```bash
# single worker mode (most stable)
uvicorn src.backend.api.app:app --host 0.0.0.0 --port 8000

# multi-worker mode (if using Python <3.12)
uvicorn src.backend.api.app:app --workers 4 --host 0.0.0.0 --port 8000
```

Note: if using Python 3.13+, stick to single-worker mode due to multiprocessing limitations.
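If you wrap startup in a launcher script, the version check above can be automated. A minimal sketch (the `pick_workers` helper is hypothetical, not part of the project):

```python
import sys

def pick_workers(requested: int = 4) -> int:
    """Return the requested worker count on Python <3.12,
    otherwise fall back to a single worker, matching the
    multiprocessing caveat noted above."""
    return requested if sys.version_info < (3, 12) else 1

# Pass the result to uvicorn's --workers flag (or uvicorn.run(workers=...))
print(pick_workers())
```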
The application will be available at http://localhost:8000.
The frontend is built with:
- TailwindCSS for styling
- marked.js for Markdown rendering
- WebSocket for real-time communication
To modify the frontend:
- Edit files in `src/frontend/`; changes are reflected immediately (no build step required)
The backend uses:
- FastAPI for the web framework
- LangGraph for LLM interaction
- DuckDuckGo for web searches
- BeautifulSoup for web scraping
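As a rough illustration of the scraping layer (a sketch, not the project's actual code), BeautifulSoup can strip tags and collect links from fetched HTML:

```python
from bs4 import BeautifulSoup

def extract_text_and_links(html: str):
    """Return the page's visible text and its hyperlinks."""
    soup = BeautifulSoup(html, "html.parser")
    # drop script/style noise before extracting text
    for tag in soup(["script", "style"]):
        tag.decompose()
    text = soup.get_text(" ", strip=True)
    links = [a["href"] for a in soup.find_all("a", href=True)]
    return text, links

page = "<html><body><h1>Hi</h1><a href='https://example.com'>link</a></body></html>"
text, links = extract_text_and_links(page)
print(text)   # Hi link
print(links)  # ['https://example.com']
```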
To modify the backend:
- Edit files in `src/backend/`; the server will auto-reload on changes when running in debug mode
All configuration is handled through environment variables:
- `llm_base_url`: base URL for the LLM API endpoint
- `llm_api_key`: API key for LLM access
- `host`: host to bind the server to
- `port`: port to run the server on
- `debug`: enable debug mode
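For example, a settings helper might read these with sensible defaults. This is a standard-library sketch using the variable names listed above (the actual project may load them differently, e.g. via a settings framework):

```python
import os

def load_settings() -> dict:
    """Read configuration from environment variables, with safe defaults."""
    return {
        "llm_base_url": os.getenv("llm_base_url", ""),
        "llm_api_key": os.getenv("llm_api_key", ""),
        "host": os.getenv("host", "0.0.0.0"),
        "port": int(os.getenv("port", "8000")),          # ports are numeric
        "debug": os.getenv("debug", "false").lower() == "true",
    }
```

Note that `os.getenv` is case-sensitive on Linux, so the names here must match exactly what is exported (or placed in `.env` and loaded into the environment).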
