A fork of Open WebUI extended with HatCat real-time concept interpretability and steering controls.
HatCat UI extends the OpenAI-compatible chat interface with:
- Colored token highlighting based on divergence scores (green → red)
- Hover popups showing activation probes, text classifiers, and top diverging concepts
- Per-token metadata streamed in real-time during generation
- Steering panel with adjustable strength sliders (-1.0 to +1.0)
- Recent concepts chips for quick steering activation
- Multiple active steerings can be combined
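The green-to-red highlighting and the steering slider range can be sketched in a few lines. This is an illustrative assumption, not the fork's actual code: `divergenceColor`, the 0–1 normalization, and the `Steering` shape are all hypothetical names.

```typescript
// Hypothetical sketch: map a per-token divergence score to a highlight color.
// Green (low divergence) → red (high divergence); the clamp range is an assumption.
function divergenceColor(divergence: number): string {
  const t = Math.min(1, Math.max(0, divergence)); // normalize to [0, 1]
  const hue = Math.round(120 * (1 - t));          // 120 = green, 0 = red
  return `hsl(${hue}, 80%, 45%)`;
}

// Hypothetical steering entry matching the slider range described above;
// multiple entries can be active at once and are combined by the backend.
interface Steering {
  concept: string;
  strength: number; // -1.0 (suppress) … +1.0 (amplify)
}

const active: Steering[] = [
  { concept: "Demonstrating", strength: 0.5 },
  { concept: "PhysicalSystem", strength: -0.3 },
];
```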
The fork extends the standard OpenAI streaming response format to include interpretability metadata:
```json
{
  "choices": [{
    "delta": {
      "content": "token",
      "hatcat_metadata": {
        "divergence": 0.237,
        "top_concepts": ["Demonstrating", "PhysicalSystem"],
        "activations": {"Demonstrating": 0.741, "PhysicalSystem": 0.836}
      }
    }
  }]
}
```

This UI requires the HatCat server to function. The standard Open WebUI backend will not provide the interpretability features.
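Consuming the extended stream works like handling a standard OpenAI delta, with one extra optional field. A minimal sketch, where the type names mirror the JSON above but the `extractTokenInfo` helper itself is illustrative:

```typescript
// Types mirroring the extended streaming format shown above.
interface HatcatMetadata {
  divergence: number;
  top_concepts: string[];
  activations: Record<string, number>;
}

interface StreamDelta {
  content?: string;
  hatcat_metadata?: HatcatMetadata; // absent on a stock OpenAI backend
}

// Illustrative helper: pull the token text and its interpretability
// metadata out of one parsed streaming chunk.
function extractTokenInfo(chunk: { choices: { delta: StreamDelta }[] }): {
  token: string;
  metadata?: HatcatMetadata;
} {
  const delta: StreamDelta = chunk.choices[0]?.delta ?? {};
  return { token: delta.content ?? "", metadata: delta.hatcat_metadata };
}
```

Because `hatcat_metadata` is additive, clients that ignore unknown fields continue to work against this format unchanged.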
```bash
# Start HatCat server (from the HatCat repo)
cd ../HatCat
.venv/bin/python src/ui/openwebui/server.py --port 8765

# Start this UI
npm install
npm run dev
```

Configure the UI to connect to your HatCat server endpoint.
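Assuming the stock Open WebUI connection settings, this typically means pointing the OpenAI-compatible API base URL at the HatCat server started above; the exact setting name and the `/v1` path prefix are assumptions that depend on your configuration:

```shell
# Illustrative: point Open WebUI's OpenAI-compatible endpoint at the
# HatCat server (port matches the --port 8765 flag used above).
export OPENAI_API_BASE_URL="http://localhost:8765/v1"
```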
| File | Changes |
|---|---|
| `src/lib/components/chat/Messages/Markdown.svelte` | Token highlighting, divergence visualization |
| `src/lib/components/chat/Messages/ResponseMessage.svelte` | Metadata extraction from stream |
| `src/lib/components/chat/Chat.svelte` | Steering panel integration |
| `src/lib/components/chat/MessageInput.svelte` | Steering controls |
This fork tracks the dev branch of Open WebUI and periodically merges upstream changes. HatCat-specific commits are kept minimal and isolated to the components listed above.
- Open WebUI - The excellent base interface this fork extends
- HatCat - The interpretability and steering backend
MIT (same as Open WebUI)
For documentation on the base Open WebUI features, see the Open WebUI Documentation.
