Salience

Salience is an experimental AI-powered chat aggregator and noise filter designed for high-volume live streams (Twitch, Kick, TikTok, YouTube, Whatnot).

It solves "chat blindness" by using local LLMs and vector embeddings to group spam, highlight meaningful interactions, and route messages based on their semantic importance.

🧠 Core Architecture

Salience operates using two primary engines:

1. The Gravity Engine (src/brain.py)

  • Semantic Grouping: Instead of a waterfall of text, messages are treated as "particles."
  • Vector Embeddings: Uses FastEmbed to convert messages into vectors.
  • Gravity Cells: Similar messages (e.g., "LMAO", "lol", "hahaha", "ROFL") are magnetically pulled together into a single "Gravity Cell" with an increasing mass.
  • Platform Specifics: Includes logic for auction-based platforms like Whatnot to handle bid spam and timer events.
  • Visual Outcome: 50 people typing "F" becomes one large, pulsing "F" on screen, rather than 50 lines of text.
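The grouping step above can be sketched in a few lines. This is an illustrative toy, not the project's actual implementation: a real deployment would use FastEmbed vectors and the thresholds in `src/brain.py`, while here a simple character-frequency embedding and an assumed similarity threshold of 0.9 stand in so the example is self-contained.

```python
# Sketch of the "gravity cell" idea: messages whose embeddings are close
# merge into one cell whose mass grows. The embedding and threshold below
# are illustrative stand-ins, not Salience's actual values.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase character frequencies."""
    return Counter(text.lower())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class GravityCell:
    def __init__(self, text: str):
        self.label = text          # representative message shown on screen
        self.vector = embed(text)  # cell centroid (first message's vector)
        self.mass = 1              # number of messages merged into the cell

def ingest(cells: list, text: str, threshold: float = 0.9) -> None:
    vec = embed(text)
    for cell in cells:
        if cosine(cell.vector, vec) >= threshold:
            cell.mass += 1         # similar message pulled into the cell
            return
    cells.append(GravityCell(text)) # no close cell: message starts its own

cells = []
for msg in ["F", "F", "F", "hahaha", "hahahaha", "what GPU is that?"]:
    ingest(cells, msg)
print([(c.label, c.mass) for c in cells])
# → [('F', 3), ('hahaha', 2), ('what GPU is that?', 1)]
```

Three "F"s collapse into one cell of mass 3 (the "one large, pulsing F"), while the unique question survives as its own cell.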

2. The Salience Engine (src/salience_engine.py)

  • Cognitive Filtering: A local LLM (default: phi4 via Ollama) analyzes unique messages.
  • Importance Scoring: Every unique message is scored (1-10) based on context.
    • Low Score (<3): Routed to background/ambient noise.
    • High Score (>7): Routed to the "Focus Feed" (e.g., questions, high-value bets, technical help).
  • Emotional Tagging: Messages are tagged with sentiment/emotion for visualization.
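The routing rules above can be sketched as follows. The thresholds (below 3 goes ambient, above 7 goes to the Focus Feed) come from this README; the score itself would be produced by the local LLM, which is stubbed out here, and the name of the middle feed ("main") is an assumption for illustration.

```python
# Sketch of salience routing. In the real engine a local LLM (phi4 via
# Ollama) produces the 1-10 score; here scores are supplied directly so
# the routing logic is runnable on its own. "main" is a hypothetical
# name for the middle feed.
from dataclasses import dataclass

@dataclass
class ScoredMessage:
    text: str
    score: int      # 1-10 importance assigned by the LLM
    emotion: str    # sentiment tag used for visualization

def route(msg: ScoredMessage) -> str:
    """Map an importance score to a destination feed."""
    if msg.score < 3:
        return "ambient"   # background noise
    if msg.score > 7:
        return "focus"     # questions, high-value bets, technical help
    return "main"          # everything in between

print(route(ScoredMessage("what GPU is that?", 9, "curious")))  # → focus
print(route(ScoredMessage("lol", 1, "amused")))                 # → ambient
```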

🛠️ Stack

  • Language: Python 3.10+
  • Local LLM: Ollama (running phi4 or similar)
  • Vector DB: ChromaDB (Ephemeral/In-Memory)
  • Embeddings: FastEmbed
  • UI: NiceGUI (for the dashboard)
  • Ingestion: Custom adapters (Playwright/Stealth for difficult platforms).

🚀 Getting Started

Prerequisites

  1. Python 3.10+ installed.
  2. Ollama installed and running.
    ollama run phi4

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/salience.git
    cd salience
  2. Install dependencies:

    pip install -r requirements.txt

Usage

To run the interactive demo (Mock Mode):

python src/main.py

To run the full engine (requires configured .env and running Ollama):

  1. Edit src/main.py to set mock_mode=False.
  2. Run python src/main.py.

⚠️ Disclaimer

This project uses browser automation (Playwright) to ingest chat from certain platforms. This method is fragile and will break if platform DOM structures change. Use with caution.
