An automated multi-agent orchestration system that looks for “alien signals” in YouTube recommendation patterns (pattern analysis).
This is a local, offline-friendly ASP.NET Core Razor Pages app that:
- Accepts a ZIP of saved YouTube Home HTML snapshots (e.g., SingleFile `.html` saves).
- Extracts video “tiles” (title + thumbnail URL when available).
- Runs a multi-agent LLM pipeline (Signal → Interpreter → Skeptic → Synth).
- Writes outputs to disk and displays them in a clean Activity panel.
- ✅ Upload ZIP → safe extraction (blocks zip path traversal)
- ✅ Parse tiles from:
  - `ytInitialData` (when present), and/or
  - DOM parsing fallback (SingleFile-friendly)
- ✅ Multi-agent orchestration:
- Signal: clusters + anomalies (JSON output)
- Interpreter: roleplay hypothesis grounded in evidence
- Skeptic: normal explanations + bias checks
- Synth: final combined report with confidence + next experiments
- ✅ Activity endpoint that auto-refreshes and shows:
- Final report
- Signal JSON
- Interpreter output
- Skeptic output
- ✅ Results persisted per job under `App_Data/jobs/<jobId>/`
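The “blocks zip path traversal” guard mentioned above protects against zip-slip entries such as `../../evil.html`. A minimal sketch of that check, with assumed names (not the app’s actual `Index.cshtml.cs` code):

```csharp
using System;
using System.IO;
using System.IO.Compression;

static class SafeZip
{
    public static void ExtractTo(string zipPath, string destDir)
    {
        Directory.CreateDirectory(destDir);
        // Normalize the destination root once, with a trailing separator,
        // so the prefix check below cannot be fooled by sibling folders.
        string destRoot = Path.GetFullPath(destDir + Path.DirectorySeparatorChar);

        using var archive = ZipFile.OpenRead(zipPath);
        foreach (var entry in archive.Entries)
        {
            if (string.IsNullOrEmpty(entry.Name)) continue; // directory entry

            string target = Path.GetFullPath(Path.Combine(destDir, entry.FullName));
            // Block zip-slip: the resolved path must stay under the destination root.
            if (!target.StartsWith(destRoot, StringComparison.Ordinal))
                throw new IOException($"Blocked path traversal entry: {entry.FullName}");

            Directory.CreateDirectory(Path.GetDirectoryName(target)!);
            entry.ExtractToFile(target, overwrite: true);
        }
    }
}
```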
- `Pages/Index.cshtml` – Upload UI + Activity panel (auto-refresh)
- `Pages/Index.cshtml.cs` – Upload handler, ZIP extraction, parsing, orchestration, file writes
- `AlienOrchestrator.cs` – Multi-agent pipeline + proof-text capping
- `LmStudioChatClient.cs` – OpenAI-compatible chat client for LM Studio
- `LlmAgent.cs`, `IAgent.cs` – Agent wrapper(s)
- `AgentPrompts.cs` – Prompt “system” roles for each agent
- `YouTubeSnapshotParser.cs` – HTML parsing → `VideoTile` extraction
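The Signal → Interpreter → Skeptic → Synth flow boils down to chaining four agent calls. A hedged sketch of that shape — the real `AlienOrchestrator.cs` and `IAgent.cs` may differ in signatures and prompt wiring:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Assumed agent contract; the project's IAgent.cs may declare something different.
public interface IAgent
{
    Task<string> RunAsync(string input, CancellationToken ct = default);
}

public sealed record JobResult(
    string SignalJson, string Hypothesis, string Rebuttal, string FinalReport);

public sealed class PipelineSketch
{
    private readonly IAgent _signal, _interpreter, _skeptic, _synth;

    public PipelineSketch(IAgent signal, IAgent interpreter, IAgent skeptic, IAgent synth)
        => (_signal, _interpreter, _skeptic, _synth) = (signal, interpreter, skeptic, synth);

    public async Task<JobResult> RunAsync(string proofText, CancellationToken ct = default)
    {
        // Signal: clusters + anomalies as JSON.
        string signalJson = await _signal.RunAsync(proofText, ct);
        // Interpreter and Skeptic both react to the Signal output.
        string hypothesis = await _interpreter.RunAsync(signalJson, ct);
        string rebuttal   = await _skeptic.RunAsync(signalJson, ct);
        // Synth combines all three into the final report.
        string finalReport = await _synth.RunAsync(
            $"SIGNAL:\n{signalJson}\n\nINTERPRETER:\n{hypothesis}\n\nSKEPTIC:\n{rebuttal}", ct);
        return new JobResult(signalJson, hypothesis, rebuttal, finalReport);
    }
}
```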
- .NET SDK 8 or newer (recommended: latest LTS)
You need LM Studio running a local OpenAI-compatible server:
- Install LM Studio
- Download/load a chat model
- Start the Local Server (OpenAI-compatible)
This project expects the server at:
http://localhost:1234/v1/chat/completions
If you change the port or base URL, update your app configuration accordingly.
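For reference, the request shape an OpenAI-compatible client sends to that endpoint looks roughly like this — a sketch, not `LmStudioChatClient`’s actual code, and the model name is a placeholder:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:1234/v1/") };

// Anonymous object mirrors the OpenAI chat-completions request schema.
var response = await http.PostAsJsonAsync("chat/completions", new
{
    model = "your-model-name-here",
    messages = new[]
    {
        new { role = "system", content = "You are the Signal agent." },
        new { role = "user", content = "Analyze these tiles..." }
    },
    temperature = 0.2
});
response.EnsureSuccessStatusCode();

// The reply text lives at choices[0].message.content.
using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
string reply = doc.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString()!;
```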
If you are using DOM fallback parsing, install the AngleSharp NuGet package (`dotnet add package AngleSharp`).
You can configure LM Studio settings via appsettings.json (recommended) or hard-code them in `LmStudioChatClient`.
Example appsettings.json snippet:
```json
{
  "LmStudio": {
    "BaseUrl": "http://localhost:1234/v1/",
    "Model": "your-model-name-here"
  }
}
```

Then bind in `Program.cs` (or inject via options). If you already inject `LmStudioChatClient` and `AlienOrchestrator`, you’re good.
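One possible binding in `Program.cs`, assuming an options class that mirrors the snippet above (the class and property names here are illustrative, not the project’s actual code):

```csharp
using Microsoft.Extensions.Options;

// Hypothetical options class matching the "LmStudio" config section.
public sealed class LmStudioOptions
{
    public string BaseUrl { get; set; } = "http://localhost:1234/v1/";
    public string Model { get; set; } = "";
}

// In Program.cs, after `var builder = WebApplication.CreateBuilder(args);`:
builder.Services.Configure<LmStudioOptions>(
    builder.Configuration.GetSection("LmStudio"));

// Typed HttpClient so LmStudioChatClient gets a pre-configured base address.
builder.Services.AddHttpClient<LmStudioChatClient>((sp, client) =>
{
    var opts = sp.GetRequiredService<IOptions<LmStudioOptions>>().Value;
    client.BaseAddress = new Uri(opts.BaseUrl);
});
```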
- Load your model
- Start the local server
- Confirm it answers at `http://localhost:1234/v1/chat/completions`
```shell
dotnet restore
dotnet run
```

Open the printed URL (usually `https://localhost:xxxx`).
1. Create YouTube Home snapshots:
   - Open YouTube Home (signed in if you want personalization)
   - Save each page as a SingleFile HTML (or similar)
   - Do this multiple times (e.g., `yt_home_01.html` … `yt_home_20.html`)
2. ZIP the HTML files together.
3. Upload the ZIP in the app.
4. Watch the Activity panel:
   - It will show state changes (extracted → running_agents → completed)
   - Expand accordions for Signal / Interpreter / Skeptic / Final Report
Each upload creates a job folder:
```
App_Data/jobs/<jobId>/
  snapshot.zip
  extracted/
    (your html files)
  status.json
  results.json
```

- `status.json` contains state + counts
- `results.json` contains the serialized `JobResult` (agent outputs)
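For orientation, a `status.json` might look something like this — the field names below are illustrative guesses, not the app’s actual schema:

```json
{
  "state": "completed",
  "htmlFiles": 20,
  "tiles": 412
}
```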
If your ZIP is large, your input can exceed the model’s context length.
Fix options:
- Reduce the number of tiles/pages you feed into the proof text
- Cap the proof text length (recommended)
- Use a model with a larger context window
- Increase context settings in LM Studio (if supported by your model/runtime)
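The recommended cap can be a simple truncate-with-marker helper. A sketch — `AlienOrchestrator.cs` has its own capping logic, and the default limit here is an arbitrary assumption:

```csharp
// Truncate proof text so the combined prompt stays within the model's context.
static string CapProofText(string proofText, int maxChars = 12_000)
{
    if (proofText.Length <= maxChars) return proofText;
    // Keep the head of the evidence and note the truncation so agents
    // know data was dropped rather than silently missing.
    return proofText[..maxChars]
        + $"\n\n[TRUNCATED: proof text capped at {maxChars} chars]";
}
```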
This can happen if:
- Your snapshot doesn’t include the
ytInitialDatapayload - The page is saved in a way that strips scripts/data
Fix options:
- Enable the DOM fallback parsing path (AngleSharp)
- Ensure you saved a fully-loaded YouTube Home page
- Try SingleFile with “include scripts” enabled (if available)
Some models ignore prompt instructions. Fix options:
- Switch models
- Add stronger “English-only / no reasoning tags” constraints
- Add a post-filter step to strip `<think>…</think>` blocks
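Such a post-filter can be a one-line regex. A sketch, assuming the tags are well-formed and you want to drop everything inside them:

```csharp
using System.Text.RegularExpressions;

// Remove <think>…</think> reasoning blocks (including multi-line ones)
// from model output before displaying or persisting it.
static string StripThinkBlocks(string modelOutput) =>
    Regex.Replace(modelOutput, @"<think>.*?</think>", "",
        RegexOptions.Singleline | RegexOptions.IgnoreCase).Trim();
```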
- Built with ASP.NET Core Razor Pages
- Uses an OpenAI-compatible local server (LM Studio)
- Optional DOM parsing via AngleSharp