Home
Welcome to the SlimeNexus technical documentation. This project is more than a gamified pet; it is an academic and technical exploration of Local Agentic AI Orchestration, Cross-Platform Systems Engineering, and Gamified Productivity Systems (GPS).
At its core, SlimeNexus addresses the growing need for Decentralized AI Integration. As cloud-based LLMs grow more restricted and costly, SlimeNexus explores the feasibility of Edge Intelligence: bringing high-level reasoning to the user's local hardware.
- Human-Computer Interaction (HCI): Investigating how gamification (via the Slime/Tamagotchi metaphor) reduces the friction of daily task management and habit formation.
- Edge AI Performance: Benchmarking the efficiency of RDNA-based GPUs (like the AMD RX 6750 XT) when handling simultaneous LLM inference and OS-level automation.
- Local Agentic Security: Designing a "Zero-Trust" local bridge where a web-based frontend can safely trigger system-level actions via an authenticated C# intermediary.
To ensure scalability and maintainability, SlimeNexus is built upon industry-standard Design Patterns within the .NET 9 ecosystem.
The solution is decoupled into layers:
- Domain: Pure logic and Slime state definitions (No dependencies).
- Application: Use Cases, Command/Query handling, and AI Orchestration.
- Infrastructure: OS-specific implementations (Windows WMI, Linux Systemd, macOS Metal).
- Presentation: The Avalonia UI and the Local API Gateway.
Since SlimeNexus must run on Windows, Linux, and macOS, we utilize the Strategy Pattern to inject hardware-specific probes at runtime. This allows the core engine to request "GPU VRAM" without knowing whether it is talking to an AMD Radeon, NVIDIA, or Apple Silicon chip.
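The probe injection described above can be sketched as follows. This is an illustrative Strategy Pattern sketch in Python (the actual Infrastructure layer is C#), and the interface name `IGpuProbe`, the vendor classes, and the VRAM figures are all assumptions, not real SlimeNexus types:

```python
from abc import ABC, abstractmethod

# Hypothetical probe interface; the real SlimeNexus probes live in the
# C# Infrastructure layer. Names and numbers here are illustrative.
class IGpuProbe(ABC):
    @abstractmethod
    def vram_total_mb(self) -> int: ...

class AmdProbe(IGpuProbe):
    def vram_total_mb(self) -> int:
        # A real probe would query the driver (e.g. via WMI on Windows).
        return 12288  # e.g. an RX 6750 XT

class AppleSiliconProbe(IGpuProbe):
    def vram_total_mb(self) -> int:
        return 16384  # unified memory as reported by Metal; placeholder

def select_probe(vendor: str) -> IGpuProbe:
    """Inject the vendor-specific strategy at runtime."""
    probes = {"amd": AmdProbe, "apple": AppleSiliconProbe}
    return probes[vendor]()

# The core engine only ever sees the IGpuProbe interface:
probe = select_probe("amd")
print(probe.vram_total_mb())  # 12288
```

Because the engine depends only on the interface, adding NVIDIA support means adding one class and one registry entry, with no changes to the core.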
We use MediatR (or a similar lightweight implementation) to decouple the Local API requests from the execution logic. This ensures that a "Task Validation" request from the web follows a predictable, testable, and logged pipeline.
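The mediator pipeline can be sketched in miniature. This is not MediatR itself (which is a C# library); it is a hand-rolled Python analogue, and the `ValidateTaskCommand` name is hypothetical:

```python
import logging
from dataclasses import dataclass
from typing import Callable, Dict, Type

# Illustrative stand-in for a MediatR-style request; the real command
# types in SlimeNexus are C# and their names are not documented here.
@dataclass
class ValidateTaskCommand:
    task_id: str

class Mediator:
    """Routes each request to exactly one registered handler."""
    def __init__(self) -> None:
        self._handlers: Dict[Type, Callable] = {}

    def register(self, request_type: Type, handler: Callable) -> None:
        self._handlers[request_type] = handler

    def send(self, request):
        # Pipeline step: every request is logged before execution,
        # which is what makes the path predictable and testable.
        logging.info("Handling %s", type(request).__name__)
        return self._handlers[type(request)](request)

mediator = Mediator()
mediator.register(ValidateTaskCommand, lambda cmd: {"task": cmd.task_id, "valid": True})
result = mediator.send(ValidateTaskCommand(task_id="water-the-slime"))
print(result)  # {'task': 'water-the-slime', 'valid': True}
```

The Local API controller only ever calls `send`, so validation, logging, and authorization can be layered into the pipeline without touching the handlers.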
The SlimeNexus Agent observes local system changes (files, processes, git commits) and pushes updates to the Web Frontend via WebSockets (SignalR), maintaining a low-latency state sync.
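The push model behind that sync can be sketched with in-process queues standing in for the SignalR transport. This is a minimal observer sketch, assuming a hub type and event shape that are illustrative, not the agent's real API:

```python
import asyncio

# Minimal sketch of the push model: the agent publishes state changes
# and every connected frontend receives them without polling. The real
# transport is WebSockets (SignalR); asyncio queues stand in here.
class StateHub:
    def __init__(self) -> None:
        self._subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self._subscribers.append(q)
        return q

    async def publish(self, event: dict) -> None:
        for q in self._subscribers:
            await q.put(event)

async def main() -> None:
    hub = StateHub()
    frontend = hub.subscribe()
    # The agent observes a local change (e.g. a git commit) and pushes it:
    await hub.publish({"event": "git_commit", "xp": 10})
    update = await frontend.get()
    print(update)  # {'event': 'git_commit', 'xp': 10}

asyncio.run(main())
```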
In the SlimeNexus ecosystem, performance is not an afterthought—it is a requirement.
- Native AOT (Ahead-of-Time): By compiling to native code, we reduce the cold-start time and memory footprint of the background agent, ensuring it doesn't interfere with the user's primary tasks or gaming.
- RDNA 2/3 Optimization: Specialized logic for AMD hardware ensures that the 12 GB of VRAM (standard on mid-to-high-tier cards) is utilized efficiently, balancing the LLM context window against the system's graphical needs.
- Asynchronous Orchestration: Every interaction with Ollama and OpenClaw is non-blocking, preventing UI hangs and ensuring the "Jarvis" response feels instantaneous.
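The VRAM balancing act above amounts to simple budgeting arithmetic. The numbers below are illustrative assumptions (reserve size, model size, cache cost per token), not measured SlimeNexus values:

```python
# Back-of-the-envelope VRAM budget for a 12 GB RDNA card.
# All figures are illustrative assumptions, not measured values.
TOTAL_VRAM_GB = 12.0
DESKTOP_RESERVE_GB = 2.0         # compositor, browser, game overlays
MODEL_WEIGHTS_GB = 4.7           # e.g. an ~8B-parameter model at 4-bit quantization
KV_CACHE_PER_1K_TOKENS_GB = 0.5  # varies by model architecture; assumed here

def max_context_tokens() -> int:
    """Tokens of LLM context that fit in the remaining headroom."""
    headroom = TOTAL_VRAM_GB - DESKTOP_RESERVE_GB - MODEL_WEIGHTS_GB
    return int(headroom / KV_CACHE_PER_1K_TOKENS_GB * 1000)

print(max_context_tokens())  # 10600
```

The point of the sketch: the context window is the flexible term, so shrinking it is how the agent yields VRAM back to a game or the compositor on demand.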
- Low Latency: Sub-100ms communication between the Web UI and the Local Agent.
- Privacy by Design: 100% of the AI inference and file scanning remains on the user's machine.
- Extensibility: A plugin-based system where developers can write their own "OpenClaw Tools" in C# or Python to expand the Slime's capabilities.
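A Python-side plugin might look like the sketch below. The `OpenClawTool` contract and registry shown here are hypothetical, since the actual plugin interface is not documented on this page:

```python
from abc import ABC, abstractmethod
import shutil

# Hypothetical contract for an "OpenClaw Tool" plugin; the real
# interface is not specified here, so these names are illustrative.
class OpenClawTool(ABC):
    name: str

    @abstractmethod
    def run(self, **kwargs) -> str: ...

class DiskUsageTool(OpenClawTool):
    """Example tool: report how much of a disk is free."""
    name = "disk_usage"

    def run(self, path: str = ".") -> str:
        usage = shutil.disk_usage(path)
        return f"{usage.free / usage.total:.0%} free"

REGISTRY: dict[str, OpenClawTool] = {}

def register(tool: OpenClawTool) -> None:
    REGISTRY[tool.name] = tool

register(DiskUsageTool())
print(REGISTRY["disk_usage"].run())  # e.g. "42% free"
```

A registry keyed by tool name keeps dispatch trivial: the agent resolves the Slime's requested capability by name and never needs to know which plugin provides it.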
"The goal of SlimeNexus is to prove that the most powerful productivity tool is the one that lives where the work happens: locally on your machine, wrapped in an engaging, gamified experience."