A read-only SRE copilot powered by Mistral 7B for system administration troubleshooting. Provides advisory diagnostics and safe command suggestions without autonomous execution capability.
- Interactive Chat Mode: Full conversation with context history (up to 10 turns)
- One-Shot Query Mode: Headless mode for scripting and automation
- System Diagnostics: Automatic collection of memory, disk, processes, and system status
- SSH-Based: Communicates with remote Mistral instance on dedicated machine
- Read-Only by Design: Never executes commands; strictly advisory analysis only
- Safe Defaults: Explicit command whitelists and blocklists prevent dangerous operations
- Conversation Context: Maintains history for multi-turn diagnostics
- bash
- ssh (with key-based authentication to apollo)
- Mistral 7B instance running on a remote machine (apollo)
```shell
chmod +x setup-ssh.sh sysdawg
./setup-ssh.sh
```

This creates and configures `.env` with SSH connection details for your Mistral host.
```shell
./sysdawg
```

Starts a diagnostic conversation. Type questions and receive AI analysis with diagnostic context; conversation history is maintained for follow-up questions.
Example conversation:
You: Why is my load average so high?
[System diagnostics collected and analyzed]
Sysdawg: Based on your current metrics...
You: What processes are consuming the most CPU?
[Refined diagnostics sent with conversation history]
Sysdawg: Looking at your top processes...
```shell
./sysdawg -p "Why is my disk full?"
```

Ask a single question and get an immediate response. Ideal for automation and scripting.
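One-shot mode lends itself to threshold-based monitoring. A minimal sketch, assuming GNU `df`; the threshold, mount point, `--run` flag, and prompt text are illustrative, not part of sysdawg itself:

```shell
#!/usr/bin/env bash
# Hypothetical monitoring wrapper around one-shot mode.

disk_pct() {
    # Percent of space used on a mount point (GNU df output, e.g. "42").
    df --output=pcent "$1" | tail -n 1 | tr -dc '0-9'
}

main() {
    local threshold=${1:-90} mount=${2:-/} usage
    usage=$(disk_pct "$mount")
    if [ "${usage:-0}" -ge "$threshold" ]; then
        ./sysdawg -p "Filesystem $mount is at ${usage}% - what should I check?"
    fi
}

# Invoke only with an explicit flag, so the file can be sourced safely:
if [ "${1:-}" = "--run" ]; then main "${2:-90}" "${3:-/}"; fi
```

A cron entry could then call the script with `--run 90 /` to get advisory analysis only when the threshold is crossed.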
```shell
./sysdawg --help
./sysdawg -h
```

The `.env` file stores connection details for your Mistral instance:

```shell
APOLLO_HOST=apollo.local                        # Hostname or IP of Mistral machine
APOLLO_USER=jay                                 # SSH username
APOLLO_PORT=22                                  # SSH port
OLLAMA_MODEL=mistral                            # Model name in Ollama
OLLAMA_API_URL=http://localhost:11434/api/chat  # Ollama API endpoint
```

Run `./setup-ssh.sh` to create this file interactively. Never commit `.env` to version control.
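A script consuming this file might load and validate it like the following sketch; the `load_env` helper and its checks are illustrative, not sysdawg's actual startup code:

```shell
# Hypothetical .env loader with fail-fast validation.

load_env() {
    # Source the given env file, exporting every variable it defines,
    # then abort early if required settings are missing.
    set -a
    [ -f "$1" ] && . "$1"
    set +a
    : "${APOLLO_HOST:?APOLLO_HOST not set - run ./setup-ssh.sh}"
    : "${APOLLO_USER:?APOLLO_USER not set - run ./setup-ssh.sh}"
}
```

The `: "${VAR:?message}"` idiom prints the message and exits when the variable is unset, which keeps later SSH calls from failing with a confusing empty hostname.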
To make the persona more robust, you can bake the system prompt directly into a custom Ollama model. This is the recommended approach for better adherence to safety rules.
You can create the custom model either locally (if you run Ollama locally) or on your remote host (apollo).
Option A: Create on Remote Host (Recommended)
- Copy the Modelfile to your server:

  ```shell
  scp ollama_model/Modelfile user@apollo:~/sysdawg.Modelfile
  ```

- SSH into the server and build the model:

  ```shell
  ssh user@apollo "ollama create sysdawg -f ~/sysdawg.Modelfile"
  ```
Option B: Create Locally
If you are running Ollama on your local machine:
```shell
ollama create sysdawg -f ollama_model/Modelfile
```

Update your `.env` file to use the new model name:

```shell
# In .env
OLLAMA_MODEL=sysdawg
```

Sysdawg will now use your custom-tuned model, which has the persona and safety rules permanently embedded.
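For reference, a minimal Modelfile sketch, assuming Ollama's Modelfile syntax; the SYSTEM text and parameter value here are illustrative, not the project's actual persona:

```
FROM mistral

# Keep answers focused and deterministic for diagnostics (illustrative value).
PARAMETER temperature 0.2

SYSTEM """
You are sysdawg, a read-only SRE copilot. You never execute commands.
Suggest only read-only diagnostics and flag anything destructive as off-limits.
"""
```

Baking the persona in via `SYSTEM` means it survives even if the client forgets to send a system prompt on each request.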
```
Local Machine (orion)            Remote Machine (apollo)

sysdawg script      ──SSH──>     Mistral 7B
- Collects diagnostics           - Analyzes diagnostics
- Builds prompts                 - Returns advice
- Manages conversation           - No execution
```
- Initialization: Loads configuration from `.env` and reads the `SYSDAWG.md` persona
- Diagnostic Collection: Gathers system state (memory, disk, processes, logs, etc.)
- Prompt Building: Combines diagnostics + user question + AI persona
- SSH Communication: Sends prompt to Mistral instance via SSH
- Response Analysis: Receives advisory response and displays to user
- Context Maintenance: Tracks conversation history for follow-up questions
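The collection and prompt-building steps above can be sketched in shell; the function names and prompt layout are assumptions, not sysdawg's actual internals:

```shell
#!/usr/bin/env bash
# Illustrative sketch of diagnostic collection and prompt building.

collect_diagnostics() {
    # Read-only snapshots only; every command here is on the allowlist.
    echo "== uptime =="; uptime || true
    echo "== memory =="; free -h || true
    echo "== disk ==";   df -h  || true
}

build_prompt() {
    # Combine the persona, the diagnostics, and the user's question.
    local persona=$1 question=$2
    printf '%s\n\nDIAGNOSTICS:\n%s\n\nQUESTION: %s\n' \
        "$persona" "$(collect_diagnostics)" "$question"
}

# What sysdawg then does over SSH (shown as a comment, not executed here):
# build_prompt "$(cat SYSDAWG.md)" "Why is load high?" |
#     ssh "$APOLLO_USER@$APOLLO_HOST" "ollama run $OLLAMA_MODEL"
```

Keeping collection and prompt assembly separate makes it easy to add or remove collectors without touching the SSH transport.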
Sysdawg is fundamentally designed to be read-only. It never executes commands.
- System inspection: `cat`, `grep`, `ls`, `find`, `ps`, `top`, `free`, `df`, `uptime`
- Logging: `journalctl --no-pager`, `systemctl status`, `dmesg`
- Utilities: `awk`, `sed` (read-only), `netstat`, `ss`
- Privileged execution: `sudo`
- Destructive operations: `rm`, `mkfs`, `dd`, `iptables -F`
- Service control: `systemctl restart/stop`
- Configuration changes: `sed -i` (in-place editing)
- Package management: `apt`, `yum`, `pip`
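The allowlist/blocklist split above could back a simple risk classifier. This sketch classifies only by the command's first word, and the lists are illustrative, not the actual contents of `commands.conf`:

```shell
# Hypothetical command risk classifier (a real one would also inspect
# flags, e.g. distinguishing read-only `sed` from in-place `sed -i`).

classify() {
    local cmd=${1%% *}   # first word of the suggested command line
    case "$cmd" in
        cat|grep|ls|find|ps|top|free|df|uptime|journalctl|dmesg|awk|ss)
            echo SAFE ;;
        sudo|rm|mkfs|dd|iptables|apt|yum|pip)
            echo DANGEROUS ;;
        *)  echo REVIEW ;;
    esac
}
```

Anything not explicitly recognized falls through to REVIEW, which keeps unknown commands from being silently treated as safe.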
This is not a security feature—it's a fundamental design principle. All suggestions are advisory. You decide what to run.
- `sysdawg`: Main executable script (interactive + one-shot modes)
- `setup-ssh.sh`: Configuration helper for initial setup
- `.env`: Configuration file (generated by setup-ssh.sh, never committed)
- `.env.example`: Example configuration template
- `SYSDAWG.md`: System prompt and personality definition for the AI
- `commands.conf`: Safety rules and command configurations
- `INSTRUCTIONS.md`: Development roadmap and phase descriptions
- `IDEA.md`: Original design philosophy and safety principles
- `CLAUDE.md`: Detailed documentation for AI code assistants
- `CHANGELOG.md`: Version history
- `README.md`: This file
- Basic shell wrapper with system diagnostics
- Interactive chat and one-shot modes
- SSH communication to Mistral
- Read-only advisory responses
- Structured output format (Summary, Likely Causes, What to Check, Suggestions, Confidence)
- JSON/YAML formatted responses
- Reduced hallucination through constraints
- Command risk classifier (SAFE / REVIEW / DANGEROUS)
- Allowlist and blocklist validation
- Enhanced safety rails
- Python migration
- Modular collectors (power.py, network.py, storage.py, services.py)
- YAML-based configuration
- Plugin architecture
- Multi-device SSH tunneling
- Fleet-wide diagnostics
- Device registry management
- Web UI and dashboard
- Multi-device health visualization
- Container and process management
- Automation of low-risk operations
- Safe command execution with safeguards
- Dry-run mode patterns
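The dry-run pattern on the roadmap is commonly implemented as a wrapper that prints a command instead of executing it. A minimal sketch; the `run` helper and `EXECUTE` flag are invented names, not part of sysdawg:

```shell
# Hypothetical dry-run wrapper: state-changing commands go through `run`,
# which only executes them when explicitly enabled.

run() {
    if [ "${EXECUTE:-0}" = 1 ]; then
        "$@"
    else
        echo "[dry-run] $*"
    fi
}

# run systemctl restart nginx   # prints "[dry-run] systemctl restart nginx"
```

Defaulting to dry-run preserves the read-only principle: execution requires an explicit opt-in per invocation.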
Required:
- bash (core functionality)
- ssh (communication with Mistral host)
- Standard utilities: grep, sed, awk, cat, cut, tr
On Mistral Host (apollo):
- Mistral 7B LLM (via Ollama, llama.cpp, or compatible)
- bash (for receiving and processing prompts)
Use interactive mode to ask follow-up questions. The AI maintains context across turns and can refine analysis based on additional checks.
Use one-shot mode in scripts:
```shell
./sysdawg -p "Check if postgres is running"
```

Ask the AI to explain what commands to run:

```shell
./sysdawg -p "How would I check if my network is properly configured?"
```

The response includes suggested read-only commands and explanations.
Interactive mode is ideal for this iterative workflow:
- Ask the initial question
- Get diagnostic suggestions
- Run the suggested commands
- Ask follow-up questions with new data
- The AI is an advisor, not an executor. You remain in control of all operations.
- The Mistral 7B model is capable and clever but can hallucinate. Always verify suggestions against your system state.
- Conversation history is stored in memory only; it's not persisted across sessions.
- SSH connectivity to apollo is required. Test with `ssh apollo` before using sysdawg.
- The tool is designed to fail safely: if the AI suggests something wrong, nothing breaks unless you run it.
```shell
# Verify SSH connectivity
ssh -p 22 user@apollo.local echo "Success"

# Reconfigure with setup-ssh.sh
./setup-ssh.sh
```

- Verify the Mistral instance is running on apollo: `ollama list`
- Check Ollama service status: `systemctl status ollama`
- Verify network connectivity to apollo
- Ensure standard utilities are installed (grep, sed, awk, etc.)
- Check that the `.env` configuration is correct
- Run `./sysdawg -h` to review debug options
This project is available under several open-source license options:
- MIT License: Simple and permissive
- Apache 2.0: Provides explicit patent rights
- GPL v3: Copyleft license
- BSD 3-Clause: Similar to MIT, with an added non-endorsement clause
Choose the license that best fits your project goals.
This is an advisory-only AI tool designed to be safe by default. Contributions should maintain this safety-first principle. Never add autonomous execution capabilities.