
Venom v1.0 🐍

| Documentation in Polish (Dokumentacja w języku polskim)

Venom desktop version – a meta-intelligence system: an autonomous AI agent system with a strategic planning layer and knowledge expansion.

Venom is transforming from a simple command executor into an autonomous engineer with the capabilities below.

✨ Key Features

  • 🎨 Tool Creation & Self-Repair - Creates new tools and repairs itself
  • 🔌 MCP Tool Import - Model Context Protocol integration (import from Git)
  • 🌐 Internet Access - Searches for current information (prices, news, documentation)
  • 🧠 Strategic Planning - Automatic decomposition of complex projects into steps
  • 📚 Knowledge Synthesis - Gathers and analyzes documentation from multiple sources
  • 🤖 Agent Management - Coordination of multiple specialized agents
  • 💾 Long-term Memory - Saves and reuses acquired knowledge
  • 🎓 Learning by Observation - Records demonstrations and automatically generates workflows (NEW!)
  • 👍👎 Quality Loop - User feedback plus logs and response-quality metrics
  • 🧠 Hidden Prompts - Approved responses reused as context shortcuts
  • 🧭 Runtime LLM Selection - Ollama/vLLM and the active model controlled from the panel
  • 💬 Chat Continuity - Consistent session history per session_id (SessionStore), reset after a backend restart
  • 🗺️ Memory Visualization - Memory layer (LessonsStore + LanceDB) in the /brain view, with session/pinned filtering and pin/delete actions
  • 🛠️ Services Panel - /config shows real statuses of the local stack (Backend API, Next.js UI, Ollama, vLLM, LanceDB, Redis, Docker) plus Full/Light/LLM OFF profiles

🎯 Usage Examples

# 1. Searching for current information
"What is the current Bitcoin price?"
→ The system automatically searches the Internet and returns fresh data

# 2. Complex projects with planning
"Create a Snake game using PyGame"
→ The system:
  1. Finds PyGame documentation (ResearcherAgent)
  2. Creates the game structure (CoderAgent)
  3. Adds snake logic (CoderAgent)
  4. Implements scoring (CoderAgent)

# 3. Multi-file webpage
"Create an HTML page with a digital clock and CSS styling"
→ The system creates separate files: index.html, style.css, script.js

# 4. NEW: Learning by demonstration
"Venom, watch how I send a report to Slack"
→ [User performs the actions]
→ The system records, analyzes, and generates a workflow
→ "Saved as skill 'send_slack_report'"
→ Later: "Venom, send report to Slack" - executes automatically!

๐Ÿ—๏ธ Architecture

Project Structure

venom_core/
โ”œโ”€โ”€ api/routes/          # REST API endpoints (agents, tasks, memory, nodes)
โ”œโ”€โ”€ core/flows/          # Business flows and orchestration
โ”œโ”€โ”€ agents/              # Specialized AI agents
โ”œโ”€โ”€ execution/           # Execution layer and model routing
โ”œโ”€โ”€ perception/          # Perception (desktop_sensor, audio)
โ”œโ”€โ”€ memory/              # Long-term memory (vectors, graph, workflows)
โ””โ”€โ”€ infrastructure/      # Infrastructure (hardware, cloud, message broker)

Main Components

1. Strategic Layer (Planning)

  • ArchitectAgent - Project manager, breaks down complex tasks into steps
  • ExecutionPlan - Execution plan model with defined steps and dependencies

2. Knowledge Expansion

  • ResearcherAgent - Gathers and synthesizes knowledge from the Internet
  • WebSearchSkill - Search (DuckDuckGo) and scraping (trafilatura) – see the sketch after this list
  • MemorySkill - Long-term memory (LanceDB)
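
Since WebSearchSkill pairs ddgs (DuckDuckGo) with trafilatura, here is a minimal sketch of that search-then-scrape pipeline. It illustrates the library combination only; the result keys follow the ddgs convention and this is not the skill's actual code.

import trafilatura
from ddgs import DDGS

# Search DuckDuckGo, then extract readable text from each hit.
for hit in DDGS().text("PyGame documentation", max_results=3):
    downloaded = trafilatura.fetch_url(hit["href"])
    text = trafilatura.extract(downloaded) if downloaded else None
    if text:
        print(hit["title"], "->", text[:200])  # first 200 chars of the article body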

3. Execution Layer

  • CoderAgent - Generates code using knowledge
  • CriticAgent - Verifies code quality
  • LibrarianAgent - Manages files and project structure
  • ChatAgent - Conversation and general assistance
  • GhostAgent - GUI automation (RPA - Robotic Process Automation)
  • ApprenticeAgent - Learning workflows through observation (NEW!)

4. Hybrid AI Engine 🧠

  • HybridModelRouter (venom_core/execution/model_router.py) - Intelligent routing between local LLMs and the cloud (illustrated in the sketch after this list)
  • Operating Modes: LOCAL (local only), HYBRID (mix), CLOUD (mainly cloud)
  • Local First: Privacy and $0 operational costs
  • Providers: Ollama/vLLM (local), Google Gemini, OpenAI
  • Sensitive data NEVER goes to the cloud
  • Runtime as API: the model engine is treated as a replaceable HTTP server that can be started or stopped without affecting core logic, so different model backends can be swapped in
  • LLM-first Direction (Ollama): in single-user mode with low query intensity, Ollama's performance is practically comparable to vLLM's, and model switching is simpler; vLLM gains an advantage mainly under high parallelism and heavy load
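
A minimal sketch of the routing policy described above, assuming the AI_MODE semantics from the configuration section. This is an illustration, not the actual HybridModelRouter code, and the cloud URL is a placeholder.

from enum import Enum

class AIMode(Enum):
    LOCAL = "LOCAL"
    HYBRID = "HYBRID"
    CLOUD = "CLOUD"

LOCAL_ENDPOINT = "http://localhost:11434/v1"          # Ollama/vLLM (from LLM_LOCAL_ENDPOINT)
CLOUD_ENDPOINT = "https://cloud.example.invalid/v1"   # placeholder for Gemini/OpenAI

def pick_endpoint(mode: AIMode, is_sensitive: bool) -> str:
    """Illustrative policy: sensitive data never leaves the machine."""
    if is_sensitive or mode is AIMode.LOCAL:
        return LOCAL_ENDPOINT
    if mode is AIMode.CLOUD:
        return CLOUD_ENDPOINT
    return LOCAL_ENDPOINT  # HYBRID: local first, cloud only as a deliberate escalation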

5. Learning by Demonstration 🎓

  • DemonstrationRecorder - Records user actions (mouse, keyboard, screenshots) – see the sketch after this list
  • DemonstrationAnalyzer - Behavioral analysis and the pixels → semantics transformation
  • WorkflowStore - Procedure repository with editing capability
  • GhostAgent Integration - Executes generated workflows
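
pynput (listed in the dependencies) is the event-capture layer; below is a minimal, self-contained sketch of recording mouse clicks and key presses until Esc. It illustrates the technique only and is not the DemonstrationRecorder API.

from pynput import keyboard, mouse

events = []  # a recorded demonstration as (kind, details) tuples

def on_click(x, y, button, pressed):
    if pressed:
        events.append(("click", (x, y, str(button))))

def on_press(key):
    events.append(("key", str(key)))
    if key == keyboard.Key.esc:  # returning False stops the keyboard listener
        return False

with mouse.Listener(on_click=on_click) as m, keyboard.Listener(on_press=on_press) as k:
    k.join()  # block until Esc is pressed
    m.stop()

print(f"Recorded {len(events)} events")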

6. Orchestration

  • Orchestrator - Main system coordinator
  • IntentManager - Intent classification (5 types: CODE_GENERATION, RESEARCH, COMPLEX_PLANNING, KNOWLEDGE_SEARCH, GENERAL_CHAT) – see the sketch after this list
  • TaskDispatcher - Routes tasks to appropriate agents
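
A schematic of the classify-then-dispatch flow using the five intent types named above. The intent-to-agent mapping is an illustrative guess based on the agent descriptions, not the actual TaskDispatcher table.

from enum import Enum, auto

class Intent(Enum):
    CODE_GENERATION = auto()
    RESEARCH = auto()
    COMPLEX_PLANNING = auto()
    KNOWLEDGE_SEARCH = auto()
    GENERAL_CHAT = auto()

# Illustrative routing table (agent names from the sections above)
DISPATCH = {
    Intent.CODE_GENERATION: "CoderAgent",
    Intent.RESEARCH: "ResearcherAgent",
    Intent.COMPLEX_PLANNING: "ArchitectAgent",
    Intent.KNOWLEDGE_SEARCH: "ResearcherAgent",  # assumption: memory/web lookup
    Intent.GENERAL_CHAT: "ChatAgent",
}

def dispatch(intent: Intent) -> str:
    return DISPATCH[intent]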

7. Runtime Services (operational)

  • Backend API (FastAPI/uvicorn) and Next.js UI – the core processes.
  • LLM servers: Ollama, vLLM – start/stop from the services panel.
  • LanceDB – local vector memory (embedded); Redis – optional broker/locks (can be disabled).
  • Nexus, Background Tasks – optional slots for future processes (disabled by default, no start/stop actions; can be hidden/ignored if unused).

Note about vision/image: perception currently uses local ONNX models (OCR/object recognition) and selected audio pipelines. Multimodal LLMs (Ollama/vLLM) are supported in principle but are not yet wired in as the vision runtime.

Data Flow

User Query
    ↓
IntentManager (intent classification)
    ↓
Orchestrator (flow decision)
    ↓
┌─────────────────────┬─────────────────────┬──────────────────────┐
│  Simple code        │  Complex project    │  Search              │
│  CODE_GENERATION    │  COMPLEX_PLANNING   │  RESEARCH            │
├─────────────────────┼─────────────────────┼──────────────────────┤
│    CoderAgent       │  ArchitectAgent     │   ResearcherAgent    │
│         ↓           │         ↓           │        ↓             │
│    CriticAgent      │  Plan creation      │   WebSearchSkill     │
│         ↓           │         ↓           │        ↓             │
│       Result        │  Plan execution     │   MemorySkill        │
│                     │   (step by step)    │        ↓             │
│                     │        ↓            │      Result          │
│                     │      Result         │                      │
└─────────────────────┴─────────────────────┴──────────────────────┘

🚀 Quick Start

🔎 New web-next dashboard: a detailed description of the data sources for the Brain/Strategy views and a test checklist can be found in docs/FRONTEND_NEXT_GUIDE.md; that document also defines the entry criteria for the next stage of UI work. Chat session documentation (Direct/Normal/Complex modes and memory behavior): docs/CHAT_SESSION.md. Skills standards and MCP import: docs/DEV_GUIDE_SKILLS.md.

๐Ÿ–ฅ๏ธ Frontend (Next.js โ€“ web-next)

The new presentation layer runs on Next.js 15 (App Router, React 19). The interface consists of two types of components:

  • SCC (Server/Client Components) – by default we create server components (no "use client" directive) and mark interactive fragments as client components. Thanks to this, the Brain/Strategy and Cockpit views can stream data without additional queries.
  • Shared layout (components/layout/*) – TopBar, Sidebar, the bottom status bar, and overlays share graphic tokens and translations (useTranslation).

Key Commands

# install dependencies
npm --prefix web-next install

# development environment (http://localhost:3000)
npm --prefix web-next run dev

# production build (generates meta versions + standalone)
npm --prefix web-next run build

# short E2E tests (Playwright, prod mode)
npm --prefix web-next run test:e2e

# validate translation consistency
npm --prefix web-next run lint:locales

The predev/prebuild script runs scripts/generate-meta.mjs, which writes public/meta.json (version + commit hash). All HTTP hooks go through lib/api-client.ts; in local mode you can point them at the backend via these variables:

NEXT_PUBLIC_API_BASE=http://localhost:8000
NEXT_PUBLIC_WS_BASE=ws://localhost:8000/ws/events
API_PROXY_TARGET=http://localhost:8000

Details (directory architecture, SCC guidelines, view data sources) are described in docs/FRONTEND_NEXT_GUIDE.md.

Note: Cockpit now has two views – / (production layout with selected boxes) and /chat (reference, a full copy of the previous layout).

Slash Commands in Cockpit

  • Force tool: /<tool> (e.g. /git, /web).
  • Force providers: /gpt (OpenAI) and /gem (Gemini).
  • After a prefix is detected, the query content is cleaned of the directive and the UI shows a "Forced" label.
  • The UI language setting (PL/EN/DE) is passed as preferred_language in /api/v1/tasks – see the sketch after this list.
  • Context summary strategy (SUMMARY_STRATEGY in .env): llm_with_fallback (default, active model) or heuristic_only.
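
A hedged example of what such a request could look like from a script. Only the /api/v1/tasks path and the preferred_language field are documented above; the "content" field name and the response shape are assumptions for illustration.

import requests

# Assumption: the task payload carries the user query in "content";
# only /api/v1/tasks and preferred_language are documented in this README.
resp = requests.post(
    "http://localhost:8000/api/v1/tasks",
    json={
        "content": "/web What is the current Bitcoin price?",  # forced /web tool
        "preferred_language": "en",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())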

Installation

# Clone repository
git clone https://github.com/mpieniak01/Venom.git
cd Venom

# Install dependencies
pip install -r requirements.txt

# Configuration (copy .env.example to .env and fill in)
cp .env.example .env

Required Dependencies

Python 3.10+ (3.11 recommended)

Key Packages:

  • semantic-kernel>=1.9.0 - Agent orchestration
  • ddgs>=1.0 - Search engine (successor to duckduckgo-search)
  • trafilatura - Text extraction from web pages
  • beautifulsoup4 - HTML parsing
  • lancedb - Vector database for memory
  • fastapi - API server
  • zeroconf - mDNS service discovery for local network
  • pynput - User action recording (THE_APPRENTICE)
  • google-generativeai - Google Gemini (optional)
  • openai / anthropic - LLM providers (optional)

Full list in requirements.txt

Configuration

Create .env file based on .env.example:

cp .env.example .env

โš™๏ธ Running (FastAPI + Next.js)

The full list of steps and the deployment checklist can be found in docs/DEPLOYMENT_NEXT.md. A quick summary below:

Development Mode

# backend (uvicorn --reload) + web-next (next dev, turbopack off)
make start        # alias make start-dev

# stop processes and clean ports 8000/3000
make stop

# PID status
make status

Production Mode

make start-prod   # build next + uvicorn without reload
make stop

  • backend runs on http://localhost:8000 (REST/SSE/WS),
  • Next.js serves the UI on http://localhost:3000,
  • the SERVE_LEGACY_UI=True flag runs the old FastAPI panel on port 8000 (fallback/reference solution).

🔧 Launch Profiles (light mode)

Venom offers flexible modes for running components separately – ideal for development environments with limited resources (PC, laptop).

Running Components Separately

| Command | Description | Resource Usage | When to Use |
|---|---|---|---|
| make api | Backend (production, no auto-reload) | ~50 MB RAM, ~5% CPU | Working on the frontend or not editing backend code |
| make api-dev | Backend (development, auto-reload) | ~110 MB RAM, ~70% CPU (spikes) | Active work on backend code |
| make api-stop | Stop the backend only | – | Frees port 8000 and backend memory |
| make web | Frontend (production build + start) | ~500 MB RAM, ~3% CPU | Demos or not editing the UI |
| make web-dev | Frontend (dev server with auto-reload) | ~1.3 GB RAM, ~7% CPU | Active UI work |
| make web-stop | Stop the frontend only | – | Frees port 3000 and frontend memory |
| make vllm-start | Start vLLM (local LLM model) | ~1.4 GB RAM (~13% of RAM) | Only when working with local models |
| make vllm-stop | Stop vLLM | – | Frees ~1.4 GB RAM |
| make ollama-start | Start Ollama | ~400 MB RAM | Alternative to vLLM |
| make ollama-stop | Stop Ollama | – | Frees Ollama memory |

Example Usage Scenarios

Scenario 1: Working only on API (Light)

make api          # Backend without auto-reload (~50 MB)
# Don't run web or LLM - save ~2.7 GB RAM

Scenario 2: Working on frontend

make api          # Backend in background (stable, no reload)
make web-dev      # Frontend with auto-reload for UI work
# Don't run LLM if not needed

Scenario 3: Full stack development

make api-dev      # Backend with auto-reload
make web-dev      # Frontend with auto-reload
make vllm-start   # LLM only if working with local models

Scenario 4: Demo / presentation

make start-prod   # Everything in production mode (lower CPU usage)

Scenario 5: API testing only

make api          # Backend without UI
curl http://localhost:8000/health

💡 Optimization Tips

  • VS Code Server: If you work in the CLI, close the remote VS Code server:

    # From WSL/Linux
    pkill -f vscode-server
    # Or if using code tunnel
    code tunnel exit
  • Autoreload: --reload in uvicorn spawns an additional watcher process. Use make api instead of make api-dev when not editing backend code.

  • Next.js dev: next dev uses ~1.3 GB RAM due to auto-reload. Use make web (production) when only testing, not editing UI.

  • LLM Environment: vLLM/Ollama use 1-2 GB RAM. Run them only when working with local models. In AI_MODE=CLOUD mode they are not needed.

All data and tests are treated as local experiments – Venom runs on the user's private machine and artifacts are not encrypted. Instead, directories with results (**/test-results/, perf-artifacts/, Playwright/Locust reports) go into .gitignore to avoid accidentally committing sensitive data. Transparency takes priority over formal "shadow data".

Key Environment Variables:

AI Configuration (hybrid engine):

# AI Mode: LOCAL (local only), HYBRID (mix), CLOUD (mainly cloud)
AI_MODE=LOCAL

# Local LLM (Ollama/vLLM)
LLM_SERVICE_TYPE=local
LLM_LOCAL_ENDPOINT=http://localhost:11434/v1
LLM_MODEL_NAME=llama3

# Cloud providers (optional, required for HYBRID/CLOUD)
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here

# Hybrid settings
HYBRID_CLOUD_PROVIDER=google        # google or openai
HYBRID_LOCAL_MODEL=llama3
HYBRID_CLOUD_MODEL=gemini-1.5-pro
SENSITIVE_DATA_LOCAL_ONLY=true     # Sensitive data ALWAYS local

Network and Discovery (local first):

# mDNS (Zeroconf) for local network - venom.local
# NOTE: Cloudflare has been removed, we use local discovery

The Hive (distributed processing):

ENABLE_HIVE=false
HIVE_URL=https://hive.example.com:8080
HIVE_REGISTRATION_TOKEN=your_token
REDIS_HOST=localhost

The Nexus (distributed mesh):

ENABLE_NEXUS=false
NEXUS_SHARED_TOKEN=your_secret_token
NEXUS_PORT=8765

External Integrations:

GITHUB_TOKEN=ghp_your_token         # Personal access token
GITHUB_REPO_NAME=username/repo      # Repository name
DISCORD_WEBHOOK_URL=https://...     # Optional
ENABLE_ISSUE_POLLING=false          # Enable automatic issue polling

📖 Full variable list: .env.example
📖 External integrations documentation: docs/EXTERNAL_INTEGRATIONS.md
📖 Hybrid AI engine documentation: docs/HYBRID_AI_ENGINE.md

Configuration Panel (UI)

Venom 2.0 introduces a graphical configuration panel available in the web interface at http://localhost:3000/config. The panel covers:

Service Management

  • Status monitoring - Backend, UI, LLM (Ollama/vLLM), Hive, Nexus, background tasks
  • Process control - Start/stop/restart from UI without using terminal
  • Real-time metrics - PID, port, CPU%, RAM, uptime, recent logs
  • Quick profiles:
    • Full Stack - All services active
    • Light - Only Backend and UI (resource saving)
    • LLM OFF - Everything except language models

Parameter Editing

The panel allows editing key runtime parameters from the UI, with automatic:

  • Range validation - Ports (1-65535), confidence thresholds (0.0-1.0), boolean values – see the sketch after this list
  • Secret masking - API keys, tokens, and passwords are hidden by default
  • Configuration backup - Automatic .env backup to config/env-history/ before each change
  • Restart information - The system indicates which services require a restart after a change
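
A minimal sketch of the whitelist-plus-validation idea described above. The parameter names are partly hypothetical (SHADOW_CONFIDENCE_THRESHOLD is an assumed example); only the ranges come from the list above.

# Illustrative whitelist: editable parameter -> validator
WHITELIST = {
    "NEXUS_PORT": lambda v: 1 <= int(v) <= 65535,                      # port range
    "SHADOW_CONFIDENCE_THRESHOLD": lambda v: 0.0 <= float(v) <= 1.0,   # hypothetical name
    "ENABLE_HIVE": lambda v: v.lower() in ("true", "false"),           # boolean value
}

def validate(param: str, value: str) -> bool:
    """Reject parameters outside the whitelist or values outside their range."""
    check = WHITELIST.get(param)
    if check is None:
        return False  # only whitelisted parameters may be edited via the UI
    try:
        return bool(check(value))
    except ValueError:
        return False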

Available Parameter Sections:

  1. AI Mode - AI mode, LLM endpoint, API keys, model routing
  2. Commands - Start/stop commands for Ollama and vLLM
  3. Hive - Redis configuration, queues, timeouts
  4. Nexus - Distributed mesh, port, tokens, heartbeat
  5. Tasks - Background tasks (documentation, cleanup, memory consolidation)
  6. Shadow - Desktop awareness, confidence thresholds, privacy filter
  7. Ghost - GUI automation, verification, safety delays
  8. Avatar - Audio interface, Whisper, TTS, VAD

Security

  • Parameter whitelist - Only defined parameters can be edited via UI
  • Type and range validation - Checking value correctness before saving
  • Dependency checking - The system won't allow starting a service without its requirements being met (e.g. Nexus requires a running backend)
  • Change history - Each .env modification is saved with timestamp (last 50 backups kept)

Configuration Restore

The panel offers a function to restore .env from earlier backups:

# Backups are located in:
config/env-history/.env-YYYYMMDD-HHMMSS

💡 Tip: Quick profiles are ideal for switching between work modes. Use Light during development on a laptop and Full Stack on a workstation with a GPU.

📊 Resource Monitoring

Venom offers tools for quick diagnostics of system resource usage.

System Snapshot

# Generate diagnostic report (processes, memory, CPU, service status)
make monitor

# Manual run
bash scripts/diagnostics/system_snapshot.sh

The report will be saved to logs/diag-YYYYMMDD-HHMMSS.txt and contains:

  • Uptime and load average
  • Memory usage (free -h, /proc/meminfo)
  • Top 15 processes (CPU and RAM)
  • Venom process status (uvicorn, Next.js, vLLM, Ollama)
  • PID files status and open ports (8000, 3000, 8001, 11434)

Example usage:

# Before starting work - check baseline
make monitor

# After starting services - compare usage
make api-dev
make web-dev
make monitor

# After finishing - make sure everything stopped
make stop
make monitor

💾 WSL Memory Management (Windows)

If you run Venom in WSL (Windows Subsystem for Linux), you may encounter issues with vmmem – a Windows process that reserves a lot of RAM despite low Linux-side usage.

Checking Memory Usage

# Show detailed WSL memory statistics
bash scripts/wsl/memory_check.sh

The script will display:

  • Memory summary (free -h)
  • Detailed info from /proc/meminfo
  • Top 10 RAM-consuming processes
  • Memory usage by individual Venom components

Problem: vmmem occupies 20+ GB on Windows

Symptom: Task Manager in Windows shows vmmem process occupying 20-30 GB RAM, even though free -h in WSL shows only 3-4 GB.

Cause: WSL doesn't return memory to Windows immediately. Cache and buffers are kept "just in case".

Solution:

  1. Immediate: WSL memory reset

    # From WSL (stops all Venom processes and executes shutdown)
    bash scripts/wsl/reset_memory.sh
    
    # OR from Windows (PowerShell/CMD)
    wsl --shutdown
  2. Permanent: Limit usage via .wslconfig

    Create the file %USERPROFILE%\.wslconfig (e.g. C:\Users\YourName\.wslconfig):

    [wsl2]
    # Memory limit for WSL
    memory=12GB
    
    # Number of processors
    processors=4
    
    # Swap limit
    swap=8GB

    An example with comments is available:

    # See full configuration with examples
    cat scripts/wsl/wslconfig.example
    
    # Copy to Windows (from WSL)
    cp scripts/wsl/wslconfig.example /mnt/c/Users/YourName/.wslconfig

    After saving .wslconfig execute:

    # From Windows (PowerShell/CMD)
    wsl --shutdown

    Then restart WSL terminal.

Example .wslconfig Configurations

PC with 16 GB RAM (economical):

[wsl2]
memory=8GB
processors=4
swap=4GB

PC with 32 GB RAM (balanced):

[wsl2]
memory=12GB
processors=6
swap=8GB

Workstation with 64 GB RAM (performance):

[wsl2]
memory=32GB
processors=12
swap=16GB

Monitoring vmmem in Windows

  1. Open Task Manager (Ctrl+Shift+Esc)
  2. Go to the "Details" or "Processes" tab
  3. Find the "vmmem" process – this is the memory used by WSL
  4. Compare with the free -h results in WSL

If the difference is significant (>50%), consider:

  • Running wsl --shutdown to free the cache
  • Setting limits in .wslconfig
  • Using Light profiles (make api instead of make start-dev)

Running

# Start server
uvicorn venom_core.main:app --reload

# Or use make
make run

๐Ÿ“– Documentation

Architecture and Vision

Agents

Frontend and UI

Tools and Features

DevOps and Deployment

Collaboration

🧪 Tests

cd /home/ubuntu/venom
source .venv/bin/activate || true

# Run all tests
pytest

🔬 Tests and Benchmarks

Full instructions (steps + expected values) are in docs/TESTING_CHAT_LATENCY.md. The most important commands:

Backend (FastAPI / agents)

  • pytest -q – quick test of the entire system.
  • pytest tests/test_researcher_agent.py / tests/test_architect_agent.py – agent scenarios.
  • pytest tests/perf/test_chat_pipeline.py -m performance – SSE measurement (task_update → task_finished) + parallel batch.
  • pytest --cov=venom_core --cov-report=html – coverage report.

Frontend Next.js

  • npm --prefix web-next run lint
  • npm --prefix web-next run build
  • npm --prefix web-next run test:e2e – Playwright on the prod build.

Response Time and Chat Performance

  • npm --prefix web-next run test:perf – Playwright comparison of the Next Cockpit and the old panel (PERF_NEXT_BASE_URL / PERF_LEGACY_BASE_URL; the HTML report goes to test-results/perf-report).
  • Available env vars: PERF_NEXT_LATENCY_BUDGET and PERF_LEGACY_LATENCY_BUDGET (defaults 5000 ms / 6000 ms), plus PERF_*_RESPONSE_TIMEOUT if limits need to be relaxed on slower machines.
  • pytest tests/perf/test_chat_pipeline.py -m performance – backend pipeline (time to task_finished + batch).
  • ./scripts/run-locust.sh – starts the Locust panel (http://127.0.0.1:8089) for manual API load testing.
  • ./scripts/archive-perf-results.sh – dumps test-results/ and the Playwright/Locust reports to perf-artifacts/<timestamp>/.

Test results do NOT go to the repo (we ignore **/test-results/, perf-artifacts/, playwright-report/, etc.) – this way you keep them locally without risking data exposure.

๐Ÿ› ๏ธ Development Tools

Pre-commit Hooks

# Installation
pip install pre-commit
pre-commit install

# Manual run
pre-commit run --all-files

Linting and Formatting

cd /home/ubuntu/venom
source .venv/bin/activate || true

# Ruff (linter + formatter)
ruff check . --fix
ruff format .

# isort (import sorting)
isort .

# mypy (type checking)
mypy venom_core

Tools use the repo configuration (pyproject.toml) and skip data directories such as models/ and models_cache/.

📊 Project Statistics

  • Lines of code: 118,555 (non-empty lines; excluding docs/, node_modules/, logs/, data/)
  • Number of agents: 33 (modules venom_core/agents/*)
  • Number of skills: 19 executable (venom_core/execution/skills/*) + 4 helper (Memory/Voice/Whisper/Core)
  • Number of tests: 518 (pytest def test_) + 18 (Playwright test())
  • Test coverage: 65%

🎯 Roadmap

✅ v1.0 (current)

  • Planning Layer (ArchitectAgent)
  • Knowledge Expansion (ResearcherAgent + WebSearchSkill)
  • Internet Integration
  • Long-term memory
  • Comprehensive tests
  • NEW: External integrations (PlatformSkill) 🤖
    • GitHub integration (Issues, pull requests)
    • Discord/Slack notifications
    • Issue → PR process

🚧 v1.1 (planned)

  • Background polling for GitHub Issues
  • Dashboard panel for external integrations
  • Recursive summarization of long documents
  • Search results cache
  • Plan validation and optimization
  • Better error recovery

🔮 v1.2 (future)

  • Webhook support for GitHub
  • MS Teams integration
  • Multi-source verification
  • Google Search API integration
  • Parallel plan step execution
  • Plan cache for similar tasks
  • GraphRAG integration

๐Ÿค Contributing

Contributions are welcome! See CONTRIBUTING.md to learn how to get started.

Collaboration Process

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'feat: add new feature')
  4. Push branch (git push origin feature/amazing-feature)
  5. Open a PR

Conventions

  • Code and comments: English or Polish
  • Commit messages: Conventional Commits (feat, fix, docs, test, refactor)
  • Style: Black + Ruff + isort (automatic via pre-commit)
  • Tests: Required for new features

๐ŸŒ THE NEXUS: Distributed Architecture

Experimental (v1.0.x): Venom can operate as a Central Node (Nexus) managing a swarm of remote instances ("Spores"). Stabilization is planned for v1.1.

Distributed mesh features:

  • 🔗 Master-worker architecture - Nexus (brain) + Spores (executors)
  • 📡 WebSocket communication - fast and bidirectional
  • 🔍 mDNS service discovery - automatic node discovery in the local network (venom.local) – see the sketch after this list
  • ⚖️ Load balancing - automatic selection of the least loaded node
  • 🔄 Hot-plugging - dynamic adding/removing of nodes
  • 💓 Health checks and failover - automatic offline detection
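
A compact sketch of mDNS discovery with the zeroconf package listed in the dependencies. The "_venom._tcp.local." service type is an assumption for illustration; the actual type Venom advertises is not documented here.

from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class SporeListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            print(f"Node discovered: {name} at {info.parsed_addresses()}:{info.port}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"Node offline: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # required by the ServiceListener interface

zc = Zeroconf()
browser = ServiceBrowser(zc, "_venom._tcp.local.", SporeListener())  # assumed type
input("Browsing for nodes, press Enter to stop...\n")
zc.close()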

Usage Example:

# 1. Run Venom in Nexus mode
export ENABLE_NEXUS=true
export NEXUS_SHARED_TOKEN=your-secret-token
cd venom_core && python main.py

# 2. Run Venom Spore on remote machine
cd venom_spore
export SPORE_NEXUS_HOST=venom.local  # or 192.168.1.10
export SPORE_SHARED_TOKEN=your-secret-token
python main.py

# 3. Check connected nodes
curl http://localhost:8000/api/v1/nodes

# 4. Execute task on remote node
curl -X POST http://localhost:8000/api/v1/nodes/{node_id}/execute \
  -H "Content-Type: application/json" \
  -d '{"skill_name": "ShellSkill", "method_name": "run", "parameters": {"command": "ls"}}'

Docker Compose Demo:

# Run swarm simulation (2 Docker nodes)
docker-compose -f docker-compose.spores.yml up

# Run demo
python examples/nexus_demo.py

📖 Full documentation: venom_spore/README.md
📖 Hive architecture: docs/THE_HIVE.md

👥 Team

  • Development Lead: mpieniak01
  • Architecture: Venom Core Team
  • Contributors: Contributors list

๐Ÿ™ Acknowledgments

  • Microsoft Semantic Kernel
  • Microsoft AutoGen
  • OpenAI / Anthropic / Google AI
  • Open Source Community

Venom – an autonomous AI agent system for next-generation automation

🌟 If you like the project, leave a star on GitHub!

๐Ÿ“ License

Distributed under the MIT License. See LICENSE for more information.

Copyright (c) 2025-2026 Maciej Pieniak

