Skip to content

joseph-webber/agentic-brain

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

153 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Agentic Brain Logo

🧠 Agentic Brain

Install. Run. Create.

Enterprise AI, Production Ready. The Multi-LLM Orchestration framework combining GraphRAG, Knowledge Graph, and Vector Database retrieval. Built for high-performance Python with CUDA/ROCm GPU Acceleration and Apple Silicon (M1-M4) MLX support.

Redis Cache, WebSocket Streaming, and Kafka/Redpanda event streaming deliver real-time agent workflows — secure, scalable, and compliant (HIPAA, SOC 2) for Healthcare, Finance, Legal, and Defense.


APP · SDK · PLATFORM

pip install agentic-brain

CI Release PyPI version Python License Tests Battle Tested

4,700+ tests across unit, integration & E2E • 95%+ coverage48 WooCommerce-specific tests.

SOC 2 Ready ISO 27001 Ready HIPAA Ready GDPR

WCAG 2.1 AA macOS Windows Linux M1/M2/M3

WooCommerce WordPress

Discord GitHub Discussions Twitter/X


🛒 E-commerce & WordPress Integration

WooCommerce + WordPress are first-class citizens in Agentic Brain.

  • WooCommerce full API support — products, orders, customers, coupons, taxes, shipping, and webhooks.
  • WordPress CMS integration — posts, pages, media, and taxonomy sync for content-aware commerce.
  • Product sync & inventory management — real-time catalog updates, variations, stock levels, and backorders.
  • Order processing & fulfillment — status updates, refunds, shipping tracking, and fulfillment workflows.
  • Analytics & reporting — sales summaries, inventory insights, and customer metrics.
  • Natural language commerce chatbotWooCommerceChatbot for admin, customer, and guest journeys.

Explore Commerce & WordPress Docs →

🔌 WordPress Plugin

Docs: WordPress Integration GuidePlugin README

Quick install:

  1. Download the plugin ZIP from GitHub Releases or clone into wp-content/plugins/agentic-brain.
  2. Activate Agentic Brain in Plugins → Installed Plugins.
  3. Open Settings → Agentic Brain and add your API endpoint + API key.
  4. Click Sync Now to push products and posts to the AI backend.

Key features:

  • AI chat widget with storefront-aware responses.
  • AI product search shortcode + Gutenberg block.
  • Real-time WooCommerce + WordPress RAG sync.
  • WCAG 2.1 AA accessibility baked in.

🎙️ Voice System

The Agentic Brain voice system is built for accessible, reliable spoken output. Phase 2 strengthened the stack with:

  • serialized speech so voices do not overlap
  • distributed speech locking across processes with Redis fallback
  • spatial and stereo positioning so each lady has a stable place
  • durable queueing and event streaming through Redpanda, Redis, and memory fallback
  • watchdog recovery for stalled voice workers

Voice quick start

ab voice list --primary
ab voice speak "Hello there" -v "Karen (Premium)"
ab voice mode work
ab voice conversation --demo
python demo_voice_system.py

Voice Copilot integration

Joseph can now speak to either a direct Claude voice agent or GitHub Copilot CLI:

# Standalone Claude voice agent
python voice_launcher.py --mode standalone

# One-turn smoke test without microphone input
python voice_launcher.py --mode standalone --once --text "Say hello to Joseph in one sentence." --no-speak

# Route voice into GitHub Copilot CLI
python voice_launcher.py --mode copilot --repo-path /Users/joe/brain

# One-turn Copilot bridge smoke test
python voice_launcher.py --mode copilot --once --text "Summarize what this repository is for." --no-speak

Files:

  • voice_standalone.py - sox recording, OpenAI transcription, Claude responses, Redis state in voice:standalone:*
  • voice_copilot_bridge.py - same audio pipeline bridged into gh copilot, Redis state in voice:bridge:*
  • voice_launcher.py - accessible entry point for both modes, Redis launch reporting in voice:integrator:*

Voice docs

🐝 Redis Swarm Coordination

Agentic Brain supports Redis-based multi-agent coordination for parallel development.

Features

  • Agent Registration - Agents register via Redis keys
  • Task Distribution - Work distributed via Redis lists
  • Real-time Coordination - Pub/sub for instant communication
  • Result Aggregation - Collect and merge findings
  • Health Monitoring - Track agent status

Quick Example

from agentic_brain.swarm import RedisCoordinator

coord = RedisCoordinator()
coord.register_agent("searcher", capabilities=["github", "web"])
coord.submit_task({"action": "search", "query": "voice chat llm"})
results = coord.collect_results(timeout=60)

Documentation


📚 Documentation

Explore the Full Documentation Index

Key guides:

Documentation hubs:


🚀 One-Click Deploy

Deploy to Heroku Deploy to Railway Deploy to Render
Deploy to Fly.io Deploy to DigitalOcean
Deploy to Google Cloud Deploy to Azure Deploy to AWS Deploy to Kubernetes


🌐 Demo Deployments

GitHub Pages (Docs Only)

The static documentation site is built from main and published at agentic-brain-project.github.io/agentic-brain. Every push to main refreshes the public docs within roughly one minute via GitHub Pages.

Status: ✅ Live and reachable (documentation demo)

Render One-Click Demo

Use the maintained render.yaml blueprint plus deployment/render/README.md for a full-stack demo deployment (web app, Neo4j, managed Redis). Fork the repo, open https://render.com/deploy, point it at your fork, and Render will provision everything with the demo credentials from .env.docker.example. The resulting URL (for example, https://agentic-brain.onrender.com) is perfect for sharing the live demo.

Deploy live demo to Render

Accessing the Live Demo

After Render finishes provisioning:

  1. Open your Render web service URL (example: https://agentic-brain-demo.onrender.com).
  2. Validate health: https://<your-service>.onrender.com/health.
  3. Open API docs: https://<your-service>.onrender.com/docs.
  4. Share that URL for external testing.

Docker Compose Production Profile

For a self-hosted showcase:

  1. cp .env.docker.example .env.docker and set NEO4J_PASSWORD, REDIS_PASSWORD, and JWT_SECRET.
  2. docker compose -f docker/docker-compose.prod.yml up -d.
  3. Visit http://localhost:8000 for the API and http://localhost:7474 for Neo4j Browser.

The production compose profile now loads variables from .env.docker, enforces a Redis password, and keeps the stack constrained to a single VM for demos.


📊 By the Numbers

110+

RAG Loaders

7+

LLM Providers

42

Deployment Modes

180+

Voice Options*

27

Durability Modules

4700+

CI Tests Passing

* Voice Options: 145+ macOS voices + 35+ cloud TTS voices (varies by OS and provider).


Quick StartFeaturesComparisonDocumentation

🔌 110+ RAG Integrations

Connect to ANY data source — from cloud storage to databases to enterprise systems.

☁️ Cloud Storage & Documents

SharePoint OneDrive Google Drive Dropbox Box Confluence Notion

💰 Accounting & Finance

Xero QuickBooks Intuit Sage FreshBooks

🗄️ Databases

MongoDB PostgreSQL MySQL Redis Elasticsearch Neo4j Pinecone

🤝 CRM & Sales

Salesforce HubSpot Pipedrive Zoho CRM

💬 Communication

Slack Microsoft Teams Discord Email

🛠️ Developer Tools

GitHub GitLab Bitbucket Jira Linear Asana

📄 File Formats

PDF Word Excel PowerPoint CSV JSON XML Markdown

🌐 Web & APIs

REST API GraphQL Web Scraping RSS Webhooks

📚 See the integration catalog in docs/DATA_LOADERS.md and the platform guides in docs/integrations/README.md


🆕 Agentic Definition Language (ADL)

Configure your entire AI brain from a single .adl file — like JHipster JDL, but for LLMs, RAG, voice, API, and security:

application AgenticBrain {
  name "My Enterprise AI"
  version "1.0.0"
  license Apache-2.0
}

llm Primary {
  provider OpenAI
  model gpt-4o
}

api REST {
  port 8000
  cors ["*"]
}
agentic adl init       # create brain.adl
agentic adl validate   # validate syntax
agentic adl generate   # generate adl_config.py, .env, docker-compose.yml

See docs/ADL.md for full syntax.

🐳 Docker Setup

Get a production-ready environment running in seconds:

  1. Configure credentials:

    cp .env.docker.example .env.docker
    # Edit .env.docker with your secure passwords

    Required Credentials:

    • NEO4J_PASSWORD: Set to Brain2026 (or your preferred password)
    • REDIS_PASSWORD: Set to brain_secure_2024 (or your preferred password)
  2. Start services:

    docker compose up -d
    Service URL Default Credentials
    Neo4j http://localhost:7474 neo4j / Brain2026
    Redis redis://localhost:6379 brain_secure_2024
    Redpanda localhost:9644 (No auth by default)
  3. Verify health:

    docker compose ps

🔧 CI/CD & Testing

CI

Services Required:

  • Neo4j 2026.02.3 (with GDS 2.27.0 and APOC plugins)
  • Redis 7+
  • Redpanda (Kafka-compatible event bus)

Environment Variables:

NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=Brain2026
REDIS_URL=redis://:Brain2026@localhost:6379/0
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
DEFAULT_LLM_PROVIDER=mock
TESTING=true

Running Tests:

# Install dependencies
pip install -e ".[test,dev,api]"

# Start services
docker compose up -d neo4j redis redpanda

# Run tests in parallel
pytest tests/ \
  --ignore=tests/e2e/ \
  -m "not integration" \
  -n auto --maxprocesses=4 \
  -v --tb=short --timeout=60

# Cleanup
docker compose down -v

CI Pipeline:

  • Matrix testing on Python 3.11, 3.12, 3.13
  • Parallel test execution (pytest-xdist)
  • Code quality: Black, Ruff, MyPy
  • Security: Bandit, pip-audit
  • Coverage reports to Codecov
  • Installation tests on Ubuntu & macOS

Test Categories:

  • Unit tests: core engine, router, agents, voice safety (pytest -m "not integration").
  • Integration tests: Neo4j, Redis, Redpanda, event bus (pytest -m "integration").
  • E2E tests: installer and full workflows (tests/e2e/).
  • WooCommerce tests: 48 tests for WooCommerce/WordPress agents, APIs and chatbots.

See CI_FIX_SUMMARY.md for detailed CI configuration.

✨ Key Features

🧭 Smart LLM Router

Modes: Turbo · Cascade · Consensus
Auto-selects optimal models (Groq, Claude, Gemini) based on latency, cost, and complexity benchmarks.

🧬 Polymorphic Personas

Industry-specific AI operators (Defense, Healthcare, Legal, Finance) with pre-tuned guardrails, lexicons, and workflows.

📚 155+ RAG Loaders

Expanded library covering documents, DevOps, commerce, enterprise systems, and event streams.

🕸️ GraphRAG Architecture

Hybrid retrieval combining vector search, graph traversal, safe Text2Cypher, and community-aware expansion for higher-precision answers.

⚡ Hardware Acceleration

Metal (MLX) · CUDA · ROCm
First-class acceleration for Apple Silicon, NVIDIA, and AMD. Switch targets per agent or per workload.

🛡️ Ethics & Safety

Built-in AI safety layer with policy packs, automated content filtering, and human-in-the-loop review pipeline.

📡 Event Streaming

Redpanda & Kafka
Real-time event bus for inter-agent communication, telemetry, and distributed state management.

🔌 Real-Time connectivity

WebSocket & Redis
Full-duplex WebSocket streaming for UI updates and Redis-backed pub/sub for instant bot-to-bot sync.

🔐 Enterprise Security

Firebase Auth, SSO (OAuth2/OIDC) & SAML
Production-ready authentication, role-based access control, and audit logging out of the box.

🛍️ WooCommerce + WordPress

Full REST coverage for products, orders, customers, coupons, webhooks, and CMS sync.

📦 Product & Inventory Sync

Real-time catalog updates, stock management, variations, and fulfillment-ready workflows.

🤖 Commerce Chatbots & Analytics

Natural language storefront assistant with sales analytics and reporting dashboards.

🛒 E-Commerce

  • WooCommerce full API support via WooCommerceAgent + CommerceHub for products, orders, customers, coupons, analytics, and webhooks.
  • WordPress CMS integration with WordPressLoader for posts, pages, media, and taxonomy sync.
  • Product sync & inventory management for real-time stock, variations, and catalog updates.
  • Order processing & fulfillment including status updates, refunds, shipping, and webhook-driven flows.
  • Analytics & reporting for sales performance, inventory insights, and customer metrics.
  • Natural language commerce chatbot using WooCommerceChatbot, supporting admin, customer, and guest journeys.
  • WordPress plugin in plugins/wordpress/agentic-brain/ for drop-in chat, product search, settings, and Gutenberg blocks.

🧠 Unified Brain Architecture — THE Key Differentiator

ONE MIND. MULTIPLE MODELS. INFINITE SCALE.

The killer differentiator of Agentic Brain: Claude, GPT-4o, Gemini, Groq, xAI/Grok, and Ollama working as a single distributed intelligence.

┌──────────────────────────────────────────────────────────────┐
│                  UNIFIED BRAIN IN ACTION                     │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  User: "Is this code secure?"                              │
│           ↓                                                  │
│  Smart Router → Task Analysis (security review)            │
│           ↓                                                  │
│  Dispatch to 5 specialist LLMs in parallel:               │
│    • Claude Opus 🔴 (deep reasoning)                      │
│    • GPT-4o 🟦 (code analysis)                            │
│    • Groq Llama-70B ⚡ (fast verification)                │
│    • xAI/Grok 🦅 (Twitter-aware context)                 │
│    • Ollama Local 🦙 (free second opinion)               │
│           ↓                                                  │
│  Redis Inter-LLM Communication → All models see each     │
│  other's findings, refine responses collaboratively       │
│           ↓                                                  │
│  Consensus Voting (3/5 agreement threshold)               │
│           ↓                                                  │
│  Response: "CONSENSUS: Code is VULNERABLE. Reasons:     │
│            [agreements] Dissenting: [none]                │
│            Confidence: 100% (5/5 models agree)"           │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Why This Matters

Problem Unified Brain Solution
LLM Hallucinations 3/5 consensus voting reduces false positives to <1%
Model Lock-in Switch providers anytime without code changes
Cost Explosion Smart router: free Ollama → cheap Groq → expensive Claude (auto-fallback)
Single Point of Failure One model down? Other 4 continue working
Bias & Blindspots 5 models see problems you'd miss with just 1
Slow Responses Parallel inference across all models = 1 response time

The 6 Unified Brain Capabilities

1. ⚡ Redis Pub/Sub Inter-Link Real-time telepathy between LLMs. Models share context, intermediate findings, and refined answers instantly.

# All 5 models see each other's reasoning!
brain.broadcast_task("Review this security fix")
# → All models collaborate via Redis, not siloed

2. 🗳️ Consensus Voting System Critical decisions require multi-model agreement. Hallucination rate drops to near zero.

result = brain.consensus_task(
    "Is this code exploitable?",
    threshold=0.8  # 80% agreement required
)
print(f"Consensus: {result['consensus']}")
print(f"Confidence: {result['confidence']:.0%}")  # 80-100% typical

3. 🔀 Smart Routing & Fallback Task complexity determines model selection automatically:

  • Fast path: Groq/Haiku (instant, free)
  • Smart path: Sonnet/GPT-4o (balanced)
  • Deep path: Opus/o1 (complex reasoning)
  • Auto-fallback: Next model if first fails
bot_id = brain.route_task("Write a quick hello world")
# Returns 'groq-70b' or 'ollama-fast' (fast + free)

4. 🔗 Universal Context All models share knowledge via Neo4j Knowledge Graph. One model's insight is all models' insight.

5. 📊 Task Classification Automatic categorization: Code → Reviewer, Testing → Tester, Security → Security Specialist

brain._classify_task("Find SQL injection vulnerabilities")  
# → TaskType.SECURITY → Routes to security-specialist models

6. ⚙️ Provider Agnostic Unified interface for all providers:

  • 🟦 OpenAI: GPT-4o, GPT-4-turbo
  • 🔴 Anthropic: Claude Opus, Sonnet, Haiku
  • 🔵 Google: Gemini Pro (multimodal)
  • Groq: Llama-3 70B (lightning fast, free)
  • 🦅 xAI/Grok: Twitter-integrated AI (free credits)
  • 🦙 Ollama: Any local model (llama3, mistral, neural-chat)

Real-World Example: Code Security Review

from agentic_brain.unified_brain import UnifiedBrain

brain = UnifiedBrain()

# Single best answer (fast)
bot = brain.route_task("Review this Python code for security issues")
# → Returns 'claude-opus' (best for security)

# Multi-model consensus (accurate)
result = brain.consensus_task(
    "Is this code vulnerable to SQL injection?",
    threshold=0.8,
    num_models=5  # Poll all 5 specialist models
)

print(f"Consensus Answer: {result['consensus']}")
print(f"Confidence: {result['confidence']:.0%}")
print(f"Models Polled: {', '.join(result['models_used'])}")

# Collaborate in real-time
brain.broadcast_task(
    "Generate security test cases for this function",
    wait_for_consensus=True  # Wait for majority agreement
)

Brain Status & Monitoring

from agentic_brain.unified_brain import UnifiedBrain

brain = UnifiedBrain()
status = brain.get_brain_status()
print(f"🧠 Brain Operational")
print(f"  Total Models: {status['total_bots']}")
print(f"  Providers: {', '.join(status['providers'])}")
print(f"  Capabilities: {', '.join(status['capabilities'])}")
print(f"  Inter-Bot Comms: {'✓' if status['inter_bot_comms_active'] else '✗'}")

🗣️ Voice & Accessibility

Native, bi-directional voice interaction designed for accessibility and hands-free operation.

  • 145+ macOS voices + 35+ cloud TTS voices: High-fidelity synthesis via Apple Neural Engine and cloud providers.
  • Screen Reader First: Full WCAG 2.1 AA compliance with optimized ARIA labels and focus management.
  • Voice Control: Navigate, query, and command the entire platform purely through voice.

💡 Why Agentic Brain?

  • Enterprise AI, Production Ready with 4,700+ Tests, Battle Tested for regulated workloads
  • 🔗 GraphRAG, Knowledge Graph, Vector Database retrieval for traceable answers and provenance
  • 🤖 Multi-LLM Orchestration with policy-driven routing, fallback chains, and cost controls
  • Apple Silicon M1 M2 M3 M4 MLX support plus CUDA ROCm GPU Acceleration for NVIDIA/AMD
  • 🔁 Redis Cache, WebSocket Streaming, and Kafka/Redpanda pipelines for real-time agents
  • Voice/Audio accessibility (macOS) for VoiceOver-first enterprise experiences

💬 Use Cases

Real-world applications:

  • Financial risk analysis with auditable GraphRAG trails
  • Legal document intelligence with knowledge graph grounding

🛡️ Battle Tested

Agentic Brain is rigorously tested with 4,700+ passing tests:

Test Category Count Status
Core & Unit Tests 4,300+ ✅ Passing
E2E Tests 150+ ✅ Passing
Voice/Audio 130+ ✅ Passing
WebSocket/Events 80+ ✅ Passing
Redis Inter-Bot 39 ✅ Passing
Commerce 127 ✅ Passing

Zero failures. Production ready.

🏗️ Enterprise Tech Stack

Built on battle-tested infrastructure for scalability and security:

  • FastAPI: High-performance async Python backend
  • Neo4j: Native graph database for knowledge graph persistence
  • Kafka / Redpanda: Event streaming backbone for distributed agents
  • Redis Cache: Inter-LLM communication, state sharing, and low-latency queues
  • WebSocket Streaming: Real-time token delivery to apps and dashboards
  • Firebase Authentication: Secure identity and session management
  • Vector Database: High-dimensional embeddings for semantic search
  • Ollama / vLLM: Local inference optimized for Apple Silicon (MLX) & CUDA

Smart LLM Router Architecture

┌─────────────────────────────────────────────────────────────┐
│                    Smart LLM Router                         │
├─────────────────────────────────────────────────────────────┤
│  Query → Task Analysis → Model Selection → Response         │
│                                                             │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐        │
│  │ OpenAI  │  │ Gemini  │  │  Groq   │  │  Local  │        │
│  │ GPT-4o  │  │  Pro    │  │ Llama3  │  │ Ollama  │        │
│  └─────────┘  └─────────┘  └─────────┘  └─────────┘        │
└─────────────────────────────────────────────────────────────┘

♿ Accessibility First

Accessibility is not optional — it's foundational.

Feature Description
🎯 WCAG 2.1 AA Full compliance. Screen reader optimized (VoiceOver, NVDA, JAWS)
⌨️ CLI-First Terminal is the primary interface. No GUI required. SSH-ready.
🔊 180+ Voices 145+ macOS voices + 35+ cloud TTS voices. Never miss a result.
🎹 Keyboard Only Every feature accessible without a mouse
📺 High Contrast Theme support for low vision users
🔇 No Flashing Safe for photosensitive users
# Enable accessibility features
ab config set accessibility.screen_reader true
ab config set accessibility.voice_feedback true
ab chat "Hello, Brain!"  # Response is spoken aloud

📖 Full Accessibility Documentation →


🤖 AI-Native Architecture

Multi-provider. No vendor lock-in. Switch with one command.

Anthropic
Claude 3.5 Sonnet/Opus
MCP support, tool use
OpenAI
GPT-4o, GPT-4 Turbo
Assistants API, functions
GitHub
Copilot Compatible
Complementary workflows
Ollama
100% Local
Zero cloud dependency
# Switch providers instantly
ab config set llm.provider anthropic   # Claude
ab config set llm.provider openai      # GPT-4
ab config set llm.provider ollama      # Local Llama

# Automatic fallback chain
ab config set llm.fallback "ollama:llama3.1:8b"  # If cloud fails

📖 Full AI Integration Documentation →


🤝 Strategic Partners

Agentic Brain integrates deeply with industry-leading platforms:

Temporal
Durable Execution
Drop-in replacement for Temporal SDK.
from agentic_brain.temporal import workflow
JHipster
Full-Stack Generation
Enterprise blueprints, Spring Boot patterns.
ab mode switch jhipster
Neo4j
Knowledge Graphs
Native GraphRAG with vector search.
brain.rag.graph_search(query)
WordPress
CMS & E-commerce
AI for 43% of websites.
from agentic_brain.commerce import WooCommerceAgent
Firebase
Cross-Device Sync
Real-time messaging, offline-first.
from agentic_brain.transport import FirebaseTransport
Ollama
Local LLM
Run AI completely offline.
brain.llm.local("llama3")

🎯 Mode System

42 deployment modes — switch your entire stack with one command:

ab mode switch medical      # HIPAA compliance, audit logging, PHI handling
ab mode switch banking      # PCI-DSS, SOX compliance, fraud detection
ab mode switch military     # Air-gapped, zero-trust, classified handling
ab mode switch retail       # Customer service, inventory, POS integration
┌─────────────────────────────────────────────────────────────┐
│              Industry-Specific AI Personas                  │
├──────────┬──────────┬──────────┬──────────┬────────────────┤
│ Defense  │Healthcare│  Legal   │ Finance  │  Education     │
│ BLUF fmt │HIPAA-safe│Citations │Compliant │ Step-by-step   │
│ temp:0.2 │ temp:0.1 │ temp:0.3 │ temp:0.2 │   temp:0.5     │
└──────────┴──────────┴──────────┴──────────┴────────────────┘

Mode Quick Reference

Mode Code Industry Key Features Compliance
mil Military/Defense Air-gapped, zero-trust, encrypted-at-rest ITAR, FedRAMP
med Healthcare PHI handling, HIPAA audit, consent tracking HIPAA, HL7
fin Banking/Finance Transaction monitoring, fraud ML, PCI scope PCI-DSS, SOX
gov Government FIPS 140-2, FedRAMP boundary, IL4/IL5 ready FedRAMP, NIST
ret Retail Inventory sync, POS, customer loyalty PCI (subset)
edu Education FERPA handling, student privacy, LMS FERPA, COPPA
leg Legal Client confidentiality, e-discovery, chain of custody ABA Model Rules
ins Insurance Claims processing, underwriting ML, policyholder data HIPAA, SOC 2
age Aged Care Dignity-first AI, family comms, incident tracking Aged Care Act 2024
dis Disability NDIS integration, support plans, assistive tech NDIS Standards
View All 42 Modes →
Code Mode Code Mode Code Mode
mil Military med Medical fin Finance
gov Government ret Retail edu Education
leg Legal ins Insurance age Aged Care
dis Disability man Manufacturing log Logistics
hos Hospitality rea Real Estate ene Energy
tel Telecom med Media spo Sports
agr Agriculture con Construction min Mining
mar Maritime avi Aviation aut Automotive
pha Pharma bio Biotech env Environment
ngo Non-Profit rel Religious art Arts/Culture
inf Influencer cre Creator str Streamer
pod Podcaster mus Musician wri Writer
pho Photography vid Videography gam Gaming
fit Fitness wel Wellness nut Nutrition
hom Home fam Family pet Pets
dev Developer ops DevOps sec Security

🔮 GraphRAG Architecture

Vector + Graph + Community Reasoning — Agentic Brain's hybrid retrieval stack

flowchart TB
    subgraph Ingestion
        Docs[Documents]
        APIs[APIs]
        Chat[Chat]
        Events[Events]
    end

    subgraph Processing
        Loaders[155+ RAG Loaders]
        Chunking[Chunking + Embeddings]
        Extract[Entity + Relationship Extraction]
    end

    subgraph Storage
        VectorDB[(Chunk / Entity Vectors)]
        GraphDB[(Neo4j Knowledge Graph)]
        Communities[(Leiden / Community Layer)]
    end

    subgraph Retrieval
        Hybrid[Hybrid Search]
        Expand[Graph Traversal + Community Expansion]
        Rerank[RRF / Reranking]
        Response[LLM Response]
    end

    Docs & APIs & Chat & Events --> Loaders
    Loaders --> Chunking
    Loaders --> Extract
    Chunking --> VectorDB
    Extract --> GraphDB
    GraphDB --> Communities
    VectorDB --> Hybrid
    GraphDB --> Hybrid
    Communities --> Expand
    Hybrid --> Expand --> Rerank --> Response
Loading

Architecture diagram description

The diagram shows the release's GraphRAG improvements as a pipeline of cooperating stages:

  1. Ingestion collects content from documents, APIs, chat, and event streams.
  2. Processing splits content into chunks, generates embeddings, and extracts entities plus relationships.
  3. Storage keeps vectors and graph structure together in Neo4j-friendly schemas so retrieval can combine both views.
  4. Hybrid retrieval starts with vector similarity, expands through graph edges, then optionally widens context through community detection.
  5. Reranking and response use fused scores plus graph provenance to feed final answer generation.

♻️ Latest GraphRAG Upgrades (2.16.0)

  • N+1 query elimination — entity, chunk, and relationship writes are wrapped in batched UNWIND Cypher so ingest time scales linearly even on 100k+ document drops.
  • Real MLX embeddings — Apple Silicon deployments automatically call MLXEmbeddings for metal-accelerated vectors (with deterministic fallback only when MLX is missing).
  • Community detection with Neo4j GDS Leiden — set GraphRAGConfig.community_algorithm="leiden" to persist cleaner community IDs directly into the graph and drive topic summaries.
  • Hybrid search with RRF fusion — reciprocal-rank fusion now blends vector, BM25, and graph expansion results for higher precision@k and transparent scoring in responses.
  • Async Neo4j driver + transaction retries — ingest and query paths use the async driver with resilient retry wrappers so transient timeouts or failover never drop a request.

📚 Docs updated: GraphRAG deep dive · Neo4j integration guide · Changelog

View Detailed Architecture (ASCII)
┌────────────────────────────────────────────────────────────────────────────────────┐
│                           🧠 AGENTIC BRAIN GraphRAG                               │
├────────────────────────────────────────────────────────────────────────────────────┤
│                                                                                    │
│  Sources: docs · APIs · chat · events                                             │
│       │                                                                            │
│       v                                                                            │
│  155+ loaders → chunking + embeddings → entity / relationship extraction          │
│       │                               │                                             │
│       └───────────────┬───────────────┘                                             │
│                       v                                                             │
│              Neo4j graph + vector-backed chunks                                    │
│                       │                                                             │
│         ┌─────────────┼─────────────┐                                               │
│         v             v             v                                               │
│   vector search   graph traversal   community layer (Leiden-ready)                 │
│         └─────────────┬─────────────┘                                               │
│                       v                                                             │
│       reciprocal-rank fusion + reranking + safe graph-aware generation             │
│                                                                                    │
└────────────────────────────────────────────────────────────────────────────────────┘

GraphRAG highlights in this release

  • Hybrid vector + graph retrieval using Neo4j-backed chunks, entities, and relationships
  • Safe graph querying with read-only Text2Cypher plus keyword fallback
  • Embedding integration via MLX-aware embedding hooks and Neo4j vector indexes
  • Community-aware design with Leiden-compatible graph analytics workflows
  • Layered APIs so you can choose lightweight extraction, simple GraphRAG, or production hybrid retrieval

📦 Supported Data Sources

Loaders

🏢 Enterprise (Confluence, Teams, Salesforce...)

Confluence Notion Slack Teams SharePoint Salesforce JIRA

💻 Code (GitHub, GitLab, Bitbucket...)

GitHub GitLab Bitbucket

📄 Documents (PDF, Word, Excel...)

PDF Word Excel Markdown

🗄️ Databases (Postgres, Mongo, Neo4j...)

PostgreSQL MySQL MongoDB Neo4j Redis

☁️ Cloud (AWS, GCP, Azure...)

AWS GCP Azure

🔧 DevOps (Kubernetes, Docker, Terraform...)

Kubernetes Docker Terraform

📊 Analytics (Datadog, Splunk, Prometheus...)

Datadog Splunk Prometheus Grafana

⚡ Hardware Acceleration

Hardware Framework Performance
🍎 Apple Silicon MLX <10ms embeddings
🟢 NVIDIA GPU CUDA RTX/A100/H100
🔴 AMD GPU ROCm RX 7900/MI300
💻 CPU Fallback NumPy Auto-detected
┌─────────────────────────────────────────┐
│        Hardware Acceleration            │
├─────────────┬─────────────┬─────────────┤
│  Apple M2+  │   NVIDIA    │    AMD      │
│    MLX      │    CUDA     │   ROCm      │
│  ~1.4ms/emb │  ~0.5ms/emb │  ~0.8ms/emb │
└─────────────┴─────────────┴─────────────┘

🧠 Chat Intelligence

Feature Description
Intent Detection ACTION/QUESTION/CHAT
Mood Analysis Adjust tone dynamically
Safety Checks Hallucination detection
Personality Professional/Empathetic

🔢 Vector Embedding Pipeline

┌─────────────────────────────────────────────────────────────────┐
│                    VECTOR EMBEDDING PIPELINE                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│   ┌──────────┐    ┌──────────────┐    ┌────────────────┐       │
│   │  Input   │───▶│  Chunking    │───▶│  Embedding     │       │
│   │  Text    │    │  Strategy    │    │  Model         │       │
│   └──────────┘    └──────────────┘    └───────┬────────┘       │
│                                               │                 │
│                   Hardware Acceleration       │                 │
│        ┌──────────────┬───────────────┐      │                 │
│        │   MLX        │   CUDA        │      │                 │
│        │ (M1/M2/M3)   │  (NVIDIA)     │◀─────┘                 │
│        └──────────────┴───────────────┘                        │
│                          │                                      │
│                          ▼                                      │
│              ┌───────────────────────┐                         │
│              │   Vector Database     │                         │
│              │  (Neo4j / Pinecone)   │                         │
│              └───────────────────────┘                         │
└─────────────────────────────────────────────────────────────────┘

🔌 Event Streaming

flowchart LR
    Kafka[Kafka / Redpanda] --> Brain[Agentic Brain]
    Brain --> Workers[Workers & Bots]
    Brain --> Storage[(Neo4j + Vectors)]
Loading
┌─────────┐    ┌──────────────┐    ┌─────────┐
│ Clients │───▶│   FastAPI    │───▶│  Tasks  │
└─────────┘    │   Server     │    └─────────┘
               └──────┬───────┘          │
                      │                  ▼
               ┌──────▼───────┐    ┌─────────┐
               │   Kafka /    │───▶│ Workers │
               │   Redpanda   │    └─────────┘
               └──────────────┘

Durable event processing with Kafka/Redpanda integration. Messages persist through restarts.

🛡️ Ethics Module

┌─────────────────────────────────────────┐
│         AI Ethics Module                │
├─────────────────────────────────────────┤
│  Input → Safety Check → PII Filter →   │
│         ↓                               │
│  ┌───────────────────────────────────┐ │
│  │ • Privacy protection              │ │
│  │ • Content safety                  │ │
│  │ • Consent validation              │ │
│  │ • Accountability logging          │ │
│  │ • Fairness checks                 │ │
│  └───────────────────────────────────┘ │
│         ↓                               │
│  Output (or Quarantine if flagged)      │
└─────────────────────────────────────────┘

⚔️ Comparison

Feature 🧠 Agentic Brain 🦜 LangChain 🦙 LlamaIndex
Dependencies 2 (minimal) 50+ 30+
Install Size ~5 MB ~200 MB ~150 MB
Cold Start <100ms 2-5 seconds 1-3 seconds
GraphRAG Native Built-in ❌ Plugin ❌ Plugin
Knowledge Graph + Vector Database Native ⚠️ Partial ⚠️ Partial
Multi-LLM Orchestration Built-in ⚠️ DIY ⚠️ DIY
GPU Acceleration (CUDA/ROCm/MLX) Yes ⚠️ Limited ⚠️ Limited
Redis Cache / WebSocket Streaming Built-in ⚠️ Plugins ⚠️ Plugins
Enterprise AI, Production Ready Yes ⚠️ DIY ⚠️ DIY
Workflow Durability 27 modules ❌ None ❌ None
Voice Output 145+ macOS + 35+ cloud voices* ❌ None ❌ None
Mode System 42 modes ❌ None ❌ None
Temporal Compatible Drop-in ❌ No ❌ No
Air-Gap Ready Yes ⚠️ Difficult ⚠️ Difficult
Enterprise Auth (JWT/OAuth/Firebase) Built-in ⚠️ DIY ⚠️ DIY
Local LLM First Ollama native ⚠️ Wrapper ⚠️ Wrapper

Learn more in Why Agentic Brain?


⚡ Quick Start

One-Liner Install (Recommended)

macOS/Linux:

curl -fsSL https://raw.githubusercontent.com/joseph-webber/agentic-brain/main/install.sh | bash

Windows (PowerShell):

irm https://raw.githubusercontent.com/joseph-webber/agentic-brain/main/install.ps1 | iex

Corporate networks with SSL issues?

# macOS/Linux
export AGENTIC_SKIP_SSL=true
curl -fsSL https://raw.githubusercontent.com/joseph-webber/agentic-brain/main/install.sh | bash

⚡ Quick Install (One-Liner)

curl -fsSL https://raw.githubusercontent.com/joseph-webber/agentic-brain/main/install.sh | bash

Manual Docker Install

git clone https://github.com/joseph-webber/agentic-brain.git
cd agentic-brain
docker compose up -d --build

Option 3: pip install (minimal)

pip install agentic-brain

⚙️ Configuration

Quick Setup

cp .env.example .env
cp .env.docker.example .env.docker
# Edit files and add your API keys

Free LLM Providers (No Cost!)

Provider Setup URL Notes
Ollama https://ollama.ai Local, private, no signup required
Groq https://console.groq.com Fastest, 30 req/min free tier
Google Gemini https://aistudio.google.com/apikey 1M tokens/day free

Example .env Setup

# LLM Provider (choose one)
LLM_PROVIDER=ollama                    # Local LLM (no cost)
# LLM_PROVIDER=groq                    # Cloud LLM (fast, free tier)
# LLM_PROVIDER=gemini                  # Google Gemini (free tier)

# API Keys (optional, depends on provider)
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxx  # Get from https://console.groq.com
GEMINI_API_KEY=AIzaxxxxxxxxxxxxxxxxxxxx  # Get from https://aistudio.google.com

# Database
NEO4J_PASSWORD=Brain2026
REDIS_PASSWORD=BrainRedis2026

For complete configuration details, see Environment Setup Guide

🌐 Services & Ports

Once installed and running, access these services:

Service Port URL Credentials
Brain API 8000 http://localhost:8000 -
Neo4j Browser 7474 http://localhost:7474 neo4j / Brain2026
Neo4j Bolt 7687 bolt://localhost:7687 neo4j / Brain2026
Redis 6379 redis://localhost:6379 BrainRedis2026
Redpanda Broker 9092 localhost:9092 -
Redpanda Admin 9644 http://localhost:9644 -
Ollama (Local LLM) 11434 http://localhost:11434 -

📡 API Endpoints

Core API endpoints for integration:

✅ Testing Services

Verify all services are running:

# Test Agentic Brain API
curl http://localhost:8000/health

# Test Neo4j
curl http://localhost:7474

# Test Redis
docker exec agentic-brain-redis redis-cli -a BrainRedis2026 ping

# Test Redpanda Kafka cluster health
curl http://localhost:9644/v1/cluster/health

📦 System Versions

Current stack versions:

  • Agentic Brain: 3.1.0
  • Neo4j: 2026.02.3-community
  • Neo4j GDS Plugin: 2.27.0 (GraphRAG support)
  • Redis: 7-alpine
  • Redpanda: latest
  • Python: 3.11+

Prerequisites

  • Python 3.11+
  • Ollama (recommended for local AI) or an API key (OpenAI/Anthropic)
  • Optional: Neo4j for GraphRAG + Knowledge Graph workloads

Configure (one-time)

ab config init
ab config set llm.provider openai   # or anthropic / ollama
ab config set llm.api_key $OPENAI_API_KEY

Run

# Start the CLI chat
ab chat "Hello, Brain!"

Usage Examples (SDK)

Chat with Memory:

from agentic_brain import Agent

agent = Agent("assistant")
await agent.chat_async("My name is Sarah")
await agent.chat_async("What's my name?")  # → "Sarah"

Cross-Session Memory:

# Session 1 (Monday)
await agent.chat_async("I'm working on the AusPost integration")

# Session 2 (Tuesday) - agent remembers!
await agent.chat_async("What was I working on?")
# → "You were working on the AusPost integration"

📖 View Advanced Memory Architecture →

Chat Intelligence:

from agentic_brain.chat.intelligence import IntentDetector, MoodDetector

# Detect user intent
intent_detector = IntentDetector()
intent, confidence = intent_detector.detect_sync("Fix this bug!")
# → (Intent.ACTION, 0.92)

# Detect user mood for tone adjustment
mood_detector = MoodDetector()
mood, _ = mood_detector.detect("This is broken AGAIN!!!")
# → (Mood.FRUSTRATED, 0.95)
Chat Feature Description
Intent Detection ACTION, QUESTION, CHAT, COMPLAINT, CLARIFICATION
Mood Analysis HAPPY, FRUSTRATED, CONFUSED, NEUTRAL, URGENT
Personality Profiles Professional, friendly, empathetic, technical
Safety Checker Hallucination detection, action confirmation
Conversation Summary Auto-summarize long conversation threads

📖 Full Chat Intelligence Guide →

Commerce (WooCommerce SDK):

from agentic_brain.commerce import WooCommerce

woo = WooCommerce(url="https://mystore.com", key="...", secret="...")
products = woo.products.list()
order = woo.orders.create(...)

GraphRAG Pipeline:

from agentic_brain.rag import RAGPipeline

rag = RAGPipeline(neo4j_uri="bolt://localhost:7687")
await rag.ingest("./documents/")
answer = await rag.query("What are our Q3 targets?")
┌──────────┐    ┌──────────┐    ┌──────────┐    ┌──────────┐
│  110+    │───▶│ Chunking │───▶│ Embedding│───▶│  Neo4j   │
│ Loaders  │    │ + NER    │    │ (MLX/GPU)│    │  Graph   │
└──────────┘    └──────────┘    └──────────┘    └──────────┘
                                                      │
┌──────────┐    ┌──────────┐    ┌──────────┐         │
│ Response │◀───│   LLM    │◀───│  Hybrid  │◀────────┘
│          │    │ (Router) │    │  Search  │
└──────────┘    └──────────┘    └──────────┘

Durable Workflow:

from agentic_brain.temporal import workflow, activity

@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str):
        await workflow.execute_activity(validate_order, order_id)
        await workflow.execute_activity(process_payment, order_id)
        await workflow.execute_activity(ship_order, order_id)

Switch Modes:

ab mode switch medical   # Now HIPAA compliant
ab mode status           # View current mode
ab mode list             # See all 42 modes

Quick Start (Docker)

One command to launch everything:

docker compose up -d --build

Then visit:

Credentials:

  • Neo4j: neo4j / Brain2026
  • Redis: :BrainRedis2026

View logs:

docker compose logs -f agentic-brain
docker compose logs -f neo4j

Stop everything:

docker compose down

🤖 Ollama Local LLM Setup

Run local LLMs without API keys. Agentic Brain supports Ollama models like llama2, mistral, and neural-chat.

macOS/Linux:

# Download and install
curl https://ollama.ai/install.sh | sh

# Start Ollama service
ollama serve

# In another terminal, pull a model
ollama pull llama2  # ~4GB

Windows:

# Install via Winget (recommended)
winget install Ollama.Ollama

# Or download from https://ollama.ai

# Then start Ollama in PowerShell
ollama serve

# Pull a model
ollama pull llama2

Configure Agentic Brain to use Ollama:

ab config set llm.provider ollama
ab config set llm.base_url http://localhost:11434
ab config set llm.model llama2

Corporate Network Note: If behind a corporate proxy with SSL inspection, add:

export PYTHONHTTPSVERIFY=0

Then run Agentic Brain.


🏢 Enterprise Features

🔒 Security & Compliance

  • Enterprise Auth — JWT RS256/ES256, OAuth2/OIDC, SAML 2.0
  • Multi-Tenant Isolation — PUBLIC / PRIVATE / CUSTOMER scopes
  • Secrets Management — Vault, AWS SM, Azure KV, GCP SM, Keychain
  • Air-Gapped Ready — works in disconnected environments
  • Zero Telemetry — opt-in only, nothing phones home
  • Zero Trust Architecture — never trust, always verify

🧠 Advanced RAG

  • GraphRAG Core — Neo4j knowledge graphs native
  • Hybrid Search — Vector + BM25 keyword fusion
  • GraphQL API — Strawberry GraphQL for RAG queries
  • Vector Embeddings — MLX/CUDA/ROCm accelerated
  • Cross-Encoder Reranking — Precision over recall
  • Source Citations — Confidence scores included

⏱️ Workflow Durability

  • Temporal.io Compatible — same API, no lock-in
  • Event Streaming — Kafka/Redpanda durable queues
  • 27 Durability Modules — signals, sagas, versioning
  • Crash Recovery — workflows resume automatically
  • Task Queues — worker pools, namespaces, priorities

📊 Observability

  • OpenTelemetry — distributed tracing
  • Prometheus Metrics — standard /metrics endpoint
  • Usage Analytics — token tracking, cost estimation
  • Health Probes — liveness/readiness for K8s
  • Dashboard — real-time workflow monitoring

🛡️ Security & Compliance

Banks. Hospitals. Government. Military. We've got you covered.

Compliance Frameworks

Framework Status Industries
SOC 2 ✅ Ready SaaS, Enterprise, B2B
ISO 27001 ✅ Ready All industries
HIPAA ✅ Ready (BAA available) Healthcare, Pharma, Biotech
GDPR ✅ Compliant Any EU operations
SOX ✅ Controls Finance, Public Companies
APRA CPS 234 ✅ Aligned Australian Banking
PCI-DSS ⏳ In Progress Payment Processing
FedRAMP ⏳ In Progress US Government
ITAR ✅ Air-Gap Ready Defense, Aerospace

Security Features

🔐 Authentication

  • Production-ready: JWT (RS256/ES256), OAuth 2.0 / OIDC, API Key (HMAC), LDAP, Firebase Auth, Session auth, Basic auth, and mTLS.
  • Coming Soon: SAML 2.0 Single Sign-On and Multi-Factor Authentication (MFA). See docs/ROADMAP.md for implementation status and dependencies.

🛡️ Data Protection

  • AES-256-GCM encryption
  • TLS 1.3 in transit
  • Field-level encryption
  • PII detection & masking
  • Key rotation (KMS/HSM)
  • Tenant isolation

📋 Audit & Compliance

  • Immutable audit logs
  • 7-year retention
  • SIEM integration
  • Real-time alerting
  • Break-glass access
  • Compliance reports

Industry Modes

Switch your entire compliance posture with one command:

# Healthcare (HIPAA + HITECH)
ab mode switch medical
# → PHI handling, consent tracking, 6-year audit retention

# Banking (SOX + PCI-DSS + APRA)
ab mode switch banking
# → Financial controls, segregation of duties, encrypted at rest

# Government (FedRAMP + NIST 800-53)
ab mode switch government
# → Air-gapped ready, FIPS 140-2, IL4/IL5 support

# European (GDPR + ePrivacy)
ab mode switch european
# → Data locality, consent management, right to erasure

Enterprise Trust

┌─────────────────────────────────────────────────────────────────────────┐
│                     WHY ENTERPRISES TRUST US                            │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│  ✅ SOC 2 Type II Ready          ✅ Penetration Tested Annually         │
│  ✅ ISO 27001 Aligned            ✅ Bug Bounty Program                   │
│  ✅ HIPAA BAA Available          ✅ 24/7 Security Team                   │
│  ✅ GDPR Data Processing Agreement                                      │
│  ✅ Zero data retention on LLM calls (unless you want it)              │
│  ✅ Self-hosted / air-gapped deployment options                        │
│                                                                          │
│  📧 compliance@agentic-brain.dev — Request compliance docs              │
│                                                                          │
└─────────────────────────────────────────────────────────────────────────┘

📖 Full Documentation:


🧠 Advanced Memory Architecture

What makes AI feel intelligent: persistent, semantic, cross-session memory.

┌────────────────────────────────────────────────────────────┐
│                    BRAIN MEMORY SYSTEM                     │
├────────────────────────────────────────────────────────────┤
│  SESSION       LONG-TERM      SEMANTIC       EPISODIC     │
│  MEMORY        MEMORY         MEMORY         MEMORY       │
│                                                            │
│  Conversation  Neo4j Graph    Vector         Event        │
│  Context       Knowledge      Embeddings     Sourcing     │
│                                                            │
│         └──────────┬───────────┬──────────────┘           │
│                    │           │                          │
│              UNIFIED MEMORY API                           │
│              brain.memory.recall()                        │
└────────────────────────────────────────────────────────────┘

💾 Memory Types

  • Session Memory — In-conversation context
  • Long-term Memory — Neo4j knowledge graph
  • Semantic Memory — Vector embeddings for meaning
  • Episodic Memory — Event sourcing timeline

🔧 Memory Features

  • Cross-session recall — Remember across sessions
  • Semantic search — Find by meaning, not keywords
  • Forgetting strategies — Smart memory management
  • Memory compression — Efficient storage
# Unified memory API
await brain.memory.remember("The user prefers bullet points")

# Later, even in a new session:
context = await brain.memory.recall("How does the user like info formatted?")
# Returns the preference even with different wording!

📖 View Full Memory Architecture →

Apple Silicon

MLX Native

CUDA

Full CUDA Support

ROCm

ROCm 5.x+

from agentic_brain.rag import detect_hardware, get_accelerated_embeddings

device, info = detect_hardware()  # → "mlx", "M2 Pro 12-core"
embeddings = get_accelerated_embeddings()  # 14x faster on Apple Silicon!

🤖 LLM Providers

7+ providers with intelligent fallback routing:

Provider Models Best For
🦙 Ollama Llama 3, Mistral, Phi Local, private, offline
🎭 Anthropic Claude 3.5 Sonnet/Opus Complex reasoning
🧠 OpenAI GPT-4o, GPT-4 Turbo General purpose
🌐 OpenRouter 100+ models Model variety
🔷 Azure OpenAI GPT-4, embeddings Enterprise compliance
☁️ AWS Bedrock Claude, Titan AWS ecosystem
🌊 Cohere Command, embeddings RAG optimization
🤗 HuggingFace Open models Research, fine-tuning
from agentic_brain import LLMRouter

router = LLMRouter()  # Auto-fallback: Ollama → OpenRouter → OpenAI → Anthropic
response = await router.chat("Hello!")
print(f"Used: {response.provider}")  # Shows which succeeded

🔌 110+ RAG Loaders

📖 Full Data Loaders Reference →

Sector Integrations
🛍️ E-commerce Shopify WooCommerce Magento BigCommerce
🏥 Healthcare Epic Cerner FHIR
⚖️ Legal DocuSign Adobe Sign Clio
📊 Analytics Google Analytics Mixpanel Segment
🎯 HR & Recruiting Workday Greenhouse Lever
📁 Media YouTube Podcast Video
📆 Project Mgmt Monday Trello ClickUp
☁️ Enterprise Salesforce HubSpot Slack Jira

Category Count Examples
📑 Documents 11 PDF, DOCX, XLSX, Markdown, HTML, JSON, YAML
💻 Code 12 Python, TypeScript, Java, Go, Rust, C++, Swift
🖼️ Media 7 YouTube, Audio (Whisper), Video OCR, Podcasts
🌐 Web 4 URLs, Sitemaps, RSS, REST APIs
🗄️ Databases 6 PostgreSQL, MySQL, SQLite, MongoDB, Firestore
☁️ Cloud 3 S3, Google Cloud Storage, Azure Blob
🏢 Enterprise 25 SharePoint, Confluence, Notion, Slack, Jira, Salesforce, Workday
🏥 Healthcare 3 FHIR, Epic, Cerner
⚖️ Legal 3 DocuSign, Adobe Sign, Legal Parsers

Highlighted Integrations:

WordPress WooCommerce Divi Firebase

Quick Load & Query Example:

from agentic_brain.rag import RAGPipeline
from agentic_brain.rag.loaders import PDFLoader, NotionLoader, SlackLoader

# Load from multiple sources
pdf_docs = await PDFLoader().load_directory("./policies/")
notion_docs = await NotionLoader(api_key="...").load_database("wiki")
slack_docs = await SlackLoader(token="...").load_channel("support", days=90)

# Ingest into GraphRAG knowledge graph
rag = RAGPipeline(neo4j_uri="bolt://localhost:7687")
await rag.ingest_documents(pdf_docs + notion_docs + slack_docs)

# Query across ALL sources with relationship awareness
result = await rag.graph_query(
    "What is our refund policy and how do customers typically ask about it?"
)
print(result.answer)       # Combines policy docs + Slack context!
print(result.sources)      # Shows PDF policy + Slack conversations

🌐 WordPress / WooCommerce / Divi

AI for the world's most popular platforms:

Platform Market Share What We Enable
WordPress 43% of web AI content, chatbots, SEO automation
WooCommerce 28% of e-commerce Order support, product recommendations
Divi 2M+ sites Visual Builder module, drag-and-drop AI
from agentic_brain.commerce import CommerceUserType, WooCommerceAgent, WooCommerceChatbot

agent = WooCommerceAgent(
    url="https://store.com",
    consumer_key="ck_xxx",
    consumer_secret="cs_xxx",
)
chatbot = WooCommerceChatbot(agent, store_name="Agentic Store")

# Customer: "Where's my order #1234?"
reply = await chatbot.handle_message(
    "Where's my order #1234?",
    user_type=CommerceUserType.CUSTOMER,
)
print(reply.message)

ROI: $100-250K/year for mid-size stores (70% support ticket reduction, 15-25% AOV increase)

📖 WordPress Integration Guide


🔥 Firebase Real-Time Sync

Cross-device, offline-first AI applications:

from agentic_brain.transport import FirebaseTransport

async with FirebaseTransport(config, session_id="user-123") as transport:
    agent = Agent("assistant", transport=transport)
    
    # Messages sync instantly across web, mobile, desktop!
    response = await agent.chat("Hello from any device!")

Why Firebase?

  • <50ms sync across all connected clients
  • 📱 Works offline — messages queue and sync later
  • 🔐 Firebase Auth — Google, Apple, email, anonymous
  • 💰 Generous free tier — 50K daily reads, 20K writes

📖 Firebase Integration Guide


from agentic_brain.rag.loaders import NotionLoader, SlackLoader

notion = NotionLoader(api_key="...")
docs = await notion.load_database("knowledge-base")

🗣️ Voice & Accessibility

145+ macOS voices + 35+ cloud TTS voices across 40+ languages, fully accessible:

from agentic_brain.voice import speak

speak("Order confirmed!", voice="Karen", rate=160)  # Australian
speak("Commande confirmée!", voice="Amelie")        # French
speak("注文確認しました", voice="Kyoko")              # Japanese
  • ✅ Screen-reader compatible
  • ✅ VoiceOver integration (macOS)
  • ✅ NVDA/JAWS support (Windows)
  • ✅ High-contrast mode
  • ✅ Keyboard-only navigation

🇦🇺 Australian Compliance

Framework Status
Privacy Act 1988 ✅ APPs compliance
Essential Eight ✅ Security controls
Aged Care Act 2024 ✅ Aged care mode
NDIS Standards ✅ Disability mode
AML/CTF Act ✅ Finance mode

APAC Ready: Singapore MAS TRM, NZ Privacy Act 2020


🛠️ Development

git clone https://github.com/agentic-brain-project/agentic-brain.git
cd agentic-brain
pip install -e ".[dev]"

pytest tests/ -v           # 4,700+ tests
pre-commit run --all-files # Linting
mypy src/                  # Type checking

📚 Documentation

Resource Link
Quick Start QUICKSTART_API.md
🐳 Docker Setup DOCKER_SETUP.md
🔒 Security Policy SECURITY.md
🤝 Contributing CONTRIBUTING.md
📜 Changelog docs/CHANGELOG.md
🗺️ Roadmap ROADMAP.md
📐 Architecture docs/architecture/ARCHITECTURE.md

📐 Architecture Overview

View System Architecture →
flowchart TB
    subgraph Clients["Clients"]
        CLI["CLI"]
        SDK["SDK"]
        API["REST API"]
    end

    subgraph Core["Agentic Brain"]
        direction TB
        LLM["LLM Router<br/>8 Providers"]
        RAG["GraphRAG<br/>Neo4j + Vector"]
        Durability["Durability<br/>27 Modules"]
        Modes["42 Modes"]
    end

    subgraph Storage["Storage"]
        Neo4j["Neo4j"]
        PG["PostgreSQL"]
        Redis["Redis"]
    end

    Clients --> Core
    Core --> Storage
    LLM --> RAG --> Durability --> Modes
Loading

View Full Architecture →

View RAG Pipeline →
flowchart LR
    Sources["155+ Loaders"] --> Process["Chunking + Embeddings"]
    Process --> Graph["Entity + Relationship Extraction"]
    Process --> Store["Chunk Vectors"]
    Graph --> Neo4j["Neo4j Graph"]
    Store --> Neo4j
    Neo4j --> Retrieve["Hybrid Search + Community Expansion"]
    Retrieve --> Rerank["RRF / Reranking"]
    Rerank --> LLM["Generation"]
Loading
View Integration Map →
flowchart TB
    subgraph Brain["Agentic Brain"]
        Core["Core Engine"]
    end

    LLM["7+ LLM Providers"] --> Core
    Data["149+ Data Sources"] --> Core
    Vector["5 Vector DBs"] --> Core
    CMS["WordPress/Divi"] --> Core
    Transport["Firebase/WS"] --> Core
Loading

🔧 CI Status

Workflow Status
CI CI
Docs Docs
Docker Docker
Release Release

📄 License

Apache 2.0See LICENSE

  • ✅ Commercial use
  • ✅ Modifications
  • ✅ Private use
  • ✅ Patent grant

🙏 Acknowledgments

Built by Agentic Brain Contributors

Strategic Partners:

Powered by:


⭐ Star this repo if you find it useful!

Report BugRequest FeatureDiscussions


Built for everyone: Military. Banks. Hospitals. Enterprises. Also for: Influencers. Creators. Families. You.

Made with 🧠 in Australia for the world

About

🌏 AI agents that explain themselves. Built for everyone. Voice-first with EXPLAINABLE decisions (SHAP/LIME). Healthcare, disability, aged care & defence ready. GPU-accelerated RAG, 4700+ tests, 70+ accents, enterprise auth. Zero cloud lock-in. Self-hosted. Made in Australia 🇦🇺

Topics

Resources

License

Code of conduct

Contributing

Security policy

Stars

Watchers

Forks

Packages

 
 
 

Contributors