An agentic AI project for attack-path simulation, built with:
- LangGraph workflow orchestration
- Ollama local LLM (planner + reporter)
- Infrastructure graph traversal with policy-aware permission checks
The simulator models how an attacker can move from entry points to sensitive assets, then reports:
- confirmed attack paths
- blocked traversals with reasons
- risk score per discovered path
- defensive priorities
Key features:
- YAML-driven infrastructure modeling (nodes, edges, policies, simulation config)
- Capability-based and policy-based access reasoning
- DFS attack-path simulation with stateful node execution
- LangGraph loop:
  - ingest
  - planner
  - simulate
  - assess
  - reporter
- Ollama is optional:
  - if Ollama is unavailable, deterministic fallback planning/reporting is used
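The DFS attack-path simulation with policy checks can be sketched roughly as follows. This is a stdlib-only illustration; the node/edge dictionary shapes, the `simulate` name, and the role-pair policy check are assumptions for the sketch, not the repository's actual classes:

```python
# Minimal DFS attack-path sketch: walk edges from entry points, record
# paths that reach sensitive nodes, and record blocked hops with a reason.
def simulate(nodes, edges, allowed_pairs, max_depth=6):
    by_source = {}
    for e in edges:
        by_source.setdefault(e["source"], []).append(e)

    attack_paths, blocked = [], []

    def dfs(node_id, path):
        if len(path) > max_depth:
            return
        if nodes[node_id].get("sensitive"):
            attack_paths.append(path)
            return  # stop_at_first_sensitive behaviour
        for edge in by_source.get(node_id, []):
            target = edge["target"]
            if target in path:
                continue  # avoid cycles
            pair = (nodes[node_id]["role"], nodes[target]["role"])
            if pair not in allowed_pairs:
                blocked.append((node_id, target, "role policy denies traversal"))
                continue
            dfs(target, path + [target])

    for node_id, spec in nodes.items():
        if spec.get("entry_point"):
            dfs(node_id, [node_id])
    return attack_paths, blocked
```

With the minimal YAML example from this README, `simulate` would return one attack path (`internet → app-role → secrets-prod`) and no blocked traversals; removing a role pair from the policy converts that path into a blocked hop.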
Project layout:

```
security_agent/
  app/
    main.py
    agents/
      ollama_client.py
      security_reasoning_agent.py
    graph/
      builder.py
      capability.py
      execution_engine.py
      models.py
      node.py
      permission.py
      policy.py
      state.py
    prompts/
      templates.py
    state/
      agent_state.py
    utils/
      ymal_parser.py
  infra_samples/
    agentic_demo.yaml
  requirements.txt
```
How it works:
- `app/main.py` parses CLI args and loads the infra YAML.
- The YAML is normalized by `app/utils/ymal_parser.py`.
- `GraphBuilder.from_infra_spec(...)` creates nodes/edges and loads policy rules.
- `SecurityReasoningAgent` runs the LangGraph nodes:
  - `ingest`: graph + simulation context
  - `planner`: choose start nodes/depth (LLM or fallback)
  - `simulate`: execute the attack-path traversal
  - `assess`: optionally iterate with a deeper search if needed
  - `reporter`: generate the analyst report (LLM or fallback)
- `ExecutionEngine` outputs:
  - `attack_paths` (paths to sensitive nodes)
  - `blocked` traversals (with denial reason)
  - `summary` metrics
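The planner/simulate/assess iteration described above can be approximated in plain Python. This is a simplified stand-in for the LangGraph wiring; the function names and state shape are illustrative, not the repository's actual API:

```python
# Simplified agent loop: planner -> simulate -> assess, repeated up to
# max_iterations, then a final report. Mirrors the LangGraph node order.
def run_agent(state, planner, simulate, assess, reporter, max_iterations=2):
    for _ in range(max_iterations):
        state = planner(state)       # pick start nodes / depth (LLM or fallback)
        state = simulate(state)      # run the attack-path traversal
        state, done = assess(state)  # decide whether a deeper pass is needed
        if done:
            break
    return reporter(state)           # analyst report (LLM or fallback)
```

The `--max-iterations` flag (default 2) bounds this loop, so an `assess` step that keeps requesting deeper searches cannot run forever.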
Requirements:
- Python 3.10+
- Optional, for LLM mode:
  - Ollama installed and running locally
  - a pulled model (default: `llama3.1:8b`)
Setup, from the repository root (`security_agent`):

```powershell
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
```

Run without an LLM:

```powershell
python app\main.py --disable-llm
```

For LLM mode, start Ollama and pull the model first:

```powershell
ollama serve
ollama pull llama3.1:8b
```

Then run:
```powershell
python app\main.py --model llama3.1:8b --ollama-url http://localhost:11434
```

CLI options:

```
--infra-file <path>           Path to the infra YAML (default: infra_samples/agentic_demo.yaml)
--model <name>                Ollama model (default: llama3.1:8b)
--ollama-url <url>            Ollama base URL (default: http://localhost:11434)
--disable-llm                 Disable Ollama and use fallback logic
--max-depth <int>             Override the simulation depth
--max-iterations <int>        Planner/simulator loop count (default: 2)
--start-nodes <a,b,c>         Comma-separated start nodes
--continue-after-sensitive    Continue traversal after a sensitive node is reached
```
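The flags above map naturally onto an `argparse` definition along these lines (an illustrative sketch; the actual parser in `app/main.py` may differ):

```python
import argparse

def build_parser():
    # Mirrors the CLI options listed above, with their documented defaults.
    p = argparse.ArgumentParser(description="Attack-path simulation")
    p.add_argument("--infra-file", default="infra_samples/agentic_demo.yaml")
    p.add_argument("--model", default="llama3.1:8b")
    p.add_argument("--ollama-url", default="http://localhost:11434")
    p.add_argument("--disable-llm", action="store_true")
    p.add_argument("--max-depth", type=int)
    p.add_argument("--max-iterations", type=int, default=2)
    p.add_argument("--start-nodes", help="comma-separated start nodes")
    p.add_argument("--continue-after-sensitive", action="store_true")
    return p
```

Note that `argparse` exposes each flag as an attribute with hyphens converted to underscores (e.g. `args.start_nodes`).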
Top-level keys:
- `nodes`: list of assets/identities
- `edges`: directed possible-movement paths
- `policies`: explicit allow/deny and role-based policy
- `simulation`: traversal configuration
Minimal example:

```yaml
nodes:
  - id: internet
    role: External
    entry_point: true
    capabilities: [ssh]
  - id: app-role
    role: IAM_ROLE
    capabilities: [dump_secrets]
  - id: secrets-prod
    role: SecretsManager
    sensitive: true
edges:
  - source: internet
    target: app-role
    type: assume_role
  - source: app-role
    target: secrets-prod
    type: permission
policies:
  role_permissions:
    - [External, IAM_ROLE]
    - [IAM_ROLE, SecretsManager]
simulation:
  max_depth: 6
  stop_at_first_sensitive: true
```

The program prints:
- infrastructure graph
- simulation summary
- discovered attack paths
- blocked traversals
- analyst report
- warnings (for fallback and validation situations)
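The `role_permissions` pairs in the YAML example can be evaluated with a simple membership check along these lines (a hypothetical helper for illustration, not the repository's actual policy code):

```python
def is_traversal_allowed(source_role, target_role, role_permissions):
    # role_permissions is the list of [source_role, target_role] pairs
    # from the YAML `policies` section; any pair not listed is denied.
    return [source_role, target_role] in role_permissions
```

Under this reading, `External → IAM_ROLE` is allowed by the example policy, while a direct `External → SecretsManager` hop would be reported as a blocked traversal.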
Notes:
- The `app/utils/ymal_parser.py` name intentionally uses `ymal` in the current codebase.
- If Ollama is down, the run still succeeds using deterministic fallback logic.