-
-
Notifications
You must be signed in to change notification settings - Fork 263
Update ASI04_Agentic_Supply_Chain_Vulnerabilities .md #719
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
pushing the Gdoc draft in for review Signed-off-by: syedDS <32578528+syedDS@users.noreply.github.com>
itskerenkatz
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fantastic work! Added a few comments
| 1. Example 1: Specific instance or type of this vulnerability. | ||
| 2. Example 2: Another instance or type of this vulnerability. | ||
| 3. Example 3: Yet another instance or type of this vulnerability. | ||
| Agentic systems dynamically load behaviors and components like tools, prompts, plugins, models, and agent\-to\-agent coordination protocols at runtime\. This makes the agent’s operational logic partially externalized and mutable, which attackers can exploit by compromising these upstream or lateral components\. This scope differs from traditional LLM supply chain risks \(LLM03:2025\), which primarily cover static corruption of training data, model weights, or tokenizer artifacts\. In contrast, agentic supply chain threats focus on how adversaries can hijack runtime orchestration and exploit decentralized trust boundaries where components are dynamically loaded, swapped, or shared between agents without centralized assurance\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
GREAT
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I would only focus on not only trust boundaries (as ASI03 focuses on that), but rather on the risks to cause Data leakage, Output manipulation and Workflow Hijacking.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Added
| AppSec supply chain risks emerge when code or dependencies can be modified in ways that alter application logic for eg, a package update that silently overrides an internal function\. Agentic AI supply chain risks, on the other hand, stem from the runtime orchestration layer unique to LLMs, where context, prompts, and tool calls are assembled\. Here, the concern is not only how code executes, but how the agent’s reasoning, goals, and operational flow could be influenced raising the risk of harmful autonomous actions or unintended privilege escalation without direct code\-level flaws\. | ||
|
|
||
| Scenario #2: Another example of an attack scenario showing a different way the vulnerability could be exploited. | ||
| The difference lies in the blast radius: AppSec risks affect code execution paths, while Agentic risks affect the agent’s “decision\-making” layer\. In many real\-world cases, these risks overlap AppSec issues becoming amplified when they flow into Agentic orchestration\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
"while Agentic risks affect the agent’s “decision-making” layer...." I think it worth adding "which can later lead to data leakage, manipulation or risky actions" (feel free to alter phrasing, only my thoughts)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Added
|
|
||
| ## <a id="_608udwltndp9"></a>Common Examples of Vulnerability: | ||
|
|
||
| 1. __Poisoned prompt templates autoloaded remotely\.__ An agent automatically pulls prompt templates from an external source that contain hidden malicious instructions \(e\.g\., to exfiltrate data or perform destructive actions\), leading it to execute harmful behavior without developer intent\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
"leading it to execute unintended risky behavior"
Harmful usually aligns with hate, violence speech and I guess you meant it in a broader way, right?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Added
| 2. __Tool\-descriptor injection\.__ An attacker embeds hidden instructions or malicious payloads into a tool’s metadata or MCP/agent\-card, which the host agent interprets as trusted guidance and acts upon\. | ||
| 3. __Runtime dynamic tool typosquatting\.__ When an agent searches for tools at runtime by name or keyword, it may resolve to a typosquatted or malicious endpoint and unknowingly invoke a hostile service\. | ||
| 4. __Post\-install backdoor with local AI\-CLI abuse\.__ A compromised package installs post\-install hooks that run on developer machines, probing for secrets and invoking local AI assistants with crafted prompts to autonomously locate and exfiltrate credentials and files\. | ||
|
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
What about: Symbol attack - when tools or agents are faking to be tools or agents that they are not
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated
|
|
||
| Development Environment Hardening | ||
|
|
||
| - Run agents in sandboxed containers with strong network/syscall restrictions to contain malicious plugin/tool behaviors\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Maybe I haven’t understood correctly, but if it’s in a sandbox, how will it communicate with external tools or agents?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated context to better showcase what does sandbox mean here!
| Development Environment Hardening | ||
|
|
||
| - Run agents in sandboxed containers with strong network/syscall restrictions to contain malicious plugin/tool behaviors\. | ||
| - Use RBAC\-based privilege scoping for agent identities to prevent unauthorized tool installation or broad access inheritance\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This one is more ASI03 and feels to me a bit redundant here
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated
| Policy\-as\-Code Enforcement | ||
|
|
||
| - Extend AppSec supply\-chain gates \(SCA/SAST\) with agentic policy controls, rejecting unsigned components, unverified SBOMs, and poisoned prompt flows\. | ||
| - Build runtime policies for context validation \( rejecting malicious retrieval content or unauthorized prompt overrides\)\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This one is ASI06
|
|
||
| Active Testing, Drift & Red\-Teaming | ||
|
|
||
| - Perform red\-team simulations of poisoned components \(malicious plugin, poisoned prompt template, compromised collaborator agent\) to validate defense efficacy\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
against poisoned components
| A researcher demonstrates a prompt injection vulnerability in GitHub’s Model Context Protocol \(MCP\), where a malicious public tool includes hidden instructions in its metadata\. When invoked, the AI assistant obeys the embedded command—exfiltrating private repo data—without user awareness\. | ||
| 📎 [https://devclass\.com/2025/05/27/researchers\-warn\-of\-prompt\-injection\-vulnerability\-in\-github\-mcp\-with\-no\-obvious\-fix/](https://devclass.com/2025/05/27/researchers-warn-of-prompt-injection-vulnerability-in-github-mcp-with-no-obvious-fix/) | ||
|
|
||
| ### <a id="_64un27vnbufq"></a>Scenario 4: Replit Vibe Coding Incident |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think it's better to name it as: "Hallucinated resource" to be more generic and focus on the risk rather than the incident
| ### <a id="_4qqw01yspamq"></a>Scenario 5: AutoGPT SSTI RCE via Prompt Injection | ||
|
|
||
|
|
||
| Researchers identify a server\-side template injection \(SSTI\) bug in AutoGPT’s prompt routing logic, allowing remote attackers to inject malicious code through user\-generated task descriptions, leading to full remote code execution\. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A bit more related to ASI05, what do you think?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Updated
- Redefined Agentic Supply Chain definition - Incorporated new Attacks - Incorporated the review comments Signed-off-by: syedDS <32578528+syedDS@users.noreply.github.com>
Added context to attack where malicious service deliberately impersonates a legitimate tool or agent Signed-off-by: syedDS <32578528+syedDS@users.noreply.github.com>
pushing the Gdoc draft in for review and comments