Description
Package Name
No response
Package Version(s)
No response
Describe the feature you'd like
Overview
Add an AI Guard security middleware for the Vercel AI SDK (the `ai` package) to dd-trace. This middleware evaluates prompts and tool calls against security policies before they are processed by the LLM, detecting and blocking violations based on Datadog AI Guard configuration.
Key Features
- Prompt Evaluation: Evaluate prompts sent to the LLM against security policies
- Tool Call Evaluation: Evaluate LLM-generated tool calls (function calling)
- Streaming Support: Real-time evaluation with `streamText()`
- Datadog Blocking Mode Integration: Blocking decisions are fully delegated to Datadog-side configuration
- Fail-safe Behavior: Fallback handling when AI Guard service is unavailable (`allowOnFailure` option)
Usage Example
```js
const { wrapLanguageModel } = require('ai')
const { openai } = require('@ai-sdk/openai')

const tracer = require('dd-trace').init({
  experimental: { aiguard: { enabled: true } }
})

const middleware = new tracer.AIGuardMiddleware({ tracer })
const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware
})
```
Requirements
- `ai@>=6.0.0` (`LanguageModelV3Middleware` support)
- dd-trace AI Guard SDK must be enabled
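Conceptually, the middleware's `transformParams` hook evaluates the prompt before the request reaches the model and aborts on a deny decision. A minimal sketch, assuming a stand-in `evaluate` function and an `{ action }` result shape (these names are illustrative, not the actual dd-trace API):

```js
// Conceptual sketch only: `evaluate` stands in for tracer.aiguard.evaluate(),
// and the { action: 'ALLOW' | 'DENY' } result shape is an assumption.
function makeAIGuardMiddleware (evaluate) {
  return {
    async transformParams ({ params }) {
      const result = await evaluate(params.prompt, { block: true })
      if (result.action !== 'ALLOW') {
        // Fixed wording only: the evaluation `reason` is never surfaced
        throw new Error('Prompt blocked by AI Guard policy')
      }
      return params
    }
  }
}
```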
Is your feature request related to a problem?
Yes. Currently, dd-trace provides an AI Guard SDK (`tracer.aiguard.evaluate()`), but integrating it with the Vercel AI SDK requires manual implementation by developers.
Pain Points:
- Developers must understand the `LanguageModelMiddleware` interface and implement prompt/tool call transformation logic
- Tool call evaluation in streaming mode is particularly complex
- Risk of accidentally exposing the `reason` field to end users through error handling
- Difficult to maintain consistency with Datadog's blocking mode settings
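To illustrate why streaming is the hard case: tool-call parts must be intercepted mid-stream and evaluated before they are forwarded to the consumer. A simplified sketch, where the part shapes and the `evaluate` callback are assumptions for illustration:

```js
// Sketch of tool-call evaluation in streaming mode. Each streamed part is
// inspected; tool-call parts are evaluated before being forwarded, and a
// deny decision aborts the stream with a fixed, non-revealing message.
async function * guardStream (parts, evaluate) {
  for await (const part of parts) {
    if (part.type === 'tool-call') {
      const result = await evaluate(part, { block: true })
      if (result.action !== 'ALLOW') {
        throw new Error('Tool call blocked by AI Guard policy')
      }
    }
    yield part
  }
}
```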
Describe alternatives you've considered
- Documentation Only: Simply documenting the integration approach would require each developer to write the same code, increasing the risk of implementation errors (especially security-related ones)
- Auto-instrumentation: Auto-instrumenting the Vercel AI SDK was considered, but this approach doesn't allow developers to control when the middleware is inserted
Additional context
Security Considerations
The Datadog documentation explicitly states that the AI Guard `reason` field is for auditing/logging purposes only and must not be returned to end users or the LLM. This middleware ensures:
- `AIGuardMiddlewareAbortError` does not store `reason`/`tags`
- Error messages use fixed wording only
- `cause` is not stored, fundamentally eliminating information leakage risk
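A minimal sketch of an abort error built around these guarantees — the constructor simply never accepts or stores the evaluation details (the class shape here is an assumption based on the proposal, not the actual implementation):

```js
// Sketch: an abort error that deliberately drops sensitive evaluation
// details. The evaluation `reason`, `tags`, and `cause` are intentionally
// never accepted as constructor arguments, so they cannot leak via
// error serialization or logging.
class AIGuardMiddlewareAbortError extends Error {
  constructor () {
    super('Prompt blocked by AI Guard') // fixed wording only
    this.name = 'AIGuardMiddlewareAbortError'
    this.code = 'AI_GUARD_MIDDLEWARE_ABORT'
  }
}
```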
Delegation to Datadog Blocking Mode
The middleware has no blocking strategy of its own and always calls `evaluate(..., { block: true })`. This ensures:
- Consistent meaning of `@ai_guard.blocked:true`
- Centralized block ON/OFF management on the Datadog side
- Recommended monitors (blocked traffic) function correctly
API Design
| Option | Type | Default | Description |
|---|---|---|---|
| `tracer` | `Tracer` | (required) | Initialized tracer instance |
| `allowOnFailure` | `boolean` | `true` | Whether to allow requests when evaluation fails |
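The `allowOnFailure` option decides between fail-open and fail-closed behavior when the evaluation call itself throws (e.g. the AI Guard service is unreachable). A sketch of that decision logic, with all names besides `allowOnFailure` being illustrative assumptions:

```js
// Sketch of the allowOnFailure fail-safe. `evaluate` stands in for the
// AI Guard evaluation call; on transport failure we either pass the
// request through (fail open, the default) or raise a client error
// (fail closed).
async function guardedCall (evaluate, params, { allowOnFailure = true } = {}) {
  let result
  try {
    result = await evaluate(params, { block: true })
  } catch (err) {
    if (allowOnFailure) return params // fail open: request proceeds
    const e = new Error('AI Guard evaluation failed')
    e.code = 'AI_GUARD_MIDDLEWARE_CLIENT_ERROR'
    throw e
  }
  if (result.action !== 'ALLOW') {
    const e = new Error('Request blocked by AI Guard policy')
    e.code = 'AI_GUARD_MIDDLEWARE_ABORT'
    throw e
  }
  return params
}
```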
Error Codes
| Error Code | Description |
|---|---|
| `AI_GUARD_MIDDLEWARE_ABORT` | Blocked due to policy violation |
| `AI_GUARD_MIDDLEWARE_CLIENT_ERROR` | AI Guard evaluation itself failed |
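Callers would branch on these codes rather than on error messages, keeping user-facing responses generic. A hedged sketch of that caller-side handling (the `callModel`/`generate` names are illustrative):

```js
// Sketch of caller-side handling keyed on the proposed error codes.
// Responses stay generic: no evaluation details reach the end user.
async function callModel (generate) {
  try {
    return await generate()
  } catch (err) {
    if (err.code === 'AI_GUARD_MIDDLEWARE_ABORT') {
      // Policy violation: return a generic refusal
      return { text: 'This request cannot be processed.' }
    }
    if (err.code === 'AI_GUARD_MIDDLEWARE_CLIENT_ERROR') {
      // Evaluation infrastructure failed (allowOnFailure: false)
      return { text: 'Service temporarily unavailable.' }
    }
    throw err // unrelated errors propagate unchanged
  }
}
```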
Related Documentation