# Security Policy

## Reporting a Vulnerability

If you discover a security vulnerability in this project, please report it responsibly. Do not open a public issue.

Instead, please email security@microsoft.com with:

- A description of the vulnerability
- Steps to reproduce
- Potential impact

You should receive a response within 48 hours. We will work with you to understand the scope and coordinate a fix before any public disclosure.

## Supported Versions

| Version | Supported |
| ------- | --------- |
| 1.x     | ✅ Yes    |
| < 1.0   | ❌ No     |

## Security Design

This project was designed with several security principles in mind:

### On-Device Inference

All model inference runs locally via Foundry Local. Source code, prompts, and model responses never leave the device. This makes the tool suitable for private repositories and regulated environments.

### Command Allowlist

The `run_command` tool only permits a hardcoded set of commands:

- `npm test`
- `npm run lint`
- `npm run build`

The agent cannot execute arbitrary shell commands.
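As a minimal sketch of how such an allowlist can be enforced (the names below are illustrative, not the project's actual API), the check reduces to an exact-match set lookup on the full command string:

```typescript
// Hypothetical sketch of an allowlist check; names are illustrative.
const ALLOWED_COMMANDS: ReadonlySet<string> = new Set([
  "npm test",
  "npm run lint",
  "npm run build",
]);

function isAllowedCommand(command: string): boolean {
  // Exact match only: extra arguments, shell operators, and
  // substitutions all cause the lookup to fail.
  return ALLOWED_COMMANDS.has(command.trim());
}
```

An exact-match lookup is deliberately stricter than pattern matching: a string like `npm test && rm -rf /` is rejected because the whole command is compared, not just its prefix.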

### Path Sandboxing

All file operations (`list_files`, `read_file`, `write_file`) are sandboxed to the `demo-repo/` directory. Paths that resolve outside the sandbox are rejected. The sandboxing implementation uses `path.resolve` with a trailing-separator check to prevent path traversal attacks (e.g., `../../etc/passwd` or sibling directories such as `demo-repo-evil/`).
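The trailing-separator check described above can be sketched as follows (the function name and root constant are illustrative, not the project's actual code):

```typescript
import * as path from "path";

// Illustrative sandbox root; the real tool anchors this to demo-repo/.
const SANDBOX_ROOT = path.resolve("demo-repo");

function isInsideSandbox(requestedPath: string): boolean {
  const resolved = path.resolve(SANDBOX_ROOT, requestedPath);
  // Appending the separator before comparing prevents sibling-prefix
  // escapes: "demo-repo-evil" starts with "demo-repo" but not with
  // "demo-repo" + path.sep.
  return resolved === SANDBOX_ROOT || resolved.startsWith(SANDBOX_ROOT + path.sep);
}
```

Without the `path.sep` suffix, a bare `startsWith(SANDBOX_ROOT)` check would wrongly accept a sibling directory whose name shares the sandbox prefix.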

### File Size Limits

`read_file` enforces a 256 KB maximum to prevent the agent from loading very large files into context (which could cause denial of service via memory exhaustion).

### Recursion Depth Limits

`list_files` imposes a maximum recursion depth of 20 to prevent stack overflow from deeply nested or symlink-looping directory structures.

### No Secrets in Code

API keys and endpoints are loaded from environment variables (see `.env.example`). No credentials are committed to the repository.
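A fail-fast pattern for this kind of environment loading looks like the sketch below. The helper and the example variable name are illustrative; consult `.env.example` for the real keys.

```typescript
// Illustrative helper: throw at startup if a required variable is unset,
// rather than failing later with a confusing runtime error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. const endpoint = requireEnv("SOME_ENDPOINT_VARIABLE");
```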

## Dependencies

Keep dependencies up to date, and run `npm audit` periodically to check for known vulnerabilities.

## Responsible AI

The agent operates on code you control in a sandboxed environment. However, as with any LLM-powered tool:

- Review all generated code before merging to production.
- Do not trust model output blindly; the model can make mistakes.
- Run the verification phase to ensure tests pass after agent edits.
