The quality gate for Agent Skills.
skill-guard is a CLI tool that validates, secures, and governs Agent Skills across their full lifecycle — from contribution to production monitoring.
Agent Skills are powerful. They're also ungoverned. As soon as more than one person contributes skills to a shared agent, things break in hard-to-diagnose ways:
- A new skill's description overlaps with an existing one → agent picks the wrong skill half the time
- Skills with dangerous scripts get merged because nobody reviewed the `scripts/` directory
- Nobody knows what skills are installed, who owns them, or whether they still work
- A skill passes every test in isolation but fails when the real agent uses it with 25 other skills loaded
skill-guard is the quality gate that catches these problems before they reach production.
ONBOARDING (pre-merge, in CI):
skill-guard validate → format compliance + quality scoring
skill-guard secure → scan for dangerous patterns
skill-guard conflict → detect trigger overlap with existing skills
skill-guard test → runs evals against an OpenAI-compatible endpoint. Supports injection methods: custom_hook (pre/post scripts), directory_copy, and git_push.
skill-guard check → runs validate + secure + conflict as a single gate. Agent evals run if --endpoint is configured.
ONGOING (post-merge, scheduled):
skill-guard monitor → re-run evals, detect drift, manage lifecycle. Run via cron or CI for continuous drift detection. No built-in scheduler.
skill-guard catalog → searchable registry of registered skills (approval workflow planned for v0.7)
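Since there is no built-in scheduler, `skill-guard monitor` is typically wired into a scheduled CI job. A GitHub Actions sketch of that pattern is below; the `monitor` arguments shown are assumptions, so check `skill-guard monitor --help` for the real flags:

```yaml
# .github/workflows/skill-monitor.yml (sketch; monitor arguments are assumptions)
name: skill-monitor
on:
  schedule:
    - cron: "0 6 * * *"   # daily at 06:00 UTC
jobs:
  monitor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install skill-guard
      - run: skill-guard monitor ./skills/
```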
```bash
pip install skill-guard

# Initialize in your skills repo
skill-guard init

# Validate a skill
skill-guard validate ./skills/my-skill/

# Check for security issues
skill-guard secure ./skills/my-skill/

# Skip scanning references/ for injection patterns
skill-guard secure ./skills/my-skill/ --skip-references

# Check for conflicts with existing skills
skill-guard conflict ./skills/my-skill/ --against ./skills/

# Run the full gate (validate + secure + conflict; test runs if --endpoint is configured)
skill-guard check ./skills/my-skill/ --against ./skills/
```

```yaml
# skill-guard.yaml
test:
  endpoint: http://localhost:8000
  model: gpt-4.1
  injection:
    method: directory_copy
    directory_copy_dir: /app/skills

# Or push into a repo that your agent pulls from:
# test:
#   endpoint: http://localhost:8000
#   model: gpt-4.1
#   injection:
#     method: git_push
#     git_repo_path: /path/to/agent-repo
#     git_remote: origin
#     git_branch: main
#     git_skills_dir: skills
```

```text
$ skill-guard validate ./skills/my-skill/
skill-guard validate — my-skill
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Check ┃ Result ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ skill_md_exists │ ✅ SKILL.md found │
│ valid_yaml_frontmatter │ ✅ Valid YAML frontmatter │
│ name_field_present │ ✅ name: my-skill │
│ description_field_present │ ✅ description field present │
│ directory_name_matches │ ✅ Directory name matches skill name │
│ description_trigger_hint │ ✅ Description contains trigger hint ('Use when')│
│ no_broken_body_paths │ ✅ No broken relative paths in SKILL.md body │
│ evals_directory_exists │ ⚠️ No evals/ directory found │
│ │ → Create evals/config.yaml with test cases │
│ metadata_has_author │ ✅ author: my-team │
│ metadata_has_version │ ✅ version: 1.0 │
└───────────────────────────┴──────────────────────────────────────────────────┘
Score: 97/100 | Grade: A | Blockers: 0 | Warnings: 1
```
| Requirement | Version | Notes |
|---|---|---|
| Python | 3.11+ | Required. 3.12 and 3.13 tested. |
| pip | any recent | Bundled with Python |
| typer | ≥0.13.0 | Installed automatically |
| Agent endpoint | — | Required only for skill-guard test (OpenAI-compatible API) |
Note: `skill-guard validate`, `secure`, `conflict`, `init`, `catalog`, and `check` work fully offline; no agent or API key is needed.
```bash
# Core (static analysis — no agent required)
pip install skill-guard

# Optional embeddings support
pip install "skill-guard[embeddings]"

# Optional LLM-based conflict detection
pip install "skill-guard[llm]"
```

```bash
# TF-IDF (default)
skill-guard conflict ./skills/my-skill/ --against ./skills/ --method tfidf

# Embeddings-based overlap detection
skill-guard conflict ./skills/my-skill/ --against ./skills/ --method embeddings

# Choose a different embeddings model
skill-guard conflict ./skills/my-skill/ --against ./skills/ --method embeddings --model all-MiniLM-L12-v2

# Offline embeddings (local model only; no downloads)
skill-guard conflict ./skills/my-skill/ --against ./skills/ --method embeddings \
  --model-path /models/all-MiniLM-L6-v2 --offline

# LLM-based overlap detection
export OPENAI_API_KEY=...
skill-guard conflict ./skills/my-skill/ --against ./skills/ --method llm
```

`embeddings` uses the `all-MiniLM-L6-v2` sentence-transformers model by default (override with `--model`, `--model-path`, or `conflict.embeddings_model`/`conflict.embeddings_model_path`) and caches downloads under `conflict.embeddings_cache_dir` (default `.skill-guard-cache/embeddings/`). On first download, it prints a "Downloading model..." message to stderr. Use `--offline` to require a local/cached model and skip downloads. `llm` uses the OpenAI Chat API with `gpt-4o-mini` by default.
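The same conflict settings can also live in `skill-guard.yaml`, using the keys named above (the values here are illustrative):

```yaml
# skill-guard.yaml (conflict settings; values are examples)
conflict:
  embeddings_model: all-MiniLM-L12-v2
  # embeddings_model_path: /models/all-MiniLM-L6-v2   # for --offline runs
  embeddings_cache_dir: .skill-guard-cache/embeddings/
```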
Add conflict_ignore to your SKILL.md frontmatter to skip comparisons against specific skills:
```yaml
---
name: my-skill
description: "Use when ..."
conflict_ignore:
  - legacy-skill
  - skills/legacy-skill/SKILL.md
---
```

- Getting Started
- End-to-End Integration Guide ← start here for real agent setup
- Writing Evals
- Hook Scripts
- CI/CD Integration
- Configuration Reference
`skill-guard validate` includes Anthropic Agent Skills spec compliance checks by default. Set `validate.anthropic_spec: false` in `skill-guard.yaml` if you need to disable those additional findings.
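For example, disabling the spec checks in `skill-guard.yaml`:

```yaml
# skill-guard.yaml
validate:
  anthropic_spec: false
```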
Exit codes:

- `0`: success
- `1`: validation/security failures
- `2`: warnings only (when `fail_on_warning` is false)
- `3`: config error
- `4`: parse error
- `5`: hook script failure
- `6`: health check timeout
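In CI, these codes decide what blocks a merge. A minimal shell sketch of the pattern follows; `run_gate` is a stand-in for the real `skill-guard check ...` invocation, hard-coded here to exit 2 ("warnings only") so the branch logic is visible:

```shell
#!/bin/sh
# Stand-in for `skill-guard check ...`; pretend the gate reported warnings only.
run_gate() { return 2; }

run_gate
status=$?

# Pass on success (0) or warnings-only (2); block the merge on anything else.
if [ "$status" -eq 0 ] || [ "$status" -eq 2 ]; then
  echo "gate passed (status=$status)"
else
  echo "gate failed (status=$status)"
  exit "$status"
fi
```

With the stand-in above, this prints `gate passed (status=2)`.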
Use pre-commit to enforce checks before skill changes land:
```yaml
repos:
  - repo: https://github.com/vaibhavtupe/skill-guard
    rev: v0.6.0
    hooks:
      - id: skill-guard-validate
      - id: skill-guard-secure
      - id: skill-guard-check
```

These hooks run against changed SKILL.md files, deduplicate by skill root, and then execute the corresponding skill-guard command for each affected skill.
Use `skill-guard init --template base` to scaffold a new skill, or `skill-guard init --list-templates` to see the available scaffolds. Generated templates include `SKILL.md`, `evals/`, `references/`, `scripts/`, and `assets/` so they validate immediately.
```yaml
- uses: vaibhavtupe/skill-guard-action@v1
  with:
    path: ./skills/my-skill
```

See vaibhavtupe/skill-guard-action for the full action repo.

Use the separate action repo `vaibhavtupe/skill-guard-action@v1` in workflows:

```yaml
- uses: vaibhavtupe/skill-guard-action@v1
  with:
    command: check
    path: ./skills/my-skill
    against: ./skills/
```

- Does not replace Anthropic's skill-creator for writing skills
- Does not host or serve skills — skills live in your repo
- Does not modify skills — it reports issues, authors fix them
- Does not require a database or server — the catalog is a YAML file in your repo
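Since the catalog is just a YAML file, it can be reviewed and versioned like any other source. A sketch of what an entry might contain; the field names here are assumptions for illustration, not skill-guard's documented schema:

```yaml
# catalog file (illustrative sketch only; field names are assumptions)
skills:
  - name: my-skill
    owner: my-team
    version: "1.0"
    path: skills/my-skill/
```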
See CONTRIBUTING.md. We welcome contributions of all kinds.
Apache 2.0. See LICENSE.