# CLI Usage

This page contains the detailed CLI and injection examples that were previously inline in the README. For a shorter entry path, see the README and the quickstart guide.

## CLI Usage (MVP)

```shell
# Artifacts
agent-artifacts artifact publish ./skills/deploy_fastapi.yaml --name deploy_fastapi --version 1.0.0
agent-artifacts artifact get deploy_fastapi@1.0.0 --format spec
agent-artifacts artifact tag deploy_fastapi@1.0.0 --add stable
agent-artifacts artifact get deploy_fastapi@stable --format spec

# Transactions
agent-artifacts tx begin --actor "agent" --reason "user requested deploy"
agent-artifacts tx stage <tx_id> --key "user.preference.deploy_mode" --value docker_compose --type preference --source user --confidence 0.9
agent-artifacts tx validate <tx_id> --pipeline default --confidence-threshold 0.85
agent-artifacts tx validate <tx_id> --pipeline default --verifier sample
agent-artifacts tx validate <tx_id> --status approved --confidence 0.95 --evidence "user confirmed" --validator human
agent-artifacts tx commit <tx_id>
agent-artifacts tx commit <tx_id> --supersede
agent-artifacts tx status <tx_id> --format json
agent-artifacts tx staged <tx_id> --format json
```

Commits require an approved validation record. Use `tx validate` (or `tx approve`) before calling `tx commit`; otherwise the commit will fail.
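The commit gate above can be sketched in a few lines. This is an illustrative model only, not the library's actual implementation; the `ValidationRecord` fields and the `can_commit` helper are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ValidationRecord:
    status: str        # "approved" or "rejected"
    confidence: float
    validator: str     # e.g. "human", "sample"

def can_commit(records: list[ValidationRecord], threshold: float = 0.85) -> bool:
    """A transaction is committable only if at least one validation
    record is approved and meets the confidence threshold."""
    return any(r.status == "approved" and r.confidence >= threshold for r in records)

records = [ValidationRecord("approved", 0.95, "human")]
print(can_commit(records))  # an approved, high-confidence record unlocks commit
```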

`--verifier sample` uses a built-in stub verifier that returns a canned response for demonstration.

## Validation policies (JSON/YAML)

```shell
agent-artifacts tx validate <tx_id> --policy configs/policies/default.yaml
agent-artifacts tx approve <tx_id> --validator "alice" --evidence "reviewed"
agent-artifacts tx reject <tx_id> --validator "alice" --evidence "contains PII"
```

## Memory governance (query, resolve, redact, export)

```shell
agent-artifacts memory query --key "user.preference.deploy_mode"
agent-artifacts memory query --source user --min-confidence 0.8
agent-artifacts memory resolve --key "user.preference.deploy_mode" --strategy latest
agent-artifacts memory scan-pii --limit 50
agent-artifacts memory redact --entry-id <entry_id> --reason "PII removal" --actor "admin"
agent-artifacts memory redact --policy ./policies/redact.yaml --reason "PII policy"
agent-artifacts memory export --output snapshot.json
```

Redaction policy format and examples: Memory redaction.

Supersede strategy:

```shell
# Mark prior entries with the same key/type as superseded on commit
agent-artifacts tx commit <tx_id> --supersede
```
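Conceptually, `--supersede` marks prior entries with the same key/type as superseded when the new entry lands. A minimal sketch of that behavior, with assumed field names and an in-memory list standing in for real storage:

```python
def commit_with_supersede(store: list[dict], new_entry: dict) -> None:
    """Mark prior entries sharing key/type as superseded, then append."""
    for entry in store:
        if entry["key"] == new_entry["key"] and entry["type"] == new_entry["type"]:
            entry["superseded"] = True
    store.append({**new_entry, "superseded": False})

store = [{"key": "k", "type": "preference", "value": "a", "superseded": False}]
commit_with_supersede(store, {"key": "k", "type": "preference", "value": "b"})
print([e["superseded"] for e in store])  # old entry flagged, new one live
```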

## Memory consumption (prompt injection)

```python
from agent_artifacts.memory import MemoryInjectionHook, MemoryInjectionConfig, MemoryQueryConfig

adapter.pipeline.register(
    MemoryInjectionHook(
        query=MemoryQueryConfig(keys=["user.preference.deploy_mode"], limit=10),
        config=MemoryInjectionConfig(max_tokens=120),
    )
)

prompt = adapter.prepare_prompt(base_prompt, ctx)
```
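The `max_tokens=120` budget above caps how much memory text is injected. A minimal sketch of that kind of budgeting, assuming a crude whitespace token count (the hook's real accounting may differ):

```python
def trim_to_budget(lines: list[str], max_tokens: int) -> list[str]:
    """Keep whole lines, in order, until the token budget is exhausted."""
    kept, used = [], 0
    for line in lines:
        cost = len(line.split())  # crude stand-in for a real tokenizer
        if used + cost > max_tokens:
            break
        kept.append(line)
        used += cost
    return kept

lines = ["deploy_mode: docker_compose", "region: eu-west-1 preferred by user"]
print(trim_to_budget(lines, 4))  # only the first line fits the budget
```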

More details: Context budgeting.

Policy format details: Validation policy.

## Skill + Trace injection

```python
from agent_artifacts.skills import SkillInjectionHook, SkillInjectionConfig, SkillQueryConfig
from agent_artifacts.audit import TraceInjectionHook, TraceInjectionConfig, TraceQueryConfig

adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(refs=["deploy_fastapi@stable"]),
        config=SkillInjectionConfig(max_tokens=200, mode="summary"),
    )
)
adapter.pipeline.register(
    TraceInjectionHook(
        query=TraceQueryConfig(decisions=["execute_skill"], limit=5),
        config=TraceInjectionConfig(max_tokens=120),
    )
)
```

Minimal flow example (LangGraph adapter):

```python
from agent_artifacts.adapters.langgraph import LangGraphAdapter
from agent_artifacts.skills import SkillInjectionHook, SkillInjectionConfig, SkillQueryConfig
from agent_artifacts.audit import TraceInjectionHook, TraceQueryConfig

adapter = LangGraphAdapter(storage)
adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(tags=["stable"]),
        config=SkillInjectionConfig(mode="summary"),
    )
)
adapter.pipeline.register(TraceInjectionHook(query=TraceQueryConfig(limit=3)))

ctx = adapter.start_context(actor="agent", reason="handle request")
prompt = adapter.prepare_prompt("Base prompt.", ctx)
```

Skill resolution modes:

- `mode="ref"`: emit only `skill@version` references
- `mode="summary"`: emit references plus brief metadata (default)
- `mode="full"`: emit full skill specs (useful for LLMs that need full instructions)
- `mode="adaptive"`: choose full/summary/ref per skill based on the remaining token budget
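The adaptive mode can be pictured as a simple selection loop: for each skill, walk `adaptive_order` and pick the richest representation that still fits the remaining budget. This is a hedged sketch; the per-mode costs and the fallback behavior are assumptions, not the library's actual logic.

```python
def pick_mode(costs: dict[str, int], remaining: int,
              order: tuple[str, ...] = ("full", "summary", "ref")) -> str:
    """Return the first mode in `order` whose cost fits the remaining budget."""
    for mode in order:
        if costs[mode] <= remaining:
            return mode
    return "ref"  # fall back to the cheapest representation

costs = {"full": 150, "summary": 40, "ref": 5}
print(pick_mode(costs, 200))  # plenty of budget: inject the full spec
print(pick_mode(costs, 60))   # tighter budget: fall back to a summary
```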

Token-saving example (adaptive):

```python
adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(tags=["stable"]),
        config=SkillInjectionConfig(
            mode="adaptive",
            max_tokens=200,
            adaptive_order=["full", "summary", "ref"],
        ),
    )
)
```

Optional summary fields:

- `include_inputs=True`: include input field names
- `include_outputs=True`: include output field names
- `include_steps=True`: include workflow step summaries
- `io_schema_mode="schema"`: include a compact input/output schema in JSON summaries

Minimal runnable example for `mode="full"`:

```shell
python examples/skills/resolve_full_prompt.py
```

CLI helper to render a prompt section from a stored skill:

```shell
agent-artifacts artifact resolve deploy_fastapi@1.0.0 --mode full --format json
```

Resolve a local skill file without publishing:

```shell
agent-artifacts artifact resolve --path ./examples/skills/deploy_fastapi_service.yaml --mode summary --format json --include-inputs --io-schema-mode schema
```