This page contains the detailed CLI and injection examples that were previously inline in the README. For a shorter entry path, see the README and the quickstart guide.
# Artifacts
```shell
agent-artifacts artifact publish ./skills/deploy_fastapi.yaml --name deploy_fastapi --version 1.0.0
agent-artifacts artifact get deploy_fastapi@1.0.0 --format spec
agent-artifacts artifact tag deploy_fastapi@1.0.0 --add stable
agent-artifacts artifact get deploy_fastapi@stable --format spec
```
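The `@stable` lookup above works because a tag is an alias for a concrete version. A minimal, library-independent sketch of that resolution; the in-memory `registry` structure here is hypothetical and not the storage format `agent-artifacts` actually uses:

```python
# Hypothetical in-memory registry: concrete versions plus a tag -> version map.
registry = {
    "deploy_fastapi": {
        "versions": {"1.0.0": {"spec": "..."}},
        "tags": {"stable": "1.0.0"},
    }
}

def resolve_ref(ref):
    """Resolve 'name@version' or 'name@tag' to the stored spec."""
    name, _, selector = ref.partition("@")
    entry = registry[name]
    # A tag is an alias: follow it to the concrete version first.
    version = entry["tags"].get(selector, selector)
    return entry["versions"][version]

# name@tag and name@version resolve to the same stored spec.
assert resolve_ref("deploy_fastapi@stable") is resolve_ref("deploy_fastapi@1.0.0")
```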
# Transactions
```shell
agent-artifacts tx begin --actor "agent" --reason "user requested deploy"
agent-artifacts tx stage <tx_id> --key "user.preference.deploy_mode" --value docker_compose --type preference --source user --confidence 0.9
agent-artifacts tx validate <tx_id> --pipeline default --confidence-threshold 0.85
agent-artifacts tx validate <tx_id> --pipeline default --verifier sample
agent-artifacts tx validate <tx_id> --status approved --confidence 0.95 --evidence "user confirmed" --validator human
agent-artifacts tx commit <tx_id>
agent-artifacts tx commit <tx_id> --supersede
agent-artifacts tx status <tx_id> --format json
agent-artifacts tx staged <tx_id> --format json
```

Commits require an approved validation record. Use `tx validate` (or `tx approve`) before calling `tx commit`; otherwise the commit will fail.
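The commit rule above amounts to a small state machine: entries are staged, a validation record is attached, and only an approved record unlocks commit. A conceptual sketch of that flow, not the library's actual implementation:

```python
class Transaction:
    """Minimal sketch of the staged -> validated -> committed flow."""

    def __init__(self):
        self.staged = []
        self.validation = None  # None, "approved", or "rejected"
        self.committed = False

    def stage(self, key, value):
        self.staged.append({"key": key, "value": value})

    def validate(self, status):
        self.validation = status

    def commit(self):
        # Commits require an approved validation record.
        if self.validation != "approved":
            raise RuntimeError("commit requires an approved validation record")
        self.committed = True

tx = Transaction()
tx.stage("user.preference.deploy_mode", "docker_compose")
try:
    tx.commit()  # rejected: no approved validation yet
except RuntimeError:
    pass
tx.validate("approved")
tx.commit()      # succeeds after approval
```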
`--verifier sample` uses a built-in stub verifier that returns a canned response, for demonstration only.
```shell
agent-artifacts tx validate <tx_id> --policy configs/policies/default.yaml
agent-artifacts tx approve <tx_id> --validator "alice" --evidence "reviewed"
agent-artifacts tx reject <tx_id> --validator "alice" --evidence "contains PII"
```

# Memory

```shell
agent-artifacts memory query --key "user.preference.deploy_mode"
agent-artifacts memory query --source user --min-confidence 0.8
agent-artifacts memory resolve --key "user.preference.deploy_mode" --strategy latest
agent-artifacts memory scan-pii --limit 50
agent-artifacts memory redact --entry-id <entry_id> --reason "PII removal" --actor "admin"
agent-artifacts memory redact --policy ./policies/redact.yaml --reason "PII policy"
agent-artifacts memory export --output snapshot.json
```

Redaction policy format and examples: Memory redaction.
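The `--strategy latest` used by `memory resolve` above can be sketched in plain Python. The entry fields here (`key`, `value`, `confidence`, a creation timestamp) mirror the CLI flags but are illustrative, not the store's actual schema:

```python
from datetime import datetime

entries = [
    {"key": "user.preference.deploy_mode", "value": "kubernetes",
     "created_at": datetime(2024, 1, 5), "confidence": 0.8},
    {"key": "user.preference.deploy_mode", "value": "docker_compose",
     "created_at": datetime(2024, 3, 1), "confidence": 0.9},
]

def resolve_latest(entries, key):
    """Pick the most recently created entry for a key, or None if absent."""
    matching = [e for e in entries if e["key"] == key]
    return max(matching, key=lambda e: e["created_at"]) if matching else None

winner = resolve_latest(entries, "user.preference.deploy_mode")
assert winner["value"] == "docker_compose"
```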
Supersede strategy:

```shell
# Mark prior entries with the same key/type as superseded on commit
agent-artifacts tx commit <tx_id> --supersede
```

Memory injection hook example:

```python
from agent_artifacts.memory import MemoryInjectionHook, MemoryInjectionConfig, MemoryQueryConfig

adapter.pipeline.register(
    MemoryInjectionHook(
        query=MemoryQueryConfig(keys=["user.preference.deploy_mode"], limit=10),
        config=MemoryInjectionConfig(max_tokens=120),
    )
)
prompt = adapter.prepare_prompt(base_prompt, ctx)
```

More details: Context budgeting.
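The supersede strategy described above (marking prior entries with the same key/type on commit) can be illustrated with a small, library-independent sketch; the entry dictionaries and the `superseded` flag are hypothetical, not the store's real schema:

```python
def commit_with_supersede(store, new_entries):
    """Mark prior entries sharing a (key, type) pair as superseded, then append."""
    incoming = {(e["key"], e["type"]) for e in new_entries}
    for existing in store:
        if (existing["key"], existing["type"]) in incoming:
            existing["superseded"] = True
    store.extend({**e, "superseded": False} for e in new_entries)

store = [{"key": "user.preference.deploy_mode", "type": "preference",
          "value": "kubernetes", "superseded": False}]
commit_with_supersede(store, [{"key": "user.preference.deploy_mode",
                               "type": "preference", "value": "docker_compose"}])
# The old entry is now superseded; the new one is current.
assert store[0]["superseded"] is True
assert store[1]["value"] == "docker_compose"
```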
Policy format details: Validation policy.
Skill and trace injection hooks:

```python
from agent_artifacts.skills import SkillInjectionHook, SkillInjectionConfig, SkillQueryConfig
from agent_artifacts.audit import TraceInjectionHook, TraceInjectionConfig, TraceQueryConfig

adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(refs=["deploy_fastapi@stable"]),
        config=SkillInjectionConfig(max_tokens=200, mode="summary"),
    )
)
adapter.pipeline.register(
    TraceInjectionHook(
        query=TraceQueryConfig(decisions=["execute_skill"], limit=5),
        config=TraceInjectionConfig(max_tokens=120),
    )
)
```

Minimal flow example (LangGraph adapter):
```python
from agent_artifacts.adapters.langgraph import LangGraphAdapter
from agent_artifacts.skills import SkillInjectionHook, SkillInjectionConfig, SkillQueryConfig
from agent_artifacts.audit import TraceInjectionHook, TraceQueryConfig

# storage: an existing artifact store instance, configured elsewhere
adapter = LangGraphAdapter(storage)
adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(tags=["stable"]),
        config=SkillInjectionConfig(mode="summary"),
    )
)
adapter.pipeline.register(TraceInjectionHook(query=TraceQueryConfig(limit=3)))
ctx = adapter.start_context(actor="agent", reason="handle request")
prompt = adapter.prepare_prompt("Base prompt.", ctx)
```

Skill resolution modes:
- `mode="ref"`: emit only `skill@version` references
- `mode="summary"`: emit references plus brief metadata (default)
- `mode="full"`: emit full skill specs (useful for LLMs that need full instructions)
- `mode="adaptive"`: choose full/summary/ref per skill based on the remaining token budget
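The adaptive mode's budget-driven fallback can be sketched as follows; the token counts and skill shapes here are made up for illustration, and this is a conceptual sketch rather than the library's actual selection logic:

```python
def render_adaptive(skills, budget, order=("full", "summary", "ref")):
    """For each skill, emit the richest representation that still fits the budget."""
    rendered = []
    for skill in skills:
        for mode in order:
            cost = skill["tokens"][mode]
            if cost <= budget:
                rendered.append((skill["name"], mode))
                budget -= cost
                break
        # Skills that fit in no representation are silently skipped.
    return rendered

skills = [
    {"name": "deploy_fastapi", "tokens": {"full": 150, "summary": 40, "ref": 5}},
    {"name": "rollback", "tokens": {"full": 120, "summary": 35, "ref": 5}},
]
# With a 200-token budget the first skill fits in full (150 tokens, 50 left),
# so the second falls back to summary (35 tokens).
assert render_adaptive(skills, 200) == [("deploy_fastapi", "full"), ("rollback", "summary")]
```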
Token-saving example (adaptive):
```python
adapter.pipeline.register(
    SkillInjectionHook(
        query=SkillQueryConfig(tags=["stable"]),
        config=SkillInjectionConfig(
            mode="adaptive",
            max_tokens=200,
            adaptive_order=["full", "summary", "ref"],
        ),
    )
)
```

Optional summary fields:
- `include_inputs=True`: include input field names
- `include_outputs=True`: include output field names
- `include_steps=True`: include workflow step summaries
- `io_schema_mode="schema"`: include a compact input/output schema in JSON summaries
Minimal runnable example for `mode="full"`:

```shell
python examples/skills/resolve_full_prompt.py
```

CLI helper to render a prompt section from a stored skill:
```shell
agent-artifacts artifact resolve deploy_fastapi@1.0.0 --mode full --format json
```

Resolve a local skill file without publishing:
```shell
agent-artifacts artifact resolve --path ./examples/skills/deploy_fastapi_service.yaml --mode summary --format json --include-inputs --io-schema-mode schema
```