Many cryptographic primitives could reshape human-to-human communication in our business and personal lives, yet they rarely do, because cryptography is complex and the math is hard to “do in your head.”
Agents could learn that. They can map mundane intents to primitives and instantiate protocols at runtime. Scheduling a meeting can use Private Set Intersection (PSI) instead of sharing calendars. “Prove you’re over 21” at the bar can use a Zero-Knowledge Proof (ZKP) (with nonce/challenge anti-replay) instead of photocopying an ID. Anonymous reporting with verifiable membership can use anonymous credentials / ring signatures / group signatures to solve spam. Tip tokens can use blind signatures so the issuer can’t link purchase to spend while still preventing double spending. The list is actually pretty long.
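To give a flavor of the PSI case above, here is a toy commutative-masking (Diffie-Hellman-style) intersection sketch. This is my illustration, not code from this repo, and not a vetted protocol: a real deployment would use a proper group, hashing-to-group, and an audited PSI library.

```python
# Toy PSI via commutative masking -- for intuition only, NOT production crypto.
import hashlib
import secrets

P = 2**255 - 19  # a large prime (the Curve25519 field prime), standing in for a real group


def h2i(item: str) -> int:
    """Hash an item to an integer mod P."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P


def intersect(alice_items: set[str], bob_items: set[str]) -> set[str]:
    a = secrets.randbelow(P - 2) + 2  # Alice's secret exponent
    b = secrets.randbelow(P - 2) + 2  # Bob's secret exponent
    # Alice sends H(x)^a; Bob raises each to b, yielding H(x)^(a*b).
    alice_double = {pow(pow(h2i(x), a, P), b, P): x for x in alice_items}
    # Bob sends H(y)^b; Alice raises each to a, yielding H(y)^(a*b).
    bob_double = {pow(pow(h2i(y), b, P), a, P) for y in bob_items}
    # Matching double-masked values reveal only the common items.
    return {x for masked, x in alice_double.items() if masked in bob_double}
```

Running `intersect({"Mon 10:00", "Tue 14:00"}, {"Tue 14:00", "Wed 09:00"})` returns `{"Tue 14:00"}`: the parties learn the shared slot without exchanging their full calendars.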
Achieving this is a multidimensional challenge. Agents must: (1) "spot" and select the right primitive in an everyday context, (2) negotiate adoption with another agent, (3) implement the protocol correctly, (4) use crypto tools and computation competently, and (5) reason about threats and security strength. These are exactly the five judging dimensions of Protocol Agent, a benchmark that measures not “crypto knowledge” in the abstract (which has already been studied), but the practical ability to apply cryptography to improve daily life.
This benchmark is the first step in a larger effort (more coming in Q1 2026): post-training models that perform better on it.
- Human-readable: Read here
An A2A (Agent-to-Agent) green agent compatible with the AgentBeats platform.
Protocol Agent benchmarks a single purple agent via self-play on the crypto conversational challenges from `benchmark_challenges_diverse_v1.json`, scoring with the same rubric dimensions as the arena:
- Primitive Selection
- Negotiation Skills
- Implementation Correctness
- Computation / Tool Usage
- Security Strength
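For intuition, a minimal aggregation sketch over these five dimensions (hypothetical key names and an unweighted mean; the repo's actual logic lives in `src/scoring.py` and may weight dimensions differently):

```python
# Hypothetical rubric aggregation -- dimension names and equal weighting are
# my assumptions, not taken from src/scoring.py.
DIMENSIONS = [
    "primitive_selection",
    "negotiation_skills",
    "implementation_correctness",
    "computation_tool_usage",
    "security_strength",
]


def aggregate(scores: dict[str, float]) -> float:
    """Unweighted mean over the five rubric dimensions (assumed 0..1 scale)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
```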
This repo is standalone for local demo runs: it includes a local baseline purple agent (`baseline_purple/`) and a one-command runner that streams the multi-role conversation as it runs.
```
src/
├─ server.py             # Server setup and agent card configuration
├─ executor.py           # A2A request handling
├─ agent.py              # Protocol Agent implementation (entrypoint)
├─ benchmark_schema.py   # Benchmark JSON loader + datamodel
├─ runner.py             # Self-play match runner
├─ judge_openai.py       # OpenAI judge wrapper
├─ scoring.py            # Outcome + aggregation (arena-aligned)
└─ messenger.py          # A2A messaging utilities
baseline_purple/
├─ src/                  # Local baseline purple agent (A2A server)
└─ requirements.txt
scripts/
├─ run_local.sh          # One-command local end-to-end runner
└─ run_client.py         # Local streaming client (prints turns + result artifact)
tests/
└─ test_agent.py         # Agent tests
Dockerfile               # Docker configuration
pyproject.toml           # Python dependencies
.github/
└─ workflows/
   └─ test-and-publish.yml  # CI workflow
```
- Set env vars:

```bash
export OPENAI_API_KEY="...your key..."
export OPENAI_MODEL_JUDGE="gpt-4.1-mini"
export OPENAI_MODEL_PARTICIPANT="gpt-4.1-mini"
```

- Run:

```bash
./scripts/run_local.sh
```

You should see streamed lines like:

```
turn 1 | Alice: ...
turn 2 | Bob: ...
```
and then a final Result artifact (JSON + summary).
Run the server:

```bash
python3 src/server.py --host 127.0.0.1 --port 9009
```

Example assessment request:

```json
{
  "participants": { "agent": "http://localhost:9019" },
  "config": {
    "benchmark_path": "assets/benchmark_challenges_diverse_v1.json",
    "limit_challenges": 1,
    "max_turns": 4,
    "repetitions": 1,
    "seed": 0,
    "include_transcripts": false,
    "timeout_s_per_turn": 300
  }
}
```

Environment variables:

- `OPENAI_API_KEY`: required for judging.
- `OPENAI_MODEL_JUDGE`: e.g. `gpt-4.1-mini`.
- `OPENAI_BASE_URL` (optional): defaults to `https://api.openai.com/v1/responses`.
Build:

```bash
docker build --platform linux/amd64 -t protocol-agent:local .
```

Run:

```bash
docker run -p 9009:9009 protocol-agent:local
```

The repository includes a GitHub Actions workflow that automatically builds, tests, and publishes a Docker image of your agent to GitHub Container Registry.
If your agent needs API keys or other secrets, add them in Settings → Secrets and variables → Actions → Repository secrets. They'll be available as environment variables during CI tests.
- Push to `main` → publishes the `latest` tag:
  - `ghcr.io/<your-username>/<your-repo-name>:latest`
- Create a git tag (e.g. `git tag v1.0.0 && git push origin v1.0.0`) → publishes version tags:
  - `ghcr.io/<your-username>/<your-repo-name>:1.0.0`
  - `ghcr.io/<your-username>/<your-repo-name>:1`
Once the workflow completes, find your Docker image in the Packages section (right sidebar of your repository). Configure the package visibility in package settings.
Note: Organization repositories may need package write permissions enabled manually (Settings → Actions → General). Version tags must follow semantic versioning (e.g., `v1.0.0`).