# Claude Code Skill for api-testing (atest)

This skill helps you create, manage, and debug API test suites using the api-testing tool.
## Installation

1. Copy this skill file to your Claude skills directory:

   ```shell
   # On macOS/Linux
   cp claude-skill.md ~/.claude/skills/api-testing.md

   # On Windows
   copy claude-skill.md %USERPROFILE%\.claude\skills\api-testing.md
   ```

2. Restart Claude Code to load the skill.

3. Use trigger phrases like:
- "create an API test suite"
- "write a test for API"
- "create grpc test"
- "API load testing"
- "mock API server"
api-testing is a comprehensive API testing framework that supports:
- Multi-protocol: HTTP/REST, gRPC, GraphQL
- Load Testing: Duration-based, thread-based, QPS-based
- Mock Server: Create mock APIs from test suites
- Code Generation: Generate test code in Go, Python, Java, JavaScript
- Multiple Reports: Markdown, HTML, PDF, JSON, Prometheus
- Web UI: Built-in interface for test management
You are an expert in the api-testing tool (CLI command: atest), a comprehensive API testing framework written in Go. Help users create test suites in YAML format, run tests via CLI, debug failures, and utilize advanced features like load testing, mocking, and multi-protocol support (HTTP, gRPC, GraphQL).
- "create an API test suite"
- "write a test for API"
- "api-testing test" or "atest test"
- "create grpc test"
- "API load testing"
- "mock API server"
- "convert API tests"
- "debug test suite"
All test suites use YAML format with these key components:
```yaml
#!api-testing
name: TestSuiteName
api: https://api.example.com
param:
  key: value
items:
  - name: testCaseName
    request:
      api: /endpoint
      method: GET
      header:
        Authorization: Bearer {{.token}}
      body: |
        {"key": "value"}
    expect:
      statusCode: 200
      verify:
        - data.field == "expected"
before:
  items:
    - "setupCommand()"
after:
  items:
    - "teardownCommand()"
```

Supported protocols:
- HTTP/REST (default): Standard REST APIs
- gRPC: gRPC services with proto files
- GraphQL: GraphQL endpoints
Key features:
- Templating: Uses Sprig template functions with custom additions
- Data Sharing: Response data from one test is available to subsequent tests via `data.field`
- Verification: Expression-based verification using the `expr` library
- Load Testing: Duration-based, thread-based, or QPS-based
- Mock Server: Create mock APIs from test suites
- Report Formats: Markdown, HTML, PDF, JSON, Prometheus
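Putting the templating and verification features together, a suite can pull its base URL from the environment and verify response fields via the `data.` prefix. This sketch reuses only syntax shown elsewhere in this document; the `/items` endpoint and `token` value are hypothetical:

```yaml
#!api-testing
name: feature-demo
api: {{default "http://localhost:8080" (env "SERVER")}}
param:
  token: my-token
items:
  - name: listItems
    request:
      api: /items
      header:
        Authorization: Bearer {{.token}}
    expect:
      statusCode: 200
      verify:
        - len(data.items) > 0
```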
```shell
# Run a test suite
atest run -p testsuite.yaml

# Run with load testing
atest run -p testsuite.yaml --duration 1m --thread 3 --qps 10

# Run with a specific report format
atest run -p testsuite.yaml --report md --report-file report.md

# Generate a sample test suite
atest sample

# Start server mode with web UI
atest server --port 7070 --http-port 8080

# Create a mock server
atest mock -p testsuite.yaml --port 9090

# Convert tests to JMeter
atest convert -p testsuite.yaml --converter jmeter -t output.jmx

# Install as a service
atest service install
```

Before running test suites, always verify the target service is accessible:
```shell
# Check if the service is running
curl -f http://localhost:8080/health || echo "Service not running"

# Check a specific port
nc -z localhost 8080 && echo "Port open" || echo "Port closed"
```

```powershell
# On Windows
Test-NetConnection -ComputerName localhost -Port 8080
```

When an e2e/ directory exists with docker-compose files:
```shell
# Detect and start services in the e2e directory
if [ -d "e2e" ]; then
  cd e2e && docker compose up -d
fi

# Or use a specific compose file
docker compose -f e2e/compose.yaml up -d

# Check service health status
docker compose -f e2e/compose.yaml ps

# View logs
docker compose -f e2e/compose.yaml logs -f
```

Common e2e directory patterns:
```
project/
├── e2e/
│   ├── compose.yaml            # Main compose file
│   ├── compose-external.yaml   # External services
│   ├── compose-k8s.yaml        # Kubernetes-specific
│   └── test-suite.yaml         # Test definitions
├── docker-compose.yaml         # Root-level compose
└── Makefile                    # May contain test targets
```
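For reference, a minimal e2e/compose.yaml matching this layout might look like the following; the service name, image, and port are illustrative assumptions, not taken from a real project:

```yaml
# e2e/compose.yaml (illustrative sketch; image and port are assumptions)
services:
  server:
    image: example/api-server:latest
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 5s
      timeout: 3s
      retries: 12
```

The healthcheck block is what makes `docker compose ps` report "healthy", which the wait loops below rely on.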
```shell
# Wait for the service health endpoint
while ! curl -f http://localhost:8080/health; do
  echo "Waiting for service..."
  sleep 2
done
```
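The loop above retries forever, which can hang a CI job. A bounded variant is safer; here is a minimal POSIX-sh sketch (the health URL in the comment is an assumption):

```shell
#!/bin/sh
# wait_for: retry a command until it succeeds or attempts run out.
# Usage: wait_for "<command>" [max_attempts] [delay_seconds]
wait_for() {
  cmd=$1
  max=${2:-30}
  delay=${3:-2}
  n=0
  until eval "$cmd" >/dev/null 2>&1; do
    n=$((n + 1))
    # Give up after max attempts instead of looping forever
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
  done
}

# Real usage would poll the health endpoint, e.g.:
#   wait_for 'curl -sf http://localhost:8080/health' 30 2 || exit 1
# Demo with a command that succeeds immediately:
wait_for true 5 1 && echo "ready"
```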
```shell
# Check docker compose health status
docker compose -f e2e/compose.yaml ps --format json | \
  jq -r '.[] | select(.Health != "healthy" and .State == "running") | .Service'

# For services with health checks
docker compose -f e2e/compose.yaml ps | grep -q "healthy" || echo "Services not ready"
```

```shell
# Check required environment variables
env | grep -E "TOKEN|PASSWORD|SECRET" || echo "Missing required env vars"

# Export required variables
export GITEE_TOKEN="your-token"
export GITHUB_TOKEN="your-token"

# Or use a .env file
docker compose --env-file .env up -d
```

```shell
# Stop and remove services after testing
cd e2e && docker compose down

# Remove volumes (careful: deletes data)
docker compose -f e2e/compose.yaml down -v

# Keep services running for debugging
# docker compose -f e2e/compose.yaml ps
```

A complete pre-test workflow:
```shell
#!/bin/bash
set -e

echo "🔍 Checking project structure..."
if [ -d "e2e" ]; then
  echo "📦 Found e2e directory, starting services..."
  cd e2e
  docker compose up -d
  echo "⏳ Waiting for services to be healthy..."
  timeout 60 bash -c 'until docker compose ps | grep -q "healthy"; do sleep 2; done'
  echo "✅ Services are ready"
  docker compose ps
  cd ..
else
  echo "⚠️ No e2e directory found, assuming service is already running"
fi

echo "🧪 Running tests..."
atest run -p testsuite.yaml

echo "🧹 Cleaning up..."
if [ -d "e2e" ]; then
  cd e2e && docker compose down
fi
```

Suite fields:
- `name`: Suite name
- `api`: Base API URL (supports templates: `{{default "http://localhost:8080" (env "SERVER")}}`)
- `param`: Global parameters available in all tests
- `spec`: Protocol specification (for gRPC/GraphQL)
- `items`: Array of test cases
Request:
- `name`: Test case name
- `request.api`: Endpoint path
- `request.method`: HTTP method (GET, POST, PUT, DELETE, etc.)
- `request.header`: Request headers
- `request.body`: Request body (supports templates)
- `request.cookie`: Cookies to send
- `request.url`: Full URL (overrides the base `api` + `request.api` combination)
- `request.form`: Form data
- `request.files`: File uploads
Expect:
- `expect.statusCode`: Expected HTTP status code
- `expect.body`: Expected response body
- `expect.bodyFieldsExpect`: Expected field values
- `expect.schema`: JSON schema for validation
- `expect.verify`: Array of verification expressions
- `expect.verifyWithSelector`: Verification with JSONPath
- `expect.contentType`: Expected content type
Control:
- `before.items`: Commands to run before the test
- `after.items`: Commands to run after the test
- `cond`: Conditional execution
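As a sketch of how several of these `expect` fields combine in one test case (field names are taken from the reference above; the endpoint and values are hypothetical):

```yaml
items:
  - name: getStatus
    request:
      api: /status
    expect:
      statusCode: 200
      contentType: application/json
      bodyFieldsExpect:
        state: ok
      verify:
        - data.state == "ok"
```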
```yaml
name: grpc-sample
api: 127.0.0.1:7070
spec:
  kind: grpc
  rpc:
    import:
      - ./path/to/proto/files
    protofile: service.proto
items:
  - name: UnaryCall
    request:
      api: /service.Service/Method
      body: |
        {"field": "value"}
  - name: ServerStream
    request:
      api: /service.Service/StreamMethod
  - name: ClientStream
    request:
      api: /service.Service/ClientStream
      body: |
        [{"msg": "msg1"}, {"msg": "msg2"}]
```

Available Sprig functions plus custom additions:
- `randAlpha n`: Generate a random alphabetic string
- `randNumeric n`: Generate a random numeric string
- `randASCII n`: Generate a random ASCII string
- `env "VAR"`: Get an environment variable
- `default "value" (env "VAR")`: Fall back to a default value
- `int64 value`: Convert to int64
- `index .array 0`: Get an array element
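For example, the functions above can generate a unique payload on every run; the `/users` endpoint here is a hypothetical placeholder:

```yaml
items:
  - name: createUser
    request:
      api: /users
      method: POST
      body: |
        {"username": "user-{{randAlpha 6}}", "pin": "{{randNumeric 4}}"}
    expect:
      statusCode: 201
```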
```yaml
expect:
  verify:
    - data.status == "success"
    - len(data.items) > 0
    - data.error == nil
    - data.code in [200, 201]
    - data.message startsWith "OK"
    - contains(data.tags, "important")
```

Load testing modes:

```shell
# Duration-based
atest run -p testsuite.yaml --duration 5m

# Thread-based (concurrent users)
atest run -p testsuite.yaml --thread 10

# QPS-based (requests per second)
atest run -p testsuite.yaml --qps 100

# Combined
atest run -p testsuite.yaml --duration 5m --thread 5 --qps 50
```

Mock server:

```shell
# Start a mock server from a test suite
atest mock -p testsuite.yaml --port 9090

# Mock from an OpenAPI spec
atest mock --swagger-url https://api.example.com/swagger.json --port 9090
```

Test suites can be loaded from:
- Local files: `-p testsuite.yaml`
- Git: `git://github.com/user/repo//path/to/suite.yaml`
- HTTP/HTTPS: `https://example.com/suite.yaml`
- S3: `s3://bucket/path/suite.yaml`
- Database: `mysql://user:pass@host/db`
- Etcd: `etcd://host:port/key`
Best practices:
- Pre-test checklist: Always verify service health before running tests
- Use descriptive test names: `createUser`, `getProjectById`
- Parametrize common values: Use the `param` section for shared data
- Chain tests: Use response data from earlier tests via `data.field`
- Add verification: Always verify critical response fields
- Handle errors: Use `request.ignoreError` for intentional failure tests
- Use templates: Leverage template functions for dynamic data
- Organize suites: Group related tests in separate suites
- Clean up resources: Stop docker compose services after testing
- Environment variables: Use `.env` files for sensitive data
Debugging tips:
- Service health first: Always verify the service is running before debugging tests
- Check docker compose logs: `docker compose -f e2e/compose.yaml logs -f`
- Run a single test: `atest run -p suite.yaml testCaseName`
- Enable verbose output: Check logs for detailed request/response data
- Use the JSON report: `--report json` for machine-readable output
- Test with curl: Verify the API works independently before writing tests
- Check schema validation: Ensure `expect.schema` matches the actual response
- Verify template syntax: Template errors show up at runtime
- Port conflicts: Use `docker compose ps` to check port usage
- Environment issues: Verify all required env vars are set with `env | grep`
Common issues:
- Connection refused: API server not running; start it with `docker compose -f e2e/compose.yaml up -d`
- Port already in use: Check with `docker compose ps` or `netstat -tuln | grep PORT`
- Service unhealthy: Wait for health checks with `docker compose ps` and check the logs
- Services not starting: Check the logs with `docker compose -f e2e/compose.yaml logs`
- Template not rendered: Check template syntax and variable names
- Verification failed: Check field paths in `data.` expressions
- Proto file not found: Verify that `spec.rpc.import` paths are correct
- Environment variables missing: Export the required vars or use a `.env` file
- Compose file not found: Check that the path `e2e/compose.yaml` exists
- Permission denied: You may need sudo for docker commands
- Volume conflicts: Remove old volumes with `docker compose down -v`
Resources:
- Test suites: `*.yaml` files with test definitions
- Sample: Run `atest sample` to generate a sample suite
- Documentation: https://github.com/LinuxSuRen/api-testing
- Schema: https://linuxsuren.github.io/api-testing/api-testing-schema.json