EVM Node RPC Benchmark

Distributed benchmarking tool for EVM-compatible RPC nodes with real-time monitoring, historical comparison, and worker-based architecture.

Architecture

This project consists of two components:

Master (Server)

  • Web UI for configuring and running benchmarks
  • Aggregates results from workers or runs benchmarks directly
  • Stores historical results to disk
  • Provides comparison charts for analyzing performance over time

Worker (Slave)

  • Lightweight service deployed next to EVM nodes
  • Executes benchmarks locally (minimizes network latency)
  • Reports results back to master via WebSocket
  • Configurable via environment variables

Features

  • Dual Mode Operation: Benchmark directly from master or via distributed workers
  • Real-time Charts: Live latency visualization with Chart.js
  • Historical Tracking: Save test results with names and comments
  • Comparison View: Compare multiple test runs on a single chart
  • Worker Management: Run workers alongside nodes for accurate benchmarking
  • Multiple Methods: Support for eth_blockNumber, eth_getBlockByNumber, and eth_call
  • Flexible Configuration: Adjustable RPS, duration, headers, and custom payloads
  • Statistics: Min/Max/Avg latency plus P90/P95/P99 percentiles
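The percentile statistics can be computed with a simple nearest-rank rule. The sketch below is illustrative of what the tool reports, not its actual implementation (the helper names are made up):

```python
# Sketch of the latency statistics the tool reports (Min/Max/Avg, P90/P95/P99).
# `percentile` and `latency_stats` are illustrative names, not the project's API.
import math

def percentile(samples, p):
    """Nearest-rank percentile (0 < p <= 100) over a sorted copy of `samples`."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def latency_stats(latencies_ms):
    """Summarize a list of per-request latencies in milliseconds."""
    return {
        "min": min(latencies_ms),
        "max": max(latencies_ms),
        "avg": sum(latencies_ms) / len(latencies_ms),
        "p90": percentile(latencies_ms, 90),
        "p95": percentile(latencies_ms, 95),
        "p99": percentile(latencies_ms, 99),
    }
```

P99 is usually the most telling number for RPC nodes: it exposes occasional slow requests that an average hides.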

Quick Start

Option 1: Local Development (Direct Mode)

# Clone repository
git clone https://github.com/AlexVarin2001/evm-node-rpc-benchmark.git
cd evm-node-rpc-benchmark/

# Setup Python environment
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Run master
uvicorn main:app --host 0.0.0.0 --port 8000

Open http://localhost:8000 and enter an RPC URL directly (e.g., https://base.drpc.org).
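Before running a full benchmark, it can be useful to verify the endpoint responds at all. A minimal stdlib-only check (the `rpc_call` helper is a sketch, not part of this project):

```python
# Minimal manual sanity check of a JSON-RPC endpoint before benchmarking it.
import json
import urllib.request

def build_payload(method, params=None):
    """Standard JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}

def rpc_call(url, method, params=None):
    """POST a single JSON-RPC request and return the decoded response."""
    data = json.dumps(build_payload(method, params)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Example (requires network access):
# print(rpc_call("https://base.drpc.org", "eth_blockNumber"))
```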

Option 2: Docker (Distributed Mode)

Build Images

# Build Master
docker build -f Dockerfile.master -t evm-bench-master .

# Build Worker
docker build -f Dockerfile.worker -t evm-bench-worker .

Run Master

docker run -d \
  -p 8000:8000 \
  -v $(pwd)/data:/app/data \
  --name evm-master \
  evm-bench-master

Access UI at http://localhost:8000

Run Worker (on node machine)

docker run -d \
  -p 8001:8001 \
  -e NODE_HOST=localhost \
  -e NODE_PORT=8545 \
  -e WORKER_NAME=Node-Production-1 \
  --network host \
  --name evm-worker \
  evm-bench-worker

Environment Variables:

  • NODE_HOST: Hostname of the EVM node (default: localhost)
  • NODE_PORT: Port of the EVM node (default: 8545)
  • WORKER_NAME: Identifier for this worker (default: worker)

Note: Use --network host so the worker can reach localhost:8545 on the host machine. With host networking, the -p port mapping is ignored; it only matters if you use bridge networking instead.
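A worker reading this configuration might look like the following sketch. The function name and dict layout are assumptions for illustration; worker.py may structure this differently, but the documented defaults are the ones shown:

```python
# Sketch: reading worker configuration from environment variables,
# using the defaults documented above (localhost, 8545, "worker").
import os

def load_worker_config(env=None):
    """Return worker settings, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "node_host": env.get("NODE_HOST", "localhost"),
        "node_port": int(env.get("NODE_PORT", "8545")),
        "worker_name": env.get("WORKER_NAME", "worker"),
    }
```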

Usage

Running Direct Benchmarks

  1. Go to "Current Benchmark" tab
  2. Enter RPC URL: http://your-node.example.com or https://public-rpc.com
  3. Set Node Port: (ignored for direct HTTP/HTTPS URLs)
  4. Choose Method: eth_blockNumber, eth_getBlockByNumber, or eth_call
  5. Set RPS (requests per second) and Duration
  6. Optional: Add Test Name and Comment for history tracking
  7. Click "Run benchmark"

Running Worker-Based Benchmarks

  1. Deploy worker on the same machine as your EVM node
  2. In Master UI, enter Worker WebSocket URL: ws://worker-ip:8001/ws
  3. Set Node Port: Port where the node listens (e.g., 8545)
  4. Configure benchmark parameters
  5. Click "Run benchmark"

The worker will benchmark http://localhost:{node_port} and stream results back to master.
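Deriving the benchmark target from the host and port is straightforward; a sketch (the helper name is hypothetical):

```python
# Sketch: how a worker might build its benchmark target URL from
# NODE_HOST and the node port sent by the master.
def target_url(node_host="localhost", node_port=8545):
    return f"http://{node_host}:{node_port}"
```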

Viewing History

  1. Click "History & Comparison" tab
  2. Browse all saved test results
  3. Click "Refresh" to update the list

Comparing Tests

  1. Go to "History & Comparison" tab
  2. Check boxes next to 2 or more tests
  3. Click "Compare Selected"
  4. View comparison chart with multiple runs overlaid

Deleting Tests

  • Individual: Click "Delete" button in the row
  • Bulk: Select checkboxes, click "Delete Selected"

Configuration

Master Configuration

The master automatically saves results to /app/data. Mount a volume for persistence:

-v /path/to/local/data:/app/data

Worker Configuration

Configure workers via environment variables:

docker run -e NODE_HOST=10.0.0.5 -e NODE_PORT=8545 -e WORKER_NAME=GPU-Node-1 ...

API Endpoints

Master Server

  • GET / - Web UI
  • POST /run - Run benchmark (form data)
  • WebSocket /ws/run - Real-time benchmark execution
  • GET /api/history - List saved results
  • DELETE /api/history/{id} - Delete a result
  • POST /api/compare - Get data for comparison (body: {"ids": [...]})
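The history endpoints can also be scripted. A stdlib-only client sketch for the comparison call (the result IDs are placeholders; the response shape is whatever the master returns):

```python
# Illustrative client for POST /api/compare; IDs below are placeholders.
import json
import urllib.request

MASTER = "http://localhost:8000"

def compare_request(ids):
    """Build the POST /api/compare request with body {"ids": [...]}."""
    body = json.dumps({"ids": ids}).encode()
    return urllib.request.Request(
        f"{MASTER}/api/compare",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a master running (requires the server to be up):
# with urllib.request.urlopen(compare_request(["id-1", "id-2"])) as resp:
#     print(json.loads(resp.read()))
```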

Worker Server

  • WebSocket /ws - Receive commands from master
    • Command: {"cmd": "start", "config": {...}}
    • Response: Streams {"type": "update", "results": [...]} messages
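The message shapes above can be built and parsed with plain JSON. A sketch of both sides of the exchange (the config fields are illustrative; only the envelope keys come from the protocol described above):

```python
# Sketch of the worker WebSocket protocol envelopes described above.
import json

def start_command(config):
    """Master -> worker: begin a benchmark with the given config."""
    return json.dumps({"cmd": "start", "config": config})

def parse_update(raw):
    """Worker -> master: return results from an "update" message, else None."""
    msg = json.loads(raw)
    if msg.get("type") == "update":
        return msg["results"]
    return None
```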

Troubleshooting

Master Issues

  • No results showing: Check browser console for WebSocket errors
  • Results not persisting: Verify /app/data volume is mounted
  • Worker connection fails: Check worker is running and accessible

Worker Issues

  • Worker can't reach node: Verify NODE_HOST and NODE_PORT
  • Connection refused: Use --network host or ensure proper Docker networking
  • High latency: Worker should be on same machine as node for accurate results

General

  • Errors with 401/403: RPC endpoint requires authentication headers
  • Timeouts: Lower RPS or check network/firewall
  • Results inconsistent: Run warmup by testing twice (first test includes connection overhead)

Development

Project Structure

evm-node-rpc-benchmark/
├── core.py              # Shared benchmarking logic
├── storage.py           # Result persistence
├── main.py              # Master server (FastAPI)
├── worker.py            # Worker server (FastAPI)
├── templates/
│   └── index.html       # Web UI
├── Dockerfile.master    # Master Docker image
├── Dockerfile.worker    # Worker Docker image
└── requirements.txt     # Python dependencies

Running Tests Locally

# Terminal 1: Master
uvicorn main:app --host 0.0.0.0 --port 8000

# Terminal 2: Worker (optional)
uvicorn worker:app --host 0.0.0.0 --port 8001

Customizing Methods

Edit core.py to add custom RPC methods or payloads. The eth_call method includes a heavy KECCAK256 loop payload for stress testing.
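As a rough idea of what such an addition could look like, here is a hedged sketch. The `METHODS` dict name and its shape are assumptions, not the real layout of core.py, and `eth_getBalance` is a hypothetical new entry:

```python
# Hedged sketch of registering a custom RPC method; the dict name and
# structure are assumptions, not the actual layout of core.py.
METHODS = {
    "eth_blockNumber": {"params": []},
    "eth_getBalance": {  # hypothetical addition
        "params": ["0x0000000000000000000000000000000000000000", "latest"],
    },
}

def payload_for(method, request_id=1):
    """Build the JSON-RPC body for a registered method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": METHODS[method]["params"],
    }
```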

Performance Notes

  • RPS Limits: Start with low RPS (5-10) and increase gradually
  • Duration: Keep below 3600s for browser stability
  • Worker Placement: Deploy workers on the same host as nodes for accurate latency measurement
  • Network Overhead: Direct benchmarking from master includes internet latency; use workers for node-only metrics

License

MIT

Contributing

PRs welcome! Please ensure:

  • Code follows existing patterns
  • UI changes are tested in both Chrome and Firefox
  • Docker builds succeed for both master and worker images
