This is AI-generated documentation by Deep-Wiki.
The Steem Load Balancer is a Node.js-based application designed to distribute API requests across a list of predefined Steem Blockchain RPC Nodes. It enhances application availability and reliability by routing requests to the most responsive node.
This project was developed by STEEM's Top Witness, @justyy, who also established two STEEM Load Balancer RPC Nodes, steem.justyy.com (New York) and api.steemyy.com (London), using this project as their foundation.
A similar service, https://steem.senior.workers.dev, is based on a CloudFlare Worker which runs on the CloudFlare Edge Network but comes with a daily quota of 100,000 requests.
Another similar service, https://api2.steemyy.com, is based on CloudFlare Snippets. This node requires a CloudFlare Paid Plan (Pro+) and routes to the quickest RPC node. See steem-proxy-cloudflare for more information.
The primary motivation behind this project is to provide a scalable and reliable Load Balancer Node that can be integrated into applications to improve their availability and performance. Unlike CloudFlare-based solutions, this setup does not have a daily request quota, making it suitable for high-demand applications.
Please note that this can be easily configured to work with other Blockchains such as Hive and Blurt.
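For example, pointing the balancer at Hive instead of Steem is, in principle, just a matter of swapping the node list in config.yaml. The Hive endpoints below are illustrative and should be verified before use:

```yaml
# Illustrative only: replace the Steem nodes with Hive RPC endpoints
nodes:
  - "https://api.hive.blog"
  - "https://anyx.io"
```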
- Load Balancing: Distributes requests across multiple Steem API servers. The jussi_num and status fields are checked before a node is chosen (see below).
- Rate Limiting: Protects against abuse by limiting the number of requests, e.g. a maximum of 300 requests per 60-second window. This can be set in config.yaml.
- Logging: Provides detailed logs for debugging and monitoring.
- SSL Support: Configurable SSL certificates for secure HTTPS communication. Reject or ignore invalid SSL certificates when forwarding requests via the rejectUnauthorized field in config.yaml.
The node first checks whether a previously selected node is still valid (i.e., the cached entry hasn't expired). If it is valid, the request is directly forwarded to that node. Otherwise, the system sends a get_version request to the candidate nodes listed in config.nodes. Among the first config.firstK nodes (default: 1), the node with the highest jussi_num value is selected, cached, and used for subsequent requests.
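The selection-and-caching flow above can be sketched as follows. This is a minimal sketch, not the project's actual Node.js implementation: the function names, the (url, jussi_num) probe format, and the module-level cache are illustrative assumptions.

```python
import time

# `candidates` stands for get_version probe results that already came back OK,
# as (url, jussi_num) pairs in the order listed in config.nodes.
_cache = {"node": None, "expires_at": 0.0}

def choose_node(candidates, first_k=1, ttl=3.0, now=None):
    """Return the cached node while the cache entry is fresh; otherwise pick
    the node with the highest jussi_num among the first `first_k` candidates,
    cache it for `ttl` seconds, and return it."""
    now = time.time() if now is None else now
    if _cache["node"] is not None and now < _cache["expires_at"]:
        return _cache["node"]
    # Only the first `first_k` responders compete; default first_k=1 simply
    # takes the first healthy node, mirroring the config default.
    best_url, _ = max(candidates[:first_k], key=lambda pair: pair[1])
    _cache["node"] = best_url
    _cache["expires_at"] = now + ttl
    return best_url
```

With first_k=2 and two candidates, the node with the higher jussi_num wins and is reused until the ttl expires.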
The Steem blockchain ensures idempotency at the transaction level when the signed content is identical (see Testing Parallel Transfer on STEEM). This means the load-balancing logic could in principle be simplified: just fan out each request to multiple nodes and return the quickest response. However, that works against the idea of load balancing, since fanning out requests multiplies the load on the RPC nodes.
A Steem RPC node should return the following to indicate a healthy state. The jussi_num needs to keep up with the latest block height: if it falls further behind than max_jussi_number_diff (configurable in config.yaml; 500 in the sample configuration below), the node will not be considered.
```json
{
  "status": "OK",
  "datetime": "2025-02-01T11:06:30.781448",
  "source_commit": "ae6c6c77601436e496a8816ece2cbc6e26fbe3c2",
  "docker_tag": "latest",
  "jussi_num": 92629431
}
```

The configuration for the Steem Load Balancer is specified in the config.yaml file. Here's a breakdown of the configuration options:
Configuration File: config.yaml
```yaml
nodes:
  - "https://api2.justyy.com"
  - "https://api.justyy.com"
  - "https://api.steemit.com"
  # - "https://api.steemitdev.com"
  # - "https://api.pennsif.net"
  # - "https://api.moecki.online"
  # - "https://api.botsteem.com"
  # - "https://api.steememory.com"
rateLimit:
  windowMs: 30000
  maxRequests: 600
headers:
  "https://api.justyy.com":
    "X-Edge-Key": "${X_EDGE_KEY}"
  "https://api2.justyy.com":
    "X-Edge-Key": "${X_EDGE_KEY}"
version: "2026-01-14"
max_age: 3
logging: true
max_payload_size: "5mb"
max_jussi_number_diff: 500
min_blockchain_version: "0.23.0"
logging_max_body_len: 100
retry_count: 3
user_agent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"
sslCertPath: "${SSL_CERT_PATH}"
sslKeyPath: "${SSL_KEY_PATH}"
rejectUnauthorized: false
timeout: 2500
plimit: 5
port: 9091
cache:
  enabled: true
  ttl: 3
debug: false
firstK: 1
strategy: "max_jussi_number" # options: first, random, max_jussi_number, latest_version
```

- nodes: An array of API server URLs to which requests will be distributed. You can add or remove nodes as needed.
- rateLimit: Configuration for rate limiting.
- windowMs: Time window in milliseconds for the rate limit (e.g., 60000 ms = 1 minute).
- maxRequests: Maximum number of requests allowed in the time window.
- version: The version of the Steem Load Balancer.
- max_age: Cache duration for responses in seconds (GET).
- logging: Boolean value to enable or disable logging.
- sslCertPath: Path to the SSL certificate file for HTTPS communication.
- sslKeyPath: Path to the SSL key file for HTTPS communication.
- user_agent: User Agent String in the Header to forward.
- min_blockchain_version: Minimum blockchain version (e.g. 0.23.0) required for a node to be considered valid.
- max_payload_size: Max payload size.
- max_jussi_number_diff: The maximum number of blocks a node's jussi_num may lag behind before the node is excluded.
- logging_max_body_len: Maximum length of request.body kept in the logs; longer bodies are truncated.
- retry_count: Retry count for forwarded GET and POST requests, with a 100 ms delay between retries.
- rejectUnauthorized: Whether to reject invalid SSL certificates when forwarding requests; set to false to ignore SSL errors. Default is true.
- timeout: Maximum time in milliseconds before a fetch request times out.
- plimit: Max concurrent requests to poke the servers. Reduce this if the server is laggy.
- cache.enabled: Should we cache the chosen Node?
- cache.ttl: When cache.enabled, how many seconds before cache expires.
- debug: When set to true, extra diagnostic messages are emitted, e.g. in the response headers.
- firstK: Pick the node with the max Jussi Number from the first firstK nodes that respond OK. Default is 1.
- strategy: The strategy used to pick the chosen node. One of: first, random, max_jussi_number (default), latest_version.
- headers: Extra headers to pass to the downstream nodes. When proxying requests, the Load Balancer injects a shared-secret header so downstream nodes can identify trusted traffic and apply elevated (or exempt) rate-limit policies.
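As a sketch of how such per-node headers might be resolved before forwarding, the function below expands ${VAR} placeholders (as seen in the sample config.yaml) from the environment. The function name and the expansion rule are assumptions for illustration, not the project's actual implementation:

```python
import os
import re

def resolve_headers(per_node_headers, node_url, env=None):
    """Look up the extra headers configured for `node_url` and expand
    ${VAR} placeholders from the environment (empty string if unset)."""
    env = os.environ if env is None else env
    resolved = {}
    for name, value in per_node_headers.get(node_url, {}).items():
        # Replace each ${VAR} occurrence with its environment value
        resolved[name] = re.sub(
            r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), value
        )
    return resolved

# Mirrors the headers section of the sample config.yaml
headers_config = {
    "https://api.justyy.com": {"X-Edge-Key": "${X_EDGE_KEY}"},
}
```

Nodes without a headers entry simply get no extra headers, so public nodes such as api.steemit.com are proxied unchanged.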
Clone the Repository:
```shell
git clone https://github.com/doctorlai/steem-load-balancer.git
cd steem-load-balancer
```

Update the config.yaml file with your desired nodes, rate limits, and SSL paths.
```shell
DOCKER_IMAGE=steem-load-balancer
STEEM_LB_PORT=9091

# Build the Docker image
docker build -t $DOCKER_IMAGE .

# Run the container
docker run --name $DOCKER_IMAGE -p $STEEM_LB_PORT:9091 -v /root/.acme.sh/:/root/.acme.sh/ $DOCKER_IMAGE
```

Alternatively, use the following utility scripts to build the Docker image and then start the server.
```shell
source ./setup-env.sh
./build.sh
./run.sh
```

There are also stop and restart scripts.
You can pass ./config.yaml to either ./run.sh or ./restart.sh. For example:

```shell
./restart.sh ./config.yaml
```

The latest image has been built and stored on Docker Hub, so you can do:
```shell
docker pull justyy/steem-load-balancer:latest
docker tag justyy/steem-load-balancer:latest steem-load-balancer:latest
```

Then:
```shell
# Run the steem load balancer node with restart policy
STEEM_LB_PORT=443
RETRY_COUNT=3
docker run \
  -e NODE_ENV=production \
  -e SSL_CERT_PATH=$SSL_CERT_PATH \
  -e SSL_KEY_PATH=$SSL_KEY_PATH \
  -e STEEM_LB_VERSION=$STEEM_LB_VERSION \
  -e DEBUG=false \
  --name steem-load-balancer \
  --restart on-failure:$RETRY_COUNT \
  -p $STEEM_LB_PORT:9091 \
  -v /root/.acme.sh/:/root/.acme.sh/ \
  steem-load-balancer:latest

# or simply
./run.sh # or ./restart.sh
```

You can use docker-compose (or docker compose) to build the load balancer:
```shell
docker-compose up --build -d
```

This will build and start the steem load balancer.

To view the logs using docker-compose, run:

```shell
docker-compose logs -f
```

To restart the docker-compose container, run:

```shell
docker-compose down # Stop the containers
docker-compose up -d # Start the containers in detached mode
```

Or simply:

```shell
docker-compose restart steem_lb
```

Use the integration-tests.sh script to perform a basic integration test: it builds the Docker image, starts the server locally, sends a request, and verifies that the response has a status of OK and a status code of 200.
```shell
source ./setup-env.sh
## on success, exit code is 0.
## on failure, exit code is 1.
./tests/integration-tests.sh
```

Use integration-tests-docker-compose.sh to test the steem load balancer via docker-compose.
Run npm test or npm run test to run the unit tests on the project.
Run npm run coverage to run the tests with a coverage report.
Tools are placed in the ./tools directory.
```shell
docker logs -f steem-load-balancer
```

If you have SSL certificates, provide the paths in the config.yaml file. If SSL is not configured or the certificate files are missing, the server will default to HTTP.
The rate limiting configuration prevents abuse by restricting the number of requests a user can make within a given time window. Adjust the rateLimit settings in config.yaml as needed.
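Conceptually, the windowMs/maxRequests pair behaves like a fixed-window counter kept per client. The project's actual middleware may differ; this is only a conceptual sketch of that behavior:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `max_requests` per `window_ms` per client IP,
    mirroring the rateLimit section of config.yaml."""

    def __init__(self, window_ms=60000, max_requests=300):
        self.window_ms = window_ms
        self.max_requests = max_requests
        # ip -> [window_start_ms, request_count]
        self.windows = defaultdict(lambda: [0, 0])

    def allow(self, ip, now_ms=None):
        now_ms = int(time.time() * 1000) if now_ms is None else now_ms
        window = self.windows[ip]
        if now_ms - window[0] >= self.window_ms:
            window[0], window[1] = now_ms, 0  # start a fresh window
        if window[1] >= self.max_requests:
            return False  # over the limit: the server would respond 429
        window[1] += 1
        return True
```

With window_ms=1000 and max_requests=2, a third request inside the same second is rejected, and requests are accepted again once a new window begins.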
Enable logging by setting "logging": true in config.yaml. Logs will be printed to the console and can help with debugging and monitoring.
For GET requests, the response JSON includes some additional data and statistics (Uptime, Access Counters, Error Counters, Not Chosen Counters and Jussi Behind Counters):
See a sample JSON response for sending a GET to the STEEM RPC Load Balancer.
Port 9091 is the port number used inside the container. However, this can be changed in config.yaml, which is useful if you want to run the node directly, e.g.:

```shell
node src/index.js
```

- Port 443 is already taken: Ensure no other process is using port 443. Use sudo lsof -i :443 to check. Change the port in the configuration if needed.
- SSL Certificate Issues: Ensure the SSL certificate and key files are in the correct format and paths are correctly specified.
See this post:
```python
load_balancing_nodes = [
    "https://api.steemyy.com",
    "https://api2.steemyy.com",
    "https://steem.justyy.com"
]

def node_rotator(nodes):
    """Infinite generator that yields nodes in round-robin."""
    while True:
        for node in nodes:
            yield node

def fetch_from_load_balancer(node):
    """Optional preprocessing, health check, or logging."""
    print(f"Selected node: {node}")
    return node

node_gen = node_rotator(load_balancing_nodes)
node = fetch_from_load_balancer(next(node_gen))

# Replace with your actual app loop condition
while True:
    try:
        # use the node in your API calls
        pass
    except Exception as e:
        print(f"Node error: {e}, switching node...")
        node = fetch_from_load_balancer(next(node_gen))
```

With this setup, your app will:
- Always start with a healthy RPC node
- Automatically switch to a new one if the current node fails
This approach provides better stability and resilience compared to relying on a single hardcoded RPC endpoint.
Set NODE_ENV to "production" (the default) or "development".
This project is licensed under the MIT License.
Contributions are absolutely welcome! Please follow the guidance here and the CODE OF CONDUCT.
If you like this and want to support me in continuous development, you can do the following: