Releases: streamingfast/firehose-core
v1.7.3
v1.7.2
- Fixed `substreams-tier2` not setting itself ready correctly on startup since `v1.7.0`.
- Added support for `--output=bytes` mode, which prints the chain-specific Protobuf block as bytes; the encoding of the printed bytes string is determined by `--bytes-encoding` (`hex` by default). See the example below.
- Added back `-o` as shorthand for `--output` in `firecore tools ...` sub-commands.
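For illustration, a possible invocation assuming the `firecore tools print merged-blocks` sub-command (described under v1.7.0 below) is used with these flags; the store path and block number are placeholders:

```sh
# Print the chain-specific Protobuf block as raw bytes, hex-encoded (the default),
# using the -o shorthand for --output.
firecore tools print merged-blocks ./merged-blocks 1000 -o bytes --bytes-encoding hex
```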
v1.7.1
- Add back `grpc.health.v1.Health` service to `firehose` and `substreams-tier1` services (regression in 1.7.0); see the example below
- Give precedence to the tracing header `X-Cloud-Trace-Context` over `Traceparent` to prevent user systems' trace IDs from leaking past a GCP load-balancer
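With the standard gRPC health service restored, a readiness probe can query it directly. A minimal sketch using `grpcurl`, assuming a locally reachable plaintext endpoint with server reflection enabled (address and port are placeholders):

```sh
# Query the standard gRPC health-checking service re-exposed by this release.
grpcurl -plaintext localhost:10015 grpc.health.v1.Health/Check
```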
v1.7.0
Reader
- Reader Node Manager HTTP API now accepts `POST http://localhost:10011/v1/restart<?sync=true>` to restart the underlying reader node binary sub-process. This is an alias for `/v1/reload`. See the example below.
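For example, a call against the default Node Manager address shown above (adjust host and port to your deployment):

```sh
# Restart the underlying reader node sub-process, using the optional sync=true query parameter.
curl -XPOST "http://localhost:10011/v1/restart?sync=true"
```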
Tools
- Enhanced `firecore tools print merged-blocks` with various small quality-of-life improvements (see the sketch below):
  - Now accepts a block range instead of a single start block.
  - Passing a single block as the block range will print this single block alone.
  - Block range is now optional, defaulting to run until there are no more files to read.
  - It's possible to pass a merged blocks file directly, with or without an optional range.
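A rough sketch of the resulting invocations; the store path, file name and block range syntax are illustrative assumptions, not taken from the release notes:

```sh
# Range omitted: print until there are no more files to read.
firecore tools print merged-blocks ./merged-blocks

# Block range, or a single block passed as the range.
firecore tools print merged-blocks ./merged-blocks 1000:2000
firecore tools print merged-blocks ./merged-blocks 1500

# A merged-blocks file passed directly, with or without a range.
firecore tools print merged-blocks ./merged-blocks/0000001000.dbin.zst
```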
Firehose
Important
This release will reject Firehose connections from clients that don't support GZIP or ZSTD compression. Use `--firehose-enforce-compression=false` to keep the previous behavior, then check the logs for `incoming Substreams Blocks request` entries with the value `compressed: false` to track users who are not using compressed HTTP connections.
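A hedged operational sketch; the `firecore start` invocation and the log file name are assumptions about your setup, only the flag name and the log fields come from the note above:

```sh
# Temporarily keep accepting uncompressed connections while identifying affected clients.
firecore start firehose --firehose-enforce-compression=false

# Then track callers that are still not using compressed connections.
grep 'incoming Substreams Blocks request' firehose.log | grep 'compressed: false'
```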
Important
This release removes the old `sf.firehose.v1` protocol (replaced by `sf.firehose.v2` in 2022; this should not affect any reasonably recent client).
- Add support for ConnectWeb firehose requests.
- Always use gzip compression on firehose requests for clients that support it (instead of always answering with the same compression as the request).
Substreams
- The `substreams-tier1` app now has two new configuration flags, named respectively `substreams-tier1-active-requests-soft-limit` and `substreams-tier1-active-requests-hard-limit`, helping better load-balance active requests across a pool of `tier1` instances (see the sketch after this list).

  The `substreams-tier1-active-requests-soft-limit` limits the number of active client requests that a tier1 accepts before starting to report itself as 'unready' on the health check endpoint. A limit of 0 or less means no limit. This is useful to load-balance active requests more easily across a pool of tier1 instances: when an instance reaches the soft limit, it starts reporting unready from the load balancer's standpoint, the load balancer in turn removes it from the list of available instances, and new connections are routed to the remaining instances, spreading the load.

  The `substreams-tier1-active-requests-hard-limit` limits the number of active client requests that a tier1 accepts before rejecting incoming gRPC requests with the 'Unavailable' code and setting itself as unready. A limit of 0 or less means no limit. This is useful to prevent the tier1 from being overwhelmed by too many requests; most clients auto-reconnect on the 'Unavailable' code, so they should end up on another tier1 instance, assuming you have proper auto-scaling of the number of instances available.
- The `substreams-tier1` app now exposes a new Prometheus metric `substreams_tier1_rejected_request_counter` that tracks rejected requests. The counter is labelled by the gRPC/ConnectRPC returned code (`ok` and `canceled` are not considered rejected requests).
- The `substreams-tier2` app now exposes a new Prometheus metric `substreams_tier2_rejected_request_counter` that tracks rejected requests. The counter is labelled by the gRPC/ConnectRPC returned code (`ok` and `canceled` are not considered rejected requests).
- Properly accept and compress responses with `gzip` for browser HTTP clients using ConnectWeb with the `Accept-Encoding` header
- Allow setting the subscription channel max capacity via the `SOURCE_CHAN_SIZE` env var (default: 100); see the sketch after this list
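For operators, a hedged sketch of how these knobs could be set; the `firecore start` invocation and the numeric values are illustrative, only the flag and environment variable names come from this release:

```sh
# Soft limit: report unready on the health check; hard limit: also reject new requests
# with the 'Unavailable' gRPC code. A value of 0 or less disables the corresponding limit.
firecore start substreams-tier1 \
  --substreams-tier1-active-requests-soft-limit=250 \
  --substreams-tier1-active-requests-hard-limit=500

# Override the subscription channel max capacity (default: 100) via the environment.
SOURCE_CHAN_SIZE=200 firecore start substreams-tier1
```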
v1.6.9
Substreams
- Fix an issue preventing proper detection of gzip compression when multiple headers are set (ex: python grpc client)
- Fix an issue preventing some tier2 requests on last-stage from correctly generating stores. This could lead to some missing "backfilling" jobs and slower time to first block on reconnection.
- Fix a thread leak on cursor resolution resulting in a bad counter for active connections
- Add support for zstd encoding on server
v1.6.8
Note
This release will reject connections from clients that don't support GZIP compression. Use `--substreams-tier1-enforce-compression=false` to keep the previous behavior, then check the logs for `incoming Substreams Blocks request` entries with the value `compressed: false` to track users who are not using compressed HTTP connections.
- Substreams: add `--substreams-tier1-enforce-compression` to reject connections from clients that do not support GZIP compression
- Substreams performance: reduced the number of `mallocs` (patching some third-party libraries)
- Substreams performance: removed heavy tracing (that wasn't exposed to the client)
- Fixed `reader-node-line-buffer-size` flag that was not being respected in the `reader-node-stdin` app
- Well-known chains: change genesis block for near-mainnet from 9820214 to 9820210
- BlockPoller library: reworked logic to support a more flexible balancing strategy
v1.6.7
- `firehose-grpc-listen-addr` and `substreams-tier1-grpc-listen-addr` flags now accept comma-separated addresses (allows listening as plaintext and snakeoil-ssl at the same time, or on specific IP addresses); see the example below
- Removed old `RegisterServiceExtension` implementation (not used anywhere anymore)
- rpc-poller lib: fix fetching the first block on an endpoint (was not following the cursor, failing unnecessarily on non-archive nodes)
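A hedged example of the comma-separated form; the addresses and the `firecore start` invocation are placeholders, only the flag name comes from the release notes:

```sh
# Listen for Firehose gRPC traffic on two specific addresses at once.
firecore start firehose --firehose-grpc-listen-addr="127.0.0.1:10015,10.0.0.5:10015"
```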
v1.6.6
- Bump `substreams` and `dmetering` to latest version, adding the `outputModuleHash` to the metering sender.
v1.6.5
Substreams fixes
Note
All caches for stores using the updatePolicy `set_sum` (added in substreams v1.7.0) and modules that depend on them will need to be deleted, since they may contain bad data.
- Fix bad data in stores using the `set_sum` policy: squashing of store segments incorrectly "summed" some values that should have been "set" if the last event for a key on this segment was a "sum"
- Fix a small bug making some requests in development-mode slow to start (when starting close to the module initialBlock with a store that doesn't start on a boundary)
Others
- [Operator] Node Manager HTTP `/v1/resume` call now accepts `extra-env=<key>=<value>&extra-env=<keyN>=<valueN>`, enabling overriding of environment variables for the next restart only. Use `curl -XPOST "http://localhost:10011/v1/resume?sync=true&extra-env=NODE_DEBUG=true"` (change `localhost:10011` according to your setup). This is not persistent upon restart!
- [Metering] Revert undesired Firehose metric `Endpoint` changes; the correct new value used is `sf.firehose.v2.Firehose/Blocks` (it had been mistakenly set to `sf.firehose.v2.Firehose/Block` between versions v1.6.1 and v1.6.4 inclusively).
v1.6.4
Substreams fixes
- Fixed an(other) issue where multiple stores running on the same stage with different initialBlocks will fail to progress (and hang)