Releases: ArweaveTeam/arweave
Release 2.7.3
2.7.3 is a minor release containing:
Re-packing in place
You can now repack a storage module from one packing address to another without needing any extra storage space. The repacking happens "in place", replacing the original data with the repacked data.
See the storage_module section in the arweave help (./bin/start help) for more information.
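As a rough illustration of how this might be configured, the sketch below shows a hypothetical storage_module entry for in-place repacking. The partition number, addresses, and argument order are placeholders; consult ./bin/start help for the exact syntax.

```shell
# Hypothetical in-place repack configuration (illustrative only --
# the real argument order is documented in ./bin/start help).
# OLD_PACKING_ADDRESS and NEW_PACKING_ADDRESS are placeholders.
./bin/start \
  storage_module 12,NEW_PACKING_ADDRESS,repack_in_place,OLD_PACKING_ADDRESS \
  mining_addr NEW_PACKING_ADDRESS
```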
Packing bug fixes and performance improvements
This release contains several packing performance improvements and bug fixes.
Coordinated Mining performance improvement
This release improves how nodes process the H1 batches they receive from their Coordinated Mining peers. As a result, the cm_in_batch_timeout parameter is no longer needed and has been deprecated.
Release 2.7.2
This release introduces a hard fork that activates at height 1391330, approximately 2024-03-26 14:00 UTC.
Coordinated Mining
When coordinated mining is configured, multiple nodes can cooperate to find mining solutions for the same mining address without the risk of losing reserved rewards or having the mining address blacklisted. Without coordinated mining, if two nodes publish blocks at the same height with the same mining address, they may lose their reserved rewards and have their mining address blacklisted (see the Mining Guide for more information). Coordinated mining allows multiple nodes, each storing a disjoint subset of the weave, to reap the hashrate benefits of more two-chunk solutions.
Basic System
In a coordinated mining cluster there are 2 roles:
- Exit Node
- Miners
All nodes in the cluster share the same mining address. Each Miner generates H1 hashes for the partitions it stores. Occasionally a Miner will need an H2 for a packed partition it doesn't store. In this case, it can find another Miner in the coordinated mining cluster that does store the required partition packed with the required address, send it the H1, and ask it to calculate the H2. When a valid solution is found (either one- or two-chunk), the solution is sent to the Exit Node. Since the Exit Node is the only node in the coordinated mining cluster which publishes blocks, there is no risk of slashing. This can be further enforced by ensuring only the Exit Node stores the mining address private key (and therefore only the Exit Node can sign blocks for that mining address).
Every node in the coordinated mining cluster is free to peer with any other nodes on the network as normal.
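The H1/H2 hand-off described above can be sketched as follows. All names are illustrative (the node implements this in Erlang with the protocol's actual mining hashes); this only captures the routing logic, not the real hash construction.

```python
# Sketch of coordinated-mining H1/H2 routing (illustrative names only).
import hashlib

def h1(seed: bytes, chunk1: bytes) -> bytes:
    """First hash, computed over the recall chunk the local miner stores."""
    return hashlib.sha256(seed + chunk1).digest()

def h2(h1_value: bytes, chunk2: bytes) -> bytes:
    """Second hash; needs the second recall chunk, which another
    miner in the cluster may store."""
    return hashlib.sha256(h1_value + chunk2).digest()

def route_h1(h1_value: bytes, partition_id: int, cluster: dict):
    """Find the cluster peer that stores partition_id and ask it for H2.
    `cluster` maps partition ids to peer objects (hypothetical shape)."""
    peer = cluster.get(partition_id)
    if peer is None:
        # No peer stores the partition: only a one-chunk solution is possible.
        return None
    return h2(h1_value, peer.read_chunk(partition_id))
```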
Single-Miner One Chunk Flow
Note: The single-miner two chunk flow (where Miner1 stores both the H1 and H2 partitions) is very similar
Coordinated Two Chunk Flow
Configuration
- All nodes in the Coordinated Mining cluster must specify the coordinated_mining parameter.
- All nodes in the Coordinated Mining cluster must specify the same secret via the cm_api_secret parameter. A secret can be a string of any length.
- All miners in the Coordinated Mining cluster should identify all other miners in the cluster using the cm_peer multi-use parameter.
  - Note: an exit node can also optionally mine, in which case it is also considered a miner and should be identified by the cm_peer parameter.
- All miners (excluding the exit node) should identify the exit node via the cm_exit_peer parameter.
  - Note: the exit node should not include the cm_exit_peer parameter.
- All miners in the Coordinated Mining cluster can be configured as normal, but they should all specify the same mining_addr.
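Putting the rules above together, a two-node cluster (one miner plus one exit node) might be launched roughly as follows. The IP addresses, ports, secret, and mining address are placeholders, and the flag syntax is a sketch; consult the arweave help for exact usage.

```shell
# Miner (stores partitions; forwards solutions to the exit node):
./bin/start coordinated_mining cm_api_secret SOME_SHARED_SECRET \
  cm_peer 10.0.0.2:1984 cm_exit_peer 10.0.0.9:1984 \
  mining_addr SHARED_MINING_ADDRESS

# Exit node (publishes blocks; note: no cm_exit_peer here):
./bin/start coordinated_mining cm_api_secret SOME_SHARED_SECRET \
  cm_peer 10.0.0.1:1984 mining_addr SHARED_MINING_ADDRESS
```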
There is one additional parameter which can be used to tune performance:
- cm_out_batch_timeout: The frequency in milliseconds of sending other nodes in the coordinated mining setup a batch of H1 values to hash. A higher value reduces network traffic; a lower value reduces hashing latency. Default is 20.
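The trade-off can be sketched as a simple timeout-based batcher. All names are illustrative (the node implements this in Erlang); the sketch only shows why a larger timeout yields fewer, bigger batches while a smaller one lowers latency.

```python
# Minimal sketch of timeout-based H1 batching (illustrative names only).
import time

class H1Batcher:
    def __init__(self, send, timeout_ms=20):
        self.send = send              # callable that ships a batch to a peer
        self.timeout = timeout_ms / 1000.0
        self.batch = []
        self.last_flush = time.monotonic()

    def add(self, h1_value):
        self.batch.append(h1_value)
        # Flush when the timeout has elapsed: a larger timeout means fewer,
        # bigger batches (less traffic); a smaller one means lower latency.
        if time.monotonic() - self.last_flush >= self.timeout:
            self.flush()

    def flush(self):
        if self.batch:
            self.send(self.batch)
            self.batch = []
        self.last_flush = time.monotonic()
```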
Native Support for Pooled Mining
The Arweave node now has built-in support for pooled mining.
New configuration parameters (see the arweave node help for descriptions):
- is_pool_server
- is_pool_client
- pool_api_key
- pool_server_address
Mining Performance Improvements
Implemented several optimizations and bug fixes to enable more miners to achieve their maximum hashrate, particularly at higher partition counts.
A summary of changes:
- Increase the degree of horizontal distribution used by the mining processes to remove performance bottlenecks at higher partition counts
- Optimize the erlang VM memory allocation, management, and garbage collection
- Fix several out of memory errors that could occur at higher partition counts
- Fix a bug which could cause valid chunks to be discarded before being hashed
Updated Mining Performance Report:
=========================================== Mining Performance Report ============================================
VDF Speed: 3.00 s
H1 Solutions: 0
H2 Solutions: 3
Confirmed Blocks: 0
Local mining stats:
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Partition | Data Size | % of Max | Read (Cur) | Read (Avg) | Read (Ideal) | Hash (Cur) | Hash (Avg) | Hash (Ideal) |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Total | 2.0 TiB | 5 % | 1.3 MiB/s | 1.3 MiB/s | 21.2 MiB/s | 5 h/s | 5 h/s | 84 h/s |
| 1 | 1.2 TiB | 34 % | 0.8 MiB/s | 0.8 MiB/s | 12.4 MiB/s | 3 h/s | 3 h/s | 49 h/s |
| 2 | 0.8 TiB | 25 % | 0.5 MiB/s | 0.5 MiB/s | 8.8 MiB/s | 2 h/s | 2 h/s | 35 h/s |
| 3 | 0.0 TiB | 0 % | 0.0 MiB/s | 0.0 MiB/s | 0.0 MiB/s | 0 h/s | 0 h/s | 0 h/s |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
(All values are reset when a node launches)
- H1 Solutions / H2 Solutions display the number of each solution type discovered
- Confirmed Blocks displays the number of blocks that were mined by this node and accepted by the network
- Cur values refer to the most recent value (e.g. the average over the last ~10 seconds)
- Avg values refer to the all-time running average
- Ideal refers to the optimal rate given the VDF speed and amount of data currently packed
- % of Max refers to how much of the given partition - or whole weave - is packed
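As a sanity check on how to read the report, the per-partition rows sum to the Total row. The values below are transcribed from the example report above.

```python
# Per-partition values transcribed from the example report.
partitions = {
    1: {"data_tib": 1.2, "read_ideal_mibs": 12.4, "hash_ideal": 49},
    2: {"data_tib": 0.8, "read_ideal_mibs": 8.8,  "hash_ideal": 35},
    3: {"data_tib": 0.0, "read_ideal_mibs": 0.0,  "hash_ideal": 0},
}

# Each Total column is just the sum of the partition rows.
total_data = sum(p["data_tib"] for p in partitions.values())        # 2.0 TiB
total_read = sum(p["read_ideal_mibs"] for p in partitions.values()) # 21.2 MiB/s
total_hash = sum(p["hash_ideal"] for p in partitions.values())      # 84 h/s
```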
Protocol Changes
The 2.7.2 Hard Fork is scheduled for block 1391330 (or roughly 2024-03-26 14:00 UTC), at which time the following protocol changes will activate:
- The difficulty of a 1-chunk solution increases by 100x to better incentivize full-weave replicas
- An additional pricing transition phase is scheduled to start in November 2024
- A pricing cap of 340 Winston per GiB/minute is implemented until the November pricing transition
- The checkpoint depth is reduced from 50 blocks to 18
- Unnecessary poa2 chunks are rejected early to prevent a low impact spam attack. Even in the worst case this attack would add minimal bloat to the blockchain and thus wasn't a practical exploit. Closing the vector as a matter of good hygiene.
Additional Bug Fixes and Improvements
- Enable RandomX support for macOS and arm/aarch64
- Simplified TLS protocol support
  - See the new configuration parameters tls_cert_file and tls_key_file to configure TLS
- Add several more Prometheus metrics:
- debug-only metrics to track memory performance and processor utilization
- mining performance metrics
- coordinated mining metrics
- metrics to track network characteristics (e.g. partitions covered in blocks, current/scheduled price, chunks per block)
- Introduce a bin/data-doctor utility
  - data-doctor merge can merge multiple storage modules into one
  - data-doctor bench runs a series of read rate benchmarks
- Introduce a new bin/benchmark-packing utility to benchmark a node's packing performance
  - The utility will generate input files if necessary and will process as close to 1 GiB of data as possible while still allowing each core to process the same number of whole chunks.
  - Results are written to a CSV and printed to the console
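The benchmark's sizing rule can be sketched as follows. The 256 KiB chunk size is Arweave's protocol chunk size; the round-down rule is our reading of the "same number of whole chunks per core" description, not the utility's verified implementation.

```python
CHUNK = 256 * 1024   # Arweave protocol chunk size, in bytes
GIB = 1024 ** 3

def benchmark_input_size(cores: int, target: int = GIB) -> int:
    """Largest byte count at most ~1 GiB such that every core processes
    the same whole number of chunks (sketch of the described rule)."""
    chunks_per_core = target // (cores * CHUNK)
    return chunks_per_core * cores * CHUNK
```

For example, with 4 cores the 4096 chunks in a GiB divide evenly (1024 chunks per core), so exactly 1 GiB is processed; with 3 cores the total is rounded down to 4095 chunks.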
Release 2.7.1
This release introduces a hard fork that activates at height 1316410, approximately 2023-12-05 14:00 UTC.
Note: if you are running your own VDF Servers, update the server nodes first, then the client nodes.
Bug fixes
Address Occasional Block Validation Failures on VDF Clients
This release fixes an error that would occasionally cause VDF Clients to fail to validate valid blocks. This could occur following a VDF Difficulty Retarget if the VDF client had cached a stale VDF session with steps computed at the prior difficulty. With this change VDF sessions are refreshed whenever the difficulty retargets.
Stabilize VDF Difficulty Oscillation
This release fixes an error that caused unnecessary oscillation when retargeting VDF difficulty. With this patch the VDF difficulty will adjust smoothly towards a difficulty that will yield a network average VDF speed of 1 second.
Ensure VDF Clients Process Updates from All Configured VDF Servers
This release makes an update to the VDF Client code so that it processes all updates from all configured VDF Servers. Prior to this change a VDF Client would only switch VDF Servers when the active server became non-responsive - this could cause a VDF Client to get "stuck" on one VDF Server even if an alternate server provided better data.
Delay the pricing transition
This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly February 20, 2024.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.7.1
See the Mining Guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.7.0
This release introduces a hard fork that activates at height 1275480, approximately 2023-10-05 07:00 UTC.
New features
Flexible Merkle Tree Combinations
When combining different data transactions, the merkle trees for each data root can be added to the larger merkle tree without being rebuilt or modified. This makes it easier, quicker, and less CPU-intensive to combine together multiple data transactions.
Documentation on Merkle Tree Rebasing: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/README.md
Example Code: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/rebased_merkle_tree.js
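The "no rebuild" property can be illustrated with a toy sketch: the finished subtree roots are combined under a single new parent hash, so neither subtree is rehashed internally. The hashing layout below is illustrative only, not Arweave's exact merkle node format; see the linked documentation for the real construction.

```python
# Toy sketch of rebasing two finished merkle subtrees under one parent.
import hashlib

def H(*parts: bytes) -> bytes:
    """Illustrative hash combiner (not Arweave's exact node format)."""
    return hashlib.sha256(b"".join(parts)).digest()

def combine(left_root: bytes, right_root: bytes, boundary: int) -> bytes:
    """Join two finished subtrees under one new parent node. Only one
    new hash is computed; the subtrees themselves are untouched, hence
    'rebased' rather than rebuilt. `boundary` is the byte offset that
    separates the two data ranges."""
    return H(left_root, right_root, boundary.to_bytes(32, "big"))
```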
VDF Retargeting
The average VDF speed across the network is now tracked and used to increase or decrease the VDF difficulty so as to maintain a roughly 1-second VDF time across the network.
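A stylized version of such a retarget scales the difficulty by the ratio of the 1-second target to the measured step time. The node's actual schedule and smoothing differ; this is only a sketch with assumed names.

```python
TARGET_VDF_SECONDS = 1.0  # the network-wide target step time

def retarget_vdf_difficulty(difficulty: int, measured_step_seconds: float) -> int:
    """If the network computes VDF steps slower than 1 s, lower the
    difficulty; if faster, raise it. Sketch only, not the node's rule."""
    return max(1, round(difficulty * TARGET_VDF_SECONDS / measured_step_seconds))
```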
Bug fixes and other updates
Delay the pricing transition
This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly Dec. 14, 2023.
Memory optimization when mining
This change allows the mining server to periodically reclaim memory. Previously when a miner was configured with a suitably high mining_server_chunk_cache_size_limit
(e.g. 5,000-7,000 per installed GB of RAM) memory usage would creep up, sometimes causing an out of memory error. With this change, that memory usage can be periodically reclaimed, delaying or eliminating the OOM error. Further performance and memory improvements are planned in the next release.
Start from local state
Introduce the start_from_latest_state and start_from_block configuration options, allowing a miner to be launched from its local state rather than downloading the initialization data from peers. Most useful when bootstrapping a testnet.
Ensure genesis transaction data is served via the /tx endpoint
Fix for issue #455
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.7.0
See the Mining Guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.6.10
The release introduces a few improvements, bug fixes, and one new endpoint.
- Fix two memory issues that occasionally caused out-of-memory exceptions:
- When running a VDF server with a slow VDF client, the memory footprint of the VDF server would gradually increase until all memory was consumed;
- When syncing weave data the memory use of a node would spike when copying data locally between neighboring partitions, occasionally triggering an out-of-memory exception
- implement the GET /total_supply endpoint to return the sum of all existing account balances in the latest state, in Winston;
- several performance improvements to the weave sync process;
- remove the following metrics from the /metrics endpoint (together accounting for several thousand individual metrics):
  - erlang_vm_msacc_XXX
  - erlang_vm_allocators
  - erlang_vm_dist_XXX
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.6.10
See the mining guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.6.9
The release introduces a few improvements and bug fixes.
- Improve syncing speed and stability significantly;
- fix the issue where the node connected to a VDF server would occasionally lag behind;
- add support for the VDF server pull interface, removing the requirement of a static IP when using a VDF server; to enable it, run your client with enable vdf_server_pull;
- improve the mining performance of the nodes connected to the VDF server;
- fix the bug introduced in 2.6.4 where two-chunk solutions with the chunks coming from different partitions would be dropped;
- disable the server-side packing/unpacking of chunks by default (used to be enabled but very strictly limited); enable with enable pack_served_chunks;
- add the GET /inflation/{height} endpoint returning the inflation reward for the given height;
- reduce peak memory footprint during node initialization, and baseline memory footprint while syncing.
Note if you are running your own VDF servers, update the server nodes first, then the client nodes.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.6.9
See the mining guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.6.8
This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window by 4 months, and extends the interpolation between the old and new pricing systems from 12 months to 18. This release introduces a hard fork that activates at height 1,189,560, approximately 2023-05-30 16:00 UTC.
Please note that the activation date for this patch is May 30th, as the present version has a real but small effect on end-user storage pricing. You will need to make sure you have upgraded your miner before this time to connect to the network.
Release 2.6.7.1
- Fix a regression introduced by 2.6.7 where packed chunks were not padded correctly;
- tweak the data discovery and syncing a bit.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.6.7.1
See the mining guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.6.7
- Reduce the overhead caused by the inefficient GET /recent_hash_list_diff handler, essentially speeding up (re-)packing;
- fix the bug introduced in 2.6.6 where in-place repacking could store invalid data; clean up the invalid records;
- fix the bug where the node would print the out-of-sync warning in the console when only one (or several out of many) trusted peers are lagging behind.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.6.7
See the mining guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.
Release 2.6.6
The release introduces a few improvements and bug fixes.
- Fix the regression introduced in 2.6.5 where data synchronisation became very slow;
- speed up the in-place repacking of the 2.5 storage;
- choose the default packing rate based on the actual packing latency achieved by the node's processor;
- hard-code the trusted peers to use when no trusted peers are specified explicitly; filter out the peers which fell behind.
The release comes with the prebuilt binaries for the Linux x86_64 platforms.
If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.6.6
See the mining guide for further instructions.
If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.