Conversation

@JoseSK999
Member

Description and Notes

  • Implement `SwiftSyncAgg`, which is just a `u128` newtype.
  • Implement `verify_block_transactions_swiftsync` and `process_block_swiftsync` for AssumeValid SwiftSync validation.
  • New `check_block` function to reduce duplication (merkle root, BIP34 height, witness commitment, max weight).
  • New `max_supply_at_height` function, required for SwiftSync supply validation (see the sketch after this list).
  • Add the first 176 mainnet blocks to test the SwiftSync aggregator and supply.
  • Rename old `blocks.txt` to `regtest_blocks.txt`.
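
For reference, a minimal sketch of what `max_supply_at_height` could look like, assuming it simply mirrors Bitcoin's 210,000-block halving schedule (the actual signature and semantics in the PR may differ):

```rust
/// Upper bound on the coin supply (in satoshis) after `height` blocks,
/// i.e. the sum of all block subsidies for heights 0..=height.
fn max_supply_at_height(height: u32) -> u64 {
    const HALVING_INTERVAL: u64 = 210_000;
    const COIN: u64 = 100_000_000; // satoshis per BTC

    let mut supply = 0u64;
    let mut subsidy = 50 * COIN; // initial subsidy: 50 BTC
    let mut remaining = u64::from(height) + 1; // blocks 0..=height

    while remaining > 0 && subsidy > 0 {
        // Each halving epoch is exactly HALVING_INTERVAL blocks long.
        let blocks_in_epoch = remaining.min(HALVING_INTERVAL);
        supply += blocks_in_epoch * subsidy;
        remaining -= blocks_in_epoch;
        subsidy /= 2; // subsidy halves each epoch (rounds down)
    }
    supply
}
```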

The aggregator addition/subtraction is implemented as wrapping arithmetic with the result of `SHA256(salt || txid || vout)`. The hash needs only one compression round because the salt size is fixed to 19 bytes: 19 + 32 + 4 = 55 bytes of input, the maximum that fits in a single SHA256 block after padding (and 152 secret bits should be enough). On my old computer one hash takes ~380ns.
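
A minimal sketch of the aggregator idea using rust-bitcoin's `sha256` engine; the names and 128-bit truncation here are illustrative, not necessarily the PR's actual code. The point of the construction is that the aggregator returns to zero once every hinted-as-spent output has been both added (at creation) and subtracted (at spend):

```rust
use bitcoin::hashes::{sha256, Hash, HashEngine};
use bitcoin::OutPoint;

/// 128-bit aggregator: add when a hinted-as-spent output is created,
/// subtract when it is spent. Correct hints bring it back to zero.
struct SwiftSyncAgg(u128);

impl SwiftSyncAgg {
    /// SHA256(salt || txid || vout), truncated to 128 bits.
    /// 19 + 32 + 4 = 55 input bytes: a single compression round.
    fn hash_outpoint(salt: &[u8; 19], out: &OutPoint) -> u128 {
        let mut engine = sha256::Hash::engine();
        engine.input(salt);
        engine.input(&out.txid.to_byte_array());
        engine.input(&out.vout.to_le_bytes());
        let digest = sha256::Hash::from_engine(engine).to_byte_array();
        u128::from_le_bytes(digest[..16].try_into().expect("16 bytes"))
    }

    fn add(&mut self, salt: &[u8; 19], out: &OutPoint) {
        self.0 = self.0.wrapping_add(Self::hash_outpoint(salt, out));
    }

    fn sub(&mut self, salt: &[u8; 19], out: &OutPoint) {
        self.0 = self.0.wrapping_sub(Self::hash_outpoint(salt, out));
    }
}
```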

@rustaceanrob

rustaceanrob commented Feb 10, 2026

Is there a particular reason SHA256 was used here? Another option would be SipHash24, which can be salted with random k0 and k1 values at startup. I only point this out because SHA256 has lower throughput than SipHash24, which affects performance given the number of hashes being performed here. The reference implementation I wrote also uses SipHash24, following Bitcoin Core's SaltedOutpointHasher.

Some results from a local benchmark of rust-bitcoin

sha256/engine_input/10  time:   [7.3749 ns 7.3926 ns 7.4070 ns]
                        thrpt:  [1.2574 GiB/s 1.2598 GiB/s 1.2628 GiB/s]
sha256/engine_input/1024
                        time:   [497.33 ns 497.40 ns 497.48 ns]
                        thrpt:  [1.9170 GiB/s 1.9173 GiB/s 1.9176 GiB/s]
siphash24/hash_with_keys/1k
                        time:   [331.07 ns 331.18 ns 331.30 ns]
                        thrpt:  [2.8786 GiB/s 2.8796 GiB/s 2.8806 GiB/s]
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) low severe
siphash24/hash_to_u64_with_keys/1k
                        time:   [331.27 ns 331.44 ns 331.63 ns]
                        thrpt:  [2.8757 GiB/s 2.8773 GiB/s 2.8789 GiB/s]
Found 4 outliers among 100 measurements (4.00%)
  4 (4.00%) high mild
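
For comparison, hashing an `OutPoint` with the `hash_to_u64_with_keys` API benchmarked above might look like this (a sketch; the outpoint serialization is illustrative):

```rust
use bitcoin::hashes::{siphash24, Hash};
use bitcoin::OutPoint;

/// 64-bit keyed hash of an outpoint. k0/k1 would be drawn from a CSPRNG
/// once at startup, playing the role of the salt.
fn siphash_outpoint(k0: u64, k1: u64, out: &OutPoint) -> u64 {
    let mut buf = [0u8; 36];
    buf[..32].copy_from_slice(&out.txid.to_byte_array());
    buf[32..].copy_from_slice(&out.vout.to_le_bytes());
    siphash24::Hash::hash_to_u64_with_keys(k0, k1, &buf)
}
```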

@JoseSK999
Member Author

JoseSK999 commented Feb 10, 2026

Hi @rustaceanrob, thanks for pointing this out! Are the 64 bits of output length good enough or are you doing something like hashing it 4 times (with a counter) to get the four 64-bit limbs?

I guess the output length is not that relevant here, since it's keyed with 128 bits anyway.

@rustaceanrob

rustaceanrob commented Feb 10, 2026

As long as the salt is randomized I think 64 bits is sufficient, also considering this is an assume-valid implementation. SHA256 is certainly more secure, but I just wanted to point out the performance difference if sync speed is the priority for users.

@luisschwab
Member

luisschwab commented Feb 10, 2026

SHA256 is certainly more secure, but I just wanted to point out the performance difference if sync speed is the priority for users.

I don't think this would make a difference in practice, since the bottleneck becomes fetching blocks from P2P instead of waiting on the CPU.

@JoseSK999
Member Author

JoseSK999 commented Feb 10, 2026

So I have done some benches on my old computer, and this would be the OutPoint hashing time in a block with 20,000 hinted-as-spent outputs and 20,000 inputs (40,000 OutPoints total):

  • SHA256: ~14.4ms (360ns per OutPoint).
  • Two SipHash24 digests: ~2.3ms (57ns per OutPoint).
  • One SipHash24 digest: ~1.2ms (30ns per OutPoint).

Either way we will finish much faster than the blocks arrive (we can also parallelize), so I agree with @luisschwab.

But SipHash24 is nice as it's designed to be keyed. I have implemented this with an ugly 19-byte salt so that SHA256 runs in one compression round, but we can keep the agg at 128 bits while actually using a 32-byte salt: two SipHash24 key pairs. That's perhaps a nicer approach?
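
A sketch of that two-key variant, reusing the hypothetical `siphash_outpoint` from the earlier sketch: two independent key pairs (32 secret bytes total) produce the two 64-bit halves of the 128-bit aggregator term.

```rust
/// 128-bit term from two independent SipHash24 digests of the same
/// outpoint, keyed with (keys[0], keys[1]) and (keys[2], keys[3]).
fn siphash_outpoint_u128(keys: &[u64; 4], out: &OutPoint) -> u128 {
    let lo = u128::from(siphash_outpoint(keys[0], keys[1], out));
    let hi = u128::from(siphash_outpoint(keys[2], keys[3], out));
    (hi << 64) | lo
}
```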

@luisschwab
Member

Indeed, I'd go with SipHash24. It will also be more efficient in terms of CPU cycles and power consumption, even if only marginally.

@JoseSK999
Member Author

Updated my comment: a better benchmark shows that two SipHash24 digests run ~6x faster than a single SHA256 for me, and a single digest is ~12x faster.
