
Benchmarks for ZK proofs #173

Merged: 32 commits merged into entropyxyz:master on Feb 13, 2025
Conversation

@dvdplm (Contributor) commented Jan 17, 2025

Adds a set of benchmarks for our ZK proofs, starting with FacProof and AffG.

Also contains a synthetic benchmark that compares the performance of exponentiation with large exponents against first reducing those exponents modulo the totient.
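For context on why that reduction is valid: by Euler's theorem, when gcd(a, n) = 1, a^e ≡ a^(e mod φ(n)) (mod n), so a huge exponent can be replaced by its residue modulo the totient before exponentiating. A minimal stdlib sketch (not from the PR — the crate uses crypto-bigint types; `mod_pow` and the tiny modulus here are purely illustrative):

```rust
/// Toy square-and-multiply modular exponentiation.
fn mod_pow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut acc = 1u128;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    acc
}

fn main() {
    // n = 5 * 7 = 35, so the totient is φ(n) = 4 * 6 = 24.
    let (n, totient) = (35u128, 24u128);
    let (base, big_exp) = (2u128, 1_000_003u128);
    // Since gcd(2, 35) = 1, reducing the exponent modulo φ(n)
    // gives the same result as the full exponentiation.
    assert_eq!(mod_pow(base, big_exp, n), mod_pow(base, big_exp % totient, n));
    println!("both give {}", mod_pow(base, big_exp % totient, n));
}
```

The benchmark in the PR measures which side of that equality is cheaper in practice for the sizes used in the protocol.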

Given that the proof code is not part of the public API (and shouldn't be), this PR introduces a "private_benches" feature that allows a thin(-ish) criterion benchmark wrapper (`benches/zk_proofs.rs`) to call code in `src/private_benches/mod.rs`, where we have access to everything that is `pub(crate)`.

Run the benchmarks with `cargo bench --bench zk_proofs --features private_benches`.

The PR also initializes tracing during the benchmarks, which can be useful when doing optimization work.
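For readers unfamiliar with this pattern, the manifest wiring looks roughly like the following. This is a sketch, not the exact manifest from the PR; the criterion version and feature plumbing shown here are assumptions:

```toml
# synedrion/Cargo.toml (sketch)
[features]
# Exposes pub(crate) internals to the benchmark harness; not for downstream use.
private_benches = []

[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "zk_proofs"
harness = false
required-features = ["private_benches"]
```

The `required-features` key means the bench target is skipped entirely unless the feature is enabled, so ordinary `cargo bench` runs are unaffected.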

codecov bot commented Jan 17, 2025

Codecov Report

Attention: Patch coverage is 6.70241% with 696 lines in your changes missing coverage. Please review.

Project coverage is 89.33%. Comparing base (b20121d) to head (d462b01).

Files with missing lines            Patch %   Lines
synedrion/src/private_benches.rs    0.00%     696 Missing ⚠️
@@            Coverage Diff             @@
##           master     #173      +/-   ##
==========================================
- Coverage   95.39%   89.33%   -6.07%     
==========================================
  Files          38       39       +1     
  Lines       10170    10913     +743     
==========================================
+ Hits         9702     9749      +47     
- Misses        468     1164     +696     


@dvdplm dvdplm self-assigned this Jan 20, 2025
@fjarri (Member) commented Jan 21, 2025

Yes, unfortunately a feature is the only way to do it. There was one already; it was removed in #156 because all the relevant API became public.

@dvdplm dvdplm marked this pull request as ready for review January 21, 2025 10:55
@dvdplm dvdplm requested a review from fjarri January 21, 2025 10:55
This was referenced Jan 22, 2025
Review threads on synedrion/Cargo.toml and synedrion/benches/pow.rs (outdated, resolved)
Review thread on this snippet:

```rust
b.iter_batched(
    || {
        let rng2 = rng.clone();
        proof_inputs(rng2)
```
fjarri (Member):

I wonder if we can just create the inputs once - you won't need giant parameter lists, and the benchmarks will run faster. There's probably not much variation depending on the exact parameters, as long as the proof is correct?

dvdplm (Contributor, Author):

This seems like a clear win, and works well. But I noticed something on this particular benchmark: a 10% regression on verification. I guess it means that proof verification depends quite a bit on how the actual numbers play out, and there's significant variability there. That suggests we should probably not optimise the setup code.

fjarri (Member):

I wonder why that would be the case. Sizes of resulting exponents probably shouldn't be that different, maybe it's the signs that account for it - the vartime operations during the verification can conditionally skip inversion.

Maybe `iter_custom` could work here?

dvdplm (Contributor, Author):

The signs seem more likely than the sizes, indeed.

Bear in mind, though, that we are not setting the two MSBs in the primes yet, so there's a little more variability in the modulus sizes than we'd like. But yeah, it's at most one bit more or less, so it's a bit odd.

Can you elaborate on why `iter_custom` would help here? I'm sure it'd work, I'm just not sure I understand how it'd help clean up the code.

fjarri (Member):

You would be able to create the inputs and measure the execution time in the same closure, so you won't need `proof_inputs()` returning a tuple of ten things. I am not 100% sure it will work, but it might.

dvdplm (Contributor, Author):

Hmm, I don't think it will help us here, though. Or I'm still misunderstanding. It'd look sort of like this:

```rust
b.iter_custom(|iters| {
    let (a, b, c, d, e, f, g, h, /* … */) = proof_inputs(rng);
    let start = Instant::now();
    for _i in 0..iters {
        black_box(AffGProof::<Params>::new(
            &mut rng,
            AffGSecretInputs { /* some inputs here */ },
            AffGPublicInputs { /* more inputs here */ },
            &rp_params,
            &aux,
        ));
    }
    start.elapsed()
})
```

Wouldn't this have the same problem, that performance measurements become inaccurate when the same inputs are used repeatedly?

(There's also a stupid problem with how this benchmark is set up: I do not have access to `std::time::Instant` here; for some reason it's not in `core::time`.)

fjarri (Member):

I think what I had in mind is that you just disband `proof_inputs` and create the values in place in the benchmark closure (since it's only used there). The inputs will still be randomly generated on each closure run, so no repetition.
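The shape being discussed can be sketched with plain std. Criterion's `iter_custom` hands the closure an iteration count and expects the measured `Duration` back; `expensive_op` below is a hypothetical stand-in for the proof construction, and the exact setup shown is an assumption, not the PR's code:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Hypothetical stand-in for AffGProof::new and friends.
fn expensive_op(seed: u64) -> u64 {
    (0..10_000u64).fold(seed, |acc, i| acc.wrapping_mul(6364136223846793005).wrapping_add(i))
}

/// What an `iter_custom`-style closure does: build inputs in place
/// (excluded from the measurement), then time only the hot loop and
/// return the elapsed Duration.
fn measure(iters: u64) -> Duration {
    let input = 42u64; // setup, not timed
    let start = Instant::now();
    for _ in 0..iters {
        black_box(expensive_op(black_box(input)));
    }
    start.elapsed()
}

fn main() {
    let elapsed = measure(1_000);
    println!("1000 iterations took {elapsed:?}");
}
```

`black_box` keeps the optimizer from hoisting the computation out of the loop; the trade-off raised above still applies, since inputs built once per closure call are reused across the `iters` inner iterations.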

@fjarri (Member) commented Feb 11, 2025

I guess this one goes in next now that #170 is merged?

@dvdplm (Contributor, Author) commented Feb 13, 2025

@fjarri Good to go?

@fjarri merged commit 2587f5a into entropyxyz:master on Feb 13, 2025 (9 of 11 checks passed)
@github-actions github-actions bot locked and limited conversation to collaborators Feb 13, 2025