Benchmarks for ZK proofs #173
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #173      +/-   ##
==========================================
- Coverage   95.39%   89.33%    -6.07%
==========================================
  Files          38       39        +1
  Lines       10170    10913      +743
==========================================
+ Hits         9702     9749       +47
- Misses        468     1164      +696

☔ View full report in Codecov by Sentry.
Yes, unfortunately a feature is the only way to do it. There was one already; it was removed in #156 because all the relevant API became public.
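A minimal sketch of what such feature-gating looks like (illustrative, not the exact diff from this PR):

// In src/lib.rs: the bench-only module is compiled into the library only
// when the feature is enabled, so it never leaks into regular builds.
#[cfg(feature = "private_benches")]
pub mod private_benches;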
b.iter_batched(
    || {
        let rng2 = rng.clone();
        proof_inputs(rng2)
I wonder if we can just create the inputs once - you won't need giant parameter lists, and the benchmarks will run faster. There's probably not much variation depending on the exact parameters, as long as the proof is correct?
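A minimal sketch of that suggestion, assuming criterion's plain Bencher::iter; proof_inputs is from this PR, while prove is a hypothetical stand-in for the actual proof constructor:

// Create the inputs once, outside the measured closure, and reuse them
// for every iteration instead of regenerating them via iter_batched.
let inputs = proof_inputs(rng.clone());
b.iter(|| black_box(prove(&inputs)));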
This seems like a clear win, and works well. But I noticed something on this particular benchmark: a 10% regression on verification. I guess it means that the proof verification is quite dependent on how the actual numbers play out and there's significant variability there. That suggests we should probably not optimise the setup code.
I wonder why that would be the case. The sizes of the resulting exponents probably shouldn't be that different; maybe it's the signs that account for it - the vartime operations during the verification can conditionally skip an inversion.
Maybe iter_custom could work here?
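For illustration, a self-contained sketch (toy u64 arithmetic, not the crate's actual types or API) of how a vartime signed-exponent operation can skip the inversion depending on the sign:

// Square-and-multiply modular exponentiation.
fn modpow(base: u64, mut exp: u64, m: u64) -> u64 {
    let (mut acc, mut b, m) = (1u128, (base % m) as u128, m as u128);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        exp >>= 1;
    }
    acc as u64
}

// For prime m, a^(m-2) mod m is the inverse of a (Fermat's little theorem).
fn modinv(a: u64, m: u64) -> u64 {
    modpow(a, m - 2, m)
}

fn pow_signed_vartime(base: u64, exp: i64, m: u64) -> u64 {
    let abs = modpow(base, exp.unsigned_abs(), m);
    // The extra inversion runs only for negative exponents, so the timing
    // depends on the signs the inputs happen to produce.
    if exp < 0 { modinv(abs, m) } else { abs }
}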
The signs seem more likely than the sizes indeed.
Bear in mind, though, that we are not setting the two MSBs in the primes yet, so there's a little more variability in the modulus sizes than we'd like. But yeah, it's at most one bit either way, so it's a bit odd.
Can you elaborate on why iter_custom would help here? I'm sure it'd work, I'm just not sure I understand how it'd help clean up the code.
You would be able to create the inputs and measure the execution time in the same closure, so you won't need proof_inputs() returning a tuple of ten things. I am not 100% sure it will work, but it might.
Hmm, I don't think it will help us here though. Or maybe I'm still misunderstanding. It'd look sort of like this:
b.iter_custom(|iters| {
    let (a, b, c, d, e, f, g, h, …, …) = proof_inputs(rng);
    let start = Instant::now();
    for _i in 0..iters {
        black_box(AffGProof::<Params>::new(
            &mut rng,
            AffGSecretInputs { /* some inputs here */ },
            AffGPublicInputs { /* more inputs here */ },
            &rp_params,
            &aux,
        ));
    }
    start.elapsed()
})
Wouldn't this have the same problem, that performance measurements become inaccurate when the same inputs are used repeatedly?
(There's also a stupid problem because of how this benchmark is set up: I do not have access to std::time::Instant here; for some reason it's not in core::time.)
I think what I had in mind is that you just disband proof_inputs and create the values in place in the benchmark closure (since it's only used there). The inputs will still be randomly generated on each closure run, so no repetition.
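A hedged sketch of that combination, using iter_custom to generate fresh inputs on every iteration while timing only the proof construction (prove again stands in for the actual constructor call):

use std::time::{Duration, Instant};

b.iter_custom(|iters| {
    let mut total = Duration::ZERO;
    for _ in 0..iters {
        // Fresh, randomly generated inputs on every iteration...
        let inputs = proof_inputs(&mut rng);
        // ...but only the proving time goes into the measurement.
        let start = Instant::now();
        black_box(prove(&inputs));
        total += start.elapsed();
    }
    total
});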
I guess this one goes in next, now that #170 is merged?
@fjarri Good to go?
Adds a set of benchmarks for our ZK proofs, starting with FacProof and AffG. Also contains a synthetic benchmark that measures the performance of exponentiations with large exponents vs. first reducing such exponents modulo the totient.
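For context, a minimal self-contained illustration (toy numbers, not the PR's code) of the idea behind that synthetic benchmark: by Euler's theorem, x^e ≡ x^(e mod φ(n)) (mod n) whenever gcd(x, n) = 1, so a huge exponent can be reduced modulo the totient before exponentiating.

fn modpow(mut base: u128, mut exp: u128, m: u128) -> u128 {
    let mut acc = 1u128;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

fn main() {
    // n = p * q for small primes p, q; φ(n) = (p - 1) * (q - 1).
    let (p, q) = (1009u128, 1013u128);
    let (n, phi) = (p * q, (p - 1) * (q - 1));
    let (x, e) = (7u128, 123_456_789_123u128);
    // Same result, much smaller exponent after reduction by the totient.
    assert_eq!(modpow(x, e, n), modpow(x, e % phi, n));
}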
Given that the proof code is not part of the public API (and shouldn't be), this PR introduces a "private_benches" feature that allows a thin(-ish) criterion benchmark wrapper (benches/zk_proofs.rs) to call code in src/private_benches/mod.rs, where we have access to everything that is pub(crate). Run the benchmarks with cargo bench --bench zk_proofs --features private_benches.
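A hedged sketch of that wiring (the crate and entry-point names below are assumptions, not the PR's exact code):

// benches/zk_proofs.rs: thin criterion wrapper. src/private_benches/mod.rs
// is compiled only under the "private_benches" feature and can reach
// pub(crate) items because it lives inside the crate; this wrapper just
// calls the public entry points that module exposes.
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_zk_proofs(c: &mut Criterion) {
    // Hypothetical entry point; the real module may expose several.
    my_crate::private_benches::fac_proof(c);
}

criterion_group!(benches, bench_zk_proofs);
criterion_main!(benches);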
The PR also initializes tracing during the benchmarks, which can be useful when doing optimization work.