feat(pallet-api/fungibles): benchmark read functions #284
Conversation
```rust
// Proof size is based on `MaxEncodedLen`, not hardware.
#[test]
fn ensure_expected_proof_size_does_not_change() {
```
If you want to double-check this, please try it on your machine:

```shell
./target/release/pop-node benchmark pallet \
  --chain=dev \
  --wasm-execution=compiled \
  --pallet=fungibles \
  --steps=50 \
  --repeat=20 \
  --json \
  --template ./scripts/pallet-weights-template.hbs \
  --output=./pallets/api/src/fungibles/weights.rs \
  --extrinsic=
```
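For context on what that command produces, a generated read-function weight in `weights.rs` has roughly this shape (a minimal sketch: the trait, the `total_supply` function name, and all numbers are illustrative, not the PR's actual output):

```rust
use core::marker::PhantomData;
use frame_support::weights::Weight;

pub trait WeightInfo {
    fn total_supply() -> Weight;
}

pub struct SubstrateWeight<T>(PhantomData<T>);
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
    /// Storage: `Assets::Asset` (r:1 w:0)
    fn total_supply() -> Weight {
        // The first component is ref time (hardware-dependent); the second
        // is proof size, which is derived from `MaxEncodedLen` bounds.
        Weight::from_parts(10_000_000, 3675)
            .saturating_add(T::DbWeight::get().reads(1))
    }
}
```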
Would be nice if we also assert against the `MaxEncodedLen` of the structs from pallet-assets, maybe? Like you did it previously.
Please see the latest commit.
I meant a simple assert like the following:

```rust
assert_eq!(allowance.proof_size(), Approval::<Balance, Balance>::max_encoded_len() as u64);
```

But that is incorrect. I'm not so sure whether all the additional test code in your last commit is worth it. It increases the maintenance burden, and writing benchmarks is imo a reason that we don't need to test the results in the utmost detail.
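Spelled out as a test, the idea above would look something like this (a sketch only; `SubstrateWeight`, `Test`, and the `Balance` alias are assumed from the pallet's test setup, and, as noted, the equality does not actually hold, since the benchmarked proof size includes storage overhead beyond the value's encoded length):

```rust
use codec::MaxEncodedLen;
use pallet_assets::Approval;

#[test]
fn allowance_proof_size_equals_max_encoded_len() {
    let allowance = SubstrateWeight::<Test>::allowance();
    // Compare the benchmarked proof size against the struct's upper bound.
    assert_eq!(
        allowance.proof_size(),
        Approval::<Balance, Balance>::max_encoded_len() as u64
    );
}
```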
Yeah. Tbh when I saw your comment last night I thought the same, so had the same misunderstanding.
My view is that if we rely on the accuracy of benchmarks/weights for dispatchables, recreating it for reads seems unnecessary. There is a framework for benchmarking and getting in-depth results out of it, so it's reasonable to rely on it, as that is what underpins the ecosystem. Fine for some additional sanity checks though.
Whilst I admire the level of detail, clarity and accuracy of these tests, my only reservation would be whether this would become an expectation moving forward. My concern would be the additional time required to implement for all the other APIs to remain consistent versus the value it brings.
Same, therefore I think it is best to remove.
Thanks for the clear, detailed docs and tests though, Peter; I was able to immediately grasp the detailed workings of it!
It was quite tedious to do! Although I'm thankful for the learnings :) The last test, which verifies that the proof size remains the same, will effectively provide the same testing outcome (making sure the `max_encoded_len` does not change). Please see the latest commit, reverting the changes.
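For reference, that remaining check is along these lines (a sketch; the weight function and the expected constant are illustrative, not the PR's actual values):

```rust
// Snapshot-style check: because proof size is derived from `MaxEncodedLen`
// rather than hardware, this fails whenever the encoded size of the
// underlying pallet-assets storage types changes.
#[test]
fn ensure_expected_proof_size_does_not_change() {
    assert_eq!(SubstrateWeight::<Test>::allowance().proof_size(), 3675);
}
```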
Really nice! Left a few nitpicks but then good to go!
Codecov Report
Attention: Patch coverage is

```diff
@@             Coverage Diff              @@
##           daan/api     #284      +/-   ##
============================================
+ Coverage     42.59%   46.97%   +4.37%
============================================
  Files            47       47
  Lines          4139     4460     +321
  Branches       4139     4460     +321
============================================
+ Hits           1763     2095     +332
+ Misses         2319     2303      -16
- Partials         57       62       +5
```
Minor suggestion.
I also think the level of detail of the tests needs considering, as mentioned inline.
Solves #127