"bencher" output format doesn't quite match what libtest creates #704
Thanks for adding the `bencher` output format. I have both libtest based benchmarks as well as criterion based ones, and I use this output format to have better comparability.

However, I noticed that the formats don't quite match up, which, I will say, kind of runs against the stated purpose ("it may be useful to support external tools that can parse this output"). Comparing output generated by libtest benchmarks to that of the criterion based ones (some of those criterion benchmarks are in different groups; I haven't checked whether that has any impact), there are two main differences that I can make out, illustrated in the sketch below:

- libtest emits thousands separators in the reported timings; criterion's `bencher` output does not
- libtest pads benchmark names so that the result columns line up; criterion does not

Would it make sense and be possible to align the two more closely?
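For illustration, the two formats being compared look roughly like this (the benchmark names and numbers are made up, and the exact spacing is approximate):

```
# libtest, e.g. `cargo bench` with the built-in harness:
test bench_lookup_elf      ... bench:       1,234 ns/iter (+/- 56)
test bench_symbolize_dwarf ... bench:     987,654 ns/iter (+/- 12,345)

# criterion with the bencher output format, at the time of this report:
test bench_lookup_elf ... bench: 1234 ns/iter (+/- 56)
test bench_symbolize_dwarf ... bench: 987654 ns/iter (+/- 12345)
```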
d-e-s-o added a commit to d-e-s-o/criterion.rs that referenced this issue on Jul 21, 2023:

As mentioned in bheisler#704, the output generated when using the `bencher` output format does not quite match up with that of libtest. One reason being that thousands-separators are not emitted. With this change we adjust the formatting logic for integers (which is only used in `bencher` style reports) to emit thousands-separators, as libtest does. Signed-off-by: Daniel Müller <deso@posteo.net>
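For context, here is a minimal sketch of the kind of integer formatting the commit describes. This is an illustration, not criterion's actual code; `format_with_separators` is a hypothetical name:

```rust
/// Format an integer with `,` as the thousands separator,
/// e.g. 1234567 -> "1,234,567", matching what libtest emits.
fn format_with_separators(mut n: u64) -> String {
    let mut groups = Vec::new();
    loop {
        let group = n % 1000;
        n /= 1000;
        if n == 0 {
            // The most significant group is not zero-padded.
            groups.push(group.to_string());
            break;
        }
        groups.push(format!("{group:03}"));
    }
    groups.reverse();
    groups.join(",")
}

fn main() {
    assert_eq!(format_with_separators(987), "987");
    assert_eq!(format_with_separators(1_000), "1,000");
    assert_eq!(format_with_separators(1_234_567), "1,234,567");
}
```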
Thousands separators don't seem too hard to add; see #705. As for the padding/alignment: I believe, given the current state of the reporter logic, that this requires a bit of restructuring of the internals to be feasible. Right now the reporter does not know the test names ahead of time, from what I gather. If you have any suggestions, please let me know.
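To illustrate why knowing the names ahead of time matters: the pad width is the length of the longest name, which must be known before the first result line is printed. A minimal sketch, assuming (unlike criterion's reporter today, per the comment above) that all results are available at once:

```rust
// Sketch of the alignment libtest produces. Computing the pad width
// requires every benchmark name before the first line goes out.
fn print_aligned(results: &[(&str, u64, u64)]) {
    let width = results
        .iter()
        .map(|(name, _, _)| name.len())
        .max()
        .unwrap_or(0);
    for (name, ns_per_iter, dev) in results {
        // Pad each name to the longest one so the columns line up.
        println!("test {name:<width$} ... bench: {ns_per_iter:>10} ns/iter (+/- {dev})");
    }
}

fn main() {
    print_aligned(&[
        ("bench_lookup_elf", 1_234, 56),
        ("bench_symbolize_dwarf", 987_654, 12_345),
    ]);
}
```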
d-e-s-o added a commit to d-e-s-o/blazesym that referenced this issue on Jul 21, 2023:

Unfortunately criterion with the bencher output format does not align benchmark results properly ([0]). As a result it is quite hard to read the summary from a glimpse. To work around this problem, this change "manually" adjusts the names of our benchmark functions so that the corresponding results in the summary report are aligned properly. [0]: bheisler/criterion.rs#704 Signed-off-by: Daniel Müller <deso@posteo.net>
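The workaround amounts to padding the benchmark name strings by hand. A sketch of the idea, using hypothetical names (not blazesym's actual benchmarks) and criterion's usual `bench_function` API:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn benches(c: &mut Criterion) {
    // Trailing spaces pad each name to the same length, so the
    // "... bench:" columns of the bencher-format summary line up
    // even though criterion does not align them itself.
    c.bench_function("symbolize_dwarf ", |b| b.iter(|| 2 + 2));
    c.bench_function("symbolize_gsym  ", |b| b.iter(|| 2 + 2));
    c.bench_function("lookup_elf      ", |b| b.iter(|| 2 + 2));
}

criterion_group!(benches_group, benches);
criterion_main!(benches_group);
```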
d-e-s-o added a commit to libbpf/blazesym that referenced this issue on Jul 21, 2023:

Unfortunately criterion with the bencher output format does not align benchmark results properly ([0]). As a result it is quite hard to read the summary from a glimpse. To work around this problem, this change "manually" adjusts the names of our benchmark functions so that the corresponding results in the summary report are aligned properly. [0]: bheisler/criterion.rs#704 Signed-off-by: Daniel Müller <deso@posteo.net>
lemmih pushed a commit that referenced this issue on Aug 10, 2023:

As mentioned in #704, the output generated when using the `bencher` output format does not quite match up with that of libtest. One reason being that thousands-separators are not emitted. With this change we adjust the formatting logic for integers (which is only used in `bencher` style reports) to emit thousands-separators, as libtest does. Signed-off-by: Daniel Müller <deso@posteo.net>