
Conversation

@amritamishra01

Summary

This pull request adds support for a new --extra-metadata argument to the compare_to command.
It allows users to include selected metadata keys as additional columns in the comparison table output.

Motivation

The current compare_to output does not show metadata values such as CPU model, OS version, Python build info, etc.
This feature was requested in Issue #162 (“Display additional information through table”) to make analysis more informative.

What This PR Does

Adds a new CLI flag:
--extra-metadata "key1,key2,..."

Passes metadata keys into _compare.py

Updates table generation to show metadata for both reference and changed benchmarks

Ensures geometric mean rows remain aligned
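
The metadata lookup described above could be sketched roughly like this. This is a hedged illustration, not the PR's actual code: the helper name `collect_extra_metadata` is hypothetical, and only the comma-separated key format comes from the PR description (in pyperf, per-benchmark metadata is available as a plain dict via `Benchmark.get_metadata()`):

```python
def collect_extra_metadata(metadata, keys):
    # Pick only the requested keys; fall back to '-' so that table
    # columns stay aligned even when a benchmark lacks a key.
    return {key: str(metadata.get(key, '-')) for key in keys}

# The --extra-metadata value is a comma-separated string of keys.
keys = "cpu_model_name,python_version".split(",")
ref_meta = {"cpu_model_name": "Intel i7-9700K", "python_version": "3.11.4"}
print(collect_extra_metadata(ref_meta, keys))
# → {'cpu_model_name': 'Intel i7-9700K', 'python_version': '3.11.4'}
```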

Testing

Manually tested using sample benchmark files

Verified table output for:

No metadata requested

Single metadata key

Multiple metadata keys

Mixed significant / non-significant benchmarks

Notes

Feedback is welcome! I’m happy to update the PR if changes are needed.

Member

@vstinner vstinner left a comment

Please add at least one test.

@amritamishra01
Author

Hi @vstinner! I’ve pushed the updated changes to the add-extra-metadata branch.
Here’s what I did:

Integrated the --extra-metadata feature cleanly into the existing compare_to flow.

Fixed implementation issues that caused errors when formatting metadata output.

Resolved merge conflicts during branch sync and ensured the final code contains only the intended changes.

Added a dedicated test file:
pyperf/tests/test_extra_metadata.py

Verified the full test suite: all 174 tests pass.

Ensured that:

Baseline behavior of compare_to is unchanged

Metadata output only appears when explicitly requested

No regressions in table, CLI, verbose, or non-significant cases.

Updated the PR branch cleanly using cherry-pick without opening a new PR.
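
The "metadata output only appears when explicitly requested" guarantee above can be illustrated with a small guard. This is a sketch only; `format_extra_columns` is a hypothetical helper, not the actual pyperf code:

```python
def format_extra_columns(extra_metadata, metadata):
    # With no --extra-metadata argument, emit no extra columns at all,
    # leaving the baseline compare_to output unchanged.
    if not extra_metadata:
        return []
    return [str(metadata.get(key, '-')) for key in extra_metadata]

print(format_extra_columns(None, {"cpu_model_name": "Intel i7"}))  # → []
print(format_extra_columns(["cpu_model_name"],
                           {"cpu_model_name": "Intel i7"}))  # → ['Intel i7']
```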

Happy to make any additional adjustments if required!

help="Comma-separated metadata keys to include in comparison output."
)

>>>>>>> 868272e (Temp: save working changes before branch switch)
Member

merge conflict?

Comment on lines 294 to 299
result = CompareResult(
    ref,
    changed,
    min_speed,
    extra_metadata=self.extra_metadata
)
Member

Suggested change
-result = CompareResult(
-    ref,
-    changed,
-    min_speed,
-    extra_metadata=self.extra_metadata
-)
+result = CompareResult(ref, changed, min_speed,
+                       extra_metadata=self.extra_metadata)

Comment on lines 130 to 131
# EXTRA METADATA SUPPORT
if self.extra_metadata:
Member

Suggested change
-# EXTRA METADATA SUPPORT
-if self.extra_metadata:
+# Extra metadata support
+if self.extra_metadata:

if (self.ref.benchmark.get_nvalue() > 1
        or self.changed.benchmark.get_nvalue() > 1):

if (self.ref.benchmark.get_nvalue() > 1 or self.changed.benchmark.get_nvalue() > 1):
Member

Please leave this code unchanged; I prefer to fit into 80 columns.

import pyperf


def create_temp_benchmark(tmpdir, data):
Member

I would prefer to reuse one of the existing benchmark files in pyperf/tests:

mult_list_py36.json
mult_list_py36_tags.json
mult_list_py37.json
mult_list_py37_tags.json
mult_list_py38.json
telco.json

return stdout, stderr


def test_compare_to_with_extra_metadata(tmpdir):
Member

Can you add a test to pyperf/tests/test_perf_cli.py instead?

return path


def run_command(cmd):
Member

pyperf/tests/test_perf_cli.py already has a run_command() method.

Comment on lines 116 to 120
cmd.add_argument(
    "--extra-metadata",
    type=str,
    help="Comma-separated metadata keys to include in comparison output."
)
Member

Suggested change
-cmd.add_argument(
-    "--extra-metadata",
-    type=str,
-    help="Comma-separated metadata keys to include in comparison output."
-)
+cmd.add_argument(
+    "--extra-metadata",
+    type=str,
+    help="Comma-separated metadata keys to include in comparison output")
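
For reference, argparse stores the flag's value as a single string, so the command still needs to split it on commas. A standalone sketch, using argparse directly rather than pyperf's actual command wiring:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--extra-metadata",
    type=str,
    help="Comma-separated metadata keys to include in comparison output")

args = parser.parse_args(["--extra-metadata", "cpu_model_name,python_version"])
# Dashes in the flag name become underscores in the parsed namespace.
keys = args.extra_metadata.split(",") if args.extra_metadata else []
print(keys)  # → ['cpu_model_name', 'python_version']
```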

@amritamishra01
Author

Hi @vstinner, thanks for the review!

I have addressed all your comments:

Moved the test to pyperf/tests/test_perf_cli.py and used the existing create_bench helper.

Fixed the merge conflict in main.py and cleaned up the argument definition.

Fixed the code style in _compare.py (indentation, comments, and line wrapping) as requested.

Tests are passing locally. Please let me know if there is anything else!
