
[None][chore] re-enable benchmark test in post merge #12035

Open
zhenhuaw-me wants to merge 4 commits into NVIDIA:main from zhenhuaw-me:enable-benchmark-test

Conversation


@zhenhuaw-me zhenhuaw-me commented Mar 9, 2026

Also remove duplicate tests.

Summary by CodeRabbit

  • Tests
    • Consolidated visual generation benchmark tests by merging multiple online and offline test variants into single unified functions, improving test organization and simplifying benchmark execution.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@zhenhuaw-me
Member Author

/bot run

Also remove duplicate tests.

Signed-off-by: Zhenhua Wang <zhenhuaw@nvidia.com>
@coderabbitai
Contributor

coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

Walkthrough

This PR consolidates multiple benchmark test variants in the visual_gen test suite into unified functions. The online benchmark test now accepts an additional tmp_path parameter and combines the behavior of the previous variants into a single run; offline benchmarks are unified in the same way. The test-list configuration is updated to reference the consolidated tests.
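As a rough illustration of the consolidated shape (the function body, file names, and report format here are assumptions for illustration, not the PR's actual code), a unified pytest test using the built-in tmp_path fixture might look like:

```python
import json


def test_online_benchmark(tmp_path):
    """Hypothetical consolidated test: run the benchmark once and check
    both the console result and the saved report (previously two tests)."""
    report = tmp_path / "benchmark_result.json"
    # Stand-in for launching the real benchmark; here we just write a report.
    report.write_text(json.dumps({"status": "ok", "requests": 8}))

    data = json.loads(report.read_text())
    assert data["status"] == "ok"  # the plain "run" variant's check
    assert report.exists()         # the "save_result" variant's check
```

pytest supplies tmp_path as a per-test pathlib.Path, so the saved-result assertions no longer need a separate test function.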

Changes

Cohort / File(s) Summary
Test Consolidation
tests/integration/defs/visual_gen/test_visual_gen_benchmark.py
Consolidates duplicate online and offline benchmark test variants into single unified functions. Removes test_online_benchmark_video, test_online_benchmark_save_result, test_offline_benchmark, and test_offline_benchmark_save_result. Introduces updated test_online_benchmark (with tmp_path parameter) and test_offline_benchmark that combine previous variant behavior and assertions.
Configuration Update
tests/integration/test_lists/test-db/l0_dgx_b200.yml
Adds two consolidated benchmark tests to the l0_dgx_b200 integration test list: test_offline_benchmark and test_online_benchmark.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check ⚠️ Warning — PR description is incomplete and lacks critical information required by the template. Resolution: fill in the Description section explaining the issue and solution, add a Test Coverage section listing relevant tests, and complete the PR Checklist items as appropriate.
✅ Passed checks (2 passed)
  • Title check ✅ Passed — The title clearly summarizes the main change: re-enabling benchmark tests and removing duplicates in the test suite.
  • Docstring Coverage ✅ Passed — Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (3)
tests/integration/defs/visual_gen/test_visual_gen_benchmark.py (3)

304-312: Same redundant assertion pattern.

Same issue as test_online_benchmark: the check=True parameter makes the returncode assertion redundant.

🔧 Proposed fix
     result = subprocess.run(
         cmd,
         stdout=subprocess.PIPE,
         stderr=subprocess.PIPE,
         text=True,
         check=True,
     )

-    assert result.returncode == 0
     assert "Benchmark Result (VisualGen)" in result.stdout
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/visual_gen/test_visual_gen_benchmark.py` around lines
304 - 312, The subprocess.run invocation sets check=True, so the subsequent
assertion assert result.returncode == 0 is redundant; remove the redundant
assert or change check to False if you intend to assert returncode
manually—specifically update the subprocess.run call in the test (the call that
assigns result from subprocess.run(..., check=True)) and delete the following
assert result.returncode == 0 to avoid duplicate checks.

240-248: Redundant assertion after check=True.

When subprocess.run() is called with check=True, it raises CalledProcessError on non-zero exit codes. The assertion on line 248 is therefore redundant and will never fail (if the returncode were non-zero, the exception would have already been raised).

🔧 Proposed fix
     result = subprocess.run(
         cmd,
         stdout=subprocess.PIPE,
         stderr=subprocess.PIPE,
         text=True,
         check=True,
     )

-    assert result.returncode == 0
     assert "Benchmark Result (VisualGen)" in result.stdout
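To make the reviewer's point concrete, here is a small self-contained sketch (not the test file's code) showing that check=True raises before any returncode assertion could run:

```python
import subprocess
import sys

# With check=True, a non-zero exit raises CalledProcessError immediately,
# so a follow-up "assert result.returncode == 0" can never fail.
try:
    subprocess.run(
        [sys.executable, "-c", "raise SystemExit(3)"],
        capture_output=True,
        text=True,
        check=True,
    )
    print("no exception")  # unreachable when the child exits non-zero
except subprocess.CalledProcessError as exc:
    print("raised with returncode", exc.returncode)
```

The non-zero exit code is available on the exception (exc.returncode), so a test that wants to inspect it manually should pass check=False instead.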
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/visual_gen/test_visual_gen_benchmark.py` around lines
240 - 248, The assertion checking result.returncode is redundant because
subprocess.run is called with check=True; either remove the assert
result.returncode == 0 or change subprocess.run(..., check=False) and then
assert the return code and/or inspect stderr; locate the subprocess.run call in
this test and remove the final assert (or switch check to False if you intend to
assert manually) so the test behavior is consistent with how errors are handled.

36-36: Consider importing the module instead of the function directly.

As per the coding guidelines, prefer importing the module rather than individual functions:

-from tensorrt_llm._utils import get_free_port
+from tensorrt_llm import _utils

Then use _utils.get_free_port() at line 101.

As per coding guidelines: "Import the module, not individual classes or functions (e.g., use from package.subpackage import foo then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)"
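For illustration only (the real helper lives in tensorrt_llm._utils and is not reproduced here), the common idiom behind a free-port helper like this, written so a caller can use it module-style, is:

```python
import socket


def get_free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0.

    This is the usual idiom for such helpers; the actual
    tensorrt_llm._utils.get_free_port implementation may differ.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```

With the module-style import, the call site reads _utils.get_free_port(), which keeps the helper's origin visible at a glance.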

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/visual_gen/test_visual_gen_benchmark.py` at line 36,
Replace the direct function import with a module import so callers use the
module namespace; change the import of get_free_port to import
tensorrt_llm._utils as _utils and update the call site(s) (e.g., where
get_free_port is invoked around line 101) to use _utils.get_free_port(). Ensure
any other references to get_free_port are updated to the module-qualified name.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4cb1d9a9-89a7-4b4d-84d2-ff1fbd9fb030

📥 Commits

Reviewing files that changed from the base of the PR and between d704b5e and c8ea32e.

📒 Files selected for processing (2)
  • tests/integration/defs/visual_gen/test_visual_gen_benchmark.py
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml

@zhenhuaw-me zhenhuaw-me force-pushed the enable-benchmark-test branch from c8ea32e to 9d57ca7 on March 9, 2026 12:21
@tensorrt-cicd
Collaborator

PR_Github #38263 [ run ] triggered by Bot. Commit: 9d57ca7

@tensorrt-cicd
Collaborator

PR_Github #38263 [ run ] completed with state SUCCESS. Commit: 9d57ca7
/LLM/main/L0_MergeRequest_PR pipeline #29646 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


@zhenhuaw-me
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38274 [ run ] triggered by Bot. Commit: b082076

@tensorrt-cicd
Collaborator

PR_Github #38274 [ run ] completed with state SUCCESS. Commit: b082076
/LLM/main/L0_MergeRequest_PR pipeline #29656 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


@zhenhuaw-me
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #38344 [ run ] triggered by Bot. Commit: 47373af

Collaborator

@chang-l chang-l left a comment


Maybe we should explicitly enable the post-merge stage CI to double-check before merging?

@tensorrt-cicd
Collaborator

PR_Github #38344 [ run ] completed with state SUCCESS. Commit: 47373af
/LLM/main/L0_MergeRequest_PR pipeline #29719 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again


@zhenhuaw-me
Member Author

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

