[TESTS] Use FP32 inference precision, FP16 KV cache precision for pipelines #25

Triggered via pull request on January 6, 2025, 18:34
Status: Cancelled
Total duration: 21m 47s
Artifacts: 1

Workflow: genai-tools.yml
on: pull_request
Jobs
- Download OpenVINO (1m 5s)
- Matrix: LLM bench tests
- Matrix: WWB tests
- ci/gha_overall_status_llm_bench (0s)
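
The job graph above (a download job feeding two test matrices, plus an aggregate status check) suggests a workflow where the test jobs depend on the OpenVINO download and fan out over a Python-version matrix. A minimal sketch of such a layout is shown below; the job ids, runner labels, step contents, and version list are assumptions for illustration, not the repository's actual genai-tools.yml.

```yaml
# Hypothetical sketch only; job ids, runner labels, and steps are assumptions.
name: genai tools

on:
  pull_request:

jobs:
  openvino_download:
    name: Download OpenVINO
    runs-on: ubuntu-22.04
    steps:
      # Placeholder for fetching the OpenVINO archive shared by the test jobs.
      - run: echo "download OpenVINO build here"

  llm_bench:
    name: LLM bench tests (${{ matrix.python-version }})
    needs: openvino_download
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        python-version: ["3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Placeholder for the actual llm_bench test invocation.
      - run: echo "run LLM bench tests here"
```

The WWB tests matrix would follow the same pattern, and the aggregate ci/gha_overall_status_llm_bench job would list both test jobs in its needs and report a single combined check.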

Annotations

5 errors and 1 warning

Errors
- LLM bench tests (3.11): Canceling since a higher priority waiting request for 'refs/pull/1485/merge-llm-bench-python' exists
- LLM bench tests (3.11): The operation was canceled.
- WWB tests (3.11): Canceling since a higher priority waiting request for 'refs/pull/1485/merge-llm-bench-python' exists
- WWB tests (3.11): The operation was canceled.
- ci/gha_overall_status_llm_bench: Process completed with exit code 1.

Warning
- ci/gha_overall_status_llm_bench: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
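
The two "Canceling since a higher priority waiting request …" errors are what GitHub Actions reports when a newer run is queued in the same concurrency group and cancel-in-progress is enabled; for a pull_request event, github.ref expands to refs/pull/1485/merge, which matches the group name in the message. A minimal sketch of such a block follows; the exact group expression used by genai-tools.yml is an assumption.

```yaml
# Hypothetical concurrency block; the real group expression may differ.
concurrency:
  # For pull_request events github.ref expands to refs/pull/<PR>/merge,
  # giving a group such as refs/pull/1485/merge-llm-bench-python.
  group: ${{ github.ref }}-llm-bench-python
  cancel-in-progress: true
```

With a block like this, pushing a new commit to the pull request cancels the in-flight run; the aggregate status job then sees cancelled dependencies, which is consistent with the "Process completed with exit code 1" error reported for ci/gha_overall_status_llm_bench. The ubuntu-latest warning is unrelated to the cancellation: it is the runner-images migration notice, and pinning runs-on to an explicit image (ubuntu-22.04 or ubuntu-24.04) removes it.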

Artifacts

Produced during runtime
Name      Size
ubuntu22  138 MB