
Syncing latest changes from upstream master for rook #576

Merged: 12 commits into master on Feb 20, 2024
Conversation

df-build-team

PR containing the latest commits from upstream master branch

travisn and others added 9 commits February 14, 2024 11:13
mgr: Remove remaining pg_autoscaler examples

The pg_autoscaler is always enabled by Ceph and cannot be disabled. A previous PR removed the pg_autoscaler config from the main example; now the remaining pg_autoscaler config is removed as well.

Signed-off-by: travisn <tnielsen@redhat.com>
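
For context, a sketch of the kind of mgr module config that was removed from the examples; the exact manifest placement is assumed from typical Rook CephCluster manifests rather than copied from this diff:

```yaml
# Assumed CephCluster excerpt. This block is redundant because Ceph
# always enables the pg_autoscaler, which is why the examples drop it.
spec:
  mgr:
    modules:
      - name: pg_autoscaler
        enabled: true
```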
pool: Allow setting the application on a pool

Rook has been setting the application automatically on all pools: rbd for CephBlockPools, rook-ceph-rgw for CephObjectStores, mgr on the built-in .mgr pool, and nfs on the built-in .nfs pool.

The legacy device_health_metrics pool is long gone, and Pacific is no longer supported, so we can remove the special handling for that pool in the upgrade guide and in the code.

The application setting is now available on the pool spec, although overriding the default applications set by Rook is not expected to be commonly needed.

The application for CephFilesystem pools is now set to cephfs, where it was previously blank.

Signed-off-by: travisn <tnielsen@redhat.com>
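
A minimal sketch of how the new setting might look on a pool spec; the field name and default follow the commit message, but the manifest itself is illustrative rather than taken from this PR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  # Optional override introduced by this PR; Rook defaults this to rbd
  # for block pools, so setting it explicitly is rarely needed.
  application: rbd
```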
External: fix syntax error in import-external-cluster.sh

Removed a stray `=` that was preventing the import-external-cluster.sh script from importing an external Ceph cluster into Kubernetes.

Fixes rook#13679

Signed-off-by: Tim Olow <tim@eth0.com>
ci: stop nightly canary from running in forks

Make sure the canary suites are only run from the nightly CI in the rook/rook project.

Signed-off-by: Blaine Gardner <blaine.gardner@ibm.com>
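
A sketch of the kind of guard this implies in a GitHub Actions workflow; the workflow name, schedule, and job layout are assumptions, with only the repository check suggested by the commit message:

```yaml
# Hypothetical canary workflow excerpt.
name: canary-nightly
on:
  schedule:
    - cron: "0 0 * * *"   # nightly
jobs:
  canary:
    # Only run in the upstream repo, not in forks that happen to have
    # scheduled workflows enabled.
    if: github.repository == 'rook/rook'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```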
pool: Skip crush rule update when not needed

The crush rule is updated when the failure domain changes. This update can be skipped when the expected crush rule is already configured; otherwise the log contains a confusing message that makes it appear the crush rule was updated when in fact it was not.

Signed-off-by: travisn <tnielsen@redhat.com>
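
For context, a sketch of the pool change that triggers a crush rule update; names and values are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  # Changing the failure domain (e.g. host -> rack) makes the operator
  # update the pool's crush rule; with this commit the update (and its
  # log message) is skipped when the existing rule already matches.
  failureDomain: rack
  replicated:
    size: 3
```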
subhamkrai and others added 3 commits February 19, 2024 18:15
Update go.mod and pkgs/apis/go.mod with the latest dependency updates.

Signed-off-by: subhamkrai <srai@redhat.com>

openshift-ci bot commented Feb 20, 2024

@df-build-team: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name     Commit   Details  Required  Rerun command
ci/prow/unit  bcab869  link     true      /test unit

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

travisn commented Feb 20, 2024

/approve
/lgtm

The openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Feb 20, 2024.

openshift-ci bot commented Feb 20, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: df-build-team, travisn

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@travisn travisn merged commit b170ccf into master Feb 20, 2024
47 of 49 checks passed
Labels: lgtm (Indicates that a PR is ready to be merged.)
5 participants