
"Cleaning up stale CnsVolumeOperationRequest instances" ignores block-volume-snapshot:false feature states #3146


Open
sathieu opened this issue Dec 20, 2024 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


sathieu commented Dec 20, 2024

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

We have block-volume-snapshot:false as confirmed by the log:

{
  "level": "info",
  "time": "2024-12-20T09:03:27.684711955Z",
  "caller": "k8sorchestrator/k8sorchestrator.go:440",
  "msg": "New internal feature states values stored successfully: map[block-volume-snapshot:false csi-migration:false csi-windows-support:false improved-volume-topology:false multi-vcenter-csi-topology:false pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:false trigger-csi-fullsync:false]",
  "TraceId": "7fdf059a-172b-4b44-a4da-584fbbf0406e"
}

But the feature-state check in the snapshot cleanup path still evaluates to true.

We know this because we don't have the VolumeSnapshotContents CRD installed, and the following call fails:

vscList, err := snapshotterClient.SnapshotV1().VolumeSnapshotContents().List(ctx, metav1.ListOptions{})
if err != nil {
	log.Errorf("failed to list VolumeSnapshotContents with error %v. Abandoning "+
		"CnsVolumeOperationRequests clean up ...", err)
	return
}

What you expected to happen:

The snapshot cleanup code should not run when the block-volume-snapshot feature is disabled.
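
The expected behavior would be a guard on the feature state before any snapshot API is touched. A minimal sketch, assuming the cleanup function can reach the driver's feature-state store (IsFSSEnabled is the driver's usual feature-state check, but the exact call site shown here is an assumption, not the actual code):

// Hypothetical guard at the top of the CnsVolumeOperationRequest cleanup:
// bail out before listing any snapshot objects when the FSS is disabled.
if !commonco.ContainerOrchestratorUtility.IsFSSEnabled(ctx, common.BlockVolumeSnapshot) {
	log.Info("block-volume-snapshot is disabled, skipping VolumeSnapshotContents " +
		"listing during CnsVolumeOperationRequest cleanup")
	return
}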

How to reproduce it (as minimally and precisely as possible):

Using the vanilla flavor, with block-volume-snapshot:false and no VolumeSnapshotContents CRD installed.

Maybe there is a concurrency problem, because some parts run in different goroutines (see the sketch below).
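
For illustration, here is a minimal, self-contained Go sketch of the suspected ordering problem (all names here are hypothetical, not the driver's actual symbols): a reader goroutine consults the feature-state map before the writer goroutine has stored the values parsed from the ConfigMap, so a permissive default makes a disabled feature look enabled.

package main

import (
	"fmt"
	"sync"
	"time"
)

// featureStates mimics an internal feature-state store that is
// populated asynchronously after startup.
type featureStates struct {
	mu     sync.RWMutex
	states map[string]bool
}

func (f *featureStates) isEnabled(name string) bool {
	f.mu.RLock()
	defer f.mu.RUnlock()
	enabled, ok := f.states[name]
	if !ok {
		// Unpopulated map: a permissive default makes the feature
		// look enabled even though the ConfigMap says false.
		return true
	}
	return enabled
}

func main() {
	fss := &featureStates{states: map[string]bool{}}

	// Writer goroutine: the ConfigMap values arrive "late".
	go func() {
		time.Sleep(10 * time.Millisecond)
		fss.mu.Lock()
		fss.states["block-volume-snapshot"] = false
		fss.mu.Unlock()
	}()

	// Reader (the cleanup goroutine) runs immediately and sees the default.
	fmt.Println("enabled?", fss.isEnabled("block-volume-snapshot")) // true
	time.Sleep(20 * time.Millisecond)
	fmt.Println("enabled?", fss.isEnabled("block-volume-snapshot")) // false
}

If something like this is what happens in the driver, the fix would be to make the cleanup wait for the feature states to be loaded (or re-check them) rather than relying on a default.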

Anything else we need to know?:

Environment:

  • csi-vsphere version: 3.3.1 vanilla
  • vsphere-cloud-controller-manager version: N/A
  • Kubernetes version: 1.30.3
  • vSphere version: 8.0
  • OS (e.g. from /etc/os-release): Debian
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 20, 2025

sathieu commented Mar 21, 2025

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 21, 2025