
kustomize downloads helm charts and then reuses them when new version specified #5097

Closed
justinsb opened this issue Mar 18, 2023 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/duplicate Indicates an issue is a duplicate of other open issue.

Comments

@justinsb
Contributor

What happened?

I believe this issue has come up a few times, but I think there is a very concrete incorrect-behaviour bug here that we can and should fix. (I'm happy to work on fixing it if need be.)

I was using the helmCharts support with an upstream cilium chart, version 1.12.5. When I run kustomize, it automatically downloads the cilium chart to ./charts/cilium-1.12.5.tar.gz and inflates it into ./charts/cilium.

I then change the chart version to 1.13.1, but kustomize continues to use ./charts/cilium, which is still version 1.12.5.

What did you expect to happen?

I would expect it to use the version I specified, given that I didn't populate ./charts/cilium myself (if I had created that directory by hand, the behaviour would be more understandable).

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
helmCharts:
- name: cilium
  repo: https://helm.cilium.io/
  version: 1.12.5
  releaseName: cilium
  namespace: kube-system
  valuesInline:
    hubble:
      enabled: false

Run kustomize build, change the version, and build again; the old version is still used.
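The reuse can be simulated with plain shell. This is a sketch of the observed behaviour, not kustomize's actual code; the inflate_chart function and the VERSION marker file are invented for illustration:

```shell
#!/bin/sh
# Simulation of the observed behaviour: if ./charts/<name> already exists,
# it is reused as-is, regardless of the version now requested in
# kustomization.yaml.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

inflate_chart() {
  name=$1
  version=$2
  if [ -d "charts/$name" ]; then
    echo "reusing charts/$name (version $(cat "charts/$name/VERSION"))"
  else
    mkdir -p "charts/$name"
    echo "$version" > "charts/$name/VERSION"
    echo "downloaded $name $version"
  fi
}

inflate_chart cilium 1.12.5   # first build: chart is fetched
inflate_chart cilium 1.13.1   # version bumped, but the stale copy wins
```

The second call prints "reusing charts/cilium (version 1.12.5)" even though 1.13.1 was requested, which mirrors the bug report above.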

Expected output

No response

Actual output

No response

Kustomize version

main

Operating system

Linux

@justinsb justinsb added the kind/bug Categorizes issue or PR as related to a bug. label Mar 18, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 18, 2023
@justinsb
Contributor Author

Currently, helm downloads a chart every time it is used.

There's an upstream helm "KEP" where they talk about using a content cache: https://github.com/helm/community/pull/185/files

I think a simple fix here could be to download into a temporary directory and not cache it. That is consistent with the helm behaviour (until the helm KEP is approved / implemented):

Helm cannot use the chart version in the cache because the identifier is the name and charts version (e.g., wordpress-1.2.3.tgz). A name and version is insufficient as the same chart and version could come from two or more different repositories and have different content.

Although we could implement our own mechanism here for more efficient caching / verification (there is a digest in the index.yaml), I suggest that should instead be done in helm first; we don't want to build something incompatible.
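A minimal sketch of that temp-directory approach, assuming the simplest possible shape: fetch_chart is a hypothetical stand-in for the actual helm pull step (it just writes a placeholder file so the sketch is self-contained), and names and paths are illustrative:

```shell
#!/bin/sh
# Sketch of the proposed fix: inflate into a throwaway directory on every
# build, so the version requested is always the version used.
fetch_chart() {
  # stand-in for `helm pull --version "$2" -d "$1"`; writes a placeholder
  dir=$1
  version=$2
  printf 'chart bytes' > "$dir/cilium-$version.tgz"
}

build() {
  version=$1
  tmp=$(mktemp -d)    # fresh directory per build: no cache to go stale
  fetch_chart "$tmp" "$version"
  echo "build used $(ls "$tmp")"
  rm -rf "$tmp"       # discard afterwards, consistent with helm's behaviour
}

build 1.12.5          # -> build used cilium-1.12.5.tgz
build 1.13.1          # -> build used cilium-1.13.1.tgz
```

Because nothing survives between builds, bumping the version in kustomization.yaml can never silently pick up a stale chart; the cost is a re-download on every build, which is exactly the trade-off helm makes today.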

@justinsb
Contributor Author

Looks like there's even a PR that is a good starting point: #4999, though I think we should first agree on the behaviour (and maybe I should just be using the KRM function instead?)

@natasha41575
Contributor

Thanks for pointing out #4999! We must have missed it.

@justinsb Do you have time to give it a quick review? I will find some time to review it as well.

Re: the KRM function, kustomize functions aren't quite mature enough to be the recommended solution, but it is our desired end goal some day.

@natasha41575 natasha41575 added triage/duplicate Indicates an issue is a duplicate of other open issue. triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Mar 20, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 20, 2023
@natasha41575
Contributor

@justinsb This seems to be a duplicate of #4813, which I acknowledge is old, but I think we can consider it still active. Do you have any concerns with closing this issue and using #4813 to track the work?

@ChristianCiach

ChristianCiach commented Jun 12, 2023

Also a duplicate of #3848 (issue by me)

@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jun 11, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Nov 8, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
