Implementing Phase 1, part 2/5: Refactor cluster-aws Helm values #2954

Closed
15 tasks done
Tracked by #2739
nprokopic opened this issue Nov 11, 2023 · 3 comments

nprokopic commented Nov 11, 2023

Motivation

Coming from #2739.

We have ported all provider-independent Cluster API resources to the cluster chart, which was phase 1 of the restructuring of the cluster-&lt;provider&gt; apps (see #2742 for more details). Now we want to use the cluster chart in cluster-aws and remove all provider-independent Cluster API resources from cluster-aws.

TODO

In order to do so, we first have to refactor the cluster-aws Helm values so that the cluster chart can read the provider-independent values it needs. For that, we have to move the current top-level properties under .Values.global, as sketched below.
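A minimal sketch of the change (field values taken from the example output further down in this issue; the exact fields vary per cluster). Before, the provider-independent values sit at the top level of the user config:

metadata:
  name: nick
  organization: giantswarm
controlPlane: {}
providerSpecific: {}

After, the same values are nested under global, where the cluster subchart can read them, while the provider-specific values stay at the top level:

global:
  metadata:
    name: nick
    organization: giantswarm
  controlPlane: {}
providerSpecific: {}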

Refactoring cluster-aws Helm values:

Add new Helm values for Helm releases:


Fixing schema linting and docs:

Fixing CI:

  1. skip/ci

Outcome

cluster-aws Helm values have a new structure that enables the cluster chart (a subchart of cluster-aws) to read the provider-independent Helm values it requires.
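For context, Helm shares anything under the reserved global values key with all subcharts, which is what makes this restructuring work. A minimal sketch, assuming illustrative (not actual) version and repository values in the parent chart's Chart.yaml:

# Chart.yaml of cluster-aws: declare the cluster chart as a dependency
# (version and repository below are placeholders, not the real pins)
dependencies:
  - name: cluster
    version: 0.x.x
    repository: https://giantswarm.github.io/cluster-catalog

# Any template inside the cluster subchart can then read the shared values, e.g.
# {{ .Values.global.metadata.organization }} resolves to the organization set in the cluster-aws values.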


njuettner commented Nov 21, 2023

The script below moves all current fields from the WC userconfig under global, additionally updates the cluster App CR to a newer version (e.g. v0.49.0), and merges both into a single file, so the changes can be applied in one take.

Run the script for cluster nick in organization giantswarm:

./global-values-converter.sh giantswarm nick

#!/bin/bash

# Check that exactly two arguments are provided
if [ $# -ne 2 ]; then
  echo "Incorrect number of arguments supplied. Please provide the organization name and the cluster name."
  exit 1
fi

# Use the first argument as the organization name and the second as the cluster name
org=$1
cluster=$2

# Fetch the user config ConfigMap YAML
kubectl get cm -n "org-$org" "${cluster}-userconfig" -o yaml > "${cluster}_cm.yaml"

# Extract the values into a temporary file
yq eval '.data.values' "${cluster}_cm.yaml" > tmp_cm_values.yaml

# Move the top-level fields under .global, then delete the originals
yq eval --inplace 'with(select(.metadata != null);          .global.metadata = .metadata) |
  with(select(.connectivity != null);                       .global.connectivity = .connectivity) |
  with(select(.controlPlane != null);                       .global.controlPlane = .controlPlane) |
  with(select(.nodePools != null);                          .global.nodePools = .nodePools) |
  with(select(.managementCluster != null);                  .global.managementCluster = .managementCluster) |
  with(select(.baseDomain != null);                         .global.connectivity.baseDomain = .baseDomain) |
  with(select(.providerSpecific != null);                   .global.providerSpecific = .providerSpecific) |

  del(.metadata) |
  del(.connectivity) |
  del(.controlPlane) |
  del(.nodePools) |
  del(.managementCluster) |
  del(.baseDomain) |
  del(.providerSpecific)' tmp_cm_values.yaml

# Merge the modified values back into the ConfigMap YAML
yq eval-all 'select(fileIndex==0).data.values = select(fileIndex==1) | select(fileIndex==0)' "${cluster}_cm.yaml" tmp_cm_values.yaml > app.yaml

## Turn the values field back into a multi-line block scalar
## (BSD/macOS sed syntax; with GNU sed, drop the empty '' argument)
sed -i '' 's/values:/values: \|/g' app.yaml

# Fetch the App YAML
kubectl get app -n "org-$org" "$cluster" -o yaml > "${cluster}_app.yaml"

## Update the App to the version that understands the new values structure
yq eval --inplace 'with(select(.spec.version != null); .spec.version = "0.49.0")' "${cluster}_app.yaml"

# Append the App YAML to the ConfigMap YAML as a second document
echo "---" >> app.yaml

cat "${cluster}_app.yaml" >> app.yaml

# Clean up
rm "${cluster}_cm.yaml"
rm tmp_cm_values.yaml
rm "${cluster}_app.yaml"
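Assuming the script completes successfully, the generated app.yaml can then be applied in one take:

kubectl apply -f app.yaml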

Output of app.yaml

apiVersion: v1
data:
  values: |
    providerSpecific: {}
    global:
      metadata:
        name: nick
        organization: giantswarm
      connectivity:
        availabilityZoneUsageLimit: 3
        bastion:
          enabled: true
        network: {}
        topology: {}
      controlPlane: {}
      nodePools:
        nodepool0:
          instanceType: m5.xlarge
          maxSize: 10
          minSize: 3
          rootVolumeSizeGB: 300
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"values":"connectivity:\n  availabilityZoneUsageLimit: 3\n  bastion:\n    enabled: true\n  network: {}\n  topology: {}\ncontrolPlane: {}\nmetadata:\n  name: nick\n  organization: giantswarm\nnodePools:\n  nodepool0:\n    instanceType: m5.xlarge\n    maxSize: 10\n    minSize: 3\n    rootVolumeSizeGB: 300\nproviderSpecific: {}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"giantswarm.io/cluster":"nick"},"name":"nick-userconfig","namespace":"org-giantswarm"}}
  creationTimestamp: "2023-11-21T12:47:37Z"
  labels:
    app-operator.giantswarm.io/watching: "true"
    giantswarm.io/cluster: nick
  name: nick-userconfig
  namespace: org-giantswarm
  resourceVersion: "85191476"
  uid: fc44a438-4956-441f-8d03-efd9227e3b0d
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"application.giantswarm.io/v1alpha1","kind":"App","metadata":{"annotations":{},"labels":{"app-operator.giantswarm.io/version":"0.0.0"},"name":"nick","namespace":"org-giantswarm"},"spec":{"catalog":"cluster","config":{"configMap":{"name":"","namespace":""},"secret":{"name":"","namespace":""}},"kubeConfig":{"context":{"name":""},"inCluster":true,"secret":{"name":"","namespace":""}},"name":"cluster-aws","namespace":"org-giantswarm","userConfig":{"configMap":{"name":"nick-userconfig","namespace":"org-giantswarm"}},"version":"0.47.0"}}
  creationTimestamp: "2023-11-21T12:47:37Z"
  finalizers:
    - operatorkit.giantswarm.io/app-operator-app
  generation: 1
  labels:
    app-operator.giantswarm.io/version: 0.0.0
    app.kubernetes.io/name: cluster-aws
  name: nick
  namespace: org-giantswarm
  resourceVersion: "85191667"
  uid: ea1d0285-864f-47cd-a334-89ac32ba23af
spec:
  catalog: cluster
  config:
    configMap:
      name: ""
      namespace: ""
    secret:
      name: ""
      namespace: ""
  kubeConfig:
    context:
      name: ""
    inCluster: true
    secret:
      name: ""
      namespace: ""
  name: cluster-aws
  namespace: org-giantswarm
  userConfig:
    configMap:
      name: nick-userconfig
      namespace: org-giantswarm
  version: 0.49.0
status:
  appVersion: ""
  release:
    lastDeployed: "2023-11-21T12:47:38Z"
    status: deployed
  version: 0.47.0


njuettner commented Nov 22, 2023

Slightly adjusted the script, because we detected that baseDomain and managementCluster come from https://github.com/giantswarm/giantswarm-management-clusters/blob/main/management-clusters/grizzly/catalogs/patches/appcatalog-default-test-patch.yaml

For now, taking this information from the catalog is mandatory if we want to test things.

#!/bin/bash

# Check that exactly two arguments are provided
if [ $# -ne 2 ]; then
  echo "Incorrect number of arguments supplied. Please provide the organization name and the cluster name."
  exit 1
fi

# Use the first argument as the organization name and the second as the cluster name
org=$1
cluster=$2

# Fetch the user config ConfigMap YAML
kubectl get cm -n "org-$org" "${cluster}-userconfig" -o yaml > "${cluster}_cm.yaml"

# Extract the ConfigMap values into a temporary file
yq eval '.data.values' "${cluster}_cm.yaml" > tmp_cm_values.yaml

##### OPTIONAL START

# Fetch the AppCatalog HelmRelease YAML
kubectl get helmreleases.helm.toolkit.fluxcd.io -n flux-giantswarm appcatalog-cluster -o yaml > catalog.yaml

# Append the AppCatalog values (the source of baseDomain and managementCluster) to the temporary file
yq eval '.spec.values.appCatalog.config.configMap.values' catalog.yaml >> tmp_cm_values.yaml

##### OPTIONAL END

# Move the top-level fields under .global, then delete the originals
yq eval --inplace 'with(select(.metadata != null);          .global.metadata = .metadata) |
  with(select(.connectivity != null);                       .global.connectivity = .connectivity) |
  with(select(.controlPlane != null);                       .global.controlPlane = .controlPlane) |
  with(select(.nodePools != null);                          .global.nodePools = .nodePools) |
  with(select(.managementCluster != null);                  .global.managementCluster = .managementCluster) |
  with(select(.providerSpecific != null);                   .global.providerSpecific = .providerSpecific) |
  with(select(.baseDomain != null);                         .global.connectivity.baseDomain = .baseDomain) |

  del(.metadata) |
  del(.connectivity) |
  del(.controlPlane) |
  del(.nodePools) |
  del(.managementCluster) |
  del(.baseDomain) |
  del(.provider) |
  del(.providerSpecific)' tmp_cm_values.yaml


# Merge the modified values back into the ConfigMap YAML
yq eval-all 'select(fileIndex==0).data.values = select(fileIndex==1) | select(fileIndex==0)' "${cluster}_cm.yaml" tmp_cm_values.yaml > app.yaml

## Turn the values field back into a multi-line block scalar
## (BSD/macOS sed syntax; with GNU sed, drop the empty '' argument)
sed -i '' 's/values:/values: \|/g' app.yaml

# Fetch the App YAML
kubectl get app -n "org-$org" "$cluster" -o yaml > "${cluster}_app.yaml"

## Update the App to the version that understands the new values structure
yq eval --inplace 'with(select(.spec.version != null); .spec.version = "0.49.0")' "${cluster}_app.yaml"

# Append the App YAML to the ConfigMap YAML as a second document
echo "---" >> app.yaml

cat "${cluster}_app.yaml" >> app.yaml

# Clean up
rm "${cluster}_cm.yaml"
rm tmp_cm_values.yaml
rm "${cluster}_app.yaml"
rm catalog.yaml
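As with the first version, the generated manifest can be reviewed against the live objects before applying (plain kubectl, nothing specific to this script):

kubectl diff -f app.yaml
kubectl apply -f app.yaml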

@njuettner njuettner self-assigned this Nov 22, 2023

njuettner commented Nov 23, 2023

Next steps:

Part 1

  • Create a PR to allow global as an additional property in the cluster-aws values schema (a hedged sketch of the schema change follows this list)
  • Release another minor version of cluster-aws with the change (v0.49.0)
  • Update Giantswarm CAPA MCs to the latest minor release of cluster-aws (v0.49.0)
  • Update customer CAPA MCs to the latest minor release of cluster-aws (v0.49.0)
  • Ask customers to also update WCs to the latest minor release of cluster-aws
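A hedged sketch of the schema change from the first bullet, as it might look in the values.schema.json of cluster-aws (the structure is assumed for illustration, not taken from the actual PR):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "global": {
      "type": "object",
      "additionalProperties": true
    }
  }
}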

Part 2 (in parallel with Part 3)

  • Merge all PRs for moving fields to global

Part 3

Part 4
