Implementing Phase 1, part 2/5: Refactor cluster-aws Helm values #2954
Converts all current fields from the WC userconfig to sit underneath `global`. Run this script for a cluster:
#!/bin/bash
# Check if two arguments are provided
if [ $# -ne 2 ]
then
echo "Incorrect number of arguments supplied. Please provide the organization name and the cluster name."
exit 1
fi
# Use the first argument as the organization name and the second as the cluster name
org=$1
cluster=$2
# Fetch the ConfigMap YAML
kubectl get cm -n org-$org ${cluster}-userconfig -o yaml > ${cluster}_cm.yaml
# Extract the values into a temporary file
yq eval '.data.values' ${cluster}_cm.yaml > tmp_cm_values.yaml
# Modify the values in tmp_cm_values.yaml as needed
yq eval --inplace 'with(select(.metadata != null); .global.metadata = .metadata) |
with(select(.connectivity != null); .global.connectivity = .connectivity) |
with(select(.controlPlane != null); .global.controlPlane = .controlPlane) |
with(select(.nodePools != null); .global.nodePools = .nodePools) |
with(select(.managementCluster != null); .global.managementCluster = .managementCluster ) |
with(select(.baseDomain != null); .global.connectivity.baseDomain = .baseDomain) |
with(select(.providerSpecific != null); .global.providerSpecific = .providerSpecific) |
del(.metadata) |
del(.connectivity) |
del(.controlPlane) |
del(.nodePools) |
del(.managementCluster) |
del(.baseDomain) |
del(.providerSpecific)' tmp_cm_values.yaml
# Merge the modified values back into the ConfigMap YAML
yq eval-all 'select(fileIndex==0).data.values = select(fileIndex==1) | select(fileIndex==0)' ${cluster}_cm.yaml tmp_cm_values.yaml > app.yaml
## Multi-line
sed -i '' 's/values:/values: \|/g' app.yaml
# Fetch the App YAML
kubectl get app -n org-$org $cluster -o yaml > ${cluster}_app.yaml
## Update the version of the App YAML
yq eval --inplace 'with(select(.spec.version != null); .spec.version = "0.49.0")' ${cluster}_app.yaml
# Merge the App YAML and ConfigMap YAML
echo "---" >> app.yaml
cat ${cluster}_app.yaml >> app.yaml
# Clean up
rm ${cluster}_cm.yaml
rm tmp_cm_values.yaml
rm ${cluster}_app.yaml
Output:
apiVersion: v1
data:
values: |
providerSpecific: {}
global:
metadata:
name: nick
organization: giantswarm
connectivity:
availabilityZoneUsageLimit: 3
bastion:
enabled: true
network: {}
topology: {}
controlPlane: {}
nodePools:
nodepool0:
instanceType: m5.xlarge
maxSize: 10
minSize: 3
rootVolumeSizeGB: 300
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"values":"connectivity:\n availabilityZoneUsageLimit: 3\n bastion:\n enabled: true\n network: {}\n topology: {}\ncontrolPlane: {}\nmetadata:\n name: nick\n organization: giantswarm\nnodePools:\n nodepool0:\n instanceType: m5.xlarge\n maxSize: 10\n minSize: 3\n rootVolumeSizeGB: 300\nproviderSpecific: {}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"giantswarm.io/cluster":"nick"},"name":"nick-userconfig","namespace":"org-giantswarm"}}
creationTimestamp: "2023-11-21T12:47:37Z"
labels:
app-operator.giantswarm.io/watching: "true"
giantswarm.io/cluster: nick
name: nick-userconfig
namespace: org-giantswarm
resourceVersion: "85191476"
uid: fc44a438-4956-441f-8d03-efd9227e3b0d
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"application.giantswarm.io/v1alpha1","kind":"App","metadata":{"annotations":{},"labels":{"app-operator.giantswarm.io/version":"0.0.0"},"name":"nick","namespace":"org-giantswarm"},"spec":{"catalog":"cluster","config":{"configMap":{"name":"","namespace":""},"secret":{"name":"","namespace":""}},"kubeConfig":{"context":{"name":""},"inCluster":true,"secret":{"name":"","namespace":""}},"name":"cluster-aws","namespace":"org-giantswarm","userConfig":{"configMap":{"name":"nick-userconfig","namespace":"org-giantswarm"}},"version":"0.47.0"}}
creationTimestamp: "2023-11-21T12:47:37Z"
finalizers:
- operatorkit.giantswarm.io/app-operator-app
generation: 1
labels:
app-operator.giantswarm.io/version: 0.0.0
app.kubernetes.io/name: cluster-aws
name: nick
namespace: org-giantswarm
resourceVersion: "85191667"
uid: ea1d0285-864f-47cd-a334-89ac32ba23af
spec:
catalog: cluster
config:
configMap:
name: ""
namespace: ""
secret:
name: ""
namespace: ""
kubeConfig:
context:
name: ""
inCluster: true
secret:
name: ""
namespace: ""
name: cluster-aws
namespace: org-giantswarm
userConfig:
configMap:
name: nick-userconfig
namespace: org-giantswarm
version: 0.49.0
status:
appVersion: ""
release:
lastDeployed: "2023-11-21T12:47:38Z"
status: deployed
version: 0.47.0
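In effect, the `yq` expression rewrites the userconfig so that each recognized top-level key is nested under `global`, with `baseDomain` folded into `global.connectivity`. A before/after sketch (the `baseDomain` value here is illustrative):

```yaml
# Before: top-level userconfig keys
metadata:
  name: nick
baseDomain: example.com
---
# After: the same data nested under global,
# with baseDomain folded into global.connectivity
global:
  metadata:
    name: nick
  connectivity:
    baseDomain: example.com
```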
Slightly adjusted the script, because we detected that taking the information from the catalog is mandatory for now if we want to test things:
#!/bin/bash
# Check if two arguments are provided
if [ $# -ne 2 ]
then
echo "Incorrect number of arguments supplied. Please provide the organization name and the cluster name."
exit 1
fi
# Use the first argument as the organization name and the second as the cluster name
org=$1
cluster=$2
# Fetch the ConfigMap YAML
kubectl get cm -n org-$org ${cluster}-userconfig -o yaml > ${cluster}_cm.yaml
# Extract the ConfigMap values into a temporary file
yq eval '.data.values' ${cluster}_cm.yaml > tmp_cm_values.yaml
##### OPTIONAL START
# Fetch AppCatalog YAML
kubectl get helmreleases.helm.toolkit.fluxcd.io -n flux-giantswarm appcatalog-cluster -o yaml > catalog.yaml
# Extract the AppCatalog values into a temporary file
yq eval '.spec.values.appCatalog.config.configMap.values' catalog.yaml >> tmp_cm_values.yaml
###### OPTIONAL END
# Modify the values in tmp_cm_values.yaml as needed
yq eval --inplace 'with(select(.metadata != null); .global.metadata = .metadata) |
with(select(.connectivity != null); .global.connectivity = .connectivity) |
with(select(.controlPlane != null); .global.controlPlane = .controlPlane) |
with(select(.nodePools != null); .global.nodePools = .nodePools) |
with(select(.managementCluster != null); .global.managementCluster = .managementCluster ) |
with(select(.providerSpecific != null); .global.providerSpecific = .providerSpecific) |
with(select(.baseDomain != null); .global.connectivity.baseDomain = .baseDomain) |
del(.metadata) |
del(.connectivity) |
del(.controlPlane) |
del(.nodePools) |
del(.managementCluster) |
del(.baseDomain) |
del(.provider) |
del(.providerSpecific)' tmp_cm_values.yaml
# Merge the modified values back into the ConfigMap YAML
yq eval-all 'select(fileIndex==0).data.values = select(fileIndex==1) | select(fileIndex==0)' ${cluster}_cm.yaml tmp_cm_values.yaml > app.yaml
## Multi-line
sed -i '' 's/values:/values: \|/g' app.yaml
# Fetch the App YAML
kubectl get app -n org-$org $cluster -o yaml > ${cluster}_app.yaml
## Update the version of the App YAML
yq eval --inplace 'with(select(.spec.version != null); .spec.version = "0.49.0")' ${cluster}_app.yaml
# Merge the App YAML and ConfigMap YAML
echo "---" >> app.yaml
cat ${cluster}_app.yaml >> app.yaml
# Clean up
rm ${cluster}_cm.yaml
rm tmp_cm_values.yaml
rm ${cluster}_app.yaml
rm catalog.yaml
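The `sed` step in both scripts turns the re-embedded `values:` key into a YAML literal block scalar (`values: |`), so the merged values are parsed as one multi-line string rather than as nested mappings. A minimal, portable sketch of the same substitution (the input here is illustrative; note that `sed -i ''` in the scripts is the BSD/macOS in-place form, while GNU sed uses plain `sed -i`):

```shell
#!/bin/sh
# Turn a plain "values:" key into a literal block scalar "values: |",
# which makes the indented lines that follow one multi-line string.
printf 'data:\n  values:\n    foo: bar\n' | sed 's/values:/values: |/'
```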
Next steps:
- Part 1
- Part 2 (in parallel with Part 3)
- Part 3
- Part 4
Motivation
Coming from #2739.
We have ported all provider-independent Cluster API resources to the `cluster` chart, which was phase 1 of the restructuring of the cluster- apps; see #2742 for more details. Now we want to use the `cluster` chart in `cluster-aws` and remove all provider-independent Cluster API resources from `cluster-aws`.
TODO
In order to do so, we first have to refactor the `cluster-aws` Helm values so that the `cluster` chart can read the provider-independent values it needs. For that, we have to move the current top-level properties to sit under `Values.global`.
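Moving the properties under `global` matters because Helm shares only the `global` section of a parent chart's values with its subcharts; other top-level values stay scoped to the parent unless they are placed under the subchart's own name. A hypothetical template in the `cluster` subchart could then read them like this (file name and fields are illustrative):

```yaml
# templates/example.yaml in the cluster subchart (hypothetical)
metadata:
  name: {{ .Values.global.metadata.name }}
  organization: {{ .Values.global.metadata.organization }}
```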
The work breaks down into four task groups:
- Refactoring `cluster-aws` Helm values
- Adding new Helm values for Helm releases
- Fixing schema linting and docs
- Fixing CI
Outcome
The `cluster-aws` Helm values have a new structure which enables the `cluster` chart (a subchart of `cluster-aws`) to read the required provider-independent Helm values.