From c7de6030361f2a6b38185b3053e211c37191d7fe Mon Sep 17 00:00:00 2001
From: madisonewebb
Date: Fri, 2 Jan 2026 09:48:06 -0800
Subject: [PATCH] docs: Add tidbit about the rendered manifests pattern, + a couple other tips

---
 docs/10-platform-engineering/10.2-platforms.md | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/docs/10-platform-engineering/10.2-platforms.md b/docs/10-platform-engineering/10.2-platforms.md
index a55cb6fd..bb0f378c 100644
--- a/docs/10-platform-engineering/10.2-platforms.md
+++ b/docs/10-platform-engineering/10.2-platforms.md
@@ -150,12 +150,16 @@ system needs direct cluster access, better auditability as all changes are track
 ArgoCD also provides a powerful UI for visualizing your applications' deployment status and health, supports multiple config
 management tools like Helm and Kustomize, and can manage applications across multiple clusters from a single interface. Its
 declarative nature aligns perfectly with Kubernetes' own paradigms, making it a natural fit for cloud-native deployments.
 
+It's also worth knowing about the [Rendered Manifests Pattern](https://akuity.io/blog/the-rendered-manifests-pattern). Instead of having ArgoCD render Helm or Kustomize templates at deploy time, you render them in CI and commit the plain YAML to Git. This makes pull requests easier to review since you see the actual manifests, not just template changes. It also catches templating errors earlier. The downside is an extra CI step, but many teams prefer this tradeoff.
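+
+As a hedged sketch of that CI step — the workflow file, chart path, and `rendered/` output directory here are assumptions, not part of this repo:
+
+```yaml
+# .github/workflows/render-manifests.yaml (hypothetical example)
+name: render-manifests
+on:
+  push:
+    branches: [main]
+jobs:
+  render:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      # Render the chart locally; `helm template` needs no cluster access
+      - run: helm template my-app ./charts/my-app > rendered/my-app.yaml
+      # Commit the plain YAML so reviewers (and ArgoCD) see final manifests
+      - run: |
+          git config user.name ci-bot
+          git config user.email ci-bot@example.com
+          git add rendered/
+          git diff --cached --quiet || git commit -m "chore: render manifests"
+          git push
+```
+
+Your ArgoCD Application would then point at the `rendered/` directory instead of the chart.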
+
 - Create and install ArgoCD into the cluster through a Terraform module (write this in the Library repo)
 - Actually deploy ArgoCD into the cluster via the Deploy repo with Terragrunt
 - Verify that you can access the ArgoCD UI by port forwarding the `argocd-service`
 
 ?> Default admin password is stored as a Kubernetes secret on the cluster
+
+?> As you add more applications, consider the [App of Apps pattern](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) to manage them hierarchically from a single root Application.
 
 ## Exercise 3
 
 Now that we have stood up ArgoCD we can let Argo manage our Kubernetes applications. There are many core applications that all
@@ -174,6 +178,7 @@ The list is long but the thing to note here is that these are all Kubernetes app
 via manifest files (Helm charts just result in manifest files). So given our cluster now has a Kubernetes GitOps Controller
 installed and configured we are going to let ArgoCD own the deployment of these Kubernetes applications.
 
+- Enable IRSA (IAM Roles for Service Accounts) on your EKS cluster. IRSA lets pods authenticate to AWS services without storing long-lived credentials. It works by linking a Kubernetes service account to an IAM role via an OIDC provider. When a pod uses that service account, it receives temporary AWS credentials automatically. You'll need this for AWS Load Balancer Controller and External Secrets Operator.
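+
+  As a minimal sketch of the IRSA linkage (the account ID, role name, and namespace below are placeholder assumptions):
+
+  ```yaml
+  apiVersion: v1
+  kind: ServiceAccount
+  metadata:
+    name: external-secrets
+    namespace: external-secrets
+    annotations:
+      # Role assumed through the cluster's OIDC provider; pods using this
+      # service account automatically receive temporary AWS credentials.
+      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-secrets-irsa
+  ```
+
+  The IAM role's trust policy must allow the cluster's OIDC provider to issue credentials for this service account.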
 - In your library repo create a new directory `applications/metrics-server`
 - Using what you have learned add file(s) that will source manifests from a pinned version of the [official repo for metrics-server](https://github.com/kubernetes-sigs/metrics-server)
 
 > Verify that pushing the metrics-server application to your library repo generates a tagged version in the library repo
@@ -204,8 +209,6 @@ injects a new Argo Application into our Deploy repo that references an external
 
 ?> You will need to add some scaffolder plugins to accomplish this software template. [Roadie publishes some good scaffolder plugins](https://roadie.io/docs/scaffolder/scaffolder-actions-directory/#roadiehqutilsfswrite)
 
-#### UNCHARTED TERRITORY
-
 ## Exercise 5
 
 Congratulations you have a very minimal yet powerful platform now. You have a centrally managed Kubernetes cluster that can be developed
@@ -239,8 +242,8 @@ In order to make the app healthy again you needed to upload some credentials/tok
 configure the cluster or ArgoCD so that you can access private resources. This is pretty common but the problem is this
 approach is not declarative. If you needed to rebuild the cluster there would be some manual intervention required. We can do
 better.
-1) Install and configure a new platform application for the [External Secrets Operator](https://external-secrets.io/latest/introduction/overview/)
-2) Upload the Pat that you use in your Argo Credential Template to AWS Secret Manager
-3) Declaratively configure ArgoCD Credential Template leveraging your secret stored in AWS Secret Manager
-4) Add the secrets needed to pull from the private container registry to AWS Secret Manager
-5) Configure Kubernetes to pull get the ImagePullSecret from External Secret Operator
+1) Install and configure a new platform application for the [External Secrets Operator](https://external-secrets.io/latest/introduction/overview/) (this will use the IRSA you set up in Exercise 3)
+2) Upload the PAT that you use in your Argo Credential Template to AWS Secrets Manager
+3) Declaratively configure ArgoCD Credential Template leveraging your secret stored in AWS Secrets Manager
+4) Add the secrets needed to pull from the private container registry to AWS Secrets Manager
+5) Configure Kubernetes to get the ImagePullSecret from External Secrets Operator
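+
+For step 5, the wiring might look something like this sketch (the store name, secret names, and Secrets Manager key are assumptions):
+
+```yaml
+apiVersion: external-secrets.io/v1beta1
+kind: ExternalSecret
+metadata:
+  name: registry-credentials
+spec:
+  refreshInterval: 1h
+  secretStoreRef:
+    kind: ClusterSecretStore
+    name: aws-secrets-manager   # a ClusterSecretStore you define separately, using IRSA
+  target:
+    name: registry-credentials  # the Kubernetes Secret that gets created
+    template:
+      type: kubernetes.io/dockerconfigjson
+      data:
+        .dockerconfigjson: "{{ .dockerconfig }}"
+  data:
+    - secretKey: dockerconfig
+      remoteRef:
+        key: platform/registry-dockerconfig  # key in AWS Secrets Manager
+```
+
+Pods (or a default service account) can then reference `registry-credentials` in `imagePullSecrets`.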