From 0748f4562004dd3a84ef10ad1f03213fd71fedec Mon Sep 17 00:00:00 2001
From: Flynn
Date: Wed, 22 May 2024 22:41:04 -0400
Subject: [PATCH 1/3] Demo tweak

Signed-off-by: Flynn
---
 DEMO.md | 215 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 208 insertions(+), 7 deletions(-)

diff --git a/DEMO.md b/DEMO.md
index 6ddde2a..e44ecf2 100644
--- a/DEMO.md
+++ b/DEMO.md
@@ -27,18 +27,219 @@ When you use `demosh` to run this file, your cluster will be checked for you.
 
 We're going to show various resilience techniques using the Faces demo (from
 https://github.com/BuoyantIO/faces-demo):
 
+- _Retries_ automatically repeat requests that fail;
+- _Timeouts_ cut off requests that take too long; and
 - _Rate limits_ protect services by restricting the amount of traffic that can
-  flow through to a service;
-- _Retries_ automatically repeat requests that fail; and
-- _Timeouts_ cut off requests that take too long.
+  flow through to a service.
 
 All are important techniques for resilience, and all can be applied - at
 various points in the call stack - by infrastructure components like the
 ingress controller and/or service mesh.
 
-Let's start with a quick look at Faces in the web browser. You'll be able to
-see that it's in pretty sorry shape, and you'll be able to look at the Linkerd
-dashboard to see how much traffic it generates.
+
+## Installing Linkerd
+
+We're going to install Linkerd first -- that lets us install Emissary and
+Faces directly into the mesh, rather than installing and then meshing as a
+separate step.
+
+### A digression on Linkerd releases
+
+There are two kinds of Linkerd releases: _edge_ and _stable_. The Linkerd
+project itself only produces edge releases, which show up every week or so and
+always have the latest and greatest features and fixes directly from the
+`main` branch. Stable releases are produced by the vendor community around
+Linkerd, and are the way to go for full support.
+
+We're going to use the latest edge release for this demo, but **either will
+work**. (If you want to use a stable release instead, check out
+`https://linkerd.io/releases/` for more information.)
+
+### Installing the CLI
+
+Installing Linkerd starts with installing the Linkerd CLI. This command-line
+tool makes it easy to work with Linkerd, and it's installed with this
+one-liner that will download the latest edge CLI and get it set up to run.
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh
+```
+
+Once that's done, you'll need to add the CLI directory to your PATH:
+
+```bash
+export PATH=$PATH:$HOME/.linkerd2/bin
+```
+
+and then we can make sure that this cluster really can run Linkerd:
+
+```bash
+linkerd check --pre
+```
+
+## Installing the Linkerd CRDs
+
+Linkerd uses Custom Resource Definitions (CRDs) to extend Kubernetes. After
+verifying that the cluster is ready to run Linkerd, we next need to install
+the CRDs. We do this by running `linkerd install --crds`, which will output
+the CRDs that need to be installed so that we can apply them to the cluster.
+(The Linkerd CLI will never directly modify the cluster.)
+
+```bash
+linkerd install --crds | kubectl apply -f -
+```
+
+As you can see in the output above, Linkerd doesn't actually install many
+CRDs, and in fact it can add security and observability to an application
+without using _any_ of these CRDs. However, they're necessary for more
+advanced usage.
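+
+If you're curious exactly which CRDs those are, listing them makes a quick
+sanity check. (This isn't part of the demo proper, and the exact set may
+vary a bit from one Linkerd release to another.)
+
+```bash
+kubectl get crds -o name | grep linkerd.io
+```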
+
+## Installing Linkerd and Linkerd Viz
+
+Now that the CRDs are installed, we can install Linkerd itself.
+
+```bash
+linkerd install | kubectl apply -f -
+```
+
+We're also going to install Linkerd Viz: this is an optional component that
+provides a web-based dashboard for Linkerd. It's a great way to see what's
+happening in your cluster, so we'll install it as well.
+
+```bash
+linkerd viz install | kubectl apply -f -
+```
+
+Just like Linkerd itself, this will start the installation and return
+immediately, so - again - we'll use `linkerd check` to make sure all is well.
+
+```bash
+linkerd check
+```
+
+So far so good -- let's take a look at the Viz dashboard just to make sure.
+
+```bash
+linkerd viz dashboard
+```
+
+## Installing Emissary
+
+At this point, Linkerd is up and running, so we'll continue by installing
+Emissary-ingress, which works pretty much the same way as Linkerd: we install
+Emissary's CRDs first, then we install Emissary itself.
+
+We want Emissary to be in the Linkerd mesh from the start, so we'll begin by
+creating Emissary's namespace and annotating it such that any new Pods in that
+namespace will automatically be injected with the Linkerd proxy.
+
+```bash
+kubectl create namespace emissary
+kubectl annotate namespace emissary linkerd.io/inject=enabled
+```
+
+After that, we can install Emissary's CRDs. We're going to use Helm for this,
+using Emissary's unofficial OCI charts to give ourselves a lightweight demo
+installation. (These charts are still experimental, to be clear -- this is
+_not_ a production-ready installation!)
+
+```bash
+helm install emissary-crds -n emissary \
+     oci://ghcr.io/emissary-ingress/emissary-crds-chart \
+     --version 0.0.0-test \
+     --wait
+```
+
+Once that's done, we can install Emissary itself. We'll deliberately run just
+a single replica (this makes things simpler if you're running a local
+cluster!), and we'll wait for Emissary to be running before continuing.
+
+```bash
+helm install emissary -n emissary \
+     oci://ghcr.io/emissary-ingress/emissary-chart \
+     --version 0.0.0-test \
+     --set replicaCount=1
+
+kubectl rollout status -n emissary deploy --timeout 90s
+```
+
+With this, Emissary is running -- but it needs some configuration to be
+useful.
+
+## Configuring Emissary
+
+First things first: let's tell Emissary which ports and protocols we want to
+use. Specifically, we'll tell it to listen for HTTP on ports 8080 and 8443,
+and to accept any hostname. This is not great for production, but it's fine
+for us.
+
+```bash
+bat emissary-yaml/listeners-and-hosts.yaml
+kubectl apply -f emissary-yaml/listeners-and-hosts.yaml
+```
+
+Next up, we need to set up rate limiting. Since rate limiting usually needs to
+be closely tailored to the application, Emissary handles it using an external
+rate limiting service: for every request, Emissary asks the external service
+if rate limiting should be applied. So we need to install the rate limit
+service, then tell Emissary how to talk to it.
+
+```bash
+bat emissary-yaml/ratelimit-service.yaml
+kubectl apply -f emissary-yaml/ratelimit-service.yaml
+```
+
+Finally, we want Emissary to give us access to the Linkerd Viz dashboard.
+
+```bash
+bat emissary-yaml/linkerd-viz-mapping.yaml
+kubectl apply -f emissary-yaml/linkerd-viz-mapping.yaml
+```
+
+With that, Emissary should be good to go!
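+
+(A quick check from the command line can't hurt here. Assuming ports 80 and
+443 are mapped to the local host -- which is how `create-cluster.sh` sets up
+k3d -- Emissary should now answer HTTP requests instead of refusing the
+connection:
+
+```bash
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
+```
+
+The exact status code matters less than getting an answer at all.)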
+We can test it by checking out the Linkerd Viz dashboard again _without_
+using the `linkerd viz dashboard` command -- just going to the IP address of
+the `emissary` service from a browser should load up the dashboard.
+
+## Installing Faces
+
+Finally, let's install Faces! This is pretty simple: we'll create and annotate
+the namespace as before, then use Helm to install Faces:
+
+```bash
+kubectl create namespace faces
+kubectl annotate namespace faces linkerd.io/inject=enabled
+
+helm install faces -n faces \
+     oci://ghcr.io/buoyantio/faces-chart --version 1.3.0
+
+kubectl rollout status -n faces deploy
+```
+
+We'll also install basic Mappings and ServiceProfiles for the Faces workloads:
+
+```bash
+bat k8s/01-base/*-mapping.yaml
+bat k8s/01-base/*-profile.yaml
+kubectl apply -f k8s/01-base
+```
+
+And with that, let's take a quick look at Faces in the web browser. You'll be
+able to see that it's in pretty sorry shape, and you'll be able to look at the
+Linkerd dashboard to see how much traffic it generates.
@@ -210,7 +411,7 @@ abilities under load when we set the `MAX_RATE` environment variable, so we'll
 do that now:
 
 ```bash
-kubectl set env deploy -n faces face MAX_RATE=8.5
+kubectl set env deploy -n faces face MAX_RATE=9.0
 ```
 
 Once that's done, we can take a look in the browser to see what happens.

From 9177e6bb41fb8e9a2ab30eba84b504ba5362bd2f Mon Sep 17 00:00:00 2001
From: Flynn
Date: Wed, 22 May 2024 23:03:59 -0400
Subject: [PATCH 2/3] Add BEL demo script

Signed-off-by: Flynn
---
 DEMO-BEL.md | 459 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 459 insertions(+)
 create mode 100644 DEMO-BEL.md

diff --git a/DEMO-BEL.md b/DEMO-BEL.md
new file mode 100644
index 0000000..faa939d
--- /dev/null
+++ b/DEMO-BEL.md
@@ -0,0 +1,459 @@
+# Emissary and Linkerd Resilience Patterns
+
+This is the documentation - and executable code! - for a demo of resilience
+patterns using Emissary-ingress and Linkerd. The easiest way to use this file
+is to execute it with [demosh].
+
+Things in Markdown comments are safe to ignore when reading this later. When
+executing this with [demosh], things after the horizontal rule below (which
+is just before a commented `@SHOW` directive) will get displayed.
+
+[demosh]: https://github.com/BuoyantIO/demosh
+
+When you use `demosh` to run this file, your cluster will be checked for you.
+
+---
+
+# Emissary and Linkerd Resilience Patterns
+## Rate Limits, Retries, and Timeouts
+
+We're going to show various resilience techniques using the Faces demo (from
+https://github.com/BuoyantIO/faces-demo):
+
+- _Retries_ automatically repeat requests that fail;
+- _Timeouts_ cut off requests that take too long; and
+- _Rate limits_ protect services by restricting the amount of traffic that can
+  flow through to a service.
+
+All are important techniques for resilience, and all can be applied - at
+various points in the call stack - by infrastructure components like the
+ingress controller and/or service mesh.
+
+## Installing Linkerd
+
+We're going to install Linkerd first -- that lets us install Emissary and
+Faces directly into the mesh, rather than installing and then meshing as a
+separate step.
+
+### A digression on Linkerd releases
+
+There are two kinds of Linkerd releases: _edge_ and _stable_. The Linkerd
+project itself only produces edge releases, which show up every week or so and
+always have the latest and greatest features and fixes directly from the
+`main` branch.
+Stable releases are produced by the vendor community around Linkerd, and are
+the way to go for full support.
+
+For this demo, we'll use the stable Buoyant Enterprise for Linkerd (BEL)
+distribution. You'll need a free Buoyant account for this -- if you don't
+already have one, hit up https://enterprise.buoyant.io to get set up.
+
+### Installing the CLI
+
+Once you have your Buoyant account set up, and you've set the environment
+variables it'll tell you about, it's time to get the Linkerd CLI installed!
+The CLI makes it easy to work with Linkerd, and it's installed with this
+one-liner that will download the latest BEL CLI and get it set up to run:
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSfL https://enterprise.buoyant.io/install | sh
+linkerd version --proxy --client --short
+```
+
+Once that's done, you'll need to add the CLI directory to your PATH:
+
+```bash
+export PATH=$PATH:$HOME/.linkerd2/bin
+```
+
+and then we can make sure that this cluster really can run Linkerd:
+
+```bash
+linkerd check --pre
+```
+
+## Installing the Linkerd CRDs
+
+Linkerd uses Custom Resource Definitions (CRDs) to extend Kubernetes. After
+verifying that the cluster is ready to run Linkerd, we next need to install
+the CRDs. We do this by running `linkerd install --crds`, which will output
+the CRDs that need to be installed so that we can apply them to the cluster.
+(The Linkerd CLI will never directly modify the cluster.)
+
+```bash
+linkerd install --crds | kubectl apply -f -
+```
+
+As you can see in the output above, Linkerd doesn't actually install many
+CRDs, and in fact it can add security and observability to an application
+without using _any_ of these CRDs. However, they're necessary for more
+advanced usage.
+
+## Installing Linkerd and Linkerd Viz
+
+Now that the CRDs are installed, we can install Linkerd itself.
+
+```bash
+linkerd install | kubectl apply -f -
+```
+
+We're also going to install Linkerd Viz: this is an optional component that
+provides a web-based dashboard for Linkerd. It's a great way to see what's
+happening in your cluster, so we'll install it as well.
+
+(For the moment, we need to tell the CLI to explicitly install Viz 2.14.10.)
+
+```bash
+linkerd viz install --set linkerdVersion=stable-2.14.10 | kubectl apply -f -
+```
+
+Just like Linkerd itself, this will start the installation and return
+immediately, so - again - we'll use `linkerd check` to make sure all is well.
+
+```bash
+linkerd check
+```
+
+So far so good -- let's take a look at the Viz dashboard just to make sure.
+
+```bash
+linkerd viz dashboard
+```
+
+## Installing Emissary
+
+At this point, Linkerd is up and running, so we'll continue by installing
+Emissary-ingress, which works pretty much the same way as Linkerd: we install
+Emissary's CRDs first, then we install Emissary itself.
+
+We want Emissary to be in the Linkerd mesh from the start, so we'll begin by
+creating Emissary's namespace and annotating it such that any new Pods in that
+namespace will automatically be injected with the Linkerd proxy.
+
+```bash
+kubectl create namespace emissary
+kubectl annotate namespace emissary linkerd.io/inject=enabled
+```
+
+After that, we can install Emissary's CRDs. We're going to use Helm for this,
+using Emissary's unofficial OCI charts to give ourselves a lightweight demo
+installation. (These charts are still experimental, to be clear -- this is
+_not_ a production-ready installation!)
+
+```bash
+helm install emissary-crds -n emissary \
+     oci://ghcr.io/emissary-ingress/emissary-crds-chart \
+     --version 0.0.0-test \
+     --wait
+```
+
+Once that's done, we can install Emissary itself. We'll deliberately run just
+a single replica (this makes things simpler if you're running a local
+cluster!), and we'll wait for Emissary to be running before continuing.
+
+```bash
+helm install emissary -n emissary \
+     oci://ghcr.io/emissary-ingress/emissary-chart \
+     --version 0.0.0-test \
+     --set replicaCount=1
+
+kubectl rollout status -n emissary deploy --timeout 90s
+```
+
+With this, Emissary is running -- but it needs some configuration to be
+useful.
+
+## Configuring Emissary
+
+First things first: let's tell Emissary which ports and protocols we want to
+use. Specifically, we'll tell it to listen for HTTP on ports 8080 and 8443,
+and to accept any hostname. This is not great for production, but it's fine
+for us.
+
+```bash
+bat emissary-yaml/listeners-and-hosts.yaml
+kubectl apply -f emissary-yaml/listeners-and-hosts.yaml
+```
+
+Next up, we need to set up rate limiting. Since rate limiting usually needs to
+be closely tailored to the application, Emissary handles it using an external
+rate limiting service: for every request, Emissary asks the external service
+if rate limiting should be applied. So we need to install the rate limit
+service, then tell Emissary how to talk to it.
+
+```bash
+bat emissary-yaml/ratelimit-service.yaml
+kubectl apply -f emissary-yaml/ratelimit-service.yaml
+```
+
+Finally, we want Emissary to give us access to the Linkerd Viz dashboard.
+
+```bash
+bat emissary-yaml/linkerd-viz-mapping.yaml
+kubectl apply -f emissary-yaml/linkerd-viz-mapping.yaml
+```
+
+With that, Emissary should be good to go! We can test it by checking out the
+Linkerd Viz dashboard again _without_ using the `linkerd viz dashboard`
+command -- just going to the IP address of the `emissary` service from a
+browser should load up the dashboard.
+
+## Installing Faces
+
+Finally, let's install Faces! This is pretty simple: we'll create and annotate
+the namespace as before, then use Helm to install Faces:
+
+```bash
+kubectl create namespace faces
+kubectl annotate namespace faces linkerd.io/inject=enabled
+
+helm install faces -n faces \
+     oci://ghcr.io/buoyantio/faces-chart --version 1.3.0
+
+kubectl rollout status -n faces deploy
+```
+
+We'll also install basic Mappings and ServiceProfiles for the Faces workloads:
+
+```bash
+bat k8s/01-base/*-mapping.yaml
+bat k8s/01-base/*-profile.yaml
+kubectl apply -f k8s/01-base
+```
+
+And with that, let's take a quick look at Faces in the web browser. You'll be
+able to see that it's in pretty sorry shape, and you'll be able to look at the
+Linkerd dashboard to see how much traffic it generates.
+
+## RETRIES
+
+Let's start by going after the red frowning faces: those are the ones where
+the face service itself is failing. We can tell Emissary to retry those when
+they fail, by adding a `retry_policy` to the Mapping for `/face/`:
+
+```bash
+diff -u99 --color k8s/{01-base,02-retries}/face-mapping.yaml
+```
+
+We'll apply those...
+
+```bash
+kubectl apply -f k8s/02-retries/face-mapping.yaml
+```
+
+...then go take a look at the results in the browser.
+
+## RETRIES continued
+
+So that helped quite a bit: it's not perfect, because Emissary will only retry
+once, but it definitely cuts down on problems!
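+
+For reference, a `retry_policy` stanza in a Mapping looks something like
+this -- the values below are illustrative rather than copied from
+`k8s/02-retries`, but it's worth knowing that `num_retries` defaults to 1,
+which matches the single retry we're seeing:
+
+```yaml
+retry_policy:
+  retry_on: "5xx"   # retry when the upstream returns a 5xx response
+  num_retries: 1    # Emissary's default: one retry per request
+```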
+
+Let's continue by adding a retry for the smiley service, too, to try to get
+rid of the cursing faces:
+
+```bash
+diff -u99 --color k8s/{01-base,02-retries}/smiley-mapping.yaml
+```
+
+Let's apply those and go take a look in the browser.
+
+```bash
+kubectl apply -f k8s/02-retries/smiley-mapping.yaml
+```
+
+## RETRIES continued
+
+That... had no effect. If we take a look back at the overall application
+diagram, the reason is clear...
+
+...Emissary never talks to the smiley service! so telling Emissary to retry
+the failed call will never work.
+
+Instead, we need to tell Linkerd to do the retries, by adding `isRetryable` to
+the `ServiceProfile` for the smiley service:
+
+```bash
+diff -u99 --color k8s/{01-base,02-retries}/smiley-profile.yaml
+```
+
+This is different from the Emissary version because Linkerd uses a _retry
+budget_ instead of a counter: as long as the total number of retries doesn't
+exceed the budget, Linkerd will just keep retrying. Let's apply that and take
+a look.
+
+```bash
+kubectl apply -f k8s/02-retries/smiley-profile.yaml
+```
+
+## RETRIES continued
+
+That works great. Let's do the same for the color service.
+
+```bash
+diff -u99 --color k8s/{01-base,02-retries}/color-profile.yaml
+kubectl apply -f k8s/02-retries/color-profile.yaml
+```
+
+And, again, back to the browser to check it out.
+
+## RETRIES continued
+
+Finally, let's go back to the browser to take a look at the load on the
+services now. Retries actually _increase_ the load on the services, since they
+cause more requests: they're not about protecting the service, they're about
+**improving the experience of the client**.
+
+## TIMEOUTS
+
+Things are a lot better already! but... still too slow, which we can see as
+those cells that are fading away. Let's add some timeouts, starting from the
+bottom of the call graph this time.
+
+Again, timeouts are not about protecting the service: they are about
+**providing agency to the client** by giving the client a chance to decide
+what to do when things take too long. In fact, like retries, they _increase_
+the load on the service.
+
+We'll start by adding a timeout to the color service. This timeout will give
+agency to the face service, as the client of the color service: when a call to
+the color service takes too long, the face service will show a pink background
+for that cell.
+
+```bash
+diff -u99 --color k8s/{02-retries,03-timeouts}/color-profile.yaml
+```
+
+Let's apply that and then switch back to the browser to see what's up.
+
+```bash
+kubectl apply -f k8s/03-timeouts/color-profile.yaml
+```
+
+## TIMEOUTS continued
+
+Let's continue by adding a timeout to the smiley service. The face service
+will show a smiley-service timeout as a sleeping face.
+
+```bash
+diff -u99 --color k8s/{02-retries,03-timeouts}/smiley-profile.yaml
+kubectl apply -f k8s/03-timeouts/smiley-profile.yaml
+```
+
+## TIMEOUTS continued
+
+Finally, we'll add a timeout that lets the GUI decide what to do if the face
+service itself takes too long. We'll use Emissary for this (although we
+could've used Linkerd, since Emissary is itself in the mesh).
+
+When the GUI sees a timeout talking to the face service, it will just keep
+showing the user the old data for a while. There are a lot of applications
+where this makes an enormous amount of sense: if you can't get updated data,
+the most recent data may still be valuable for some time!
+Eventually, though, the app should really show the user that something is
+wrong: in our GUI, repeated timeouts lead to a faded sleeping-face cell with
+a pink background.
+
+For the moment, too, the GUI will show a counter of timed-out attempts, to
+make it a little clearer what's going on.
+
+```bash
+diff -u99 --color k8s/{02-retries,03-timeouts}/face-mapping.yaml
+kubectl apply -f k8s/03-timeouts/face-mapping.yaml
+```
+
+## RATELIMITS
+
+Given retries and timeouts, things look better -- still far from perfect, but
+better. Suppose, though, that someone now adds some code to the face service
+that makes it just completely collapse under heavy load? Sadly, mistakes like
+this are all too easy to make.
+
+Let's simulate this. The face service has internal functionality to limit its
+abilities under load when we set the `MAX_RATE` environment variable, so we'll
+do that now:
+
+```bash
+kubectl set env deploy -n faces face MAX_RATE=9.0
+```
+
+Once that's done, we can take a look in the browser to see what happens.
+
+## RATELIMITS continued
+
+Since the face service is right on the edge, we can have Emissary enforce a
+rate limit on requests to the face service. This is both protecting the
+service (by reducing the traffic) **and** providing agency to the client (by
+providing a specific status code when the limit is hit). Here, our web app is
+going to handle rate limits just like it handles timeouts.
+
+Actually setting the rate limit is one of the messier bits of Emissary: the
+most important thing to realize is that we're attaching a **label** to the
+requests, and that the external rate limit service counts traffic with that
+label to decide what response to hand back.
+
+```bash
+diff -u99 --color k8s/{03-timeouts,04-ratelimits}/face-mapping.yaml
+```
+
+For this demo, our rate limit service is preconfigured to allow 8 requests per
+second. Let's apply this and see how things look:
+
+```bash
+kubectl apply -f k8s/04-ratelimits/face-mapping.yaml
+```
+
+# SUMMARY
+
+We've used both Emissary and Linkerd to take a very, very broken application
+and turn it into something the user might actually have an OK experience with.
+Fixing the application is, of course, still necessary!! but making the user
+experience better is a good thing.

From c6d2e4e404f5bceda3535f14eb6ce72ea0fe4e89 Mon Sep 17 00:00:00 2001
From: Flynn
Date: Fri, 7 Jun 2024 17:35:41 -0400
Subject: [PATCH 3/3] Update README; clean up some unneeded files.

Signed-off-by: Flynn
---
 README.md         | 48 ++++++++-------------
 create-cluster.sh |  7 +---
 reset.sh          | 12 -------
 setup-cluster.sh  | 88 -----------------------------------------------
 4 files changed, 16 insertions(+), 139 deletions(-)
 delete mode 100755 reset.sh
 delete mode 100755 setup-cluster.sh

diff --git a/README.md b/README.md
index a1376f2..2c50ab4 100644
--- a/README.md
+++ b/README.md
@@ -8,30 +8,23 @@ of the demo is let you try to fix things.
 
 In here you will find:
 
-- `create-cluster.sh`, a shell script to create a `k3d` cluster and prep it by
-  running `setup-cluster.sh`.
+- `DEMO.md`, a Markdown file containing the resilience demo presented live at
+  a couple of events. This uses [Emissary-ingress] and the latest edge release
+  of [Linkerd].
 
-- `setup-cluster.sh`, a shell script to set up an empty cluster with [Linkerd],
-  [Emissary-ingress], and the Faces app.
-
-  These things are installed in a demo configuration: read and think
-  **carefully** before using this demo as background for a production
-  installation! In particular:
+- `DEMO-BEL.md`, a Markdown file for the same resilience demo, but using
+  [Buoyant Enterprise for Linkerd].
 
-  - We deploy Emissary with only one replica of everything, using a
-    currently-unofficial chart to also skip support for `v1` and `v2`
-    Emissary CRDs.
+The easiest way to use either demo is to run it with [demosh]. Both demos
+assume that you have an empty k3d cluster to play with! If you don't have one,
+you can create one with `bash create-cluster.sh` (this will delete any
+existing `k3d` cluster named "faces").
 
-  - We only configure HTTP, not HTTPS.
-
-  These are likely both bad ideas for a production installation.
-
-- `DEMO.md`, a Markdown file for the resilience demo presented live for a
-  couple of events. The easiest way to use `DEMO.md` is to run it with
-  [demosh].
+**Note**: most of the demo doesn't actually care what kind of cluster you use.
+The one hard dependency: as written, the demo assumes it can reach the
+`emissary-ingress` service in the `emissary` namespace on localhost port 80.
+If you're using something other than k3d, you'll need to tweak the demo to
+talk to the correct URL.
 
-  - (You can also run `create-cluster.sh` and `setup-cluster.sh` with
-    [demosh], but they're fine with `bash` as well. Realize that all the
-    `#@` comments are special to [demosh] and ignored by `bash`.)
 
 ## To try this yourself:
 
 - **Note:** `create-cluster.sh` will delete any existing `k3d` cluster named
   "faces".
 
-- If you already have an empty cluster to use, you can run `bash setup-cluster.sh`
-  to initialize it.
-
-- Play around!! Assuming that you're using k3d, the Faces app is reachable at
-  http://localhost/faces/ and the Linkerd Viz dashboard is available at
-  http://localhost/
-
-  - If you're not using k3d, instead of `localhost` use the IP or DNS name of
-    the `emissary-ingress` service in the `emissary` namespace.
-
-  - Remember, HTTPS is **not** configured.
-
 - To run the demo as we've given it before, check out [DEMO.md]. The easiest
   way to use that is to run it with [demosh].
 
@@ -87,6 +68,7 @@ The Faces architecture is fairly simple:
    Faces authors have normal color vision...
 
 [Linkerd]: https://linkerd.io
+[Buoyant Enterprise for Linkerd]: https://buoyant.io/linkerd-enterprise
 [Emissary-ingress]: https://www.getambassador.io/docs/emissary/
 [DEMO.md]: DEMO.md
 [demosh]: https://github.com/BuoyantIO/demosh

diff --git a/create-cluster.sh b/create-cluster.sh
index a66e945..4951ae5 100755
--- a/create-cluster.sh
+++ b/create-cluster.sh
@@ -23,16 +23,13 @@ clear
 
 CLUSTER=${CLUSTER:-faces}
 # echo "CLUSTER is $CLUSTER"
 
-SETUP=${SETUP:-setup-cluster.sh}
-# echo "SETUP is $SETUP"
-
 # Ditch any old cluster...
 k3d cluster delete $CLUSTER &>/dev/null
 
 #@SHOW
 
 # Expose ports 80 and 443 to the local host, so that our ingress can work.
-# Also, don't install traefik, since we'll be putting Linkerd on instead.
+# Also, don't install traefik, since we don't need it.
 k3d cluster create $CLUSTER \
     -p "80:80@loadbalancer" -p "443:443@loadbalancer" \
     --k3s-arg '--disable=traefik@server:*;agents:*'
 
 # if [ -f images.tar ]; then k3d image import -c ${CLUSTER} images.tar; fi
 #
 #@wait
-
-# $SHELL $SETUP

diff --git a/reset.sh b/reset.sh
deleted file mode 100755
index 6edcf45..0000000
--- a/reset.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-kubectl apply -f k8s/01-base/color-mapping.yaml
-kubectl apply -f k8s/01-base/color-profile.yaml
-kubectl apply -f k8s/01-base/smiley-mapping.yaml
-kubectl apply -f k8s/01-base/smiley-profile.yaml
-
-kubectl apply -f k8s/01-base/face-mapping.yaml
-kubectl apply -f k8s/01-base/face-profile.yaml
-
-kubectl apply -f k8s/01-base/faces-gui-mapping.yaml
-
-kubectl delete -n faces deploy/face || true
-linkerd inject k8s/01-base/faces.yaml | kubectl apply --overwrite -f -

diff --git a/setup-cluster.sh b/setup-cluster.sh
deleted file mode 100755
index da7f3b2..0000000
--- a/setup-cluster.sh
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/env bash
-#
-# SPDX-FileCopyrightText: 2022 Buoyant Inc.
-# SPDX-License-Identifier: Apache-2.0
-#
-# Copyright 2022-2024 Buoyant Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http:#www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-clear
-
-# Make sure that we're in the namespace we expect.
-kubectl ns default
-
-# Tell demosh to show commands as they're run.
-#@SHOW
-
-#@clear
-# Install Linkerd, per the quickstart.
-#### LINKERD_INSTALL_START
-curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh
-
-linkerd install --crds | kubectl apply -f -
-linkerd install | kubectl apply -f -
-linkerd check
-#### LINKERD_INSTALL_END
-
-linkerd viz install | kubectl apply -f -
-linkerd check
-
-#@wait
-#@clear
-# Next up: install Emissary-ingress 3.9.1 as the ingress.
-#
-# This is actually cheating quite a bit, by using an unofficial Helm chart
-# for the CRDs so that we can disable Emissary's conversion webhook to speed
-# up the deployment. We also force every Deployment to one replica to reduce
-# the load on k3d.
-
-kubectl create ns emissary
-kubectl annotate ns emissary linkerd.io/inject=enabled
-
-helm install emissary-crds \
-     oci://registry-1.docker.io/dwflynn/emissary-ingress-crds-chart \
-     -n emissary \
-     --version 3.9.1 \
-     --wait
-
-helm install emissary-ingress \
-     oci://ghcr.io/emissary-ingress/emissary-chart \
-     -n emissary \
-     --version 0.0.0-test \
-     --set nameOverride=emissary \
-     --set fullnameOverride=emissary \
-     --set replicaCount=1
-
-kubectl -n emissary wait --for condition=available --timeout=90s deploy -lproduct=aes
-
-#@wait
-#@clear
-# Finally, configure Emissary for HTTP - not HTTPS! - routing to our cluster.
-kubectl apply -f emissary-yaml
-
-#@wait
-#@clear
-# Once that's done, install Faces, being sure to inject it into the mesh.
-# Install its ServiceProfiles and Mappings too: all of these things are in
-# the k8s directory.
-
-kubectl create ns faces
-kubectl annotate ns faces linkerd.io/inject=enabled
-
-helm install faces -n faces \
-     oci://ghcr.io/buoyantio/faces-chart --version 1.2.0
-
-kubectl rollout status -n faces deploy
-kubectl apply -f k8s/01-base