diff --git a/oke-with-agones/0-workshop-introduction/0-workshop-introduction.md b/oke-with-agones/0-workshop-introduction/0-workshop-introduction.md
new file mode 100644
index 000000000..65fa14fd0
--- /dev/null
+++ b/oke-with-agones/0-workshop-introduction/0-workshop-introduction.md
@@ -0,0 +1,55 @@

# Introduction

## **What are OKE and Agones?**

### OKE Intro

Oracle Kubernetes Engine (OKE) is the OCI service for running and operating enterprise-grade Kubernetes at scale. You can easily deploy and manage resource-intensive workloads such as dedicated game servers with automatic scaling, patching, and upgrades.

### Agones Intro

Agones is an open source platform for deploying, hosting, scaling, and orchestrating dedicated game servers for large-scale multiplayer games, built on top of Kubernetes, the industry-standard distributed system platform.

Agones replaces bespoke or proprietary cluster management and game server scaling solutions with an open source solution that can be utilized and communally developed - so that you can focus on the important aspects of building a multiplayer game, rather than developing the infrastructure to support it.

### Workshop Lab Objectives

* Ensure you have installed the prerequisites
* Create the OCI infrastructure
* Set up OKE autoscaling
* Set up the Agones system pods with Helm
* Deploy an Agones Fleet (dedicated game servers)
* Scale an Agones Fleet and OKE nodes
* Teardown

### Labs

| Module | Est. Time |
| ------------- | :-----------: |
| [Workshop Introduction](?lab=0-workshop-introduction) | 5 minutes |
| [Get Started](?lab=1-get-started) | 15 minutes |
| [Creating OCI Resources With Terraform](?lab=2-create-infrastructure-with-terraform) | 30 minutes |
| [Installing the OKE Autoscaler Addon](?lab=3-install-oke-autoscaler-addon) | 20 minutes |
| [Create the Agones System Pods with Helm](?lab=4-create-agones-system) | 15 minutes |
| [Deploy an Agones Fleet and Autoscale OKE Nodes](?lab=5-create-scale-agones-fleet) | 25 minutes |
| [Teardown](?lab=6-teardown) | 10 minutes |

Total estimated time: 120 minutes

## Task 1: Begin The Labs

Use the left navigation on this page to begin the labs in this workshop.

You may now **proceed to the next lab**

## Learn More - *Useful Links*

- [Kubernetes](https://kubernetes.io/)
- [OKE](https://www.oracle.com/cloud/cloud-native/kubernetes-engine/)
- [OKE Terraform Module](https://github.com/oracle-terraform-modules/terraform-oci-oke)
- [Agones](https://agones.dev/site/docs/)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/1-get-started/1-get-started.md b/oke-with-agones/1-get-started/1-get-started.md
new file mode 100644
index 000000000..b23282e04
--- /dev/null
+++ b/oke-with-agones/1-get-started/1-get-started.md
@@ -0,0 +1,89 @@

# Get Started

In this lab you will install the necessary components for this workshop.

## Introduction

To complete this workshop, you will need tooling that can connect to OCI and deploy OCI resources.

You will be using Terraform to deploy to OCI, and you will also need the OCI CLI.
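Once both tools are installed (Tasks 1 and 2 below), a quick sanity check along these lines confirms your shell is ready; this is a sketch, and your version numbers will differ:

````shell

# Confirm the OCI CLI is installed and that your API key config works
oci --version
oci iam region list

# Confirm Terraform is installed
terraform -version

````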
Estimated Time: 15 minutes

### Objectives

In this lab, you will:
 - Install the OCI CLI
 - Install Terraform
 - Download the Terraform files
 - Initialize Terraform

### Prerequisites

Please ensure you have the following before continuing:

 - An OCI Tenancy
 - A shell ([OCI Cloud Shell](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/cloudshellintro.htm), Linux, macOS, or Windows with WSL)
 - A user in a group with Tenancy Admin privileges and a downloaded API Key

## Task 1: Install the OCI CLI

Install the OCI CLI.

1. Make sure you have the Tenancy Admin policy. This is required because the Terraform OKE module creates a dynamic group policy; everything else gets created in an OCI compartment you specify in the `terraform.tfvars` file.

2. Follow the [install steps from Oracle](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm)

3. Make sure you followed the steps above and are fully set up with API Keys (this will be the case if you ran `oci setup config`) for the user with the Tenancy Admin role mentioned above.

## Task 2: Install Terraform

Complete the [install steps from HashiCorp](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)

## Task 3: Download the Terraform Files

Download the Terraform files.

1. Create a directory called `infrastructure` on your system

    ````shell

    mkdir infrastructure
    cd infrastructure

    ````

2. Download the Terraform files from [terraform.tar.gz](./files/terraform.tar.gz) to `infrastructure`.

3. Untar the downloaded file

    ````shell

    tar -xvzf terraform.tar.gz

    ````

4. From within the `infrastructure` folder, initialize Terraform

    ````shell

    terraform init

    ````

You may now **proceed to the next lab**

## Learn More - *Useful Links*

- [Kubernetes](https://kubernetes.io/)
- [OKE](https://www.oracle.com/cloud/cloud-native/kubernetes-engine/)
- [OKE Terraform Module](https://oracle-terraform-modules.github.io/terraform-oci-oke/)

## **Summary**

You have now initialized the dependencies for this workshop.

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/1-get-started/files/terraform.tar.gz b/oke-with-agones/1-get-started/files/terraform.tar.gz
new file mode 100644
index 000000000..92c4bfc12
Binary files /dev/null and b/oke-with-agones/1-get-started/files/terraform.tar.gz differ
diff --git a/oke-with-agones/2-create-infrastructure-with-terraform/2-create-infrastructure-with-terraform.md b/oke-with-agones/2-create-infrastructure-with-terraform/2-create-infrastructure-with-terraform.md
new file mode 100644
index 000000000..ecd06420d
--- /dev/null
+++ b/oke-with-agones/2-create-infrastructure-with-terraform/2-create-infrastructure-with-terraform.md
@@ -0,0 +1,91 @@

# Create OCI Resources With Terraform

In this lab you will create the OCI network, Bastion, Operator, and OKE cluster using Terraform and the OKE Terraform module. It's important to read through the [OKE Module documentation](https://oracle-terraform-modules.github.io/terraform-oci-oke/), as there are numerous options that can apply to your specific OCI deployment.
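Before changing anything, it can help to skim what the downloaded Terraform files define; a quick look such as the following (a sketch, using the file names from Lab 1's download) shows where the module options live:

````shell

cd infrastructure

# variables.tf documents the inputs you can set in terraform.tfvars
grep -n "variable" variables.tf

# module.tf wires those inputs into the OKE Terraform module,
# including the node pools used later in this workshop
less module.tf

````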
## Introduction

This Terraform deployment creates the following resources:

- Private OKE control plane
- Private Operator (with kubectl installed)
- Public Bastion (for SSH tunneling to the Operator)
- Three node pools (one public for game servers, and two private for the Autoscaler and the Agones system pods respectively)
- Security group rules for UDP access (game server to game client connectivity)
- VCN logs for logging traffic

Estimated Time: 30 minutes

### Objectives

In this lab, you will:
 - Configure your Terraform variables
 - Run a Terraform plan and apply changes

### Prerequisites

 - Completed Lab 1, which walked through sourcing the Terraform files

## Task 1: Update Terraform Variables

Customize the infrastructure to fit your tenancy and compartment.

1. Edit `infrastructure/terraform.tfvars` with the relevant information to match your account OCIDs and API Keys. For a full description of each variable in that file, refer to `infrastructure/variables.tf`.

2. Optionally, edit `infrastructure/module.tf` to tweak the OKE settings as needed for your deployment. The current settings work as is for the purpose of this lab. It's a good idea to look at these settings, since the OKE Terraform module does a lot and is very customizable.

## Task 2: Create a Terraform Plan

Create a Terraform plan and check its output to make sure it's what you expect. After you validate the plan, you can move on to the next task.

````shell

cd infrastructure
terraform plan

````

## Task 3: Apply the Terraform Plan

You can now apply the plan to create the infrastructure. Allow some time for the apply to complete.

````shell

terraform apply

````

## Task 4: Connect to the Bastion

Connect to the Bastion to ensure you have access to the OKE control plane.

1. Get the Terraform output. It includes an example command that you can use to SSH to the Bastion and jump to the Operator.

    ````shell

    terraform output

    ````

2. The Operator has kubectl installed with connectivity to the OKE control plane. **An example of that output is below** (the placeholders stand in for the IP addresses from your `terraform output`). You should test this command and make sure it works before proceeding to the next section of this workshop.

    ````shell

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ````

You may now **proceed to the next lab**

## **Summary**

You have now deployed the necessary infrastructure, connected to the Bastion, and jumped to the Operator. You are ready to begin installing the autoscaler and more.

## Learn More - *Useful Links*

- [OKE Terraform Module](https://oracle-terraform-modules.github.io/terraform-oci-oke/)
- [Terraform Variables](https://developer.hashicorp.com/terraform/language/values/variables)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/3-install-oke-autoscaler-addon/3-install-oke-autoscaler-addon.md b/oke-with-agones/3-install-oke-autoscaler-addon/3-install-oke-autoscaler-addon.md
new file mode 100644
index 000000000..e48976ffb
--- /dev/null
+++ b/oke-with-agones/3-install-oke-autoscaler-addon/3-install-oke-autoscaler-addon.md
@@ -0,0 +1,124 @@

# Install the OKE Autoscaler Addon

In this lab you will install and verify the OKE Cluster Autoscaler add-on.

## Introduction

The OKE Cluster Autoscaler add-on watches and manages the node pools you designate for autoscaling.
The node pool you will configure this add-on to watch is the pool that will run the Agones fleet in later labs of this workshop.

The add-on itself is installed into its own node pool to isolate it from its own scaling events.

Estimated Time: 20 minutes

### Objectives

In this lab, you will:
 - Verify the installation of the Autoscaler
 - Create a config file for the Autoscaler add-on
 - Install the Autoscaler add-on

### Prerequisites

 - Completed Lab 2, which walked through deploying the infrastructure

## Task 1: Verify if the Autoscaler is already installed

Depending on the OKE Terraform module used and your connectivity when creating the infrastructure, the OKE Terraform may have installed the add-on for you. It is configured to do so, but this does not work in some scenarios.

You should verify the current state of the add-ons to see if the autoscaler was installed by following the steps below.

1. SSH to your Operator using the output from `terraform output`; example below.

    ```bash

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ```

2. Get the OCID of your cluster by running this command and looking for the `id` (which is the OCID) of the cluster you just created. This can also be obtained from the web console.

    ````shell

    oci ce cluster list -c <compartment-ocid>

    ````

3. Get the add-ons installed on your cluster

    ````shell

    oci ce cluster list-addons --cluster-id <cluster-ocid>

    ````

    If `ClusterAutoscaler` is listed as one of the add-ons, you can go to the next lab in this workshop. If not, proceed with the remaining tasks here to install it.

## Task 2: Install the Autoscaler Addon

Assuming the previous task indicated the add-on was not installed, you can now install it. You can also, optionally, perform the following steps manually in the web console for OKE.

1. SSH to your Operator using the output from `terraform output`; example below.

    ```bash

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ```

2. Get the OCID of the `node_pool_workers` node pool. This is the pool that will run the Agones fleet in subsequent labs of this workshop.

    ````shell

    kubectl get node -l oke.oraclecloud.com/pool.name=node_pool_workers -o json | grep node-pool-id

    ````

    **This OCID is the Node Pool OCID that you will use in the next step.**

3. The file [addon.json](./files/addon.json) will be used as an example. Its format is `<min-nodes>:<max-nodes>:<node-pool-ocid>`. It's important to remember that as your node pools change (renaming, changing Terraform, etc.), their respective OCIDs will change and you will need to update this config.

    Create the config as `addon.json` and paste the contents from [addon.json](./files/addon.json). Replace the `<node-pool-ocid>` placeholder in the file with the OCID from the previous step above.

    ```bash

    # using vim or nano
    vim addon.json

    # paste from addon.json into this new file and save
    # paste in the correct Node Pool OCID

    ```

4. Install the add-on using the newly created config file. This should run without error, and a resulting work request ID will be displayed.

    ````shell

    oci ce cluster install-addon --addon-name ClusterAutoscaler --from-json file://addon.json --cluster-id <cluster-ocid>

    ````

5. Verify there are no errors with the newly installed add-on. The result should say `ACTIVE`.
    ````shell

    oci ce cluster get-addon --addon-name ClusterAutoscaler --cluster-id <cluster-ocid> | grep lifecycle-state

    ````

You may now **proceed to the next lab**

## **Summary**

You have installed the OKE Cluster Autoscaler add-on and configured it to watch the node pool that will be running the Agones fleet in subsequent labs of this workshop.

## Learn More - *Useful Links*

- [Working with the OKE Cluster Autoscaler Add-on](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengusingclusterautoscaler_topic-Working_with_Cluster_Autoscaler_as_Cluster_Add-on.htm)
- ["oci ce cluster addon" documentation](https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.50.3/oci_cli_docs/cmdref/ce/cluster.html)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/3-install-oke-autoscaler-addon/files/addon.json b/oke-with-agones/3-install-oke-autoscaler-addon/files/addon.json
new file mode 100644
index 000000000..ff145a222
--- /dev/null
+++ b/oke-with-agones/3-install-oke-autoscaler-addon/files/addon.json
@@ -0,0 +1,9 @@
{
  "addonName": "ClusterAutoscaler",
  "configurations": [
    {
      "key": "nodes",
      "value": "2:10:<node-pool-ocid>"
    }
  ]
}
\ No newline at end of file
diff --git a/oke-with-agones/4-create-agones-system/4-create-agones-system.md b/oke-with-agones/4-create-agones-system/4-create-agones-system.md
new file mode 100644
index 000000000..399899c07
--- /dev/null
+++ b/oke-with-agones/4-create-agones-system/4-create-agones-system.md
@@ -0,0 +1,136 @@

# Create the Agones System Pods with Helm

In this lab you will deploy Agones using Helm and test UDP connectivity to a dedicated Agones game server.

## Introduction

You will install the Agones components into your OKE cluster using Helm. This will deploy the pods for Agones itself, such as its Allocator and other services. You will also quickly deploy a single dedicated game server that you will use to test UDP connectivity to the public node pool that serves UDP.

Estimated Time: 15 minutes

### Objectives

In this lab, you will:
 - Install Agones by using Helm
 - Install an Agones dedicated game server
 - Make a test connection to the game server

### Prerequisites

 - Completed the previous labs

## Task 1: Install Agones Helm Chart

In this task you will create the Agones components and create LoadBalancer services for the Allocator and the Ping HTTP service. No games or game servers are deployed in this step. Note that Agones respects the node labels set in `module.tf`, so the end result is that the Agones system pods all run on a node pool separate from the worker node pool and from the autoscaler node pool.

1. SSH to your Operator using the output from `terraform output`; example below.

    ```bash

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ```

2. Deploy the Agones system using Helm

    ````shell

    helm repo add agones https://agones.dev/chart/stable
    helm repo update

    helm install my-release --namespace agones-system --create-namespace agones/agones

    helm test my-release -n agones-system

    ````

3. Get the status of all the Agones pods; they should all be running (allocator, controller, extensions, ping)

    ````shell

    kubectl get pods --namespace agones-system

    ````

    Example output:
    ```bash

    [opc@o-xiteaz ~]$ kubectl get pods --namespace agones-system

    NAME                                 READY   STATUS    RESTARTS   AGE
    agones-allocator-79d8dbfcbb-r5k4j    1/1     Running   0          2m23s
    agones-allocator-79d8dbfcbb-sf6bt    1/1     Running   0          2m23s
    agones-allocator-79d8dbfcbb-xk4h5    1/1     Running   0          2m23s
    agones-controller-657c48fdfd-bfl67   1/1     Running   0          2m23s
    agones-controller-657c48fdfd-gvt2m   1/1     Running   0          2m23s
    agones-extensions-7bbbf98956-bcjkk   1/1     Running   0          2m23s
    agones-extensions-7bbbf98956-tbbrx   1/1     Running   0          2m23s
    agones-ping-6848778bd7-7z76r         1/1     Running   0          2m23s
    agones-ping-6848778bd7-dg5wp         1/1     Running   0          2m23s

    ```

## Task 2: Test Agones with A Game Server and Client

This step can be skipped, but it's a good way to test simple connectivity from game clients without having to create an Agones Fleet, and before trying autoscaling. It acts as a proof-of-concept dedicated game server.

The steps here follow the [guide built by Agones](https://agones.dev/site/docs/getting-started/create-gameserver/).

1. From the Operator, after you SSH, create the game server (by default this will go into the `default` namespace, and that namespace uses the `node_pool_workers` node pool)

    ````shell

    kubectl create -f https://raw.githubusercontent.com/googleforgames/agones/release-1.45.0/examples/simple-game-server/gameserver.yaml

    ````

2. Get the IP and port of a `gameserver` for the next step

    ````shell

    kubectl get gameserver

    ````

3. Make a UDP connection and test, substituting the IP and port reported by the previous step. You are testing this from the Operator, which is in the private subnet. But you should also test this from another shell that is on the internet, and you should get the same results.

    ````shell

    nc -uv <gameserver-ip> 7043

    ````

4. Now type the following line and hit Enter; you will see a response of `ACK: HELLO WORLD!`

    ```bash

    HELLO WORLD!

    ```

5. Delete the `gameserver` when done, using its name from `kubectl get gameserver`

    ````shell

    kubectl get gameserver
    kubectl delete gameserver <gameserver-name>

    ````

You may now **proceed to the next lab**

## **Summary**

You installed the Agones system pods using Helm. You then created a dedicated gameserver deployment that exposes a UDP port. You successfully made a UDP connection to the dedicated gameserver's public IP.

## Learn More - *Useful Links*

- [Agones](https://agones.dev/site/docs/)
- [Kubernetes](https://kubernetes.io/)
- [Helm](https://helm.sh/)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
diff --git a/oke-with-agones/5-create-scale-agones-fleet/5-create-scale-agones-fleet.md b/oke-with-agones/5-create-scale-agones-fleet/5-create-scale-agones-fleet.md
new file mode 100644
index 000000000..f1fe4d1c0
--- /dev/null
+++ b/oke-with-agones/5-create-scale-agones-fleet/5-create-scale-agones-fleet.md
@@ -0,0 +1,149 @@

# Deploy an Agones Fleet and Autoscale OKE Nodes

You will deploy an Agones Fleet and observe your configured OKE Autoscaler create more nodes to meet the demand of the Fleet.

## Introduction

In this lab you will leverage the OKE cluster you created in previous labs and deploy an Agones Fleet of dedicated game servers using YAML and kubectl. Once this Fleet is deployed, you will scale it up and observe OKE nodes autoscale to meet the scheduling demand of the large number of pods that make up the Fleet. Once completed, you will scale down the Fleet and observe the autoscaling OKE node pool scale down as well.
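While working through this lab, it helps to keep a second SSH session open on the Operator watching the cluster react; a minimal sketch:

````shell

# Watch game servers move through Scheduled/Ready (and later Shutdown)
kubectl get gameserver -w

# In another session, watch the node count change as the autoscaler reacts
kubectl get nodes -w

````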
Estimated Time: 25 minutes

### Objectives

In this lab, you will:
 - Deploy an Agones Fleet of Agones GameServers
 - Scale up the Fleet
 - Watch and troubleshoot scaling pods and nodes

### Prerequisites

 - Completed the previous lab, which deployed the Agones system pods

## Task 1: Create an Agones Fleet

You will deploy an Agones Fleet of dedicated game servers.

1. SSH to your Operator using the output from `terraform output`; example below.

    ```bash

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ```

2. Create a file called `fleet.yaml` with the contents from [fleet.yaml](./files/fleet.yaml). This was sourced from [Agones v1.45.0](https://raw.githubusercontent.com/googleforgames/agones/release-1.45.0/install/yaml/install.yaml). The changes made in this file ensure that game servers get deployed to the nodes with the matching label, as defined in your `module.tf` file.

    ```bash

    # using vim or nano
    vim fleet.yaml

    # paste from the downloaded fleet.yaml into this new file and save

    ```

3. Apply the fleet

    ````shell

    kubectl apply -f fleet.yaml

    ````

4. Verify the game servers are running; you should see game servers with public IP addresses, each with its own port.

    ````shell

    kubectl get gameserver

    ````

5. To use these game servers in production, the typical approach is to have your matchmaking server return the IP and port of a game server to the game clients for a connection, and to leverage the Agones Allocator to create new game servers on demand.

## Task 2: Scale the Fleet and Node Pool

You will now scale the Agones Fleet and watch the node pool autoscale.

1. Scale the Fleet to 300 replicas. This will trigger node autoscaling, as the resources required to run 300 game servers can't be met by the current size of the worker node pool.

    ````shell

    kubectl scale fleet simple-game-server --replicas=300

    ````

2. After a few moments, get the `gameserver`s and nodes. You should see many `gameserver`s in Starting or Pending state, and a new node starting up automatically. Initially you will also see the node pool in the console updating, and new compute instances being added to the node pool before they start to show in `kubectl get nodes` results.

    ```bash

    # grep for pods that have 0 containers running
    kubectl get pods | grep 0/2

    # similarly, view the gameserver IPs and status
    kubectl get gameserver

    ```

3. To inspect, get the status of a given pod that is NOT running. Ideally you should see an Event that says "pod triggered scale-up", in which case you can skip the next numbered step. If you don't see that event, look at the next step.

    ````shell

    kubectl describe pod <pod-name>

    ````

4. You may have issues with the pods not triggering autoscaling. If so, make sure your add-on was installed and configured to watch the correct node pool OCID (see the previous lab) and that your `fleet.yaml` has the correct affinity settings (see the steps above).

5. After some time you should see new nodes listed with a much younger age than the original nodes.

    ````shell

    kubectl get nodes

    ````

6. Once the new nodes are fully running, you should see zero pods listed when you run the pod list and grep command again.

    ````shell

    # grep for pods that have 0 containers running
    kubectl get pods | grep 0/2

    ````

7. Now scale down.
    ````shell

    kubectl scale fleet simple-game-server --replicas=3

    ````

    You should now see `gameserver`s automatically start to be removed and put into Shutdown status. Nodes won't scale down unless minimums are met: one minimum is that a node must have been running for 10 minutes. Also, if you have other workloads deployed, they must not block their own eviction (Agones game servers by default will not be evicted unless you scale down the Fleet first).

8. After some time, you should see the nodes start to disappear according to the [scale down rules of the autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work).

    ````shell

    kubectl get nodes

    ````

9. Lastly, keep in mind the custom work that is needed to coordinate game client demand for your game servers, the type of game servers you will run, and the scaling of the Fleet. You scaled the Fleet manually via the CLI, but you should integrate that with on-demand or predictive game server allocation. When you do, the scaling of the nodes themselves will be automatic, just as you saw here.

You may now **proceed to the next lab**

## **Summary**

You deployed an Agones Fleet with affinity for a node pool that is dedicated to running dedicated game servers. You then scaled the Fleet and watched as new pods were scheduled. While waiting on scheduling to complete, you saw new nodes added to the node pool to serve the scheduling demand of the scaled-up Fleet.

## Learn More - *Useful Links*

- [Agones](https://agones.dev/site/docs/)
- [Kubernetes](https://kubernetes.io/)
- [OKE Autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/5-create-scale-agones-fleet/files/fleet.yaml b/oke-with-agones/5-create-scale-agones-fleet/files/fleet.yaml
new file mode 100644
index 000000000..d5fd6e32a
--- /dev/null
+++ b/oke-with-agones/5-create-scale-agones-fleet/files/fleet.yaml
@@ -0,0 +1,35 @@
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: simple-game-server
spec:
  replicas: 3
  template:
    spec:
      ports:
      - name: default
        containerPort: 7654
      template:
        spec:
          containers:
          - name: simple-game-server
            image: us-docker.pkg.dev/agones-images/examples/simple-game-server:0.35
            resources:
              requests:
                memory: 64Mi
                cpu: 20m
              limits:
                memory: 64Mi
                cpu: 20m
          # Affinity so that game servers get scheduled on the correct node pool
          # The key agones.dev/agones-worker is not one created or managed by Agones
          # ...this key simply needs to match what is defined in module.tf for the worker node pool
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: agones.dev/agones-worker
                    operator: In
                    values:
                    - "true"
diff --git a/oke-with-agones/6-teardown/6-teardown.md b/oke-with-agones/6-teardown/6-teardown.md
new file mode 100644
index 000000000..7c337f08b
--- /dev/null
+++ b/oke-with-agones/6-teardown/6-teardown.md
@@ -0,0 +1,76 @@

# Teardown

In this lab you will tear down all the resources that were created during this workshop.
## Introduction

To make sure you don't leave unused infrastructure running, you can use the steps in this guide to tear down some or all of what you deployed in this workshop.

Estimated Time: 10 minutes

### Objectives

In this lab, you will:
 - Delete the Agones Fleet
 - Delete the Agones System
 - Delete the Terraformed infrastructure

### Prerequisites

 - Completed all the previous labs

## Task 1: Delete The Fleet

Here we will delete the Agones Fleet.

1. SSH to your Operator using the output from `terraform output`; example below.

    ```bash

    ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>

    ```

2. Delete the fleet

    ````shell

    kubectl delete fleets --all --all-namespaces
    kubectl delete gameservers --all --all-namespaces

    ````

## Task 2: Delete The Agones System

Within your SSH session from Task 1, delete the Agones chart, using the same release name you used when you created it - `my-release` in this workshop.

````shell

helm uninstall my-release --namespace agones-system

````

## Task 3: Delete the Infrastructure

Now destroy the infrastructure with Terraform, from the directory you created it in. This could take some time to complete.

````shell

exit
terraform destroy

````

## **Summary**

In this lab we tore down everything we created: the Agones Fleet and any remaining `gameserver`s, the Agones system pods, and finally the Terraformed infrastructure.

## Learn More - *Useful Links*

- [Agones](https://agones.dev/site/docs/)
- [Kubernetes](https://kubernetes.io/)

## **Acknowledgements**

 - **Author** - Marcellus Miles, Master Cloud Architect
 - **Last Updated By/Date** - Marcellus Miles, Dec 2024
\ No newline at end of file
diff --git a/oke-with-agones/workshops/tenancy/index.html b/oke-with-agones/workshops/tenancy/index.html
new file mode 100644
index 000000000..6acdb69d1
--- /dev/null
+++ b/oke-with-agones/workshops/tenancy/index.html
@@ -0,0 +1,62 @@
+ + + + + + + + + Oracle LiveLabs + + + + + + + + + + + + +
+
+
+
+
+
+
+
+ + + + +
diff --git a/oke-with-agones/workshops/tenancy/manifest.json b/oke-with-agones/workshops/tenancy/manifest.json
new file mode 100644
index 000000000..f7cef6a75
--- /dev/null
+++ b/oke-with-agones/workshops/tenancy/manifest.json
@@ -0,0 +1,42 @@
{
  "workshoptitle": "Autoscale OKE Node Pools with Agones Game Servers and Fleets",
  "help": "livelabs-help-oci_us@oracle.com",
  "tutorials": [
    {
      "title": "Lab 0: Workshop Introduction",
      "description": "This lab introduces OKE and Agones and lists all the labs in this workshop",
      "filename": "../../0-workshop-introduction/0-workshop-introduction.md"
    },
    {
      "title": "Lab 1: Get Started",
      "description": "The prerequisites for the workshop: install the OCI CLI and Terraform",
      "filename": "../../1-get-started/1-get-started.md"
    },
    {
      "title": "Lab 2: Create Infrastructure with Terraform",
      "description": "Provision the OCI network, Bastion, Operator, and OKE cluster with Terraform",
      "filename": "../../2-create-infrastructure-with-terraform/2-create-infrastructure-with-terraform.md"
    },
    {
      "title": "Lab 3: Install OKE Autoscaler Addon",
      "filename": "../../3-install-oke-autoscaler-addon/3-install-oke-autoscaler-addon.md"
    },
    {
      "title": "Lab 4: Create the Agones System",
      "filename": "../../4-create-agones-system/4-create-agones-system.md"
    },
    {
      "title": "Lab 5: Create and Scale an Agones Fleet",
      "filename": "../../5-create-scale-agones-fleet/5-create-scale-agones-fleet.md"
    },
    {
      "title": "Lab 6: Teardown",
      "filename": "../../6-teardown/6-teardown.md"
    },
    {
      "title": "Need Help?",
      "description": "Solutions to Common Problems and Directions for Receiving Live Help",
      "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
    }
  ]
}