---
sidebar_position: 2
title: "Playbook Deployment Guide"
sidebar_label: "Playbook Deployment Guide"
draft: false
---

We strongly advise starting development from a copy of `.dummy_component/`, reading its inline comments and modifying the playbook as you see fit. We also suggest leaving the `.tnlcm/` files for the end, as during development you will discover whether you need more or fewer variables from the experimenter.

This guide covers two families of deployments:
- Based on a VM Deployment
- Based on a Helm Chart (depends on a previously deployed OneKE)

A playbook can be structured however you wish, but we believe most components can start from one of these approaches.

## Based on a VM Deployment

Examples of components based on VM deployments are `ueransim`, `vm_kvm`, the aforementioned `.dummy_component`, and even `tn_bastion`/`tn_init`.

These components can be structured in 4 different stages. The stages are separated with [Ansible plays](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html) to improve logical segmentation and readability, but, as you may realize, if two consecutive stages target the same hosts (localhost) there is no need to start a new play.
Remember these are just suggested stages: your component deployment might not fit into them, and you can add or remove whichever you want.

You can complete any STAGE with your own task files. Just drop them at `{{ component_type }}/code/{{ site_hypervisor }}/cac/` and import them from the main playbook. In components with many custom files, we suggest subdirectories such as `01_pre` (for preparation and deployment stages), `02_install` (for configuration stages) and `03_post` (for result publishing stages). These subdirectory names are "legacy" from previous stages of the 6G-Library, so you can freely use other names.

### STAGE 1: Apply IaC to deploy the component

This stage includes all pre-deployment tasks as well as the actual deployment. Usual steps include the following (a skeleton play combining them is sketched after the note below):
- Importing task file `.global/cac/load_variables.yaml`: Imports variables as hostvars into the playbook. Previously mentioned in the landing page [EXTRA section](https://github.com/6G-SANDBOX/6G-Library/wiki#extra).
- Importing task file `.global/cac/terraform_workdir.yaml`: Prepares directory `.terraform` in the Jenkins workspace, which will serve as the Terraform workspace. Sets the [.tfstate backend as S3](https://developer.hashicorp.com/terraform/language/settings/backends/s3) (MinIO) and downloads the manifests of previous components.
- Importing task file `.global/cac/terraform_apply.yaml`: Prepares the corresponding [Terraform manifests](https://registry.terraform.io/providers/OpenNebula/opennebula/latest/docs/resources/virtual_machine), runs `terraform apply` and stops execution in case something goes wrong.

:::note
The Terraform manifest template(s) are taken from `{{ component_type }}/code/{{ site_hypervisor }}/iac/*.tf.j2`.

In most cases you will only need to modify the `template_id` parameter of the `dummy_component` sample Terraform template (as well as other variable names). Note that the default privileged starting user for all components is 'jenkins'.

The template has to be specified in the sites repository and point to an existing template in your OpenNebula. The template can use any image you want, ranging from a base Ubuntu image (where all configuration is still to be done) to a preconfigured custom image. These custom images are called appliances, and you can design your own following the documentation found in [this repository](https://github.com/6G-SANDBOX/marketplace-community). All custom appliances created by the project collaborators will be published in the official [marketplace](https://marketplace.mobilesandbox.cloud:9443/appliance).
:::

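Putting those steps together, a STAGE 1 play could look like the following minimal sketch. The task file paths are the ones listed above; the play name and the assumption that the repository is checked out at `{{ workspace }}` are illustrative:

```yaml
- name: "STAGE 1: Apply IaC to deploy the component"
  hosts: localhost
  gather_facts: false
  tasks:
    # Import input variables as hostvars
    - ansible.builtin.include_tasks: "{{ workspace }}/.global/cac/load_variables.yaml"
    # Prepare .terraform/ with the S3 backend and previous manifests
    - ansible.builtin.include_tasks: "{{ workspace }}/.global/cac/terraform_workdir.yaml"
    # Render the manifests and run `terraform apply`
    - ansible.builtin.include_tasks: "{{ workspace }}/.global/cac/terraform_apply.yaml"
```
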
Components that use variables inherited from other components' outputs require a bit of extra work. The intended way to import outputs (e.g. the IP of component `vm_kvm-ubuntu` in the default network) is as follows:
```yaml
- name: Retrieve terraform outputs
  ansible.builtin.shell:
    cmd: "set -o pipefail && terraform output --json | jq 'with_entries(.value |= .value)'"
    chdir: "{{ workspace }}/.terraform/"
    executable: /bin/bash
  register: terraform_outputs
  changed_when: false

- name: Set Terraform outputs as playbook facts
  ansible.builtin.set_fact:
    bastion_ip: "{{ (terraform_outputs.stdout | from_json)['vm_kvm-ubuntu-ip'][site_networks_id.default | string] }}"
```
To keep the playbook clean, try to move these segments into custom task files.

### STAGE 2: Prepare to access the component

This stage covers the steps between the component deployment and its configuration. Usual steps include:
- Retrieving the Terraform outputs (old and new ones, all at once) to use the generated values in the configuration (e.g. the IP assigned to the deployed VM).
- Setting the desired outputs as Ansible facts (variables usable during the playbook).
- Processing facts to generate others. Sometimes the Terraform output is in JSON or another nested format and the usable information needs to be parsed out.
- Adding the new VM to the Ansible inventory (see the sketch after this list). The inventory is how Ansible refers to the available configuration targets: to access the newly created VM for configuration, it first has to be included in the inventory. The default user for accessing inventory hosts is 'jenkins'.
- Registering the component in the sshconfig file of the Trial Network. This step is optional, as sshconfig is not intended to be used during the playbook, but it makes debugging from the Jenkins server easier. In future releases of the 6G-Library, an equivalent SSH config file will be passed to the bastion or to the user to facilitate SSH access.
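
For the inventory step, a minimal sketch using `ansible.builtin.add_host` could look as follows. The fact name `vm_ip`, the inventory alias and the group name are hypothetical:

```yaml
- name: Add the new VM to the Ansible inventory
  ansible.builtin.add_host:
    name: "{{ component_name }}"     # hypothetical inventory alias
    groups: component                # hypothetical group name
    ansible_host: "{{ vm_ip }}"      # IP parsed from the Terraform outputs
    ansible_user: jenkins            # default access user
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
```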

### STAGE 3: Apply CaC to prepare the component

This stage can include all the configuration that needs to be applied to the deployed VM. However, for uniformity, we ask you to include the following steps:
- (OPTIONAL) Importing task file `.global/cac/load_variables.yaml`: In Ansible, facts are tied to a specific host/target. Run the load_variables tasks again to have the same variables available on the new host.
- Adding the (optional) `site_admin_ssh_public_key` public key to the jenkins user in the VMs.
- Creating a new user for the experimenter to access the VM (see the sketch after this list). The default name is 'tnuser'.
- Adding the TN public SSH key to user 'tnuser'. This public SSH key is created during tn_bastion/tn_init and is accessible as a Terraform output in all deployments.
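
A minimal sketch of the last two steps, assuming the TN public key was previously stored in a hypothetical fact named `tn_public_ssh_key`:

```yaml
- name: Create the experimenter user
  ansible.builtin.user:
    name: tnuser
    shell: /bin/bash
    create_home: true
  become: true

- name: Authorize the TN public SSH key for tnuser
  ansible.posix.authorized_key:
    user: tnuser
    key: "{{ tn_public_ssh_key }}"   # created during tn_bastion/tn_init, exposed as a Terraform output
  become: true
```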

### STAGE 4: Publish execution results

After successfully configuring the component, perform tasks to publish what has been done. Usual tasks are:
- Importing task file `.global/cac/custom_tf_outputs.yaml`: Reads the custom outputs from variable `custom_outputs` and incorporates them into the file `.terraform/tf-custom_outputs.tf` in the Terraform workspace.
- Importing task file `.global/cac/publish_ok_results.yaml`: Publishes the Terraform manifest(s) to the S3 storage. It also reads the custom outputs from variable `output` and uses them to:
  - Complete the markdown template at `{{ component_type }}/result_templates/ok_result.md.j2` with component information, and publish it to the S3 storage.
  - Complete the JSON template at `.global/json_templates/ok_result.json.j2` with component information and the previous markdown content, and send it in a POST request back to the TNLCM.

With a few exceptions, facts `custom_outputs` and `output` include the same variables. Just remember that `custom_outputs` is the information made available to other deployments, while `output` is the information made available to the experimenter through the TNLCM. Variables in `output` need to be the ones described in `{{ component_type }}/.tnlcm/public.yaml`.
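
As an illustration, a STAGE 4 play usually sets both facts and then imports the global task files. The variable names inside the dictionaries are hypothetical:

```yaml
- name: Set the outputs to publish
  ansible.builtin.set_fact:
    custom_outputs:          # available to later component deployments
      vm_ip: "{{ vm_ip }}"
    output:                  # shown to the experimenter through the TNLCM; keys must match .tnlcm/public.yaml
      vm_ip: "{{ vm_ip }}"

- name: Write the custom outputs into the Terraform workspace
  ansible.builtin.include_tasks: "{{ workspace }}/.global/cac/custom_tf_outputs.yaml"

- name: Publish the manifests and results
  ansible.builtin.include_tasks: "{{ workspace }}/.global/cac/publish_ok_results.yaml"
```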

## Based on a Helm Chart

An example of a component based on a Helm Chart is `open5gs`. Any component based on a Helm Chart needs to specify a OneKE component where it will be deployed.

These components can be structured in 3 different stages. The stages are separated with [Ansible plays](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html) to improve logical segmentation and readability, but, as you may realize, if two consecutive stages target the same hosts (localhost) there is no need to start a new play. Remember these are just suggested stages: your component deployment might not fit into them, and you can add or remove whichever you want.

You can complete any STAGE with your own task files. Just drop them at `{{ component_type }}/code/all/cac/` and import them from the main playbook.

### STAGE 1: Apply IaC to deploy the component

This stage includes all pre-deployment tasks as well as the actual deployment. Usual steps include:
- Importing task file `.global/cac/load_variables.yaml`: Imports variables as hostvars into the playbook. Previously mentioned in the landing page [EXTRA section](https://github.com/6G-SANDBOX/6G-Library/wiki#extra).
- Importing task file `.global/cac/terraform_workdir.yaml`: Prepares directory `.terraform` in the Jenkins workspace, which will serve as the Terraform workspace. Sets the [.tfstate backend as S3](https://developer.hashicorp.com/terraform/language/settings/backends/s3) (MinIO) and downloads the manifests of previous components.
- Retrieving the Terraform outputs with:
```yaml
- name: Retrieve terraform outputs
  ansible.builtin.shell:
    cmd: "set -o pipefail && terraform output --json | jq 'with_entries(.value |= .value)'"
    chdir: "{{ workspace }}/.terraform/"
    executable: /bin/bash
  register: terraform_outputs
  changed_when: false
```

- Setting the bastion IP and the chosen OneKE's node IPs as playbook facts:
```yaml
- name: Set Terraform outputs as playbook facts
  ansible.builtin.set_fact:
    bastion_ip: "{{ (terraform_outputs.stdout | from_json)['tn_bastion-ips'][site_networks_id.default | string] }}"
    node_ips: "{{ (terraform_outputs.stdout | from_json)[one_open5gs_oneKE + '-node_ips'] }}"
```
- Adding the chosen OneKE's master to the Ansible inventory (see the sketch below). The inventory is how Ansible refers to the available configuration targets: to access the K8s master for configuration, it first has to be included in the inventory. OneKE's master needs to be accessed with the 'root' user.
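
A minimal sketch of that step. The inventory alias is hypothetical, the first node IP is assumed to be the master, and the ProxyJump through the bastion is an assumption based on the `bastion_ip` fact set above:

```yaml
- name: Add the OneKE master to the Ansible inventory
  ansible.builtin.add_host:
    name: oneke_master                   # hypothetical inventory alias
    ansible_host: "{{ node_ips[0] }}"    # assumes the first node IP is the master
    ansible_user: root                   # OneKE's master is accessed as root
    ansible_ssh_common_args: "-o StrictHostKeyChecking=no -o ProxyJump=jenkins@{{ bastion_ip }}"
```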

### STAGE 2: Apply CaC to prepare the component

Deploy the Helm chart into the K8s cluster. Usual steps include:
- (OPTIONAL) Importing task file `.global/cac/load_variables.yaml`: In Ansible, facts are tied to a specific host/target. Run the load_variables tasks again to have the same variables available on the new host.
- Importing a custom file from `{{ component_type }}/code/all/cac/`: At least one custom task applying the chart using Ansible's `kubernetes.core.helm` module (see the sketch after this list).
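
A minimal sketch of such a custom task. The release name, chart reference and values are placeholders, and the kubeconfig path assumes OneKE's default RKE2 location:

```yaml
- name: Deploy the component Helm chart
  kubernetes.core.helm:
    name: open5gs                            # placeholder release name
    chart_ref: openverso/open5gs             # placeholder chart reference
    release_namespace: open5gs
    create_namespace: true
    kubeconfig: /etc/rancher/rke2/rke2.yaml  # assumption: default RKE2 kubeconfig on OneKE
    values:
      hss:
        enabled: true                        # placeholder values
```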

### STAGE 3: Publish execution results

After successfully applying the chart, perform tasks to publish what has been done. Usual tasks are:
- Importing task file `.global/cac/custom_tf_outputs.yaml`: Reads the custom outputs from variable `custom_outputs` and incorporates them into the file `.terraform/tf-custom_outputs.tf` in the Terraform workspace.
- Importing task file `.global/cac/publish_ok_results.yaml`: Publishes the Terraform manifest(s) to the S3 storage. It also reads the custom outputs from variable `output` and uses them to:
  - Complete the markdown template at `{{ component_type }}/result_templates/ok_result.md.j2` with component information, and publish it to the S3 storage.
  - Complete the JSON template at `.global/json_templates/ok_result.json.j2` with component information and the previous markdown content, and send it in a POST request back to the TNLCM.

With a few exceptions, facts `custom_outputs` and `output` include the same variables. Just remember that `custom_outputs` is the information made available to other deployments, while `output` is the information made available to the experimenter through the TNLCM. Variables in `output` need to be the ones described in `{{ component_type }}/.tnlcm/public.yaml`.