Commit 15674a5 (parent 66d92b9): add: new version 0.4.0. 65 files changed, +1865 −0 lines.
```json
{
  "position": 4,
  "label": "6G-Library",
  "collapsible": true,
  "collapsed": true,
  "className": "red",
  "link": {
    "type": "generated-index",
    "title": "6G-Library Overview"
  },
  "customProps": {
    "description": ""
  }
}
```
---
sidebar_position: 1
title: "Home"
sidebar_label: "Home"
draft: false
---

The 6G-Library repository includes all the code and data needed to deploy and configure each component of a Trial Network. Each component's metadata and code are used by both the TNLCM and the Jenkins instance of a Site.

Every directory represents an available component in the 6G Sandbox project, except `.global/`, which holds information common to all components:
- `cac/`: Ansible task files importable from any component.
- `iac/`: Terraform provider and backend files.
- `json_templates/`: JSON templates used to send a callback to the TNLCM after each deployment.
- `pac/`: The Jenkinsfiles defining the pipelines available from Jenkins.

## How a component is deployed
Prior to developing a component, it is crucial to understand the basic workflow of a Trial Network deployment:

<p align="center">
![workflow](./images/workflow.png)
</p>

### TNLCM sends a request to Jenkins

When a TN descriptor is registered in the TNLCM, it starts making a succession of requests to Jenkins to deploy each component one at a time.
Each request contains the parameters required by the [TN_DEPLOY](https://github.com/6G-SANDBOX/6G-Library/blob/main/.global/pac/TN_DEPLOY.groovy) pipeline, including the corresponding input file. This file contains the input variables listed in the TNLCM's TN descriptor. You can find an example of a usable input file for each component in this repo as `{component_type}/sample_input_file.yaml`.

Most inputs simply overwrite a private value, but others (mainly the mandatory ones) define dependencies between components.
The available inputs for each component are described in `{component_type}/.tnlcm/public.yaml`.

### Jenkins starts the corresponding Ansible playbook

The TN_DEPLOY pipeline first writes its parameters and the component's inputs as variable files loadable by Ansible, then clones the [6G-Sandbox-Sites repository](https://github.com/6G-SANDBOX/6G-Sandbox-Sites), and finally launches the playbook `{COMPONENT_TYPE}/code/component_playbook.yaml`.

### Ansible playbook

Ansible executes the playbook `{component_type}/code/component_playbook.yaml`, which loads the necessary inputs, deploys the component (using Terraform or Helm charts), and generates a list of outputs.

A Trial Network uses Terraform to achieve Infrastructure as Code (IaC); as such, the component's manifests, along with the `.tfstate` file, are uploaded to the chosen S3 storage backend (currently only MinIO).

The steps for structuring a `component_playbook.yaml` are described in the [Playbook Development Guide](https://github.com/6G-SANDBOX/6G-Library/wiki/Playbook-Development-Guide).

### TNLCM callback

As the last step of the Ansible playbook, a final callback is sent to the TNLCM using the format of the files in `.global/json_templates/`, carrying the execution result and the generated outputs.

The outputs expected by the TNLCM are described in `{component_type}/.tnlcm/public.yaml`.
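
As a rough illustration, such a callback could be sent with Ansible's `uri` module. This is only a sketch: the URL variable name and the way the template is rendered are assumptions, and the real task lives in the 6G-Library task files.

```yaml
- name: Send the execution result back to the TNLCM (illustrative sketch)
  ansible.builtin.uri:
    url: "{{ tnlcm_callback_url }}"   # assumption: the real variable name may differ
    method: POST
    body_format: json
    # Renders one of the templates in .global/json_templates/ with the generated outputs
    body: "{{ lookup('template', '.global/json_templates/ok_result.json.j2') }}"
```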

## Extra

Expanding on step 3, and before reading the [Playbook Development Guide](https://github.com/6G-SANDBOX/6G-Library/wiki/Playbook-Development-Guide), note that a playbook has multiple sources of variables available.

The first and most important one is the [`.global/cac/load_variables.yaml`](https://github.com/6G-SANDBOX/6G-Library/blob/ansible-vault/.global/cac/load_variables.yaml) task file, which loads the following variable files in order, each overwriting the previous ones:
- **`6G-Sandbox-Sites/{{ deployment_site }}/core.yaml`**: Variables unique to each site, loaded from the [6G-Sandbox-Sites](https://github.com/6G-SANDBOX/6G-Sandbox-Sites) repository.
- **`{{ component_type }}/variables/{{ site_hypervisor }}/private.yaml`**: Default variables unique to each component.
- **`{{ component_type }}/variables/input_file.yaml`**: Component inputs. Created during the Jenkins pipeline; most of them simply overwrite some of the private variables.
- **`{{ component_type }}/variables/pipeline_parameters.yaml`**: Jenkins parameters. Created during the Jenkins pipeline.
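
The load order above can be sketched as a loop of `include_vars` tasks. This is only an illustration of the overwrite semantics, not the actual contents of `load_variables.yaml`:

```yaml
# Illustrative sketch only: later files overwrite variables set by earlier ones,
# because include_vars simply re-sets any variable that appears again.
- name: Load variable files in order
  ansible.builtin.include_vars:
    file: "{{ item }}"
  loop:
    - "6G-Sandbox-Sites/{{ deployment_site }}/core.yaml"
    - "{{ component_type }}/variables/{{ site_hypervisor }}/private.yaml"
    - "{{ component_type }}/variables/input_file.yaml"
    - "{{ component_type }}/variables/pipeline_parameters.yaml"
```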

Environment variables are not a reliable source of information, but some of them can also be used inside pipelines. Useful environment variables include the instanced Jenkins credentials (as seen in the `environment` field of [TN_DEPLOY](https://github.com/6G-SANDBOX/6G-Library/blob/main/.global/pac/TN_DEPLOY.groovy)). The `WORKSPACE` does not need to be referenced as an environment variable, as it is passed as an argument when the playbook is launched.

However, to address the problem of variable dependencies between components, we can use another source of variables: **Terraform outputs**.
In the same way we generate a list of outputs for the TNLCM callback, we can also write them as Terraform outputs and apply them, making them available from any deployment.

E.g. these are the outputs of an end-to-end demo as gathered by the step `Retrieve terraform outputs` (command `terraform output --json | jq 'with_entries(.value |= .value)'`):

```json
{
  "oneKE-k8s-id": "247",
  "oneKE-k8s-node_ids": "{'vnf_0': '1534', 'master_0': '1535', 'worker_0': '1536', 'storage_0': '1537', 'storage_1': '1538', 'storage_2': '1539'}",
  "oneKE-k8s-node_ips": "{'vnf_0': '192.168.199.2', 'master_0': '10.10.10.2', 'worker_0': '10.10.10.3', 'storage_0': '10.10.10.4', 'storage_1': '10.10.10.5', 'storage_2': '10.10.10.6'}",
  "oneKE-k8s-roles": [
    {
      "cardinality": 1,
      "name": "vnf",
      "nodes": [
        1534
      ],
      "state": 2
    },
    {
      "cardinality": 1,
      "name": "master",
      "nodes": [
        1535
      ],
      "state": 2
    },
    {
      "cardinality": 1,
      "name": "worker",
      "nodes": [
        1536
      ],
      "state": 2
    },
    {
      "cardinality": 3,
      "name": "storage",
      "nodes": [
        1537,
        1538,
        1539
      ],
      "state": 2
    }
  ],
  "open5gs-core-metadata": "{'oneKE': 'oneKE-k8s', 'proxy': '192.168.199.2', 'mcc': '001', 'mnc': '01', 'msin': '0000000001', 'key': '465B5CE8B199B49FAA5F0A2EE238A6BC', 'opc': 'E8ED289DEBA952E4283B54E88E6183CA', 'apn': 'internet', 'tac': '200', 's_nssai_sst': '1', 's_nssai_sd': '1', 'amf_ip': '10.10.10.200', 'upf_ip': '10.10.10.200'}",
  "tn_bastion-id": "1533",
  "tn_bastion-ips": {
    "0": "10.11.28.148",
    "489": "192.168.199.1"
  },
  "tn_ssh_public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII8QSQdOy3LAS7EG1F19eiOtjGVO6+I7NY+94JrMfIaw tnuser@e2enueva",
  "tn_vxlan-id": "489",
  "ueransim-gnb-gnb_metadata": "{'proxy': '192.168.199.2', 'amf_address': '10.10.10.200', 'mcc': '001', 'mnc': '01', 'msin': '0000000001', 'key': '465B5CE8B199B49FAA5F0A2EE238A6BC', 'opc': 'E8ED289DEBA952E4283B54E88E6183CA', 'apn': 'internet', 'tac': '200', 'sst': '1', 'sd': '1', 'gnb_address': '192.168.199.3'}",
  "ueransim-gnb-id": "1540",
  "ueransim-gnb-ips": {
    "489": "192.168.199.3"
  },
  "ueransim-gnb-run_gnb": "YES",
  "ueransim-ue-id": "1541",
  "ueransim-ue-ips": {
    "489": "192.168.199.4"
  },
  "ueransim-ue-run_ue": "YES",
  "ueransim-ue-ue_metadata": "{'supi': 'imsi-001010000000001', 'mcc': '001', 'mnc': '01', 'key': '465B5CE8B199B49FAA5F0A2EE238A6BC', 'opc': 'E8ED289DEBA952E4283B54E88E6183CA', 'gnbSearchList': '192.168.199.3', 'apn': 'internet', 'sst': '1', 'sd': '1'}",
  "vnet-private_oneKE-id": "490"
}
```

:::note
The output naming syntax is `<component_name>[-custom_name]-<output_name>`. Some outputs are generated by the `terraform apply` steps, while others are defined explicitly in the file `tf-custom_outputs.tf`. For example, `oneKE-k8s-roles` is generated by `terraform apply`, and its useful information is compiled into the outputs `oneKE-k8s-node_ids` and `oneKE-k8s-node_ips`.
:::
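
Given this naming convention, one way to collect the outputs of a single component inside a playbook is to filter the parsed JSON by key prefix. This is a sketch; `terraform_outputs` is assumed to be registered by the `Retrieve terraform outputs` step shown above.

```yaml
- name: Keep only the outputs of component 'oneKE-k8s' (illustrative)
  ansible.builtin.set_fact:
    oneke_outputs: >-
      {{ (terraform_outputs.stdout | from_json) | dict2items
         | selectattr('key', 'match', '^oneKE-k8s-')
         | items2dict }}
```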
---
sidebar_position: 2
title: "Playbook Deployment Guide"
sidebar_label: "Playbook Deployment Guide"
draft: false
---

We strongly advise starting development from a copy of `.dummy_component/`, reading its insightful comments and modifying the playbook as you desire. We also suggest leaving the `.tnlcm/` files for the end, as you will discover during development whether you need more or fewer variables from the experimenter.

This guide covers two families of deployments:
- Based on a VM Deployment
- Based on a Helm Chart (depends on a previously deployed OneKE)

Any playbook can be structured as you wish, but we believe most components can start from one of these approaches.

## Based on a VM Deployment

Examples of components based on VM deployments are ueransim, vm_kvm, the aforementioned .dummy_component, and even tn_bastion/tn_init.

These components can be structured in 4 stages. The stages are separated with [Ansible plays](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html) to improve logical segmentation and readability, but if two consecutive stages share the same hosts (localhost), there is no need to add new plays.
Remember these are just suggested stages; your component deployment might not fit into them, and you can add or remove whichever you want.

You can complement any STAGE with your own task files. Just drop them at `{{ component_type }}/code/{{ site_hypervisor }}/cac/` and import them from the main playbook. In components with many custom files, we suggest using subdirectories such as `01_pre` (for preparation and deployment stages), `02_install` (for configuration stages) and `03_post` (for result-publishing stages). These subdirectory names are "legacy" from earlier stages of the 6G-Library, so you can freely use other names.

### STAGE 1: Apply IaC to deploy the component

This stage includes all pre-deployment tasks as well as the actual deployment. Usual steps include:
- Importing task file `.global/cac/load_variables.yaml`: Imports variables as hostvars into the playbook. Previously mentioned in the landing page's [EXTRA section](https://github.com/6G-SANDBOX/6G-Library/wiki#extra).
- Importing task file `.global/cac/terraform_workdir.yaml`: Prepares the `.terraform` directory in the Jenkins workspace, which serves as the Terraform workspace. Sets the [.tfstate backend to S3](https://developer.hashicorp.com/terraform/language/settings/backends/s3) (MinIO) and downloads the manifests of previous components.
- Importing task file `.global/cac/terraform_apply.yaml`: Prepares the corresponding [Terraform manifests](https://registry.terraform.io/providers/OpenNebula/opennebula/latest/docs/resources/virtual_machine), runs the `terraform apply` command, and stops execution in case something went wrong.
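
Conceptually, `terraform_apply.yaml` boils down to rendering the component's `*.tf.j2` templates and running Terraform. The sketch below is an assumption about its shape, not its actual contents; the template and destination file names are hypothetical.

```yaml
- name: Render the component's Terraform manifest from its Jinja2 template
  ansible.builtin.template:
    src: "{{ component_type }}/code/{{ site_hypervisor }}/iac/vm.tf.j2"  # hypothetical name
    dest: "{{ workspace }}/.terraform/{{ component_name }}.tf"
- name: Apply the rendered manifests
  ansible.builtin.shell:
    cmd: "terraform init -input=false && terraform apply -auto-approve -input=false"
    chdir: "{{ workspace }}/.terraform/"
```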

:::note
The Terraform manifest template(s) are taken from `{{ component_type }}/code/{{ site_hypervisor }}/iac/*.tf.j2`.

You will mostly only need to modify the `template_id` parameter of the dummy_component sample Terraform template (as well as some variable names). Note that the default privileged starting user for all components is 'jenkins'.

The template has to be specified in the sites repository and point to an existing template in your OpenNebula. The template can use any image you want, ranging from a base Ubuntu image (where all configuration is still to be done) to a preconfigured custom image. These custom images are called appliances, and you can design your own by following the documentation in [this repository](https://github.com/6G-SANDBOX/marketplace-community). All custom appliances created by project collaborators will be published in the official [marketplace](https://marketplace.mobilesandbox.cloud:9443/appliance).
:::

Components that use variables inherited from other components' outputs require extra complexity. The intended way to import outputs (e.g. the IP in the default network of component `vm_kvm-ubuntu`) is as follows:
```yaml
- name: Retrieve terraform outputs
  ansible.builtin.shell:
    cmd: "set -o pipefail && terraform output --json | jq 'with_entries(.value |= .value)'"
    chdir: "{{ workspace }}/.terraform/"
    executable: /bin/bash
  register: terraform_outputs
  changed_when: false
- name: Set Terraform outputs as playbook facts
  ansible.builtin.set_fact:
    bastion_ip: "{{ (terraform_outputs.stdout | from_json)['vm_kvm-ubuntu-ip'][site_networks_id.default | string] }}"
```
To keep the playbook clean, try to move these segments into custom task files.

### STAGE 2: Prepare to access the component

This stage covers the steps between the component deployment and its configuration. Usual steps include:
- Retrieving the Terraform outputs (old and new ones, all at once) to use the generated values in the configuration (e.g. the IP assigned to the deployed VM).
- Setting the desired outputs as Ansible facts (variables usable during the playbook).
- Processing facts to generate others. Sometimes a Terraform output is in JSON or another nested format, and the usable information needs to be parsed out.
- Adding the new VM to the Ansible Inventory: the Inventory is how Ansible refers to the available targets for configuration. To access the newly created VM for configuration, it has to first be included in the Inventory. The default user for accessing Inventory hosts is 'jenkins'.
- Registering the component into the Trial Network's sshconfig file. This step is optional, as the sshconfig is not intended for use during the playbook, but it makes debugging easier from the Jenkins server. In future releases of the 6G-Library, an equivalent SSH config file will be passed to the bastion or to the user to facilitate SSH access.
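
Adding the VM to the Inventory can be done in-memory with the `add_host` module. The alias, group, and fact names below are hypothetical:

```yaml
- name: Add the new VM to the in-memory Ansible Inventory (sketch)
  ansible.builtin.add_host:
    name: deployed_vm              # hypothetical inventory alias
    ansible_host: "{{ vm_ip }}"    # assumption: fact set from the Terraform outputs
    ansible_user: jenkins          # default access user mentioned above
    groups: component_hosts        # hypothetical group name
```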

### STAGE 3: Apply CaC to prepare the component

This stage can include all the configuration to be done on the deployed VM. However, for uniformity, we ask you to include the following steps:
- (OPTIONAL) Importing task file `.global/cac/load_variables.yaml`: In Ansible, facts are tied to a specific host/target. Run the load_variables tasks again to have the same variables on the new host.
- Adding the (optional) `site_admin_ssh_public_key` public key to the jenkins user in the VMs.
- Creating a new user for the experimenter to access the VM. The default name is 'tnuser'.
- Adding the TN public SSH key to user 'tnuser'. This public SSH key is created during tn_bastion/tn_init, and is accessible as a Terraform output in all deployments.
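
The last two steps can be sketched with the `user` and `authorized_key` modules (the latter comes from the `ansible.posix` collection). The way the key is retrieved here is an assumption:

```yaml
- name: Create the experimenter user
  ansible.builtin.user:
    name: tnuser
    shell: /bin/bash
- name: Authorize the TN public key for tnuser
  ansible.posix.authorized_key:
    user: tnuser
    key: "{{ (terraform_outputs.stdout | from_json)['tn_ssh_public_key'] }}"
```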

### STAGE 4: Publish execution results

After successfully configuring the component, perform tasks to publish what has been done. Usual tasks are:
- Importing task file `.global/cac/custom_tf_outputs.yaml`: Reads the custom outputs from variable `custom_outputs` and incorporates them into the file `.terraform/tf-custom_outputs.tf` in the Terraform workspace.
- Importing task file `.global/cac/publish_ok_results.yaml`: Publishes the Terraform manifest(s) to the S3 storage. It also reads the custom outputs from variable `output` and uses them to:
  - Complete the markdown template at `{{ component_type }}/result_templates/ok_result.md.j2` with component information, and publish it to the S3 storage.
  - Complete the JSON template at `.global/json_templates/ok_result.json.j2` with component information and the previous markdown content, and send it in a POST request back to the TNLCM.

With a few exceptions, facts `custom_outputs` and `output` include the same variables. Just remember that `custom_outputs` is the information available to other deployments, and `output` is the information available to the experimenter in the TNLCM. Variables in `output` need to be the ones described in `{{ component_type }}/.tnlcm/public.yaml`.
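
For example, a VM component might set both facts like this before importing the publishing task files; the variable and key names are purely illustrative:

```yaml
- name: Define this component's outputs (illustrative names)
  ansible.builtin.set_fact:
    # Available to later deployments as Terraform outputs
    custom_outputs:
      "{{ component_name }}-ip": "{{ vm_ip }}"
    # Returned to the experimenter through the TNLCM callback;
    # keys must match the ones in {{ component_type }}/.tnlcm/public.yaml
    output:
      ip: "{{ vm_ip }}"
```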

## Based on a Helm Chart

An example of a component based on a Helm chart is open5gs. Any component based on a Helm chart needs to specify a OneKE component where it will be deployed.

These components can be structured in 3 stages. The stages are separated with [Ansible plays](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html) to improve logical segmentation and readability, but if two consecutive stages share the same hosts (localhost), there is no need to add new plays. Remember these are just suggested stages; your component deployment might not fit into them, and you can add or remove whichever you want.

You can complement any STAGE with your own task files. Just drop them at `{{ component_type }}/code/all/cac/` and import them from the main playbook.

### STAGE 1: Apply IaC to deploy the component

This stage includes all pre-deployment tasks as well as the actual deployment. Usual steps include:
- Importing task file `.global/cac/load_variables.yaml`: Imports variables as hostvars into the playbook. Previously mentioned in the landing page's [EXTRA section](https://github.com/6G-SANDBOX/6G-Library/wiki#extra).
- Importing task file `.global/cac/terraform_workdir.yaml`: Prepares the `.terraform` directory in the Jenkins workspace, which serves as the Terraform workspace. Sets the [.tfstate backend to S3](https://developer.hashicorp.com/terraform/language/settings/backends/s3) (MinIO) and downloads the manifests of previous components.
- Retrieving the Terraform outputs with:
```yaml
- name: Retrieve terraform outputs
  ansible.builtin.shell:
    cmd: "set -o pipefail && terraform output --json | jq 'with_entries(.value |= .value)'"
    chdir: "{{ workspace }}/.terraform/"
    executable: /bin/bash
  register: terraform_outputs
  changed_when: false
```
- Setting the bastion IP and the chosen OneKE's node IPs (including the VNF's) as playbook facts:
```yaml
- name: Set Terraform outputs as playbook facts
  ansible.builtin.set_fact:
    bastion_ip: "{{ (terraform_outputs.stdout | from_json)['tn_bastion-ips'][site_networks_id.default | string] }}"
    node_ips: "{{ (terraform_outputs.stdout | from_json)[one_open5gs_oneKE + '-node_ips'] }}"
```
- Adding the chosen OneKE's master to the Ansible Inventory: the Inventory is how Ansible refers to the available targets for configuration. To access the K8s master for configuration, it has to first be included in the Inventory. The OneKE master needs to be accessed with the 'root' user.

### STAGE 2: Apply CaC to prepare the component

Deploy the Helm chart into the K8s cluster. Usual steps include:
- (OPTIONAL) Importing task file `.global/cac/load_variables.yaml`: In Ansible, facts are tied to a specific host/target. Run the load_variables tasks again to have the same variables on the new host.
- Importing a custom file from `{{ component_type }}/code/all/cac/`: At least one custom task applying the chart using Ansible's helm module.
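
Such a custom task could use the `kubernetes.core.helm` module. The release name, chart reference, namespace, and kubeconfig path below are purely illustrative assumptions:

```yaml
- name: Deploy the component's Helm chart on the OneKE cluster (sketch)
  kubernetes.core.helm:
    name: my-component                       # hypothetical release name
    chart_ref: example-repo/my-chart         # hypothetical chart reference
    release_namespace: my-component
    create_namespace: true
    kubeconfig: /etc/rancher/rke2/rke2.yaml  # assumption: kubeconfig path on the master
```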

### STAGE 3: Publish execution results

After successfully applying the chart, perform tasks to publish what has been done. Usual tasks are:
- Importing task file `.global/cac/custom_tf_outputs.yaml`: Reads the custom outputs from variable `custom_outputs` and incorporates them into the file `.terraform/tf-custom_outputs.tf` in the Terraform workspace.
- Importing task file `.global/cac/publish_ok_results.yaml`: Publishes the Terraform manifest(s) to the S3 storage. It also reads the custom outputs from variable `output` and uses them to:
  - Complete the markdown template at `{{ component_type }}/result_templates/ok_result.md.j2` with component information, and publish it to the S3 storage.
  - Complete the JSON template at `.global/json_templates/ok_result.json.j2` with component information and the previous markdown content, and send it in a POST request back to the TNLCM.

With a few exceptions, facts `custom_outputs` and `output` include the same variables. Just remember that `custom_outputs` is the information available to other deployments, and `output` is the information available to the experimenter in the TNLCM. Variables in `output` need to be the ones described in `{{ component_type }}/.tnlcm/public.yaml`.
