Merge branch 'main' into deadlock-final
aedan authored Jun 11, 2024
2 parents 350eefd + b0c27ba commit 3854961
Showing 30 changed files with 2,625 additions and 1,287 deletions.
2 changes: 1 addition & 1 deletion ansible/inventory/openstack-flex/inventory.yaml.example
@@ -75,7 +75,7 @@ all:
children:
k8s_cluster:
vars:
cluster_name: rackerlabs.dev.local # This clustername should be changed to match your environment domain name.
cluster_name: cluster.local # If cluster_name is modified, cluster_domain_suffix will also need to be modified for all the helm charts and for the infrastructure operator configs.
kube_ovn_iface: vlan206 # see the netplan snippet in etc/netplan/default-DHCP.yaml for more info.
kube_ovn_default_interface_name: vlan206 # see the netplan snippet in etc/netplan/default-DHCP.yaml for more info.
kube_ovn_central_hosts: "{{ groups['ovn_network_nodes'] }}"
8 changes: 5 additions & 3 deletions docs/build-test-envs.md
@@ -1,8 +1,10 @@
# Lab Build Demo

[![asciicast](https://asciinema.org/a/629776.svg)](https://asciinema.org/a/629776)
!!! Example "This section is only for test environments"

The information on this page is only needed when building an environment in Virtual Machines.

The information on this page is only needed when building an environment in Virtual Machines.
[![asciicast](https://asciinema.org/a/629776.svg)](https://asciinema.org/a/629776)

## Prerequisites

@@ -12,7 +14,7 @@ Take a moment to orient yourself, there are a few items to consider before movin

!!! note

Your local genestack repository will be transferred to the eventual launcher instance for convenience, **perfect for development**. See [Getting Started](quickstart.md) for an example of how to recursively clone the repository and its submodules.
Your local genestack repository will be transferred to the eventual launcher instance for convenience, **perfect for development**. See [Getting Started](genestack-getting-started.md) for an example of how to recursively clone the repository and its submodules.

### Create a VirtualEnv

2 changes: 1 addition & 1 deletion docs/quickstart.md → docs/genestack-getting-started.md
@@ -1,4 +1,4 @@
# Quick Start Guide
# Getting the Genestack Repository

Before you can do anything, we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location.
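A minimal sketch of such a recursive clone, assuming the upstream repository URL and `/opt/genestack` as the destination (both are assumptions; substitute your fork or preferred path):

``` shell
# Clone the repository and all of its submodules in one pass.
# The URL and destination here are illustrative assumptions.
git clone --recurse-submodules https://github.com/rackerlabs/genestack /opt/genestack
```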

2 changes: 1 addition & 1 deletion docs/index.md
@@ -40,7 +40,7 @@ hide:

Start building now.

[:octicons-play-24: Deployment Guide](quickstart.md)
[:octicons-play-24: Deployment Guide](genestack-getting-started.md)

</div>

8 changes: 7 additions & 1 deletion docs/infrastructure-gateway-api.md
@@ -39,7 +39,7 @@ kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/downloa

Next, install the NGINX Gateway Fabric controller:
```
cd /opt/genestack/submodules/nginx-gateway-fabric
cd /opt/genestack/submodules/nginx-gateway-fabric/deploy/helm-chart
helm upgrade --install nginx-gateway-fabric . --namespace=nginx-gateway -f /opt/genestack/helm-configs/nginx-gateway-fabric/helm-overrides.yaml
```
@@ -50,6 +50,12 @@ Helm install does not automatically upgrade the crds for this resource. To upgra

In this example, we will look at how the Prometheus UI is exposed through the gateway. For other services, see the gateway kustomization file for that service.

Rackspace-specific gateway kustomization files can be applied like so:
```
cd /opt/genestack/kustomize/gateway
kubectl kustomize | kubectl apply -f -
```

First, create the shared gateway and then the HTTPRoute resource for Prometheus.
```
apiVersion: gateway.networking.k8s.io/v1
15 changes: 4 additions & 11 deletions docs/infrastructure-letsencrypt.md
Expand Up @@ -32,12 +32,9 @@ EOF
## Use the proper TLS issuerRef
!!! danger "Important for later helm installations!"
You must ensure your helm configuration is such that you set the
`endpoints.$service.host_fqdn_override.public.tls.issuerRef.name` for any
given endpoint to use our `letsencrypt-prod` ClusterIssuer. Similarly,
ensure that `endpoints.$service.host_fqdn_override.public.host`
is set to the external DNS hostname you plan to expose for a given
service endpoint.
The `letsencrypt-prod` ClusterIssuer is used to generate the certificate through cert-manager. This ClusterIssuer is applied using a Kustomize patch. However, to ensure that the certificate generation process is initiated, it is essential to include `endpoints.$service.host_fqdn_override.public.tls: {}` in the service helm override file.
Similarly, ensure that `endpoints.$service.host_fqdn_override.public.host` is set to the external DNS hostname you plan to expose for a given service endpoint.
This configuration is necessary for proper certificate generation and to ensure the service is accessible via the specified hostname.

!!! example
You can find several examples of this in the
@@ -48,11 +45,7 @@
image:
host_fqdn_override:
public:
tls:
secretName: glance-tls-api
issuerRef:
name: letsencrpyt-prod
kind: ClusterIssuer
tls: {}
host: glance.api.your.domain.tld
port:
api:
12 changes: 1 addition & 11 deletions docs/infrastructure-libvirt.md
@@ -9,15 +9,5 @@ kubectl kustomize --enable-helm /opt/genestack/kustomize/libvirt | kubectl apply
Once deployed, you can validate functionality on your compute hosts with `virsh`:

``` shell
root@openstack-flex-node-3:~# virsh
Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
'quit' to quit

virsh # list
Id Name State
--------------------

virsh #
kubectl exec -it $(kubectl get pods -l application=libvirt -o=jsonpath='{.items[0].metadata.name}' -n openstack) -n openstack -- virsh list
```
6 changes: 0 additions & 6 deletions docs/infrastructure-ovn.md

This file was deleted.

20 changes: 0 additions & 20 deletions docs/k8s-kubespray.md
@@ -140,23 +140,3 @@ ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.i
Given the use of a venv, when running with `sudo` be sure to use the full path and pass through your environment variables, for example `sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook`.

Once the cluster is online, you can run `kubectl` to interact with the environment.
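For example, a minimal sketch of invoking a playbook through the venv with `sudo -E` and then checking the cluster; the inventory path and playbook name below are placeholders, not literal file names:

``` shell
# Placeholders only: substitute the inventory file and playbook you are actually running
sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook --inventory <your-inventory-file> <playbook>.yml

# Once the cluster is online, confirm kubectl can reach it
kubectl get nodes -o wide
```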

## Installing Kubernetes

Currently only the k8s provider kubespray is supported and included as a submodule in the code base.
A default inventory file for kubespray is provided at `/etc/genestack/inventory` and must be modified.

!!! tip

Existing OpenStack Ansible inventory can be converted using the `/opt/genestack/scripts/convert_osa_inventory.py`
script which provides a `hosts.yml`

Once the inventory is updated and the configuration altered (networking etc.), the Kubernetes cluster can be initialized with
the `setup-kubernetes.yml` playbook, which will also label nodes for OpenStack installation.

``` shell
source /opt/genestack/scripts/genestack.rc
cd /opt/genestack/ansible/playbooks

ansible-playbook setup-kubernetes.yml
```
207 changes: 207 additions & 0 deletions docs/openstack-floating-ips.md
@@ -86,3 +86,210 @@ To remove the floating IP address from a project:
``` shell
openstack floating ip delete FLOATING_IP_ADDRESS
```
#### Floating IP Example
Below is a quick example of how we can assign floating IPs.
You will need to get your cloud name from your clouds.yaml. More information on this can be found [here](build-test-envs.md). Underneath "clouds:" you will find your cloud name.
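A minimal sketch of what that `clouds.yaml` might look like; every value below is an illustrative assumption:

``` yaml
# Illustrative only: replace each value with the details for your environment
clouds:
  my-cloud:                      # this key is the {cloud_name} used in the commands below
    auth:
      auth_url: https://keystone.api.your.domain.tld/v3
      project_name: my-project
      username: my-user
      password: my-password
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    interface: public
    identity_api_version: 3
```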
First, create a floating IP from either PUBLICNET or the public IP pool.
``` shell
openstack --os-cloud={cloud_name} floating ip create PUBLICNET
```
Second, get the cloud server UUID.
``` shell
openstack --os-cloud={cloud_name} server list
```
Third, add the floating IP to the server.
``` shell
openstack --os-cloud={cloud_name} server add floating ip {cloud_server_uuid} {floating_ip}
```
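Optionally, a quick sketch to confirm the association took effect; the `-c addresses` column filter is an assumption about your client version:

``` shell
# Show only the addresses attached to the server, including the new floating IP
openstack --os-cloud={cloud_name} server show {cloud_server_uuid} -c addresses
```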
#### Shared floating IP and virtual IP
You can often use a load balancer instead of a shared floating IP or virtual IP.
For advanced networking needs, such as an instance that does the kind of work you
might do with a network appliance operating system, you may need a real shared
floating IP that two instances can share with something like _keepalived_.
Even then, you should probably use a load balancer unless you actually need the
additional capabilities of a shared floating IP or virtual IP.
In _Genestack_ Flex, with OVN, you can implement a shared floating IP mostly as
standard for OpenStack, but the behavior of Neutron's `allowed-address-pairs`
depends on your Neutron plugin, _ML2/OVN_ in this case. Most OpenStack
documentation shows altering `allowed-address-pairs` with a CIDR, as seen
[here](https://docs.openstack.org/neutron/latest/admin/archives/introduction.html#allowed-address-pairs),
but OVN doesn't support CIDRs on its equivalent of port security on logical switch
ports in its NB database, so you have to use a single IP address instead of
a CIDR.
With that caveat, you can set up a shared floating IP like this:
1. Create a Neutron network
``` shell
openstack network create tester-network
```
2. Create a subnet for the network
``` shell
openstack subnet create --network tester-network --subnet-range 192.168.0.0/24 tester-subnet
```
3. Create servers on the network
``` shell
openstack server create tester1 --flavor m1.tiny --key-name keypair --network tester-network --image $IMAGE_UUID
openstack server create tester2 --flavor m1.tiny --key-name keypair --network tester-network --image $IMAGE_UUID
```
4. Create a port with a fixed IP for the VIP.
``` shell
openstack port create --fixed-ip subnet=tester-subnet \
--network tester-network --no-security-group tester-vip-port
```
You will probably want to note the IP on the port here as your VIP; you can read it back with the `port show` sketch after this list.
5. Create a router
You will typically need a router with an external gateway to use any
public IP, depending on your configuration.
```
openstack router create tester-router
```
6. Add an external Internet gateway to the router
At Rackspace, we usually call the public Internet network for instances
PUBLICNET. You can use the name or ID that provides external networks
for your own installation.
```
openstack router set --external-gateway PUBLICNET tester-router
```
7. Add the subnet to the router
``` shell
openstack router add subnet tester-router tester-subnet
```
8. Create a floating IP for the port
You can't do this step until you've created the router as above, because
Neutron requires reachability between the subnet for the port and the
floating IP for the network. If you followed in order, this should work
here.
``` shell
openstack floating ip create --port tester-vip-port PUBLICNET
```
Note and retain the ID and/or IP returned, since you will need it for the
next step.
9. Put the floating IP in the `allowed-address-pair` list of the ports for your
two instances.
Here, **specify only the VIP IP address** and **omit the netmask**. This deviates
from other examples you may see, which may include a netmask, because that detail
varies with the plugin used with Neutron. For Neutron with ML2/OVN,
you only specify the IP address here, without a netmask.
You use the private VIP because the DNAT occurs before it reaches the
instances.
```
openstack port list --server tester1 # retrieve port UUID
openstack port list --server tester2 # retrieve port UUID
openstack port set --allowed-address ip-address=<VIP> <port1UUID>
openstack port set --allowed-address ip-address=<VIP> <port2UUID>
```
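As referenced in step 4, a quick sketch for reading the fixed IP back off the VIP port; the `-c fixed_ips` column filter is an assumption about your client version:

``` shell
# Show just the fixed IP assigned to the VIP port created in step 4
openstack port show tester-vip-port -c fixed_ips
```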
The steps above complete creating the shared floating IP and VIP. The following
steps allow you to test it.
1. Create a bastion server.
With the two test instances connected to a subnet on a router with an
external gateway, they can reach the Internet, but you will probably need
a server with a floating IP to reach these two servers to install and
configure _keepalived_ and test your shared floating IP / VIP. This example
shows only a test.
``` shell
openstack server create tester-bastion --flavor m1.tiny \
--key-name keypair --network tester-network --image $IMAGE_UUID
```
2. Add floating IP to bastion server.
You can specify the UUID or IP of the floating IP.
```
openstack server add floating ip tester-bastion \
8a991c65-24c6-4125-a9c8-38d15e851c78
```
3. Alter security group rules to allow SSH and ICMP:
You will likely find you can't SSH to the floating IP you added to the
instance unless you've altered your default security group or taken other
steps, because the default security group prevents all ingress traffic.
We also add ICMP here for testing.
```
openstack security group rule create --proto tcp --dst-port 22 \
--remote-ip 0.0.0.0/0 default
openstack security group rule create --proto icmp --dst-port -1 default
```
4. SSH to the first test instance from the bastion (see the SSH sketch after this list).
5. Configure the VIP on the interface as a test on the first test instance:
```
sudo ip address add <VIP>/24 dev enp3s0
```
Note that you add the internal VIP here, not the floating public IP. Use
the appropriate netmask (usually /24 unless you picked something else).
6. Ping the floating IP.
Ping should now work. For a general floating IP on the Internet, you can
usually ping from any location, so you don't necessarily have to use your
bastion.
``` shell
ping <floating IP>
```
Since the ports for the two servers look almost identical, if it works on
one, it should work on the other, so you can delete the IP from the first
instance and try it on the second:
``` shell
sudo ip address del <VIP>/24 dev enp3s0
```
You may need to ping the internal IP address from your bastion server or
take other steps to refresh the ARP caches. You can use `arping` on
the instance holding the VIP for that:
``` shell
sudo arping -i enp3s0 -U -S <VIP> <VIP> # VIP twice
```
and ^C/break out of it once ping starts working with the address.
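As referenced in step 4 of the test procedure above, a minimal sketch of reaching a test instance through the bastion; the `ubuntu` usernames and the use of SSH ProxyJump are assumptions about your image and client:

``` shell
# Hop through the bastion's floating IP to reach tester1 on its private address
ssh -J ubuntu@<bastion floating IP> ubuntu@<tester1 private IP>
```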