Add more description about IP pool
yaocw2020 committed Jul 10, 2023
1 parent 9b089ab commit ad07024
Showing 3 changed files with 100 additions and 33 deletions.
69 changes: 69 additions & 0 deletions docs/networking/ippool.md
@@ -27,6 +27,75 @@ An IP pool can have specific scopes, and you can specify the corresponding requi
- `spec.selector.priority` specifies the priority of the IP pool. The larger the number, the higher the priority. Every non-zero priority value must be unique. The priority helps you migrate an old IP pool to a new one.
- If an IP pool has a scope that matches all projects, namespaces, and guest clusters, it is called a global IP pool. Only one global IP pool is allowed. If no IP pool matches the requirements of the LB, the IPAM module allocates an IP from the global IP pool, if one exists.

### Examples
- Example 1: We want to configure an IP pool with the range `192.168.100.0/24` for the namespace `default`. All the load balancers in the namespace `default` can get an IP from this pool. The IP pool looks like this:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
name: default-ip-pool
spec:
ranges:
- subnet: 192.168.100.0/24
selector:
scope:
namespace: default
```
- Example 2: We have a guest cluster `rke2` deployed with the network `default/vlan1` in the project/namespace `product/default` and want to configure an exclusive IP pool with the range `192.168.10.10-192.168.10.20` for it. The IP pool looks like this:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
name: rke2-ip-pool
spec:
ranges:
- subnet: 192.168.10.0/24
rangeStart: 192.168.10.10
rangeEnd: 192.168.10.20
selector:
network: default/vlan1
scope:
project: product
namespace: default
cluster: rke2
```

- Example 3: We want to migrate the IP pool `default-ip-pool` to `default-ip-pool-2` with the range `192.168.200.0/24`. The IP pool `default-ip-pool-2` has a higher priority than `default-ip-pool`. The IP pool looks like this:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
name: default-ip-pool-2
spec:
ranges:
- subnet: 192.168.200.0/24
selector:
priority: 1
scope:
namespace: default
```

- Example 4: We want to configure a global IP pool with the range `192.168.20.0/24`. The IP pool looks like this:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
name: global-ip-pool
spec:
ranges:
- subnet: 192.168.20.0/24
selector:
scope:
project: "*"
namespace: "*"
cluster: "*"
```

## Allocation policy
- The IP pool prefers to re-allocate the IP that was previously assigned to a load balancer, based on its allocation history (see the illustrative fragment below).
- Otherwise, IP allocation follows a round-robin policy.
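
To make the history mechanism concrete, the following is a hypothetical `IPPool` status fragment; the field names (`lastAllocated`, `allocatedHistory`) are illustrative assumptions and may not match the actual CRD schema:

```yaml
# Hypothetical IPPool status fragment (field names are assumptions, not the verified CRD schema).
status:
  available: 252
  # Tracking the most recently allocated IP is what makes round-robin allocation possible.
  lastAllocated: 192.168.100.12
  # Remembering which load balancer held a released IP lets the pool hand the
  # same IP back if that load balancer is re-created later.
  allocatedHistory:
    192.168.100.11: default/my-lb
```
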
2 changes: 1 addition & 1 deletion docs/networking/loadbalancer.md
@@ -33,7 +33,7 @@ Harvester VM load balancer has the following limitations:
To create a new VM load balancer (a YAML sketch of the resulting resource follows these steps):
1. Go to the **Networks > Load Balancer** page and select **Create**.
1. Select the **Namespace** and specify the **Name**.
-1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, you must prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB will automatically select an IP pool according to the matching rules.
+1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, you must prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB will automatically select an IP pool according to [the IP pool selection policy](/networking/ippool.md/#selection-policy).
1. Go to the **Listeners** tab to add listeners. You must specify the **Port**, **Protocol**, and **Backend Port** for each listener.
1. Go to the **Backend Server Selector** tab to add label selectors. If you want to add the VM to the LB, go to the **Virtual Machine > Instance Labels** tab to add the corresponding labels to the VM.
1. Go to the **Health Check** tab to enable health check and specify the parameters, including the **Port**, **Success Threshold**, **Failure Threshold**, **Interval**, and **Timeout** if the backend service supports health check.
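
Together, these steps correspond to a `LoadBalancer` custom resource. A rough sketch is shown below; the `apiVersion`, field names, and value formats are inferred from the UI fields above and are assumptions rather than a verified schema:

```yaml
# A sketch of the resource the UI steps produce; names and field shapes are assumptions.
apiVersion: loadbalancer.harvesterhci.io/v1beta1   # assumed API group/version
kind: LoadBalancer
metadata:
  name: my-lb
  namespace: default
spec:
  ipam: pool                # IPAM mode: "pool" or "dhcp" (assumed values)
  ipPool: default-ip-pool   # or "auto" to pick a pool by the selection policy
  listeners:
    - name: https
      port: 443             # Port
      protocol: TCP         # Protocol
      backendPort: 8443     # Backend Port
  backendServerSelector:    # matches the VM instance labels (exact shape is an assumption)
    app: web
  healthCheck:
    port: 8443
    successThreshold: 1
    failureThreshold: 3
    periodSeconds: 5        # Interval
    timeoutSeconds: 3       # Timeout
```
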
62 changes: 30 additions & 32 deletions docs/rancher/cloud-provider.md
@@ -44,32 +44,41 @@ When spinning up an RKE2 cluster using the Harvester node driver, select the `Ha

![](/img/v1.2/rancher/rke2-cloud-provider.png)

-### Deploying to the K3s Cluster with Harvester Node Driver [Experimental]
+### Deploying to the RKE2 Custom Cluster

-When spinning up a K3s cluster using the Harvester node driver, you can perform the following steps to deploy the harvester cloud provider:
+1. Use `generate_addon.sh` to generate the cloud config and place it into the directory `/var/lib/rancher/rke2/etc/config-files/cloud-config` on every node.

-1. Generate and inject cloud config for `harvester-cloud-provider`
+```
+curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
+```

-The cloud provider needs a kubeconfig file to work, a limited scoped one can be generated using the [generate_addon.sh](https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh) script available in the [harvester/cloud-provider-harvester](https://github.com/harvester/cloud-provider-harvester) repo.
+:::note

-:::note
+The `generate_addon.sh` script depends on `kubectl` and `jq` to operate the Harvester cluster.

-The script depends on `kubectl` and `jq` to operate the Harvester cluster
+The script needs access to the `Harvester Cluster` kubeconfig to work.

-The script needs access to the `Harvester Cluster` kubeconfig to work.
+The namespace needs to be the namespace in which the guest cluster will be created.

-The namespace needs to be the namespace in which the guest cluster will be created.
+:::

-:::

+2. Select the `Harvester` cloud provider

+### Deploying to the K3s Cluster with Harvester Node Driver [Experimental]

+When spinning up a K3s cluster using the Harvester node driver, you can perform the following steps to deploy the harvester cloud provider:

+1. Generate and inject cloud config for `harvester-cloud-provider`

```
-./deploy/generate_addon.sh <serviceaccount name> <namespace>
+curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
```

The output will look as follows:

```
-# ./deploy/generate_addon.sh harvester-cloud-provider default
+# curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s harvester-cloud-provider default
Creating target directory to hold files in ./tmp/kube...done
Creating a service account in default namespace: harvester-cloud-provider
W1104 16:10:21.234417 4319 helpers.go:663] --dry-run is deprecated and can be replaced with --dry-run=client.
@@ -147,7 +156,7 @@ spec:
bootstrap: true
repo: https://charts.harvesterhci.io/
chart: harvester-cloud-provider
-version: 0.1.13
+version: 0.2.2
helmVersion: v3
```

@@ -179,6 +188,7 @@ spec:

With these settings in place a K3s cluster should provision successfully while using the external cloud provider.


## Upgrade Cloud Provider

### Upgrade RKE2
@@ -202,31 +212,19 @@ After deploying the `Harvester Cloud provider`, you can use the Kubernetes `Load


### IPAM
-Harvester's built-in load balancer supports both `pool` and `dhcp` modes. You can select the mode in the Rancher UI. Harvester adds the annotation `cloudprovider.harvesterhci.io/ipam` to the service behind.
+Harvester's built-in load balancer supports both `DHCP` and `Pool` modes. You can select the mode in the Rancher UI. Harvester adds the annotation `cloudprovider.harvesterhci.io/ipam` to the service behind it. Additionally, the Harvester cloud provider offers a special `Share IP` mode, in which a service shares its load balancer IP with other services.

-- pool: You should configure an IP address pool in Harvester's `Settings` in advance. The Harvester LoadBalancer controller will allocate an IP address from the IP address pool for the load balancer.
-
-![](/img/v1.2/rancher/vip-pool.png)
-
-- dhcp: A DHCP server is required. The Harvester LoadBalancer controller will request an IP address from the DHCP server.
+- DHCP: A DHCP server is required. The Harvester Load Balancer controller will request an IP address from the DHCP server.
+
+- Pool: You need an IP pool configured in the Harvester UI. The Harvester Load Balancer controller will allocate an IP for the load balancer service following [the IP pool selection policy](/networking/ippool.md/#selection-policy).
+
+- Share IP: When creating a new load balancer service, you can select an existing load balancer service to get its load balancer IP. The new service is called the secondary service, and the selected service is called the primary service. You can specify the primary service in the secondary service with the annotation `cloudprovider.harvesterhci.io/primary-service` (see the sketch below). There are two limitations:
+  - The secondary service cannot share its IP with other services.
+  - Services sharing the same IP cannot have duplicate ports.
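
For instance, a secondary service could look like the following sketch; the service names are hypothetical, and the assumption that the annotation value is the primary service's name (rather than a `namespace/name` pair) is unverified:

```yaml
# A sketch of a secondary service reusing the primary service's load balancer IP.
apiVersion: v1
kind: Service
metadata:
  name: secondary-service   # hypothetical name
  namespace: default
  annotations:
    # Points at the existing primary load balancer service; the value format is an assumption.
    cloudprovider.harvesterhci.io/primary-service: primary-service
spec:
  type: LoadBalancer
  selector:
    app: web                # hypothetical backend selector
  ports:
    - name: http
      port: 8080            # must not duplicate a port already used by the primary service
      targetPort: 8080
```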

:::note

The IPAM mode of an existing service cannot be modified. If you want a different IPAM mode, create a new service.

:::

### Health Checks
The Harvester load balancer supports TCP health checks. You can specify the parameters in the Rancher UI if you enable the `Health Check` option.

![](/img/v1.2/rancher/health-check.png)

Alternatively, you can specify the parameters by adding annotations to the service manually. The following annotations are supported:

| Annotation Key | Value Type | Required | Description |
|:---|:---|:---|:---|
| `cloudprovider.harvesterhci.io/healthcheck-port` | string | true | Specifies the port. The prober will access the address composed of the backend server IP and the port. |
| `cloudprovider.harvesterhci.io/healthcheck-success-threshold` | string | false | Specifies the health check success threshold. The default value is 1. The backend server will start forwarding traffic if the number of times the prober continuously detects an address successfully reaches the threshold. |
| `cloudprovider.harvesterhci.io/healthcheck-failure-threshold` | string | false | Specifies the health check failure threshold. The default value is 3. The backend server will stop forwarding traffic if the number of health check failures reaches the threshold. |
| `cloudprovider.harvesterhci.io/healthcheck-periodseconds` | string | false | Specifies the health check period. The default value is 5 seconds. |
| `cloudprovider.harvesterhci.io/healthcheck-timeoutseconds` | string | false | Specifies the timeout of every health check. The default value is 3 seconds. |
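
For example, a service that sets all of these annotations might look like the sketch below; the annotation keys and values follow the table above, while the service name, selector, and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service       # hypothetical name
  annotations:
    # The prober will check <backend server IP>:8080.
    cloudprovider.harvesterhci.io/healthcheck-port: "8080"
    cloudprovider.harvesterhci.io/healthcheck-success-threshold: "1"
    cloudprovider.harvesterhci.io/healthcheck-failure-threshold: "3"
    cloudprovider.harvesterhci.io/healthcheck-periodseconds: "5"
    cloudprovider.harvesterhci.io/healthcheck-timeoutseconds: "3"
spec:
  type: LoadBalancer
  selector:
    app: web                # hypothetical backend selector
  ports:
    - port: 443
      targetPort: 8080
```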
