Commit 62facfb

Merge commit '31362664f3bf55b822347f198307cda3bf4af6e9' into sync_us--master

Signed-off-by: Ceph Jenkins <ceph-jenkins@redhat.com>
Ceph Jenkins committed Sep 24, 2024
2 parents c45b7f5 + 3136266
Showing 43 changed files with 709 additions and 397 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -77,7 +77,7 @@ jobs:
strategy:
fail-fast: false
matrix:
go-version: ["1.22"]
go-version: ["1.22", "1.23"]
steps:
- name: checkout
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
2 changes: 1 addition & 1 deletion .github/workflows/canary-test-config/action.yaml
@@ -19,7 +19,7 @@ runs:
- name: Setup Minikube
shell: bash --noprofile --norc -eo pipefail -x {0}
run: |
- tests/scripts/github-action-helper.sh install_minikube_with_none_driver v1.30.0
+ tests/scripts/github-action-helper.sh install_minikube_with_none_driver v1.31.0
- name: install deps
shell: bash --noprofile --norc -eo pipefail -x {0}
2 changes: 1 addition & 1 deletion .github/workflows/scorecards.yml
@@ -64,6 +64,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
- uses: github/codeql-action/upload-sarif@8214744c546c1e5c8f03dde8fab3a7353211988d # v3.26.7
+ uses: github/codeql-action/upload-sarif@294a9d92911152fe08befb9ec03e240add280cb3 # v3.26.8
with:
sarif_file: results.sarif
3 changes: 3 additions & 0 deletions Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -52,6 +52,9 @@ If this value is empty, each pod will get an ephemeral directory to store their
* `externalMgrPrometheusPort`: external prometheus manager module port. See [external cluster configuration](./external-cluster/external-cluster.md) for more details.
* `port`: The internal prometheus manager module port where the prometheus mgr module listens. The port may need to be configured when host networking is enabled.
* `interval`: The interval for the prometheus module to scrape targets.
+ * `exporter`: Ceph exporter metrics config.
+     * `perfCountersPrioLimit`: Specifies which performance counters are exported. Corresponds to the `--prio-limit` Ceph exporter flag. `0` exports all counters; the default is `5`.
+     * `statsPeriodSeconds`: Time in seconds to wait before sending requests again to the exporter server. Corresponds to the `--stats-period` Ceph exporter flag. Default is `5`.
* `network`: For the network settings for the cluster, refer to the [network configuration settings](#network-configuration-settings)
* `mon`: contains mon related options [mon settings](#mon-settings)
For more details on the mons and when to choose a number other than `3`, see the [mon health doc](../../Storage-Configuration/Advanced/ceph-mon-health.md).
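
For reference, a minimal sketch of how these new exporter options might be set in a CephCluster CR, assuming they live under `spec.monitoring.exporter` alongside the settings documented above:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  monitoring:
    exporter:
      # 0 exports all counters; the default prio-limit is 5
      perfCountersPrioLimit: 5
      # seconds to wait between requests to the exporter server (--stats-period)
      statsPeriodSeconds: 5
```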
4 changes: 4 additions & 0 deletions Documentation/CRDs/Cluster/external-cluster/.pages
@@ -1,3 +1,7 @@
nav:
  - external-cluster.md
+ - provider-export.md
+ - consumer-import.md
+ - upgrade-external.md
+ - advance-external.md
  - topology-for-external-mode.md
70 changes: 70 additions & 0 deletions Documentation/CRDs/Cluster/external-cluster/advance-external.md
@@ -0,0 +1,70 @@
# External Cluster Options

## NFS storage

Rook suggests a different mechanism for consuming an [NFS service running on the external Ceph standalone cluster](../../../Storage-Configuration/NFS/nfs-csi-driver.md#consuming-nfs-from-an-external-source), if desired.

## Exporting Rook to another cluster

If you have multiple K8s clusters running and want to use the local `rook-ceph` cluster as the central storage,
you can export the settings from this cluster with the following steps.

1. Copy create-external-cluster-resources.py into the directory `/etc/ceph/` of the toolbox.

```console
toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}')
kubectl -n rook-ceph cp deploy/examples/external/create-external-cluster-resources.py $toolbox:/etc/ceph
```

2. Exec into the toolbox pod and run create-external-cluster-resources.py with the needed options to create the required [users and keys](/Documentation/CRDs/Cluster/external-cluster/provider-export.md#1-create-all-users-and-keys).

!!! important
    For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs.
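
A minimal sketch for step 2, assuming an RBD data pool named `replicapool` (see the provider export page for the full set of options):

```console
kubectl -n rook-ceph exec -it $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool --format bash
```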

## Admin privileges

If the cluster needs the admin keyring for configuration, update the `rook-ceph-mon` secret with the `client.admin` keyring.

!!! note
    Sharing the admin key with the external cluster is not generally recommended.

1. Get the `client.admin` keyring from the Ceph cluster:

```console
ceph auth get client.admin
```

2. Update two values in the `rook-ceph-mon` secret:
    - `ceph-username`: Set to `client.admin`
    - `ceph-secret`: Set the client.admin keyring
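
A minimal sketch of updating both values in one command, assuming the key names shown above (replace the placeholder with the keyring from step 1):

```console
kubectl -n rook-ceph patch secret rook-ceph-mon --type merge \
  -p '{"stringData": {"ceph-username": "client.admin", "ceph-secret": "<client.admin keyring>"}}'
```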

After restarting the Rook operator (and the toolbox if in use), Rook will configure Ceph with admin privileges.

## Connect to an External Object Store

Create the [external object store CR](https://github.com/rook/rook/blob/master/deploy/examples/external/object-external.yaml) to configure connection to external gateways.

```console
cd deploy/examples/external
kubectl create -f object-external.yaml
```

Consume the S3 storage in one of two ways:

1. Create an [Object store user](https://github.com/rook/rook/blob/master/deploy/examples/object-user.yaml) for credentials to access the S3 endpoint.

```console
cd deploy/examples
kubectl create -f object-user.yaml
```

2. Create a [bucket storage class](https://github.com/rook/rook/blob/master/deploy/examples/external/storageclass-bucket-delete.yaml) where a client can request creating buckets and then create the [Object Bucket Claim](https://github.com/rook/rook/blob/master/deploy/examples/external/object-bucket-claim-delete.yaml), which will create an individual bucket for reading and writing objects.

```console
cd deploy/examples/external
kubectl create -f storageclass-bucket-delete.yaml
kubectl create -f object-bucket-claim-delete.yaml
```

!!! hint
    For more details see the [Object Store topic](../../../Storage-Configuration/Object-Storage-RGW/object-storage.md#connect-to-an-external-object-store)
76 changes: 76 additions & 0 deletions Documentation/CRDs/Cluster/external-cluster/consumer-import.md
@@ -0,0 +1,76 @@
# Import Ceph configuration to the Rook consumer cluster

## Installation types

Install Rook in the consumer cluster, either with [Helm](#helm-installation) or the [manifests](#manifest-installation).

### Helm Installation

To install with Helm, use the rook-ceph-cluster helm chart with the example `values-external.yaml`, which configures the necessary resources for the external cluster.

```console
clusterNamespace=rook-ceph
operatorNamespace=rook-ceph
cd deploy/examples/charts/rook-ceph-cluster
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace $clusterNamespace rook-ceph rook-release/rook-ceph -f values.yaml
helm install --create-namespace --namespace $clusterNamespace rook-ceph-cluster \
--set operatorNamespace=$operatorNamespace rook-release/rook-ceph-cluster -f values-external.yaml
```

### Manifest Installation

If not installing with Helm, here are the steps to install with manifests.

1. Deploy Rook by creating the [common.yaml](https://github.com/rook/rook/blob/master/deploy/examples/common.yaml), [crds.yaml](https://github.com/rook/rook/blob/master/deploy/examples/crds.yaml), and [operator.yaml](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml) manifests.

2. Create [common-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/external/common-external.yaml) and [cluster-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/external/cluster-external.yaml), as in the sketch below.
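
A minimal sketch, assuming the manifests above have been downloaded to the working directory:

```console
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f common-external.yaml -f cluster-external.yaml
```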

## Import the Provider Data

1. Paste the output from `create-external-cluster-resources.py` (run on the provider cluster) into your current shell to set the variables needed for importing the provider data.

2. The import script in the next step uses the current kubeconfig context by
default. To target a specific Kubernetes cluster without changing the current
context, set the `KUBECONTEXT` environment variable:

```console
export KUBECONTEXT=<cluster-name>
```

3. Here is the link to the [import](https://github.com/rook/rook/blob/master/deploy/examples/external/import-external-cluster.sh) script. The script defaults to the `rook-ceph` namespace, and several of its parameters are derived from that namespace variable. If your external cluster uses a different namespace, change the namespace parameter in the script accordingly. For example, for a namespace named `new-namespace`, change the namespace parameter in the script:

```console
NAMESPACE=${NAMESPACE:="new-namespace"}
```

4. Run the import script.

!!! note
    If your Rook cluster nodes are running a kernel version of 5.4 or earlier, remove
    `fast-diff`, `object-map`, `deep-flatten`, and `exclusive-lock` from the `imageFeatures` line.

```console
. import-external-cluster.sh
```

## Cluster Verification

1. Verify the consumer cluster is connected to the provider Ceph cluster:

```console
$ kubectl -n rook-ceph get CephCluster
NAME DATADIRHOSTPATH MONCOUNT AGE STATE HEALTH
rook-ceph-external /var/lib/rook 162m Connected HEALTH_OK
```

2. Verify that storage classes were created for the RBD pools and filesystem provided:
`ceph-rbd` and `cephfs` are the respective names for the RBD and CephFS storage classes.

```console
kubectl -n rook-ceph get sc
```

3. Create a [persistent volume](https://github.com/rook/rook/tree/master/deploy/examples/csi) based on these storage classes, as in the sketch below.
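
A minimal PVC sketch, assuming the `ceph-rbd` storage class created above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
```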