Commit caca245

Merge commit '58a0c834bf5bce6782de5c387f3dac732393ef33' into sync_us--master

Signed-off-by: Ceph Jenkins <ceph-jenkins@redhat.com>
Ceph Jenkins committed Sep 11, 2024
2 parents 043f365 + 58a0c83 commit caca245
Showing 29 changed files with 4,484 additions and 433 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/commitlint.yml
@@ -31,7 +31,7 @@ jobs:
       - uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
         with:
           fetch-depth: 0
-      - uses: wagoid/commitlint-github-action@a2bc521d745b1ba127ee2f8b02d6afaa4eed035c # v6.1.1
+      - uses: wagoid/commitlint-github-action@3d28780bbf0365e29b144e272b2121204d5be5f3 # v6.1.2
         with:
           configFile: "./.commitlintrc.json"
           helpURL: https://rook.io/docs/rook/latest/Contributing/development-flow/#commit-structure
2 changes: 1 addition & 1 deletion .github/workflows/docs-check.yml
@@ -43,7 +43,7 @@ jobs:
             !Documentation/Helm-Charts
       - name: Check helm-docs
-        run: make check-helm-docs
+        run: make check.helm-docs
       - name: Check docs
         run: make check.docs
      - name: Install mkdocs and dependencies
2 changes: 1 addition & 1 deletion .github/workflows/snyk.yaml
@@ -21,7 +21,7 @@ jobs:
           fetch-depth: 0

       - name: run Snyk to check for code vulnerabilities
-        uses: snyk/actions/golang@9213221444c2dc9e8b2502c1e857c26d851e84a7 # master
+        uses: snyk/actions/golang@cdb760004ba9ea4d525f2e043745dfe85bb9077e # master
         env:
           SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
           GOFLAGS: "-buildvcs=false"
146 changes: 146 additions & 0 deletions Documentation/CRDs/specification.md
@@ -9418,6 +9418,7 @@ string
</em>
</td>
<td>
<em>(Optional)</em>
<p>The metadata pool used for creating RADOS namespaces in the object store</p>
</td>
</tr>
@@ -9429,6 +9430,7 @@ string
</em>
</td>
<td>
<em>(Optional)</em>
<p>The data pool used for creating RADOS namespaces in the object store</p>
</td>
</tr>
@@ -9444,6 +9446,28 @@ bool
<p>Whether the RADOS namespaces should be preserved on deletion of the object store</p>
</td>
</tr>
<tr>
<td>
<code>poolPlacements</code><br/>
<em>
<a href="#ceph.rook.io/v1.PoolPlacementSpec">
[]PoolPlacementSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PoolPlacements control which Pools are associated with a particular RGW bucket.
Once PoolPlacements are defined, the RGW client can associate a pool
with an ObjectStore bucket by providing &ldquo;<LocationConstraint>&rdquo; during S3 bucket creation
or the &ldquo;X-Storage-Policy&rdquo; header during Swift container creation.
See: <a href="https://docs.ceph.com/en/latest/radosgw/placement/#placement-targets">https://docs.ceph.com/en/latest/radosgw/placement/#placement-targets</a>
The PoolPlacement named &ldquo;default&rdquo; will be used as the default pool if no option
is provided during bucket creation.
If a default placement is not provided, spec.sharedPools.dataPoolName and spec.sharedPools.metadataPoolName will be used as the default pools.
If spec.sharedPools is also empty, then the RGW pools (spec.dataPool and spec.metadataPool) will be used as defaults.</p>
</td>
</tr>
</tbody>
</table>
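For orientation, the `poolPlacements` field described above sits under `spec.sharedPools` of a `CephObjectStore`. A minimal sketch of the fragment (pool names are hypothetical; see the full example in the object store docs below):

```yaml
sharedPools:
  poolPlacements:
    - name: default                    # used when no LocationConstraint / X-Storage-Policy is given
      metadataPoolName: rgw-meta-pool
      dataPoolName: rgw-data-pool
```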
<h3 id="ceph.rook.io/v1.ObjectStoreHostingSpec">ObjectStoreHostingSpec
@@ -10624,6 +10648,49 @@
the triple <key,value,effect> using the matching operator <operator></p>
<div>
<p>PlacementSpec is the placement for core ceph daemons part of the CephCluster CRD</p>
</div>
<h3 id="ceph.rook.io/v1.PlacementStorageClassSpec">PlacementStorageClassSpec
</h3>
<p>
(<em>Appears on:</em><a href="#ceph.rook.io/v1.PoolPlacementSpec">PoolPlacementSpec</a>)
</p>
<div>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the StorageClass name. Ceph allows arbitrary names for StorageClasses,
but most clients/libraries insist on AWS names, so it is recommended to use
one of the valid x-amz-storage-class values for better compatibility:
REDUCED_REDUNDANCY | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_IR | SNOW | EXPRESS_ONEZONE
See AWS docs: <a href="https://aws.amazon.com/de/s3/storage-classes/">https://aws.amazon.com/de/s3/storage-classes/</a></p>
</td>
</tr>
<tr>
<td>
<code>dataPoolName</code><br/>
<em>
string
</em>
</td>
<td>
<p>DataPoolName is the data pool used to store ObjectStore objects data.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.PoolMirroringInfo">PoolMirroringInfo
</h3>
<p>
@@ -10780,6 +10847,85 @@ StatesSpec
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.PoolPlacementSpec">PoolPlacementSpec
</h3>
<p>
(<em>Appears on:</em><a href="#ceph.rook.io/v1.ObjectSharedPoolsSpec">ObjectSharedPoolsSpec</a>)
</p>
<div>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The pool placement name. The name can be arbitrary; the placement named &ldquo;default&rdquo; will be used as the default.</p>
</td>
</tr>
<tr>
<td>
<code>metadataPoolName</code><br/>
<em>
string
</em>
</td>
<td>
<p>The metadata pool used to store ObjectStore bucket index.</p>
</td>
</tr>
<tr>
<td>
<code>dataPoolName</code><br/>
<em>
string
</em>
</td>
<td>
<p>The data pool used to store ObjectStore objects data.</p>
</td>
</tr>
<tr>
<td>
<code>dataNonECPoolName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>The data pool used to store ObjectStore data that cannot use erasure coding (e.g., multipart uploads).
If dataPoolName is not erasure coded, then there is no need for dataNonECPoolName.</p>
</td>
</tr>
<tr>
<td>
<code>storageClasses</code><br/>
<em>
<a href="#ceph.rook.io/v1.PlacementStorageClassSpec">
[]PlacementStorageClassSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StorageClasses can be selected by the user to override dataPoolName during object creation.
Each placement has a default STANDARD StorageClass pointing to dataPoolName.
This list allows defining additional StorageClasses on top of the default STANDARD storage class.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.PoolSpec">PoolSpec
</h3>
<p>
@@ -200,11 +200,18 @@ CSI-Addons supports the following operations:
   * [Creating a ReclaimSpaceCronJob](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/reclaimspace.md#reclaimspacecronjob)
   * [Annotating PersistentVolumeClaims](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/reclaimspace.md#annotating-perstentvolumeclaims)
   * [Annotating Namespace](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/reclaimspace.md#annotating-namespace)
+  * [Annotating StorageClass](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/reclaimspace.md#annotating-storageclass)
 * Network Fencing
   * [Creating a NetworkFence](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/networkfence.md)
 * Volume Replication
   * [Creating VolumeReplicationClass](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/volumereplicationclass.md)
   * [Creating VolumeReplication CR](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/volumereplication.md)
+* Key Rotation Job for PV encryption
+  * [Creating EncryptionKeyRotationJob](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/encryptionkeyrotation.md#encryptionkeyrotationjob)
+  * [Creating EncryptionKeyRotationCronJob](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/encryptionkeyrotation.md#encryptionkeyrotationcronjob)
+  * [Annotating PersistentVolumeClaims](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/encryptionkeyrotation.md#annotating-persistentvolumeclaims)
+  * [Annotating Namespace](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/encryptionkeyrotation.md#annotating-namespace)
+  * [Annotating StorageClass](https://github.com/csi-addons/kubernetes-csi-addons/blob/v0.9.1/docs/encryptionkeyrotation.md#annotating-storageclass)
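For readers unfamiliar with the new key-rotation CRs, here is a minimal sketch of an `EncryptionKeyRotationCronJob`, patterned on the ReclaimSpaceCronJob examples; field names should be verified against the linked v0.9.1 docs, and the PVC name is hypothetical:

```yaml
apiVersion: csiaddons.openshift.io/v1alpha1
kind: EncryptionKeyRotationCronJob
metadata:
  name: keyrotation-sample
  namespace: rook-ceph
spec:
  schedule: "@weekly"          # standard cron expressions also work
  jobTemplate:
    spec:
      target:
        persistentVolumeClaim: encrypted-rbd-pvc   # PVC whose encryption key is rotated
```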

## Enable RBD and CephFS Encryption Support

@@ -14,8 +14,9 @@ Rook can configure the Ceph Object Store for several different scenarios. See ea…

 1. Create a [local object store](#create-a-local-object-store-with-s3) with dedicated Ceph pools. This option is recommended if a single object store is required, and is the simplest to get started.
 2. Create [one or more object stores with shared Ceph pools](#create-local-object-stores-with-shared-pools). This option is recommended when multiple object stores are required.
-3. Connect to an [RGW service in an external Ceph cluster](#connect-to-an-external-object-store), rather than create a local object store.
-4. Configure [RGW Multisite](#object-multisite) to synchronize buckets between object stores in different clusters.
+3. Create [one or more object stores with pool placement targets and storage classes](#create-local-object-stores-with-pool-placements). This configuration allows Rook to provide different object placement options to object store clients.
+4. Connect to an [RGW service in an external Ceph cluster](#connect-to-an-external-object-store), rather than create a local object store.
+5. Configure [RGW Multisite](#object-multisite) to synchronize buckets between object stores in different clusters.

!!! note
Updating the configuration of an object store between these types is not supported.
@@ -188,6 +189,83 @@ To consume the object store, continue below in the section to [Create a bucket](…
Modify the default example object store name from `my-store` to the alternate name of the object store
such as `store-a` in this example.

### Create Local Object Store(s) with pool placements

!!! attention
This feature is experimental.

This section explains how to configure [RGW's pool placement and storage classes](https://docs.ceph.com/en/reef/radosgw/placement/) with Rook.

The Object Storage API allows users to override where bucket data will be stored at bucket creation time, via the `<LocationConstraint>` parameter in the S3 API and the `X-Storage-Policy` header in Swift. Similarly, users can override where object data will be stored by setting the `X-Amz-Storage-Class` (S3) or `X-Object-Storage-Class` (Swift) header during object creation.

To enable this feature, configure `poolPlacements` representing a list of possible bucket data locations.
Each `poolPlacement` must have:

* a **unique** `name` to refer to it in `<LocationConstraint>` or `X-Storage-Policy`. A placement with the reserved name `default` will be used by default if no location constraint is provided.
* `dataPoolName` and `metadataPoolName` representing object data and metadata locations. In Rook, these data locations are backed by `CephBlockPool`. The `poolPlacements` and `storageClasses` specs refer to pools by name, so all pools must be defined in advance (a backing-pool sketch follows the example below). As with [sharedPools](#create-local-object-stores-with-shared-pools), the same pool can be reused across multiple ObjectStores and/or poolPlacements/storageClasses thanks to RADOS namespaces: each pool is namespaced with the key `<object store name>.<placement name>.<pool type>`.
* **optional** `dataNonECPoolName` - an extra pool for data that cannot use erasure coding (e.g., multipart uploads). If not set, `metadataPoolName` will be used.
* **optional** list of placement `storageClasses`. Classes are defined per placement, meaning that even the classes of the `default` placement are available only within that placement and not in others. Each placement automatically gets a default storage class named `STANDARD`. The `STANDARD` class always points to the placement's `dataPoolName` and cannot be removed or redefined. Each storage class must have:
    * `name` (unique within the placement). RGW allows arbitrary names for StorageClasses; however, some clients/libraries insist on AWS names, so it is recommended to use one of the valid `x-amz-storage-class` values for better compatibility: `STANDARD | REDUCED_REDUNDANCY | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_IR | SNOW | EXPRESS_ONEZONE`. See the [AWS docs](https://aws.amazon.com/s3/storage-classes/).
    * `dataPoolName` - overrides the placement's data pool when this class is selected by the user.

Example: Configure `CephObjectStore` with the `default` placement pointing to `us` pools and a placement `europe` pointing to pools in the corresponding geographies. These geographical locations are only an example; placement names can be arbitrary and could instead reflect the backing pool's replication factor, device class, or failure domain. This example also defines the storage class `REDUCED_REDUNDANCY` for each placement.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  sharedPools:
    poolPlacements:
      - name: default
        metadataPoolName: "us-meta-pool"
        dataPoolName: "us-data-pool"
        storageClasses:
          - name: REDUCED_REDUNDANCY
            dataPoolName: "us-reduced-pool"
      - name: europe
        metadataPoolName: "eu-meta-pool"
        dataPoolName: "eu-data-pool"
        storageClasses:
          - name: REDUCED_REDUNDANCY
            dataPoolName: "eu-reduced-pool"
```
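The pools referenced by `poolPlacements` must already exist. A minimal sketch of one of the backing pools from the example, assuming the `application: rgw` tagging used by the shared-pools pattern (sizing is only illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: us-data-pool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  application: rgw   # assumption: pre-tag the pool for RGW use, as in the shared-pools examples
```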

S3 clients can direct objects into the pools defined above. The example below uses the [s5cmd](https://github.com/peak/s5cmd) CLI tool, which is pre-installed in the toolbox pod:

```shell
# make a bucket without a location constraint -> uses the "default" placement (us pools)
s5cmd mb s3://bucket1
# put an object into bucket1 without a storage class -> ends up in "us-data-pool"
s5cmd put obj s3://bucket1/obj
# put an object into bucket1 with the "STANDARD" storage class -> ends up in "us-data-pool"
s5cmd put obj s3://bucket1/obj --storage-class=STANDARD
# put an object into bucket1 with the "REDUCED_REDUNDANCY" storage class -> ends up in "us-reduced-pool"
s5cmd put obj s3://bucket1/obj --storage-class=REDUCED_REDUNDANCY
# make a bucket with the location constraint "europe"
s5cmd mb s3://bucket2 --region=my-store:europe
# put an object into bucket2 without a storage class -> ends up in "eu-data-pool"
s5cmd put obj s3://bucket2/obj
# put an object into bucket2 with the "STANDARD" storage class -> ends up in "eu-data-pool"
s5cmd put obj s3://bucket2/obj --storage-class=STANDARD
# put an object into bucket2 with the "REDUCED_REDUNDANCY" storage class -> ends up in "eu-reduced-pool"
s5cmd put obj s3://bucket2/obj --storage-class=REDUCED_REDUNDANCY
```
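To double-check where the objects landed, one can list the RADOS objects (across all namespaces) in the backing pools from the toolbox pod; the pool names follow the example above:

```shell
# objects written via the "default" placement (no storage class, or STANDARD)
rados -p us-data-pool ls --all | head
# objects written with the REDUCED_REDUNDANCY storage class
rados -p us-reduced-pool ls --all | head
```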

### Connect to an External Object Store

Rook can connect to existing RGW gateways to work in conjunction with the external mode of the `CephCluster` CRD. First, create an `rgw-admin-ops-user` user in the Ceph cluster with the necessary caps:
2 changes: 1 addition & 1 deletion Makefile
@@ -215,7 +215,7 @@ helm-docs: $(HELM_DOCS) ## Use helm-docs to generate documentation from helm cha…
 	-t ../../../Documentation/Helm-Charts/ceph-cluster-chart.gotmpl.md \
 	-t ../../../Documentation/Helm-Charts/_templates.gotmpl

-check-helm-docs:
+check.helm-docs:
 	@$(MAKE) helm-docs
 	@git diff --exit-code || { \
 	echo "Please run 'make helm-docs' locally, commit the updated docs, and push the change. See https://rook.io/docs/rook/latest/Contributing/documentation/#making-docs" ; \
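To reproduce the renamed CI check locally before pushing, run the targets the workflows now invoke:

```shell
make check.helm-docs
make check.docs
```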