
Commit e26de43

common: update README and example CRDs
Signed-off-by: Vicente Cheng <vicente.cheng@suse.com>
1 parent 6b2de1d commit e26de43

10 files changed, +170 -57 lines

README.md

Lines changed: 40 additions & 57 deletions
@@ -1,82 +1,65 @@
-# csi-driver-lvm #
+# Harvester-csi-driver-lvm
 
-CSI DRIVER LVM utilizes local storage of Kubernetes nodes to provide persistent storage for pods.
+Harvester-CSI-Driver-LVM is derived from [metal-stack/csi-driver-lvm](https://github.com/metal-stack/csi-driver-lvm).
 
-It automatically creates hostPath based persistent volumes on the nodes.
+## Introduction
 
-Underneath it creates a LVM logical volume on the local disks. A comma-separated list of grok pattern, which disks to use must be specified.
+Harvester-CSI-Driver-LVM utilizes local storage to provide persistent storage for workloads (usually VM workloads). Because the volumes are bound to a single node, a VM that uses them cannot be migrated to other nodes, but local storage offers better performance.
 
-This CSI driver is derived from [csi-driver-host-path](https://github.com/kubernetes-csi/csi-driver-host-path) and [csi-lvm](https://github.com/metal-stack/csi-lvm)
+Before you use it, you need a pre-established Volume Group (VG) on the node. The VG name is specified in the StorageClass.
 
-## Currently it can create, delete, mount, unmount and resize block and filesystem volumes via lvm ##
+Harvester-CSI-Driver-LVM provides the following features:
+- On-demand creation of Logical Volumes (LVs).
+- Support for the striped and dm-thin LVM types.
+- Support for raw block volumes.
+- Support for volume expansion.
+- Support for volume snapshots.
+- Support for volume cloning.
 
-For the special case of block volumes, the filesystem-expansion has to be performed by the app using the block device
+**NOTE**: Snapshots and clones only work on the same node as the source volume. Cloning into a different Volume Group is supported.
 
 ## Installation ##
 
-**Helm charts for installation are located in a separate repository called [helm-charts](https://github.com/metal-stack/helm-charts). If you would like to contribute to the helm chart, please raise an issue or pull request there.**
+You can install Harvester-CSI-Driver-LVM with Helm, either from the remote chart repository or from the local chart files.
 
-You have to set the devicePattern for your hardware to specify which disks should be used to create the volume group.
+1. Install Harvester-CSI-Driver-LVM from the local chart:
 
-```bash
-helm install --repo https://helm.metal-stack.io mytest helm/csi-driver-lvm --set lvm.devicePattern='/dev/nvme[0-9]n[0-9]'
+```
+$ git clone https://github.com/harvester/csi-driver-lvm.git
+$ cd csi-driver-lvm/deploy
+$ helm install harvester-lvm-csi-driver charts/ -n harvester-system
 ```
 
-Now you can use one of following storageClasses:
+2. Install Harvester-CSI-Driver-LVM from the remote chart repository:
 
-* `csi-driver-lvm-linear`
-* `csi-driver-lvm-mirror`
-* `csi-driver-lvm-striped`
+```
+$ helm repo add harvester https://charts.harvesterhci.io
+$ helm install harvester-lvm-csi-driver harvester/harvester-lvm-csi-driver -n harvester-system
+```
 
-To get the previous old and now deprecated `csi-lvm-sc-linear`, ... storageclasses, set helm-chart value `compat03x=true`.
+After the installation, check that the driver pods are running:
+```
+$ kubectl get pods -A | grep harvester-csi-driver-lvm
+harvester-system   harvester-csi-driver-lvm-controller-0   4/4   Running   0             3h2m
+harvester-system   harvester-csi-driver-lvm-plugin-ctlgp   3/3   Running   1 (14h ago)   14h
+harvester-system   harvester-csi-driver-lvm-plugin-qxxqs   3/3   Running   1 (14h ago)   14h
+harvester-system   harvester-csi-driver-lvm-plugin-xktx2   3/3   Running   0             14h
+```
 
-## Migration ##
+The CSI driver is installed in the `harvester-system` namespace, and its node plugin runs on every node.
 
-If you want to migrate your existing PVC to / from csi-driver-lvm, you can use [korb](https://github.com/BeryJu/korb).
+After installation, refer to the `examples` directory for example CRDs that show typical usage.
 
 ### Todo ###
 
-* implement CreateSnapshot(), ListSnapshots(), DeleteSnapshot()
-
-
-### Test ###
-
-```bash
-kubectl apply -f examples/csi-pvc-raw.yaml
-kubectl apply -f examples/csi-pod-raw.yaml
-
-
-kubectl apply -f examples/csi-pvc.yaml
-kubectl apply -f examples/csi-app.yaml
+* Implement the unit tests
+* Implement the webhook for validation
 
-kubectl delete -f examples/csi-pod-raw.yaml
-kubectl delete -f examples/csi-pvc-raw.yaml
+### HowTo Build
 
-kubectl delete -f examples/csi-app.yaml
-kubectl delete -f examples/csi-pvc.yaml
 ```
-
-### Development ###
-
-In order to run the integration tests locally, you need to create to loop devices on your host machine. Make sure the loop device mount paths are not used on your system (default path is `/dev/loop10{0,1}`).
-
-You can create these loop devices like this:
-
-```bash
-for i in 100 101; do fallocate -l 1G loop${i}.img ; sudo losetup /dev/loop${i} loop${i}.img; done
-sudo losetup -a
-# use this for recreation or cleanup
-# for i in 100 101; do sudo losetup -d /dev/loop${i}; rm -f loop${i}.img; done
+$ make
 ```
 
-You can then run the tests against a kind cluster, running:
-
-```bash
-make test
-```
-
-To recreate or cleanup the kind cluster:
-
-```bash
-make test-cleanup
-```
+The above command runs the validation and builds the target image.
+You can override the image repository and tag with the `REPO` and `TAG` environment variables.
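As a usage illustration of the `REPO` and `TAG` variables mentioned at the end of the new README, the build can be invoked with inline environment overrides — a minimal sketch with placeholder values; the exact image name and defaults come from the repository's Makefile:

```
# Placeholder repository and tag; the Makefile determines the final image name
$ REPO=myregistry.example.com/myorg TAG=v0.0.1-dev make
```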

examples/README.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@

These are some examples for using `harvester-csi-driver-lvm`.

Before you start, you need to install `harvester-csi-driver-lvm` first. For more information, refer to the [installation guide](../README.md#installation).
Also, the examples contain some hardcoded values. You need to modify them to fit your environment:
- vgName: ensure a Volume Group with this name has been created in your environment.
- nodeAffinity: replace the value with the corresponding node name in your environment.
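To fill in those two values, commands along these lines are a reasonable starting point — a sketch that assumes `vg01` as the Volume Group name (as in the example StorageClasses); the disk device used for VG creation is purely illustrative:

```
# On the target node: check whether the VG referenced by the StorageClass exists
$ sudo vgs vg01

# If it does not exist yet, create it from a spare disk (example device)
$ sudo pvcreate /dev/sdb
$ sudo vgcreate vg01 /dev/sdb

# From the cluster: list node names to use in the nodeAffinity values
$ kubectl get nodes
```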

examples/generic-vol.yaml

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-01
spec:
  containers:
  - name: test-container
    image: nginx
    volumeDevices:
    - name: ephemeral-volume
      devicePath: /vol001
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.lvm.csi/node
            operator: In
            values:
            - harvester-node-2
  volumes:
  - name: ephemeral-volume
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          storageClassName: lvm-striped
          volumeMode: Block
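This pod uses a generic ephemeral volume, so Kubernetes creates the backing PVC on the pod's behalf (named `<pod-name>-<volume-name>` by the generic ephemeral volume convention). A quick way to verify, assuming the manifest above is applied unchanged:

```
$ kubectl apply -f examples/generic-vol.yaml
$ kubectl get pvc nginx-01-ephemeral-volume   # PVC created automatically for the pod
$ kubectl get pod nginx-01 -o wide
```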

examples/pod.yaml

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-jammy-pod
spec:
  containers:
  - name: ubuntu-jammy-container
    image: ubuntu:jammy
    command: ["/bin/bash", "-c", "--"]
    args: ["while true; do sleep 30; done;"]
    volumeDevices:
    - devicePath: "/volumes/vol001"
      name: vol001
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.lvm.csi/node
            operator: In
            values:
            - harvester-node-2
  volumes:
  - name: vol001
    persistentVolumeClaim:
      claimName: vol001
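Because the claim is consumed with `volumeMode: Block`, the volume appears inside the container as a raw block device at the configured `devicePath` rather than as a mounted filesystem. A possible smoke test, assuming the PVC from `examples/pvc.yaml` is created first:

```
$ kubectl apply -f examples/pvc.yaml
$ kubectl apply -f examples/pod.yaml
$ kubectl exec ubuntu-jammy-pod -- ls -l /volumes/vol001   # should show a block device node
```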

examples/pvc-clone.yaml

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol001-clone
spec:
  storageClassName: lvm-striped
  dataSource:
    name: vol001
    kind: PersistentVolumeClaim
    apiGroup: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Block

examples/pvc.yaml

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol001
spec:
  storageClassName: lvm-striped
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Block
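Since the example StorageClasses set `allowVolumeExpansion: true` and the README lists volume expansion among the supported features, a claim like this can later be grown by raising its storage request — a sketch using a merge patch; the new size is arbitrary:

```
$ kubectl patch pvc vol001 --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
$ kubectl get pvc vol001 -w   # watch until the larger capacity is reported
```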

examples/storageclass-dm-thin.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-dm-thin
parameters:
  type: dm-thin
  vgName: vg01
provisioner: lvm.driver.harvesterhci.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

examples/storageclass-striped.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-striped
parameters:
  type: striped
  vgName: vg01
provisioner: lvm.driver.harvesterhci.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

examples/volume-snapshot-class.yaml

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: lvm.driver.harvesterhci.io
kind: VolumeSnapshotClass
metadata:
  name: lvm-snapshot

examples/volumesnapshot.yaml

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: vol001-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: lvm-snapshot
  source:
    persistentVolumeClaimName: vol001
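A snapshot such as this is normally consumed by creating a new PVC that names it as a `dataSource`; per the README note, this only works on the node that holds the original volume. A hypothetical restore manifest along those lines (the PVC name and size are illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol001-restore
spec:
  storageClassName: lvm-striped
  dataSource:
    name: vol001-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Block
```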
