doc(provision, faq): add readme and faq (#11)
Signed-off-by: Akhil Mohan <[email protected]>
akhilerm authored Apr 21, 2021
1 parent 4196dfa commit c1a7cf9
Showing 4 changed files with 348 additions and 2 deletions.
159 changes: 157 additions & 2 deletions README.md
@@ -1,4 +1,159 @@
# device-localpv (experimental/pre-alpha)

# OpenEBS Local Device CSI Driver

CSI Driver for using Local Block Devices

## Project Status

Currently, the Device-LocalPV CSI Driver is in pre-alpha.

## Usage

### Prerequisites

Before installing the device CSI driver, please make sure your Kubernetes cluster
meets the following prerequisites:

1. Disks are available on the node, each with a single 10MB partition whose partition name is used to
   identify the disk
2. You have access to install RBAC components into the kube-system namespace.
   The OpenEBS Device driver components are installed in the kube-system namespace
   to allow them to be flagged as system critical components.

### Supported System

K8S : 1.18+

OS : Ubuntu

### Setup

Find the disk which you want to use for Device LocalPV. For testing, a loopback device can be used:

```
truncate -s 1024G /tmp/disk.img
sudo losetup -f /tmp/disk.img --show
```

Create the meta partition on the loop device which will be used for provisioning volumes (the device path, `/dev/loop9` below, is the one printed by `losetup --show` above):

```
sudo parted /dev/loop9 mklabel gpt
sudo parted /dev/loop9 mkpart test-device 1MiB 10MiB
```
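
The partition layout can be verified before moving on; a quick sanity check, assuming the loop device created above is `/dev/loop9`:

```
sudo parted /dev/loop9 print
lsblk /dev/loop9
```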

### Installation

Deploy the Operator yaml

```
kubectl apply -f https://raw.githubusercontent.com/openebs/device-localpv/master/deploy/device-operator.yaml
```
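
Once the operator YAML is applied, the driver components should come up in the kube-system namespace. A quick check, assuming the components carry the `role=openebs-device` label used elsewhere in these docs:

```
kubectl get pods -n kube-system -l role=openebs-device
```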

### Deployment


#### 1. Create a Storage class

```
$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```
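
Apply the storage class and confirm it has been created (a small sketch; the file name `sc.yaml` matches the listing above):

```
kubectl apply -f sc.yaml
kubectl get sc openebs-device-sc
```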

Check the doc on [storageclasses](docs/storageclasses.md) to learn about all the supported parameters for Device LocalPV.

##### Device Availability

If the device with the meta partition is available only on certain nodes, use topology to specify the list of nodes where the devices are available.
As shown in the storage class below, we can use `allowedTopologies` to describe device availability on nodes.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - device-node1
      - device-node2
```

The above storage class specifies that the device with the meta partition "test-device" is available only on nodes device-node1 and device-node2. The Device CSI driver will create volumes on those nodes only.
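
To confirm which nodes will be considered, the hostname labels can be checked (the node names here are the example ones from the storage class above):

```
kubectl get nodes device-node1 device-node2 --show-labels
```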


#### 2. Create the PVC

```
$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-devicepv
spec:
  storageClassName: openebs-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```

Create a PVC using the storage class created for the Device driver.
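
Applying the claim and checking its status is a useful sanity check; with `WaitForFirstConsumer` binding, the PVC stays `Pending` until a pod using it is scheduled (a sketch based on the listings above):

```
kubectl apply -f pvc.yaml
kubectl get pvc csi-devicepv
```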

#### 3. Deploy the application

Create the deployment YAML using the PVC backed by the Device driver storage:

```
$ cat fio.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  restartPolicy: Never
  containers:
  - name: perfrunner
    image: openebs/tests-fio
    command: ["/bin/bash"]
    args: ["-c", "while true ;do sleep 50; done"]
    volumeMounts:
    - mountPath: /datadir
      name: fio-vol
    tty: true
  volumes:
  - name: fio-vol
    persistentVolumeClaim:
      claimName: csi-devicepv
```

After the application is deployed, we can go to the node and see that a partition has been created and is being used as a volume
by the application for reading/writing data.
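
A minimal sketch of deploying the pod and then checking, on the node where it was scheduled, that a new partition backs the volume (assuming the loop device from the setup section):

```
kubectl apply -f fio.yaml
kubectl get pvc,pod

# on the node where the pod was scheduled
sudo parted /dev/loop9 print
```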

#### 4. Deprovisioning

To deprovision the volume, delete the application which is using the volume and then delete the PVC. As part of the PV deletion, the partition will be wiped and deleted from the device.

```
$ kubectl delete -f fio.yaml
pod "fio" deleted
$ kubectl delete -f pvc.yaml
persistentvolumeclaim "csi-devicepv" deleted
```
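
As a final check after deprovisioning, the partition table on the device should again contain only the 10MiB meta partition (a sketch, assuming the loop device from the setup section):

```
sudo parted /dev/loop9 print
```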

51 changes: 51 additions & 0 deletions deploy/sample/fio-block.yaml
@@ -0,0 +1,51 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: openebs-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiob
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fiob
  template:
    metadata:
      labels:
        name: fiob
    spec:
      containers:
        - resources:
          name: perfrunner
          image: openebs/tests-fio
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash"]
          args: ["-c", "while true ;do sleep 50; done"]
          volumeDevices:
            - devicePath: /dev/xvda
              name: storage
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: block-claim
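
This sample can be applied directly; once the `fiob` pod is running, fio can exercise the raw block device. A sketch only: the pod name suffix is generated by the Deployment and must be substituted, and the fio flags are illustrative:

```
kubectl apply -f deploy/sample/fio-block.yaml
kubectl get pods -l name=fiob
# substitute the generated pod name below
kubectl exec -it <fiob-pod-name> -- fio --name=randwrite --filename=/dev/xvda --rw=randwrite --bs=4k --size=256M --direct=1
```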
81 changes: 81 additions & 0 deletions docs/faq.md
@@ -0,0 +1,81 @@
### 1. How to add custom topology key

To add a custom topology key, we can label all the nodes with the required key and value:

```sh
$ kubectl label node k8s-node-1 openebs.io/rack=rack1
node/k8s-node-1 labeled

$ kubectl get nodes k8s-node-1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-node-1 Ready worker 16d v1.17.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true,openebs.io/rack=rack1

```
It is recommended to label all the nodes with the same key; they can have different values for the given key, but the key should be present on all the worker nodes.

Once we have labeled the nodes, we can install the device driver. The driver will pick up the node labels and add them as supported topology keys. If the driver is already installed and you want to add new topology information, label the nodes with the topology information and then restart the Device-LocalPV CSI driver daemon set (openebs-device-node) so that the driver can pick up the labels and add them as supported topology keys. We should restart the pod in the kube-system namespace with the name openebs-device-node-[xxxxx], which is the node agent pod for the Device-LocalPV driver.

Note that a restart of the Device-LocalPV CSI driver daemon set is mandatory if we are going to use WaitForFirstConsumer as the volumeBindingMode in the storage class. With the Immediate volume binding mode, restarting the daemon set is not strictly required, irrespective of whether the nodes are labeled before or after installing the device driver. However, it is recommended to restart the daemon set if the nodes are labeled after the installation.
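
A minimal sketch of restarting the node agent daemon set after labeling (assuming the daemon set is named `openebs-device-node`, matching the pod names listed below):

```sh
kubectl rollout restart daemonset openebs-device-node -n kube-system
```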

```sh
$ kubectl get pods -n kube-system -l role=openebs-device

NAME READY STATUS RESTARTS AGE
openebs-device-controller-0 4/4 Running 0 5h28m
openebs-device-node-4d94n 2/2 Running 0 5h28m
openebs-device-node-gssh8 2/2 Running 0 5h28m
openebs-device-node-twmx8 2/2 Running 0 5h28m
```

We can verify that the key has been registered successfully with the Device-LocalPV CSI driver by checking the CSINode object YAML:

```yaml
$ kubectl get csinodes k8s-node-1 -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  creationTimestamp: "2020-04-13T14:49:59Z"
  name: k8s-node-1
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: k8s-node-1
    uid: fe268f4b-d9a9-490a-a999-8cde20c4dadb
  resourceVersion: "4586341"
  selfLink: /apis/storage.k8s.io/v1/csinodes/k8s-node-1
  uid: 522c2110-9d75-4bca-9879-098eb8b44e5d
spec:
  drivers:
  - name: device.csi.openebs.io
    nodeID: k8s-node-1
    topologyKeys:
    - beta.kubernetes.io/arch
    - beta.kubernetes.io/os
    - kubernetes.io/arch
    - kubernetes.io/hostname
    - kubernetes.io/os
    - node-role.kubernetes.io/worker
    - openebs.io/rack
```
We can see that "openebs.io/rack" is listed as a topology key. Now we can create a storage class using that topology key:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack
    values:
      - rack1
```
The Device-LocalPV CSI driver will schedule the PV on the nodes where the label "openebs.io/rack" is set to "rack1".

Note that if the storage class uses the Immediate binding mode and no topology key is mentioned, then all the nodes should be labeled using the same key; that is, the same key should be present on all nodes, although nodes can have different values for that key. If nodes are labeled with different keys, i.e. some nodes have different keys, then DevicePV's default scheduler cannot effectively do volume-capacity-based scheduling. In that case, the CSI provisioner will pick keys from a random node, prepare the preferred topology list using the nodes which have those keys defined, and the DevicePV scheduler will schedule the PV among those nodes only.
59 changes: 59 additions & 0 deletions docs/storageclasses.md
@@ -0,0 +1,59 @@
## Parameters

### StorageClass With Custom Node Labels

There can be a use case where certain kinds of devices are present only on certain nodes, and we want a particular type of application to use those devices. We can create a storage class with `allowedTopologies` and list all the nodes where that device type is present:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/nodename
    values:
      - node-1
      - node-2
```
Here we have a device with the meta partition name “test-device” created on the NVMe disks, and we want to use these high-performing devices for applications that need higher IOPS. We can use the above StorageClass to create the PVC and deploy the application using it.

The problem with the above StorageClass is that it works fine when the number of nodes is small, but when the number of nodes is large, it is cumbersome to list all the nodes like this. In that case, we can label all the similar nodes using the same key and value and use that label to create the StorageClass.
```
user@k8s-master:~ $ kubectl label node k8s-node-2 openebs.io/devname=nvme
node/k8s-node-2 labeled
user@k8s-master:~ $ kubectl label node k8s-node-1 openebs.io/devname=nvme
node/k8s-node-1 labeled
```

Now, restart the Device-LocalPV driver (if already deployed, otherwise please ignore) so that it can pick up the new node label as a supported topology key. Check the [faq](./faq.md#1-how-to-add-custom-topology-key) for more details.

```
$ kubectl delete po -n kube-system -l role=openebs-device
```
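
After the restart, the new label should appear under `topologyKeys` in the CSINode object (see the FAQ). A quick check — `k8s-node-1` is an example node name, and the index assumes the device driver is the first registered driver on that node:

```
kubectl get csinode k8s-node-1 -o jsonpath='{.spec.drivers[0].topologyKeys}'
```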

Now, we can create the StorageClass like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/devname
    values:
      - nvme
```
Here, the volumes will be provisioned only on the nodes which have the label “openebs.io/devname” set to “nvme”.
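
A claim against this class will then land only on the labeled nodes; a minimal sketch of such a PVC (the claim name is illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nvme-device-claim
spec:
  storageClassName: nvme-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```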
