Commit 9e654f4

Merge pull request #32 from gabemontero/readme-deploy-yaml

BUILD-261: readme restructure

openshift-merge-robot authored May 20, 2021
2 parents acd4a23 + 08db344

Showing 6 changed files with 499 additions and 453 deletions.
488 changes: 35 additions & 453 deletions README.md

Large diffs are not rendered by default.

40 changes: 40 additions & 0 deletions docs/content-update-details.md
# Details around pushing Secret and ConfigMap updates to provisioned Volumes

### Excluded OCP namespaces

The current list of namespaces excluded from the controller's watches:

- kube-system
- openshift-machine-api
- openshift-kube-apiserver
- openshift-kube-apiserver-operator
- openshift-kube-scheduler
- openshift-kube-controller-manager
- openshift-kube-controller-manager-operator
- openshift-kube-scheduler-operator
- openshift-console-operator
- openshift-controller-manager
- openshift-controller-manager-operator
- openshift-cloud-credential-operator
- openshift-authentication-operator
- openshift-service-ca
- openshift-kube-storage-version-migrator-operator
- openshift-config-operator
- openshift-etcd-operator
- openshift-apiserver-operator
- openshift-cluster-csi-drivers
- openshift-cluster-storage-operator
- openshift-cluster-version
- openshift-image-registry
- openshift-machine-config-operator
- openshift-sdn
- openshift-service-ca-operator

The list is not yet configurable, but most likely will become so as the project's lifecycle progresses.

Allowing updates to be disabled, or switching the system default to not process updates while still
letting users opt in, is also under consideration.

Lastly, the current ability to switch which Secret or ConfigMap a `Share` references, or even to switch
a `Share` between a ConfigMap and a Secret (and vice versa, of course), is under consideration, and may be removed during these
still early stages of this driver's lifecycle.
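
For illustration, the switching described above amounts to editing the `Share`'s reference to its backing object. A hedged sketch (the `apiVersion` and spec field names here are assumptions; consult the installed `shares.projectedresource.storage.openshift.io` CRD for the exact schema):

```yaml
# Hypothetical Share manifest; apiVersion and spec field names are illustrative.
apiVersion: projectedresource.storage.openshift.io/v1alpha1
kind: Share
metadata:
  name: my-share
spec:
  backingResource:
    kind: ConfigMap      # switching this (and name/namespace) changes what the Volume projects
    apiVersion: v1
    name: my-config
    namespace: my-namespace
```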
12 changes: 12 additions & 0 deletions docs/csi.md
# Current status with respect to the Kubernetes CSIVolumeSource API

So let's take each part of the [CSIVolumeSource](https://github.com/kubernetes/api/blob/71efbb18d63cd30604981514ac623a6be1d413bb/core/v1/types.go#L1743-L1771):

- for the `Driver` string field, it needs to be ["csi-driver-projected-resource.openshift.io"](https://github.com/openshift/csi-driver-projected-resource/blob/1fcc354faa31f624086265ea2228661a0fc2e7b1/pkg/client/client.go#L28).
- for the `VolumeAttributes` map, this driver currently adds the "share" key (which maps to the `Share` instance your `Pod` wants to use) in addition to the
elements of the `Pod` the kubelet stores when contacting the driver to provision the `Volume`. See [this list](https://github.com/openshift/csi-driver-projected-resource/blob/c3f1c454f92203f4b406dabe8dd460782cac1d03/pkg/hostpath/nodeserver.go#L37-L42).
- the `ReadOnly` field is ignored, as this driver's controller actively updates the `Volume` as the underlying `Secret` or `ConfigMap` changes, or as
the `Share` or the RBAC related to the `Share` changes. **NOTE:** we are looking at providing `ReadOnly` volume support in future updates.
- the `FSType` field is ignored. This driver by design only supports `tmpfs`, with a different mount performed for each `Volume`, in order to defer all SELinux concerns to the kubelet.
- the `NodePublishSecretRef` field is ignored. The CSI `NodePublishVolume` and `NodeUnpublishVolume` flows gate the permission evaluation required for the `Volume`
by performing `SubjectAccessReviews` against the referenced `Share` instance, using the `serviceAccount` of the `Pod` as the subject.
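
Putting those pieces together, a `Pod` volume declaration for this driver looks roughly like the following (names are placeholders taken from the examples elsewhere in these docs):

```yaml
# Sketch of a Pod consuming a Share via the CSIVolumeSource fields discussed above.
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  serviceAccountName: default    # the subject of the SubjectAccessReview against the Share
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: my-csi-volume
      mountPath: /data
  volumes:
  - name: my-csi-volume
    csi:
      driver: csi-driver-projected-resource.openshift.io
      volumeAttributes:
        share: my-share          # the Share instance this Pod wants to use
```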
97 changes: 97 additions & 0 deletions docs/faq.md
# Frequently Asked Questions

## What happens if the Share does not exist when you create a Pod that references it?

You'll see an event like:

```bash
$ oc get events
0s Warning FailedMount pod/my-csi-app MountVolume.SetUp failed for volume "my-csi-volume" : rpc error: code = InvalidArgument desc = the csi driver volumeAttribute 'share' reference had an error: share.projectedresource.storage.openshift.io "my-share" not found
$
```

And your Pod will never reach the Running state.

However, while the kubelet is still in its retry cycle trying to launch a Pod with a `Share` reference, if the missing `Share` is the only thing preventing a mount, the mount should succeed once the `Share` comes into existence.
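
As a hedged sketch of what creating that `Share` might look like (the `apiVersion` and spec field names are assumptions; check the installed CRD for the exact schema):

```yaml
# Hypothetical minimal Share; field names are illustrative.
apiVersion: projectedresource.storage.openshift.io/v1alpha1
kind: Share
metadata:
  name: my-share
spec:
  backingResource:
    kind: Secret
    apiVersion: v1
    name: my-secret
    namespace: my-namespace
```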

## What happens if the Share is removed after the pod starts?

The data will be removed from the location specified by `volumeMount` in the `Pod`. Instead of

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
total 312
-rw-r--r--. 1 root root 3243 Jan 29 17:59 4653723971430838710-key.pem
-rw-r--r--. 1 root root 311312 Jan 29 17:59 4653723971430838710.pem

```

You'll get

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 0
sh-4.4#

```

## What happens if the ClusterRole or ClusterRoleBinding are not present when your newly created Pod tries to access an existing Share?

```bash
$ oc get events
LAST SEEN TYPE REASON OBJECT MESSAGE
6s Normal Scheduled pod/my-csi-app Successfully assigned my-csi-app-namespace/my-csi-app to ip-10-0-136-162.us-west-2.compute.internal
2s Warning FailedMount pod/my-csi-app MountVolume.SetUp failed for volume "my-csi-volume" : rpc error: code = PermissionDenied desc = subjectaccessreviews share my-share podNamespace my-csi-app-namespace podName my-csi-app podSA default returned forbidden
$

```
And your Pod will never get to the Running state.
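
Granting access amounts to giving the `Pod`'s `serviceAccount` permission on the `Share`. A sketch of the needed RBAC, assuming the driver checks the `get` verb (the verb and role names here are illustrative):

```yaml
# Hypothetical RBAC granting the Pod's service account access to the Share.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-share-reader
rules:
- apiGroups: ["projectedresource.storage.openshift.io"]
  resources: ["shares"]
  resourceNames: ["my-share"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-share-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-share-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-csi-app-namespace
```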

## What happens if the Pod successfully mounts a Share, and later the permissions to access the Share are removed?

The data will be removed from the `Pod`'s `volumeMount` location.

Instead of

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 312
-rw-r--r--. 1 root root 3243 Jan 29 17:59 4653723971430838710-key.pem
-rw-r--r--. 1 root root 311312 Jan 29 17:59 4653723971430838710.pem
sh-4.4#

```

You'll get

```bash
$ oc rsh my-csi-app
sh-4.4# ls -lR /data
ls -lR /data
/data:
total 0
sh-4.4#
```

Do note that if your Pod copies the data to other locations, the Projected Resource driver cannot do anything about those copies. A big motivator for allowing
some customization of the directory and file structure under the `volumeMount` of the `Pod` is to help reduce the *need* to copy
files. Hopefully you can mount that data directly at its final, needed destination.

Also note that the Projected Resource driver does not try to reverse engineer which RoleBinding or ClusterRoleBinding allows your Pod to access the Share.
The Kubernetes and OpenShift libraries for this are not currently structured to be openly consumed by other components, and we did not entertain taking
snapshots of that code to serve such a purpose. So instead of listening to Role or RoleBinding changes, on the Projected Resource controller's re-list interval
(which is configurable via a startup argument on the command invoked from our DaemonSet, and whose default is 10 minutes), the controller re-executes
SubjectAccessReview requests for each Pod's reference to each `Share` and removes content if permission has been revoked. But as noted
in the potential feature list up top, we'll continue to periodically revisit whether there is a maintainable way of monitoring permission changes
in real time.

Conversely, while the kubelet is still in its retry cycle trying to launch a Pod with a `Share` reference, if now-resolved permission issues were the only thing preventing
a mount, the mount should then succeed. Because the kubelet's retry loop polls more frequently than the controller's re-list, the change takes effect more quickly in this case.
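
The permission check the controller re-executes is equivalent to a `SubjectAccessReview` along these lines (the `verb` is an assumption; the subject and resource names come from the examples above):

```yaml
# Illustrative SubjectAccessReview mirroring the driver's permission check.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:my-csi-app-namespace:default
  resourceAttributes:
    group: projectedresource.storage.openshift.io
    resource: shares
    name: my-share
    verb: get
```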
144 changes: 144 additions & 0 deletions docs/install.md
# Installing the Projected Resource CSI driver

## Before you begin

1. You must have an OpenShift cluster running 4.8 or later.

1. Grant `cluster-admin` permissions to the current user.

## Installing from a local clone of this repository (only developer preview level support)

1. Run the following command:

```bash
# change directories into your clone of this repository, then
./deploy/deploy.sh
```

You should see output similar to the following printed on the terminal, showing the creation or modification of the various
Kubernetes resources:

```shell
deploying hostpath components
./deploy/0000_10_projectedresource.crd.yaml
oc apply -f ./deploy/0000_10_projectedresource.crd.yaml
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
./deploy/00-namespace.yaml
oc apply -f ./deploy/00-namespace.yaml
namespace/csi-driver-projected-resource created
./deploy/01-service-account.yaml
oc apply -f ./deploy/01-service-account.yaml
serviceaccount/csi-driver-projected-resource-plugin created
./deploy/02-cluster-role.yaml
oc apply -f ./deploy/02-cluster-role.yaml
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
./deploy/03-cluster-role-binding.yaml
oc apply -f ./deploy/03-cluster-role-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged unchanged
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create unchanged
./deploy/csi-hostpath-driverinfo.yaml
oc apply -f ./deploy/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
./deploy/csi-hostpath-plugin.yaml
oc apply -f ./deploy/csi-hostpath-plugin.yaml
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
16:21:25 waiting for hostpath deployment to complete, attempt #0
```

## Installing from the master branch of this repository (only developer preview level support)

1. Run the following command:

```bash
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/00-namespace.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/0000_10_projectedresource.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/01-service-account.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/02-cluster-role.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/03-cluster-role-binding.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/csi-hostpath-driverinfo.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/master/deploy/csi-hostpath-plugin.yaml
```

You should see output similar to the following printed on the terminal, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```


## Installing from a release specific branch of this repository (only developer preview level support)

1. Run the following command:

```bash
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/00-namespace.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/0000_10_projectedresource.crd.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/01-service-account.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/02-cluster-role.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/03-cluster-role-binding.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/csi-hostpath-driverinfo.yaml
oc apply -f https://raw.githubusercontent.com/openshift/csi-driver-projected-resource/release-4.8/deploy/csi-hostpath-plugin.yaml
```

You should see output similar to the following printed on the terminal, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```


## Installing from the release page (only developer preview level support)

1. Run the following command:

```bash
oc apply -f https://github.com/openshift/csi-driver-projected-resource/releases/download/v0.1.0/release.yaml
```

You should see output similar to the following printed on the terminal, showing the creation or modification of the various
Kubernetes resources:

```shell
namespace/csi-driver-projected-resource created
customresourcedefinition.apiextensions.k8s.io/shares.projectedresource.storage.openshift.io created
serviceaccount/csi-driver-projected-resource-plugin created
clusterrole.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-privileged created
clusterrolebinding.rbac.authorization.k8s.io/projected-resource-secret-configmap-share-watch-sar-create created
csidriver.storage.k8s.io/csi-driver-projected-resource.openshift.io created
service/csi-hostpathplugin created
daemonset.apps/csi-hostpathplugin created
```


## Validate the installation

First, let's validate the deployment. Ensure all expected pods are running for the driver plugin; on a
3-node OCP cluster this will look something like:

```shell
$ oc get pods -n csi-driver-projected-resource
NAME READY STATUS RESTARTS AGE
csi-hostpathplugin-c7bbk 2/2 Running 0 23m
csi-hostpathplugin-m4smv 2/2 Running 0 23m
csi-hostpathplugin-x9xjw 2/2 Running 0 23m
```