Refactor documentation #800
Conversation
Quite a few nitpicky suggestions to improve grammar and clarity.
A couple of questions on some points that led me to some possibly confusing assumptions; I think some adjustments or additional info may help resolve them.
docs/architecture.md
Outdated
> \* `Legacy node server`
>
> ## Controller
> Controller runs as `Deployment` Pods named `controller` which is three replicas in any Kubernetes nodes not limiting to where node server is running. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers
```diff
-Controller runs as `Deployment` Pods named `controller` which is three replicas in any Kubernetes nodes not limiting to where node server is running. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers
+The Controller runs as `Deployment` Pods named `controller`, which are three replicas located in any Kubernetes nodes not limiting to where node server is running. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers:
```
I'm not really sure what `not limiting to where node server is running` means. Is this saying that the controller does not run in a pod running the node server, or something else? If that's true, then maybe: `... in any Kubernetes node where the node server is not running.`
`controller` pods may or may not run on nodes where `node server` pods are running. We have control over where `node server` pods run, but not over `controller` pods.
```diff
-Controller runs as `Deployment` Pods named `controller` which is three replicas in any Kubernetes nodes not limiting to where node server is running. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers
+The Controller runs as `Deployment` Pods named `controller`, which are three replicas located in any Kubernetes nodes, potentially including a pod with a node server running. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers:
```
Actually, I trimmed it to remove the `node-server` reference here:

> The Controller runs as `Deployment` Pods named `controller`, which are three replicas located in any Kubernetes nodes. In the three replicas, one instance is elected to serve requests. Each pod contains below running containers:

Does it look good?
> #### 3. Initialize the drives
>
> ## DirectPV CSI driver installation
> Before starting the installation, it is required to have DirectPV plugin installed on your system. For plugin installation refer [this documentation](#directpv-plugin-installation). If you are not using `krew`, replace `kubectl directpv` by `kubectl-directpv` in below steps.
If the plugin is not installed via krew, we must download the binary and move it to `/usr/local/bin/`; this will enable the plugin for `kubectl` commands. Can we mention that here?
I don't think we need to specify where to keep the binary. Each OS works differently. If a user prefers not to use Krew but wants to have a `directpv` subcommand for `kubectl`, they know how to make it happen. This is out of scope for this document.
> - quay.io/minio/directpv:latest
> * If `seccomp` is enabled, load [DirectPV seccomp profile](../seccomp.json) on nodes where you want to install DirectPV and use `--seccomp-profile` flag to `kubectl directpv install` command. For more information, refer Kubernetes documentation [here](https://kubernetes.io/docs/tutorials/clusters/seccomp/)
> * If `apparmor` is enabled, load [DirectPV apparmor profile](../apparmor.profile) on nodes where you want to install DirectPV and use `--apparmor-profile` flag to `kubectl directpv install` command. For more information, refer to the [Kubernetes documentation](https://kubernetes.io/docs/tutorials/clusters/apparmor/).
> * Enabled `ExpandCSIVolumes` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) for [volume expansion](https://kubernetes-csi.github.io/docs/volume-expansion.html) feature.
This feature gate is enabled by default in Kubernetes v1.16 and above. Can we also specify this?
I don't think we need to specify any Kubernetes version here. If this feature gate is disabled, whether by default or by the admin, it is required to be enabled. That is why it is listed under Prerequisites.
> #### Installing by generating DirectPV manifests
> To install using generated manifests file, run below command
>
> ```sh
> $ curl -sfL https://github.com/minio/directpv/raw/master/docs/tools/install.sh | sh - apply
> ```
Can we use `kubectl directpv install -o yaml` for this? Then we don't need to maintain `install.sh` for every update.
Technically, `kubectl-directpv install -o yaml` is not really usable in all environments. We should discourage its usage and promote `install.sh`, which does a better job for fresh installs as well as upgrades.
docs/drive-management.md
Outdated
> ## Add drives
> Drives are added to DirectPV to provision volumes. This involves a two step process as shown below.
>
> 1. Run `discover` command - `discover` command probes eligible drives from DirectPV nodes and stores drive information in a YAML file. You should carefully examine the YAML file and set the `select` field to `yes` or `no` value to indicate drive selection. `select` field is set to `yes` value by default. Below is an example of the `discover` command:
Can we also mention the `--drives` and `--nodes` selector flags here? Users can optionally review `drives.yaml` if they want. Instead, the table that we show would be useful for the review here.
Actually, these flags are explained in the command reference guide. However, reviewing `drives.yaml` must be done by the user even when these flags are used, because the drive state might change at the time of fetching the information.
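The review step being insisted on could look like this in practice. Note this is a hedged sketch: the `drives.yaml` layout below is an assumed illustration, not the exact schema `kubectl directpv discover` emits.

```sh
# ASSUMPTION: fabricated drives.yaml fields for illustration only;
# the real file produced by `kubectl directpv discover` may differ.
cat > drives.yaml <<'EOF'
node: node1
drives:
  - name: sdb
    select: "yes"
  - name: sdc
    select: "yes"
EOF

# Review the file and deselect 'sdc' before running
# `kubectl directpv init drives.yaml` (which wipes selected drives).
sed -i '/name: sdc/{n;s/select: "yes"/select: "no"/}' drives.yaml

grep -A1 'name: sdc' drives.yaml
```

Because every drive defaults to `select: "yes"`, an unreviewed file would hand all discovered drives to `init`; flipping the field to `"no"` is the only safeguard against wiping the wrong drive.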
> The `init` command creates a request to add the selected drives in the YAML file generated using the `discover` command. As this process wipes out all data on the selected drives, wrong drive selection will lead to permanent data loss. Below is an example of `init` command:
>
> ```sh
> $ kubectl directpv init drives.yaml
> ```
```diff
-$ kubectl directpv init drives.yaml
+$ kubectl directpv init drives.yaml --dangerous
```
We should not expose the `--dangerous` flag anywhere. Copy/pasting such a reference could lead to data loss.
> Refer to the [list drives command](./command-reference.md#drives-command) for more information.
>
> ## Label drives
> Drives are labeled to set custom tagging which can be used in volume provisioning. Below is an example:
Can we also specify how to control scheduling using labels here? Something like https://github.com/minio/directpv/blob/master/docs/scheduling.md#volume-scheduling-based-on-drive-labels
It is covered in `volume-scheduling.md#customizing-drive-selection`.
> ## Remove drives
> Drives that do not contain any volumes can be removed. Below is an example:
Can we also specify what this command does? That it releases the drive by unmounting it.
That is internal; I don't think we need to specify it here.
> Replace a faulty drive with a new drive on a same node. In this process, all volumes in the faulty drive are moved to the new drive then faulty drive is removed from DirectPV. Currently, DirectPV does not support moving data on the volume to the new drive. Use [replace.sh](./tools/replace.sh) script to perform drive replacement. Below is an example:
>
> ```sh
> # Replace 'sdd' drive by 'sdf' drive on 'node1' node
> $ replace.sh sdd sdf node1
> ```
Can we provide a step-by-step guide as comments in the script? Customers can use this in case the script doesn't work.
I added those steps initially and removed them later. As the commands have to be run conditionally, there is a high chance of copy/paste error. If the script doesn't work, the steps also won't work. If someone wants to understand what steps are carried out, reading the `replace.sh` script is quite straightforward.
> ***CAUTION: THIS IS DANGEROUS OPERATION WHICH LEADS TO DATA LOSS***
>
> Volume can be deleted only if it is in `Ready` state (that is, no pod is using it). Run the `kubectl delete pvc` command which triggers DirectPV volume deletion. As removing a volume leads to data loss, double check what volume you are deleting. Below is an example:
Can we also explain the `kubectl directpv clean` command, which can be used to clean stale volumes, i.e. volumes of pods which were force deleted?
I will add another section, `## Clean stale volumes`.
docs/troubleshooting.md
Outdated
> @@ -0,0 +1,67 @@
> # Frequently Asked Questions
Can we isolate FAQs from troubleshooting?
Actually, this is not an FAQ. We would need a better title.
Maybe `Common Issues`? Though when I copied this over to the Hugo docs, I left it as FAQs.
Let me rename the file as `faq.md`.
Signed-off-by: Bala.FA <[email protected]>
Co-authored-by: Daryl White <[email protected]>
We need to update a few links in the README page. LGTM otherwise.
I don't have any additional feedback other than a possible alternative title for the FAQ page.
LGTM
No description provided.