From 39a47a5725c410a51758c451037db47092453f78 Mon Sep 17 00:00:00 2001
From: Kevin Klues
Date: Tue, 11 Aug 2020 10:29:28 +0000
Subject: [PATCH] Update README.md and RELEASE.md with v0.7.0-rc.5 tag

Signed-off-by: Kevin Klues
---
 README.md  | 34 +++++++++++++++++++---------------
 RELEASE.md |  2 +-
 2 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index 813164c43..858bc12f4 100644
--- a/README.md
+++ b/README.md
@@ -82,7 +82,7 @@ Once you have configured the options above on all the GPU nodes in your
 cluster, you can enable GPU support by deploying the following Daemonset:
 
 ```shell
-$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.0-rc.4/nvidia-device-plugin.yml
+$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.0-rc.5/nvidia-device-plugin.yml
 ```
 
 **Note:** This is a simple static daemonset meant to demonstrate the basic
@@ -123,7 +123,7 @@ The preferred method to deploy the device plugin is as a daemonset using `helm`.
 Instructions for installing `helm` can be found
 [here](https://helm.sh/docs/intro/install/).
 
-The `helm` chart for the latest release of the plugin (`v0.7.0-rc.4`) includes
+The `helm` chart for the latest release of the plugin (`v0.7.0-rc.5`) includes
 a number of customizable values. The most commonly overridden ones are:
 
 ```
@@ -176,7 +176,7 @@ that you can set in your pod spec to get access to a specific MIG device.
 Please take a look in the following `values.yaml` file to see the full set of
 overridable parameters for the device plugin.
 
-* https://github.com/NVIDIA/k8s-device-plugin/blob/v0.7.0-rc.4/deployments/helm/nvidia-device-plugin/values.yaml
+* https://github.com/NVIDIA/k8s-device-plugin/blob/v0.7.0-rc.5/deployments/helm/nvidia-device-plugin/values.yaml
 
 #### Installing via `helm install`from the `nvidia-device-plugin` `helm` repository
 
@@ -199,7 +199,7 @@ plugin with the various flags from above.
 Using the default values for the flags:
 ```shell
 $ helm install \
-    --version=0.7.0-rc.4 \
+    --version=0.7.0-rc.5 \
     --generate-name \
     nvdp/nvidia-device-plugin
 ```
@@ -208,7 +208,7 @@ Enabling compatibility with the `CPUManager` and running with a request for
 100ms of CPU time and a limit of 512MB of memory.
 ```shell
 $ helm install \
-    --version=0.7.0-rc.4 \
+    --version=0.7.0-rc.5 \
     --generate-name \
     --set compatWithCPUManager=true \
     --set resources.requests.cpu=100m \
@@ -219,7 +219,7 @@ $ helm install \
 Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
 ```shell
 $ helm install \
-    --version=0.7.0-rc.4 \
+    --version=0.7.0-rc.5 \
     --generate-name \
     --set legacyDaemonsetAPI=true \
     nvdp/nvidia-device-plugin
@@ -228,7 +228,7 @@ $ helm install \
 Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
 ```shell
 $ helm install \
-    --version=0.7.0-rc.4 \
+    --version=0.7.0-rc.5 \
     --generate-name \
     --set compatWithCPUManager=true \
     --set migStrategy=mixed \
@@ -246,7 +246,7 @@ Using the default values for the flags:
 ```shell
 $ helm install \
     --generate-name \
-    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.4.tgz
+    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
 ```
 
 Enabling compatibility with the `CPUManager` and running with a request for
@@ -257,7 +257,7 @@ $ helm install \
     --set compatWithCPUManager=true \
     --set resources.requests.cpu=100m \
     --set resources.limits.memory=512Mi \
-    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.4.tgz
+    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
 ```
 
 Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
@@ -265,7 +265,7 @@ Use the legacy Daemonset API (only available on Kubernetes < `v1.16`):
 $ helm install \
     --generate-name \
     --set legacyDaemonsetAPI=true \
-    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.4.tgz
+    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
 ```
 
 Enabling compatibility with the `CPUManager` and the `mixed` `migStrategy`
@@ -274,14 +274,14 @@ $ helm install \
     --generate-name \
     --set compatWithCPUManager=true \
     --set migStrategy=mixed \
-    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.4.tgz
+    https://nvidia.github.com/k8s-device-plugin/stable/nvidia-device-plugin-0.7.0-rc.5.tgz
 ```
 
 ## Building and Running Locally
 
 The next sections are focused on building the device plugin locally and running it.
 It is intended purely for development and testing, and not required by most users.
-It assumes you are pinning to the latest release tag (i.e. `v0.7.0-rc.4`), but can
+It assumes you are pinning to the latest release tag (i.e. `v0.7.0-rc.5`), but can
 easily be modified to work with any available tag or branch.
 
 ### With Docker
@@ -289,8 +289,8 @@ easily be modified to work with any available tag or branch.
 #### Build
 Option 1, pull the prebuilt image from [Docker Hub](https://hub.docker.com/r/nvidia/k8s-device-plugin):
 ```shell
-$ docker pull nvidia/k8s-device-plugin:v0.7.0-rc.4
-$ docker tag nvidia/k8s-device-plugin:v0.7.0-rc.4 nvidia/k8s-device-plugin:devel
+$ docker pull nvidia/k8s-device-plugin:v0.7.0-rc.5
+$ docker tag nvidia/k8s-device-plugin:v0.7.0-rc.5 nvidia/k8s-device-plugin:devel
 ```
 
 Option 2, build without cloning the repository:
@@ -298,7 +298,7 @@ Option 2, build without cloning the repository:
 $ docker build \
     -t nvidia/k8s-device-plugin:devel \
     -f docker/amd64/Dockerfile.ubuntu16.04 \
-    https://github.com/NVIDIA/k8s-device-plugin.git#v0.7.0-rc.4
+    https://github.com/NVIDIA/k8s-device-plugin.git#v0.7.0-rc.5
 ```
 
 Option 3, if you want to modify the code:
@@ -352,6 +352,10 @@ $ ./k8s-device-plugin --pass-device-specs
 
 ## Changelog
 
+### Version v0.7.0-rc.5
+
+- Add deviceListStrategyFlag to allow device list passing as volume mounts
+
 ### Version v0.7.0-rc.4
 
 - Allow one to override selector.matchLabels in the helm chart
diff --git a/RELEASE.md b/RELEASE.md
index 8ff3ce6f5..882aaabc3 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -8,7 +8,7 @@ Publishing the container is automated through gitlab-ci and only requires one to
 Publishing the helm chart is currently manual, and we should move to an automated process ASAP
 
 # Release Process Checklist
-- [ ] Update the README to change occurances of the old version (e.g: `v0.7.0-rc.4`) with the new version
+- [ ] Update the README to change occurances of the old version (e.g: `v0.7.0-rc.5`) with the new version
 - [ ] Update the README changelog
 - [ ] Commit, Tag and Push to Gitlab
 - [ ] Build a new helm package with `helm package ./deployments/helm/nvidia-device-plugin`
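
For anyone applying this bump, a quick way to sanity-check it end to end is to install the chart at the new tag and confirm which image the daemonset pods actually pull. This is only a sketch: it assumes a working cluster, `helm` v3, and that the `nvdp` repository has already been added as shown in the hunks above; the `grep` filters and the `-A` namespace flag are conveniences, not part of this patch.

```shell
# Install the chart at the new release candidate; --generate-name avoids
# choosing a release name by hand.
$ helm repo update
$ helm install \
    --version=0.7.0-rc.5 \
    --generate-name \
    nvdp/nvidia-device-plugin

# Confirm the daemonset pods are running the v0.7.0-rc.5 image. The chart
# installs into the current namespace, so adjust -A / -n as needed.
$ kubectl get pods -A | grep nvidia-device-plugin
$ kubectl get daemonsets -A -o wide | grep k8s-device-plugin
```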
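
Once the updated plugin is running, the simplest functional check is a throwaway pod that requests `nvidia.com/gpu` and prints `nvidia-smi`, in the spirit of the GPU job example earlier in the README. The pod name and CUDA image tag below are illustrative placeholders, not something this patch adds.

```shell
# Schedule a one-shot pod that asks the device plugin for a single GPU.
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:10.0-base
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
EOF

# The logs should show the familiar nvidia-smi table once the pod completes.
$ kubectl logs gpu-smoke-test
$ kubectl delete pod gpu-smoke-test
```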