helm: Add missing RBAC for nodes to cephfs chart #5126
Conversation
Signed-off-by: Ondrej Vasko <[email protected]>
lgtm, these permissions are also listed in deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml
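For reference, the rule in question looks roughly like this (paraphrased from the non-helm deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml; the exact verbs in the PR may differ):

```yaml
# Excerpt of a ClusterRole rule granting the provisioner read access to nodes
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```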
I think something else is missing here. If these RBACs are missing, how is CI passing for the helm charts? @nixpanic @iPraveenParihar any idea?
That one is not logging anything useful. I tried to label/unlabel the PVC and also recreate it (same for the other 3 provisioner pods I have).
AFAIK, the CephFS provisioner doesn't require node resource access. Let me try it on my machine.
Using
It was added in PR #3460 here, but I'm not sure why it was added. I don't find any requirement for it 😕.
I wondered about that as well. Possibly minikube does not enforce these RBACs?
@nixpanic, I found rook/rook#11697 by @Madhu-1 and verified it.
IMO it is not related to minikube; it could be related to the external-provisioner version. @Lirt, what is the external-provisioner version in your cluster? Can you also paste the YAML output of the CephFS deployment?
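A quick way to check both things (the namespace, release, and ServiceAccount names below are placeholders; adjust them to your deployment):

```shell
# Show the external-provisioner image/version used by the CephFS provisioner deployment
kubectl -n <namespace> get deployment <release>-ceph-csi-cephfs-provisioner \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-provisioner")].image}'

# Check whether the provisioner ServiceAccount is currently allowed to list nodes
kubectl auth can-i list nodes \
  --as=system:serviceaccount:<namespace>:<release>-ceph-csi-cephfs-provisioner
```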
@Mergifyio queue
✅ The pull request has been merged automatically at 72b9d5a
/test ci/centos/upgrade-tests-cephfs
/test ci/centos/upgrade-tests-rbd
/test ci/centos/k8s-e2e-external-storage/1.32
/test ci/centos/k8s-e2e-external-storage/1.31
/test ci/centos/mini-e2e-helm/k8s-1.32
/test ci/centos/k8s-e2e-external-storage/1.30
/test ci/centos/mini-e2e-helm/k8s-1.31
/test ci/centos/mini-e2e/k8s-1.32
/test ci/centos/mini-e2e-helm/k8s-1.30
/test ci/centos/mini-e2e/k8s-1.31
/test ci/centos/mini-e2e/k8s-1.30
We need to cover this in our E2E tests as well :)
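A minimal check along these lines could be added to CI as a sketch (the ClusterRole name is a placeholder and would follow the chart's naming):

```shell
# Fail if the helm-deployed provisioner ClusterRole has no rule covering nodes
kubectl get clusterrole <release>-ceph-csi-cephfs-provisioner \
  -o jsonpath='{range .rules[*]}{.resources}{"\n"}{end}' | grep -q nodes \
  || { echo "missing nodes RBAC in provisioner ClusterRole"; exit 1; }
```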
I used all default tags from helm chart 3.13.0 (unless I made a mistake in values.yaml). Here is the deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ceph-csi-fs-ceph-csi-cephfs-provisioner
namespace: storage
spec:
replicas: 3
selector:
matchLabels:
app: ceph-csi-cephfs
component: provisioner
release: ceph-csi-fs
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 50%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: ceph-csi-cephfs
chart: ceph-csi-cephfs-3.13.0
component: provisioner
heritage: Helm
release: ceph-csi-fs
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- ceph-csi-cephfs
- key: component
operator: In
values:
- provisioner
topologyKey: kubernetes.io/hostname
containers:
- args:
- --nodeid=$(NODE_ID)
- --type=cephfs
- --controllerserver=true
- --pidlimit=-1
- --endpoint=$(CSI_ENDPOINT)
- --v=4
- --drivername=$(DRIVER_NAME)
- --setmetadata=true
- --logslowopinterval=30s
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: DRIVER_NAME
value: cephfs.csi.ceph.com
- name: NODE_ID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
image: artifactory.devops.telekom.de/quay.io/cephcsi/cephcsi:v3.13.0
imagePullPolicy: IfNotPresent
name: csi-cephfsplugin
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: socket-dir
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /dev
name: host-dev
- mountPath: /etc/ceph/
name: ceph-config
- mountPath: /etc/ceph-csi-config/
name: ceph-csi-config
- mountPath: /tmp/csi/keys
name: keys-tmp-dir
- args:
- --csi-address=$(ADDRESS)
- --v=1
- --timeout=60s
- --leader-election=true
- --retry-interval-start=500ms
- --extra-create-metadata=true
- --feature-gates=HonorPVReclaimPolicy=true
- --prevent-volume-mode-conversion=true
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
image: artifactory.devops.telekom.de/registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
imagePullPolicy: IfNotPresent
name: csi-provisioner
resources:
limits:
cpu: 250m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: socket-dir
- args:
- --csi-address=$(ADDRESS)
- --v=1
- --timeout=60s
- --leader-election=true
- --extra-create-metadata=true
- --enable-volume-group-snapshots=false
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
image: artifactory.devops.telekom.de/registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
imagePullPolicy: IfNotPresent
name: csi-snapshotter
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: socket-dir
- args:
- --v=1
- --csi-address=$(ADDRESS)
- --timeout=60s
- --leader-election
- --retry-interval-start=500ms
- --handle-volume-inuse-error=false
- --feature-gates=RecoverVolumeExpansionFailure=true
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
image: artifactory.devops.telekom.de/registry.k8s.io/sig-storage/csi-resizer:v1.11.1
imagePullPolicy: IfNotPresent
name: csi-resizer
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: socket-dir
- args:
- --type=liveness
- --endpoint=$(CSI_ENDPOINT)
- --metricsport=8080
- --metricspath=/metrics
- --polltime=60s
- --timeout=3s
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
image: artifactory.devops.telekom.de/quay.io/cephcsi/cephcsi:v3.13.0
imagePullPolicy: IfNotPresent
name: liveness-prometheus
ports:
- containerPort: 8080
name: metrics
protocol: TCP
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: socket-dir
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/control-plane: ""
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ceph-csi-fs-ceph-csi-cephfs-provisioner
serviceAccountName: ceph-csi-fs-ceph-csi-cephfs-provisioner
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
volumes:
- emptyDir:
medium: Memory
name: socket-dir
- hostPath:
path: /sys
type: ""
name: host-sys
- hostPath:
path: /lib/modules
type: ""
name: lib-modules
- hostPath:
path: /dev
type: ""
name: host-dev
- configMap:
defaultMode: 420
name: ceph-config-cephfs
name: ceph-config
- configMap:
defaultMode: 420
name: ceph-csi-config-cephfs
name: ceph-csi-config
- emptyDir:
medium: Memory
        name: keys-tmp-dir
```

Storage Class:

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard-rwx-retain
parameters:
clusterID: ...
csi.storage.k8s.io/controller-expand-secret-name: ceph-rwx-pool-01
csi.storage.k8s.io/controller-expand-secret-namespace: storage-namespace
csi.storage.k8s.io/node-stage-secret-name: ceph-rwx-pool-01
csi.storage.k8s.io/node-stage-secret-namespace: storage-namespace
csi.storage.k8s.io/provisioner-secret-name: ceph-rwx-pool-01
csi.storage.k8s.io/provisioner-secret-namespace: storage-namespace
fsName: ...
pool: ...
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```
Fixes: #5125
All information is included in the linked issue.

Checklist:

- Commit titles and messages follow guidelines in the developer guide.
- Reviewed the developer guide on Submitting a Pull Request.
- Pending release notes updated with breaking and/or notable changes for the next major release.