custom SecurityContextConstraint to run sig-storage/local-volume-provisioner in a DaemonSet? #447

Open
jonasbartho opened this issue May 14, 2024 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@jonasbartho

Hi,

Is there any documentation on which custom SecurityContextConstraint can be used to run registry.k8s.io/sig-storage/local-volume-provisioner in a DaemonSet without "privileged: true"? That is no longer allowed in OKD/OpenShift.

Proper documentation around this would be very useful. (Using an operator is not possible here, by the way.)

@jonasbartho (Author) commented May 14, 2024

I am using the following SCC, which seems to work for now, but reading the volume fails with "permission denied":

scc.yaml:

allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
  - CHOWN
  - FSETID
  - SETGID
  - SETUID
  - NET_BIND_SERVICE
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
  name: scc-local-provisioner
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - KILL
  - DAC_OVERRIDE
  - FOWNER
  - SETPCAP
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
  - runtime/default
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - projected
  - secret

clusterrole.yaml:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:scc-local-provisioner
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - scc-local-provisioner
  resources:
  - securitycontextconstraints
  verbs:
  - use

rolebinding_scc.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:scc-local-provisioner
  namespace: kube-system  # a RoleBinding is namespaced; it must live in the pods' namespace for SCC admission to see it
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:scc-local-provisioner
subjects:
  - kind: ServiceAccount
    name: local-static-provisioner
    namespace: kube-system
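
With the RoleBinding in the provisioner's namespace, one quick sanity check is to read the openshift.io/scc annotation on a running pod, which records the SCC that actually admitted it (the pod name below is a placeholder, not taken from this thread):

# print the SCC that admitted the pod (pod name is a placeholder)
oc -n kube-system get pod local-static-provisioner-xxxxx \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'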

oc logs -f local-static-provisioner:

E0514 17:31:15.318990       1 discovery.go:221] Failed to discover local volumes: error reading directory: open /mnt/local-disks: permission denied
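
On OpenShift, "permission denied" on a hostPath read is often SELinux labeling rather than file ownership or mode, so it may be worth checking the label on the node first. This is a guess based on the error above, not something confirmed in this thread:

# on the node that holds the disks
ls -ldZ /mnt/local-disks                            # check owner, mode, and SELinux label
sudo chcon -R -t container_file_t /mnt/local-disks  # relabel so containers can read it, if needed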

@niranjandarshann

@jonasbartho What I noticed in scc.yaml is that your current SCC lacks certain capabilities that might be required. Specifically, the SYS_ADMIN capability is often needed for operations involving hostPath volumes.
So try:

allowedCapabilities:
  - CHOWN
  - FSETID
  - SETGID
  - SETUID
  - NET_BIND_SERVICE
  - SYS_ADMIN 

and check whether it runs fine or not.
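
Note that allowedCapabilities in an SCC only permits a capability; the DaemonSet's container still has to request it explicitly. A minimal sketch of the container-level securityContext (assumed, since the actual DaemonSet manifest isn't shown in this thread):

# excerpt of the provisioner container spec in the DaemonSet
securityContext:
  capabilities:
    add:
      - SYS_ADMIN  # admitted only if the SCC lists it under allowedCapabilities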

@niranjandarshann

One more thing I want to ask: have you given appropriate permissions on the directory /mnt/local-disks so that the container can access it?

@niranjandarshann

If you forgot to give appropriate permissions on the directory /mnt/local-disks, you can run

sudo chmod -R 777 /mnt/local-disks

to allow the container to access it.
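
chmod 777 is the bluntest option; a narrower mode scoped to the user the provisioner actually runs as would be safer. Either way, access can be verified from inside a running pod (pod name again a placeholder):

# list the discovery directory as the container's own user
oc -n kube-system exec local-static-provisioner-xxxxx -- ls -la /mnt/local-disks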

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 12, 2024