
Issue with MooseFS CSI Provisioner Creating PV Directories as root:root #17

Open
talkraghu opened this issue Feb 6, 2025 · 3 comments


@talkraghu

Context:
I am using the MooseFS CSI provisioner in my Kubernetes cluster to dynamically create PersistentVolumes (PVs).

The version I am using is: https://github.com/moosefs/moosefs-csi/blob/v0.9.7

However, the directories created by the provisioner on the MooseFS mount are always owned by root:root.

This causes an issue where pods running with a non-root user (UID 1000) cannot write to the mounted PV, resulting in permission errors.

Problem Details
The MooseFS CSI driver is provisioning PersistentVolumes (PVs) successfully.
However, the created directories inside the MooseFS mount are owned by root:root.
The pods attempting to use the PV run as UID 1000, so they do not have write access.

[root@bigdaddy-k8sc-node1-5 volumes]# ls -lrt /opt/nsp/moosefs/client/pv_data/volumes
total 3912
drwxrwx---. 2 root root      1 Feb  6 15:39 pvc-0decea91-2b3c-417f-b76c-f0573358c27c
drwxrwx---. 3 root root      1 Feb  6 16:06 pvc-74a46149-d69b-4a43-ac09-0ea4535f3eef
drwxrwx---. 2 root root      1 Feb  6 16:20 pvc-89ea7061-9d67-46ca-afb1-ec75304dde00
drwxr-xr-x. 2 root root       1 Feb  6 17:18 pvc-9f4f5973-51c9-4cbd-b678-5dac37fd5791
[root@bigdaddy-k8sc-node1-5 volumes]# 

What I've Tried
Setting fsGroup in the Pod's SecurityContext

Added this to the pod.spec:

securityContext:
  fsGroup: 1000

Issue: This does not seem to propagate ownership changes inside MooseFS.
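
For reference, whether the kubelet applies fsGroup to a CSI volume at all is governed by the driver's CSIDriver object: the recursive ownership change only happens when spec.fsGroupPolicy permits it (the default, ReadWriteOnceWithFSType, skips shared RWX volumes). A sketch of what that object could look like — the driver name csi.moosefs.com and the other spec fields here are assumptions, not taken from moosefs-csi; check `kubectl get csidriver` for the real name:

```yaml
# Sketch only -- driver name and spec fields are assumptions, verify
# against the deployed driver with `kubectl get csidriver -o yaml`.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.moosefs.com   # hypothetical name
spec:
  attachRequired: false
  # "File" tells the kubelet to always apply the pod's fsGroup
  # recursively on mount; the default "ReadWriteOnceWithFSType"
  # skips volumes without an fstype or with RWX access modes.
  fsGroupPolicy: File
```

Even with this set, the mounted filesystem still has to honor chown/chgrp for the ownership change to stick.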

Tried setting gid=1000 as a mount option in csi-moosefs-config.yaml (ConfigMap).

Issue: this mount option was rejected.

Help me figure out how I can set the PV directory ownership to "root:1000".

@xandrus
Member

xandrus commented Feb 10, 2025

Hi!
I suspect you are looking for a Kubernetes feature called an initContainer.

K8s provides the option to use an init container declaration to execute commands before the main application container starts. With this solution, the UID and GID of the folder can easily be modified:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

So for example:

kind: Pod
apiVersion: v1
metadata:
  name: my-moosefs-pod
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data" 
          name: moosefs-volume
      command: [ "sleep", "1000000" ]
  initContainers:
    - name: volume-mount-chown
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - mountPath: "/data" 
          name: moosefs-volume
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: my-moosefs-pvc

There is also an option to set MooseFS's extra noowner attribute on a specific folder.
This option is rather a temporary workaround, as it is unsafe!

mfsseteattr -f noowner /mnt/k8s/data/dir

@talkraghu
Author

Thank you @xandrus for the suggestions. I had to execute mfsseteattr and then run the pod with an initContainer to alter the PV directory.

  1. Set up an MFS client mount on one of the nodes, then set the noowner attribute on /opt/nsp/moosefs/client:
mfsmount /opt/nsp/moosefs/client -H 100.120.119.210
mfsseteattr -f noowner /opt/nsp/moosefs/client

[root@test-k8sc-node1-4 dynamic-provisioning]# mfsgeteattr /opt/nsp/moosefs/client
/opt/nsp/moosefs/client: noowner
[root@test-k8sc-node1-4 dynamic-provisioning]# 

  2. Run an initContainer for the pod.
    The namespaces where the pods are deployed cannot run containers as the root user. Below is the short initContainer YAML that I had to deploy:
  initContainers:
    - name: volume-mount-chown
      image: blr-orbw-artifactory.in.alcatel-lucent.com:8081/orbw-artifactory-docker-mirror/busybox:1.37.0
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
      command: ["sh", "-c", "chgrp 1000 /data"]
      volumeMounts:
        - mountPath: "/data"
          name: moosefs-volume

@xandrus
Member

xandrus commented Feb 13, 2025

I was wondering - maybe in your case it would be enough to change the maproot export parameter in the /etc/mfs/mfsexports.cfg file to solve this issue.
For example:

* / rw,admin,maproot=0:1000,alldirs
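
For context, one reading of the exports format (worth verifying against the mfsexports.cfg man page): maproot=UID:GID remaps operations performed as root on client mounts to the given UID and GID, so directories the provisioner creates as root would come out as root:1000 here:

```
# /etc/mfs/mfsexports.cfg -- sketch, verify syntax against the man page
# <clients> <path> <options>
# maproot=0:1000 keeps root's UID 0 but maps its primary group to
# GID 1000, so new PV directories end up owned root:1000.
*  /  rw,admin,maproot=0:1000,alldirs
```

The master has to re-read the file for the change to take effect (e.g. a reload of mfsmaster on the master host — check your deployment's docs for the exact step).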
