Describe the bug:
I installed the OpenEBS LocalPV provisioner via the Helm chart with XFS quota enabled:
xfsQuota:
  # If true, enables XFS project quota
  enabled: true
  # Detailed configuration options for XFS project quota.
  # If XFS Quota is enabled with the default values, the usage limit
  # is set at the storage capacity specified in the PVC.
  softLimitGrace: "60%"
  hardLimitGrace: "90%"
Then I created a PVC with 10Gi and ran a busybox container with it. However, I found that OpenEBS set wrong (larger) soft and hard limits for this PVC.
root@stonetest:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
busybox-test Bound pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 10Gi RWO openebs-hostpath 4h51m
root@stonetest:~# kubectl get pv pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 -o yaml | grep path
openebs.io/cas-type: local-hostpath
path: /openebs/local/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
storageClassName: openebs-hostpath
root@stonetest:~# mount | grep 45a4
/dev/vdc on /var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
root@stonetest:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
vdc xfs f993bbb1-d875-4436-ab4d-d7275b2c719c 30.6G 39% /var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c
/var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2
/openebs
root@stonetest:~# xfs_quota -x
xfs_quota> print
Filesystem Pathname
/openebs /dev/vdc (pquota)
/var/lib/kubelet/pods/6f140003-770a-408c-a281-3b1e1faecf0c/volumes/kubernetes.io~local-volume/pvc-45a4e4f7-8117-4725-a17b-e3446da4b7a2 /dev/vdc (pquota)
/var/lib/kubelet/pods/93ae9893-0374-4197-9084-02f64d8aaba6/volumes/kubernetes.io~local-volume/pvc-e31459f2-67e5-44fb-bd2f-c4b7f1bc9f9c /dev/vdc (pquota)
xfs_quota> report
Project quota on /openebs (/dev/vdc)
Blocks
Project ID Used Soft Hard Warn/Grace
---------- --------------------------------------------------
#0 0 0 0 00 [0 days]
#1 0 17179872 20401096 00 [--------]
#2 0 17179872 20401096 00 [--------]
The OpenEBS provisioner set the soft limit to 17179872 KB and the hard limit to 20401096 KB, which exceed the PVC's capacity (10Gi); I think this is wrong.
The soft limit should be 10Gi * 0.6 and the hard limit should be 10Gi * 0.9, respectively.
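In other words, I would expect a soft limit of about 6 GiB (6291456 KiB) and a hard limit of about 9 GiB (9437184 KiB), far below the values that were actually set.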
Expected behaviour:
The OpenEBS provisioner sets the correct soft and hard limits for the PVC.
Steps to reproduce the bug:
Attach a disk to a k8s node, format it with XFS, and mount it to /openebs with the pquota option.
Install the OpenEBS LocalPV provisioner with Helm chart v3.3.1.
root@stonetest:~# helm install openebs openebs/openebs --namespace openebs -f openebs-values.yaml
root@stonetest:~# cat openebs-values.yaml
apiserver:
  enabled: false
varDirectoryPath:
  baseDir: "/openebs"
provisioner:
  enabled: false
localprovisioner:
  enabled: true
  basePath: "/openebs/local"
  deviceClass:
    enabled: false
  hostpathClass:
    # Name of the default hostpath StorageClass
    name: openebs-hostpath
    # If true, enables creation of the openebs-hostpath StorageClass
    enabled: true
    # Available reclaim policies: Delete/Retain, defaults: Delete.
    reclaimPolicy: Delete
    # If true, sets the openebs-hostpath StorageClass as the default StorageClass
    isDefaultClass: false
    # Path on the host where local volumes of this storage class are mounted under.
    # NOTE: If not specified, this defaults to the value of localprovisioner.basePath.
    basePath: "/openebs/local"
    # Custom node affinity label(s) for example "openebs.io/node-affinity-value"
    # that will be used instead of hostnames
    # This helps in cases where the hostname changes when the node is removed and
    # added back with the disks still intact.
    # Example:
    #   nodeAffinityLabels:
    #     - "openebs.io/node-affinity-key-1"
    #     - "openebs.io/node-affinity-key-2"
    nodeAffinityLabels: []
    # Prerequisite: XFS Quota requires an XFS filesystem mounted with
    # the 'pquota' or 'prjquota' mount option.
    xfsQuota:
      # If true, enables XFS project quota
      enabled: true
      # Detailed configuration options for XFS project quota.
      # If XFS Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "60%"
      hardLimitGrace: "90%"
    # Prerequisite: EXT4 Quota requires an EXT4 filesystem mounted with
    # the 'prjquota' mount option.
    ext4Quota:
      # If true, enables XFS project quota
      enabled: false
      # Detailed configuration options for EXT4 project quota.
      # If EXT4 Quota is enabled with the default values, the usage limit
      # is set at the storage capacity specified in the PVC.
      softLimitGrace: "0%"
      hardLimitGrace: "0%"
snapshotOperator:
  enabled: false
ndm:
  enabled: false
ndmOperator:
  enabled: false
ndmExporter:
  enabled: false
webhook:
  enabled: false
crd:
  enableInstall: false
policies:
  monitoring:
    enabled: false
analytics:
  enabled: false
jiva:
  enabled: false
  openebsLocalpv:
    enabled: false
  localpv-provisioner:
    openebsNDM:
      enabled: false
cstor:
  enabled: false
  openebsNDM:
    enabled: false
  openebs-ndm:
    enabled: false
localpv-provisioner:
  enabled: false
  openebsNDM:
    enabled: false
zfs-localpv:
  enabled: false
lvm-localpv:
  enabled: false
nfs-provisioner:
  enabled: false
Create a PVC and run a busybox pod with it (a minimal example is sketched after these steps).
Run xfs_quota -x and check the soft/hard limits for the PVC.
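For reference, a minimal PVC and pod along these lines reproduces it; the PVC name, storage class, and size match the output above, while the pod name, image, command, and mount path are only illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-test
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: busybox-test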
The output of the following commands will help us better understand what's going on:
kubectl get pods -n <openebs_namespace> --show-labels
root@stonetest:~# kubectl get pods -n openebs --show-labels
NAME READY STATUS RESTARTS AGE LABELS
openebs-localpv-provisioner-5757b495fc-4zflv 1/1 Running 0 5h16m app=openebs,component=localpv-provisioner,name=openebs-localpv-provisioner,openebs.io/component-name=openebs-localpv-provisioner,openebs.io/version=3.3.0,pod-template-hash=5757b495fc,release=openebs
The limit is applied on top of the capacity.
So if your PVC capacity is 10GiB and your limit grace is 60%, the quota is set to 160%, and thus 16GiB.
And by the way, the maximum limit is double the PVC capacity, i.e. a grace of 100%.
I'm not sure why it is done this way, perhaps because it doesn't make sense to set a size smaller than the capacity, but I do think it is very confusing and we should at least document this better.
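That is consistent with the numbers reported above: dividing the soft limit 17179872 by 1.6 and the hard limit 20401096 by 1.9 both gives roughly 10737419, which corresponds to the PVC's 10Gi capacity (10737418240 bytes, modulo unit conversion and block rounding), so the grace percentage is indeed added on top of the capacity rather than taken as a fraction of it.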
Environment details:
- OpenEBS version (use kubectl get po -n openebs --show-labels): openebs helm chart v3.3.1
- Kubernetes version (use kubectl version): v1.23.10
- OS (e.g. from /etc/os-release): Ubuntu 22.04 LTS
- Kernel (use uname -a): Linux stonetest 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux