Hello, today I noticed that a Ceph pool (used only by the CSI driver) is almost full, and I can't explain why. There is ~2 TB of data net; the pool has a replication size of 3, so ~2 TB * 3 = ~6 TB gross is what I would expect. But the pool actually uses ~29 TB gross.

I tried to find stale data, even though all storage classes use the Delete reclaim policy. But I can't retrieve the omap keys/values as described in the cleanup doc:

While troubleshooting, I found that using …

The log files for the CSI driver are also unremarkable. I have the feeling that deletion in the CephFS pool did not work correctly and reliably in the past. The Ceph RBD driver is also running in the same K8s cluster and uses an RBD pool in the same Ceph cluster; there I see no anomalies in the storage space utilization.

edit: I've mounted the CephFS pool and df tells me that actually 8.7 TB are used. This is definitely more than the ~2 TB of PVCs in total. How can I identify the stale data? Can I assume that the directories with the identical .meta file are currently in use and all other directories are stale?
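For reference, a rough way to double-check the ~2 TB figure is to sum the capacities of all CephFS-backed PVs and compare that with what the pool/df reports. The following is only a sketch under some assumptions: kubectl can reach the cluster and the CephFS CSI driver name contains "cephfs.csi.ceph.com" (it may be prefixed differently, e.g. by Rook):

```python
#!/usr/bin/env python3
"""Sum the capacities of all CephFS-backed PVs and print the total.

Sketch only. Assumes kubectl access and a driver name containing
"cephfs.csi.ceph.com"; adjust the match for your cluster.
"""
import json
import subprocess

# Kubernetes quantity suffixes -> bytes (binary suffixes checked before decimal)
SUFFIXES = [("Ki", 2**10), ("Mi", 2**20), ("Gi", 2**30), ("Ti", 2**40),
            ("K", 10**3), ("M", 10**6), ("G", 10**9), ("T", 10**12)]

def to_bytes(qty: str) -> int:
    """Convert a Kubernetes quantity like '10Gi' to bytes."""
    for suffix, factor in SUFFIXES:
        if qty.endswith(suffix):
            return int(float(qty[:-len(suffix)]) * factor)
    return int(qty)

pvs = json.loads(subprocess.check_output(
    ["kubectl", "get", "pv", "-o", "json"]))["items"]

total = sum(
    to_bytes(pv["spec"]["capacity"]["storage"])
    for pv in pvs
    if "cephfs.csi.ceph.com" in (pv["spec"].get("csi") or {}).get("driver", "")
)
print(f"CephFS-backed PV capacity in total: {total / 2**40:.2f} TiB")
# Compare this with what `ceph df` / `df` on the mounted filesystem reports.
```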
Replies: 1 comment
I found out that the FS path is mapped in the PV object, and that the other directories are actually stale data.
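Building on that, here is a minimal sketch for flagging subvolume directories that no PV references any more. The mount point /mnt/cephfs, the default "csi" subvolume group, and the subvolumePath attribute key are assumptions; attribute names differ between ceph-csi versions, so verify with `kubectl get pv <name> -o yaml` and double-check before deleting anything:

```python
#!/usr/bin/env python3
"""Flag CephFS subvolume directories that no PV references any more.

Sketch only. Assumes the CephFS root is mounted at /mnt/cephfs, ceph-csi
uses the default "csi" subvolume group (data under /mnt/cephfs/volumes/csi/),
and each PV exposes spec.csi.volumeAttributes["subvolumePath"].
"""
import json
import pathlib
import subprocess

MOUNT_ROOT = pathlib.Path("/mnt/cephfs")      # assumed mount point
SUBVOL_DIR = MOUNT_ROOT / "volumes" / "csi"   # assumed subvolume group path

pvs = json.loads(subprocess.check_output(
    ["kubectl", "get", "pv", "-o", "json"]))["items"]

# Collect the subvolume directory names (e.g. csi-vol-<uuid>) referenced by PVs.
referenced = set()
for pv in pvs:
    attrs = (pv["spec"].get("csi") or {}).get("volumeAttributes", {})
    path = attrs.get("subvolumePath")         # e.g. /volumes/csi/csi-vol-<uuid>/<uuid>
    if path:
        parts = pathlib.PurePosixPath(path).parts
        if len(parts) > 3:
            referenced.add(parts[3])

# Directories on disk that no PV points at are candidates for stale data.
for entry in sorted(SUBVOL_DIR.iterdir()):
    status = "in use" if entry.name in referenced else "possibly stale"
    print(f"{status:>15}  {entry.name}")
```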