4.0.5: questions about disk and memory footprint #13225
@ronmegini I don't see any evidence of a leak. This should have been a discussion, not an issue.
Messages are stored on disk with a fair amount of metadata. When messages are consumed and confirmed, they are not deleted immediately: not by CQs, not by QQs, and certainly not by streams (where consumption is non-destructive).
Quorum queues delete entire segment files, not individual messages, and CQs compact two segment files when more than half of their combined contents is "holes" (messages marked for deletion).
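To make that compaction rule concrete, here is a minimal sketch of the condition as described above. This is not RabbitMQ's actual code; the names (`SegmentFile`, `should_compact`) and the exact threshold handling are illustrative assumptions:

```python
# Illustrative sketch only -- not RabbitMQ's implementation.
# Models the rule described above: two CQ segment files qualify for
# compaction once more than half of their combined contents is
# "holes" (messages marked for deletion).

from dataclasses import dataclass


@dataclass
class SegmentFile:
    total_bytes: int  # size of the segment file on disk
    hole_bytes: int   # bytes occupied by messages marked for deletion


def should_compact(a: SegmentFile, b: SegmentFile) -> bool:
    """True when more than half of the two files' contents is holes."""
    total = a.total_bytes + b.total_bytes
    holes = a.hole_bytes + b.hole_bytes
    return holes * 2 > total


# Example: ~59% of the combined contents is already consumed, so the
# pair qualifies and the disk space can eventually be reclaimed.
seg1 = SegmentFile(total_bytes=16_000_000, hole_bytes=12_000_000)
seg2 = SegmentFile(total_bytes=16_000_000, hole_bytes=7_000_000)
print(should_compact(seg1, seg2))  # True
```

The point being: disk usage shrinks in file-sized steps, not message by message, so a queue's on-disk footprint lags behind its logical backlog.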
The explanation for one pod having an outsized footprint is almost always as simple as: that pod hosts the majority of replicas or, in your case, the majority of CQs, which in 4.x are a non-replicated type.

The runtime allocates memory without using it immediately, which is explained in the Reasoning About Memory Use guide.

There is also the kernel page cache, which Kubernetes before a certain version (1.25 or so), or a certain cgroups version (v1), counts as part of the process's footprint even though that is completely incorrect, which our docs also mention.
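If you want to verify where a node's memory actually goes instead of trusting the container-level number, the management plugin's HTTP API exposes a per-node memory breakdown (`GET /api/nodes/<name>?memory=true`). A minimal sketch, assuming the management plugin is enabled on `localhost:15672` with the default `guest`/`guest` credentials (adjust host, credentials, and port for a real Kubernetes deployment):

```python
# Fetch RabbitMQ's per-node memory breakdown via the management
# HTTP API. Assumptions: management plugin enabled, reachable on
# localhost:15672, default guest/guest credentials.
import base64
import json
import urllib.request

BASE = "http://localhost:15672/api"
TOKEN = base64.b64encode(b"guest:guest").decode()  # assumed credentials


def get(path: str):
    req = urllib.request.Request(BASE + path)
    req.add_header("Authorization", "Basic " + TOKEN)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# List the cluster's nodes, then print each node's memory breakdown
# (binaries, queue processes, allocated-but-unused memory, and so on).
for node in get("/nodes"):
    name = node["name"]
    detail = get(f"/nodes/{name}?memory=true")
    print(name)
    for category, used in detail["memory"].items():
        print(f"  {category}: {used}")
```

A breakdown like this will typically show categories such as allocated-but-unused runtime memory separately, which is exactly the distinction the container-level RSS number hides.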