worker threads mechanism does not allow for small number of volumes #56
Have you observed this problem on both the controller and agent?
I haven't looked at the controller; I was using only the agent. In the PVMonitorAgent code, you will see the checkPVWorker threads started together. Every monitorInterval, one picks an item from the queue, does its check, and requeues it, at which point (if there is only one volume attached to a pod) the next thread immediately pulls the requeued item and does the same thing, and so on, so that there are 10 successive checks of the same volume all at once. This is not really a problem; it's just noisy and looks like a bug until you know why it's happening. A mitigation would be to reduce the threads to 1 and decrease the interval. Ideally, monitorInterval would define an upper bound on the polling frequency of any given volume, regardless of the number of threads.
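To make the behavior concrete, here is a minimal, self-contained Go sketch of the pattern described above. It is not the actual PVMonitorAgent code: the channel stands in for the real work queue, and the names (workers, monitorInterval, pv-1) are illustrative only.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const workers = 10
	const monitorInterval = time.Second // shortened stand-in for the agent's interval

	queue := make(chan string, workers)
	queue <- "pv-1" // only one volume attached

	for i := 0; i < workers; i++ {
		go func(id int) {
			ticker := time.NewTicker(monitorInterval)
			defer ticker.Stop()
			for range ticker.C {
				pv := <-queue // block until a volume is (re)queued
				fmt.Printf("%s worker %d checked %s\n",
					time.Now().Format("15:04:05.000"), id, pv)
				queue <- pv // immediate requeue: another already-awake worker grabs it at once
			}
		}(i)
	}

	time.Sleep(3500 * time.Millisecond) // watch three bursts of ~10 checks each
}
```

All ten tickers fire at roughly the same instant, so the one queued volume is handed from worker to worker in a burst of ten checks per interval, which is exactly the noisy pattern described.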
Let me check the code and get back to you with feedback. Thanks for your issue ❤️
/assign @fengzixu
Thank you @fengzixu. Conceptually, this issue would be solved simply by putting the monitor-interval delay after the volume is checked and before it is requeued to be checked again, though that would entail some reorganization of the code as it is currently structured.
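As an illustration of that suggestion, here is a hedged sketch that delays the requeue rather than the worker, using client-go's delaying work queue. The checkPV stub and runWorker function are hypothetical stand-ins for the agent's real probe and worker loop; the point is only the delayed re-add.

```go
package main

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// checkPV is a hypothetical stand-in for the agent's health probe.
func checkPV(pv string) { /* probe the volume and report events */ }

// runWorker pops one volume at a time, checks it, and re-adds it only
// after monitorInterval has elapsed, so each PV is checked at most once
// per interval no matter how many workers are running.
func runWorker(queue workqueue.DelayingInterface, monitorInterval time.Duration) {
	for {
		item, shutdown := queue.Get()
		if shutdown {
			return
		}
		checkPV(item.(string))
		queue.Done(item)
		// The delayed requeue is the key change: monitorInterval now bounds
		// the per-volume polling frequency, not the per-worker loop rate.
		queue.AddAfter(item, monitorInterval)
	}
}

func main() {
	queue := workqueue.NewDelayingQueue()
	queue.Add("pv-1")
	for i := 0; i < 10; i++ {
		go runWorker(queue, time.Minute)
	}
	select {} // real code would wire up a stop channel instead
}
```

With this shape, adding workers increases concurrency across distinct volumes, but a single volume is still probed at most once per monitorInterval.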
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closing this issue.
The issue still exists in v0.9.0.
/reopen
@xing-yang: Reopened this issue.
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/assign
/lifecycle stale
The external-health-monitor's worker-thread mechanism (10 threads by default) is blind to the number of volumes. If there are fewer than 10 volumes, there is a burst of repeated probes of the same volume; if there is only one, there is a rapid-fire succession of 10 probes of the same volume every minute.
This is not a high-priority issue.