
fix: Too many requests to GET /v1/servers/{id} #661

Open
apricote opened this issue Jun 12, 2024 · 1 comment
Assignees: apricote
Labels: bug (Something isn't working), pinned

Comments

apricote (Member) commented Jun 12, 2024

TL;DR

In some situations, hcloud-cloud-controller-manager starts to spam `GET /v1/servers/{id}` every few seconds for a subset of the nodes in the cluster.

Expected behavior

I would expect hccm to send only a single request per node per `--node-status-update-frequency` interval (defaults to 5 minutes).

Observed behavior

We get reports from customers who exhaust the rate limit, and we can see that hcloud-cloud-controller-manager sends more than 1 request per second to `GET /v1/servers/{id}`. The default rate limit is 1 request per second.

A restart of the pod fixes the behaviour.
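For anyone trying to confirm the symptom from outside hccm, below is a minimal sketch (not part of the original report) that performs a single `GET /v1/servers/{id}` call via hcloud-go and prints the rate-limit budget the API reports back, assuming hcloud-go v2's `Response.Meta.Ratelimit` fields; the server ID and the `HCLOUD_TOKEN` environment variable are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

func main() {
	client := hcloud.NewClient(hcloud.WithToken(os.Getenv("HCLOUD_TOKEN")))

	// Placeholder server ID; use the ID of one of the affected nodes.
	server, resp, err := client.Server.GetByID(context.Background(), 12345)
	if err != nil {
		log.Fatal(err)
	}
	if server == nil {
		log.Fatal("server not found")
	}

	// Meta.Ratelimit mirrors the RateLimit-* headers returned by the API.
	fmt.Printf("server=%s limit=%d remaining=%d reset=%s\n",
		server.Name,
		resp.Meta.Ratelimit.Limit,
		resp.Meta.Ratelimit.Remaining,
		resp.Meta.Ratelimit.Reset,
	)
}
```

Running this periodically while hccm misbehaves should show `remaining` draining much faster than one request per node every 5 minutes would explain.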

Minimal working example

We are not sure how to reproduce this yet.

Log output

No response

Additional information

Looking at some request logs, this seems to affect even very old versions of HCCM (more than 2 years old).

@apricote apricote added the bug Something isn't working label Jun 12, 2024
@apricote apricote self-assigned this Jun 12, 2024

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

@github-actions github-actions bot added the stale label Sep 10, 2024
@jooola jooola added pinned and removed stale labels Sep 10, 2024
lukasmetzner added a commit that referenced this issue Dec 16, 2024
This includes metrics about internal operations from
`k8s.io/cloud-provider` like the workqueue depth and requests to the
Kubernetes API.

These metrics were already exposed on `:8233/metrics`, but this was not documented or scraped.

This commit now uses the same registry for our own metrics and the metrics from the Kubernetes libraries, and also exposes them on both ports for backwards compatibility.

Besides having all data available, this will also help us with debugging
#661.

Co-authored-by: Lukas Metzner <[email protected]>
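
As a rough illustration of the approach described in the commit message (a sketch, not the actual hccm code), a single Prometheus registry can be served on both the legacy `:8233` port and the primary metrics port; the second port number and the registered collector are placeholders.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// One shared registry for our own metrics and the ones coming from the
	// Kubernetes libraries (workqueue depth, API request metrics, ...).
	registry := prometheus.NewRegistry()
	registry.MustRegister(collectors.NewGoCollector()) // placeholder collector

	handler := promhttp.HandlerFor(registry, promhttp.HandlerOpts{})

	// Legacy port, kept for backwards compatibility.
	go func() {
		mux := http.NewServeMux()
		mux.Handle("/metrics", handler)
		log.Fatal(http.ListenAndServe(":8233", mux))
	}()

	// Primary metrics port (placeholder value).
	mux := http.NewServeMux()
	mux.Handle("/metrics", handler)
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

Because both HTTP servers use handlers backed by the same registry, existing scrape configs pointed at `:8233/metrics` keep working while new ones can target the primary port.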