RATE-LIMIT log message every 10 seconds, even with all monitors paused. #5122

Open
2 tasks done
rct opened this issue Sep 20, 2024 · 1 comment

rct commented Sep 20, 2024

⚠️ Please verify that this question has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

📝 Describe your problem

I'm trying to diagnose why this message is logged every 10 seconds: [RATE-LIMIT] INFO: remaining requests: 60. Based on other closed issues, I got the impression that some of my monitors are getting throttled.

But how do I determine what's getting throttled and creating this backlog?

The first thing I tried was to pause all monitors by pausing each monitoring group. However, even with all monitors paused, I'm still getting the log message every 10 seconds.

Does pausing freeze the backlogged queue of requests?

Is there any way to see counters/stats that would give an overview of what might be queued or backlogged?

Currently I have 44 monitors, all pretty generic. The majority are just pings; there are about 6 HTTP and 6 DNS monitors.

Thanks for any insights.
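For reference, here is the kind of per-monitor overview query I had in mind, sketched against an in-memory stand-in database. The table and column names (monitor, heartbeat, active, monitor_id, time) are my guesses at Uptime Kuma's SQLite schema, so verify them against the real kuma.db before running anything like this against it:

```python
import sqlite3
from datetime import datetime, timedelta

# In-memory stand-in for Uptime Kuma's SQLite database. The schema below
# (monitor.active, heartbeat.monitor_id, heartbeat.time) is an assumption,
# not the verified production schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE monitor (id INTEGER PRIMARY KEY, name TEXT, type TEXT, active INTEGER);
CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, monitor_id INTEGER, time TEXT, status INTEGER);
""")

now = datetime.utcnow()
conn.executemany("INSERT INTO monitor VALUES (?, ?, ?, ?)", [
    (1, "router ping", "ping", 1),
    (2, "example.com", "http", 0),   # paused
])
conn.executemany("INSERT INTO heartbeat (monitor_id, time, status) VALUES (?, ?, ?)", [
    (1, (now - timedelta(seconds=5)).isoformat(" "), 1),
    (1, (now - timedelta(seconds=15)).isoformat(" "), 1),
    (2, (now - timedelta(minutes=10)).isoformat(" "), 1),
])

# Which monitors wrote heartbeats in the last minute? If a paused monitor
# still shows recent rows, something is still checking it.
cutoff = (now - timedelta(seconds=60)).isoformat(" ")
rows = conn.execute("""
    SELECT m.id, m.name, m.active, COUNT(h.id) AS recent_beats
    FROM monitor m
    LEFT JOIN heartbeat h ON h.monitor_id = m.id AND h.time >= ?
    GROUP BY m.id
    ORDER BY recent_beats DESC
""", (cutoff,)).fetchall()

for mid, name, active, beats in rows:
    print(f"monitor {mid} ({name}) active={active} beats_last_60s={beats}")
```

Running something equivalent (read-only) against a copy of the real database would at least show whether any monitor is still producing heartbeats while everything is paused.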

📝 Error Message(s) or Log

Sep 20 15:10:19 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:19-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:29-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:39 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:39-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:49 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:49-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:59 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:59-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:11:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:11:09-04:00 [RATE-LIMIT] INFO: remaining requests: 60

🐻 Uptime-Kuma Version

1.23.13

💻 Operating System and Arch

Home Assistant Add On 0.12.2

🌐 Browser

Firefox 130.0.1

🖥️ Deployment Environment

  • Runtime: Docker version 26.1.4, build 26.1.4, alpine_3_19, NodeJS 18.20.4,
  • Database: SQLite 3.41.1, db version 10
  • Filesystem used to store the database on:
  • number of monitors: 44
@rct rct added the help label Sep 20, 2024

rct commented Sep 20, 2024

Before posting, I had referenced this previous issue and explanation: #3157 (comment)

It doesn't seem to me that I'm spamming a particular endpoint, since the requests all go to different destinations. The DNS monitors have some overlap.

Also, I restarted the container with all monitors paused and got RATE-LIMIT messages logged within 5 seconds of the process starting:

Sep 20 15:23:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:09-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:09-04:00 [RATE-LIMIT] INFO: remaining requests: 59.01
Sep 20 15:23:18 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:18-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:19 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:19-04:00 [RATE-LIMIT] INFO: remaining requests: 59.007
Sep 20 15:23:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:29-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:29-04:00 [RATE-LIMIT] INFO: remaining requests: 59.009
Sep 20 15:23:39 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:39-04:00 [RATE-LIMIT] INFO: remaining requests: 60

So I guess the backlog of requests must be persisted in the database?

Is there any way to clear the backlog?
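Side note: the fractional "remaining" values (59.01, 59.007) look like a token bucket refilling continuously between consumptions. Here is a toy model of that behavior; it is not Uptime Kuma's actual rate limiter, just an illustration of how a limiter with capacity 60 and gradual refill produces numbers in this shape:

```python
import time

class TokenBucket:
    """Toy token bucket -- NOT Uptime Kuma's implementation, just a model of
    how a 'remaining requests' counter can show fractional values like 59.01."""

    def __init__(self, capacity=60.0, refill_per_sec=1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        # Tokens trickle back continuously, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now

    def consume(self, n=1.0):
        self._refill()
        self.tokens -= n
        return self.tokens  # this is what a "remaining requests" log would show

bucket = TokenBucket()
first = bucket.consume()   # one request: 60 -> 59
time.sleep(0.05)           # a moment later the bucket has partially refilled
bucket._refill()
second = bucket.tokens     # slightly above 59, e.g. 59.05-ish
print(f"remaining after consume: {first:.3f}; after a short refill: {second:.3f}")
```

If Uptime Kuma's limiter works anything like this, a value hovering just under the cap would mean occasional single requests, not a large persisted backlog; but that's my inference from the log shape, not from the code.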
