
RATE-LIMIT log message every 10 seconds, even with all monitors paused. #5122

Open
rct opened this issue Sep 20, 2024 · 4 comments · May be fixed by #5402
Labels
area:core (issues describing changes to the core of uptime kuma), feature-request (Request for new features to be added), good first issue (Good for newcomers)

Comments


rct commented Sep 20, 2024

⚠️ Please verify that this question has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

📝 Describe your problem

I'm trying to diagnose why this message is logged every 10 seconds: [RATE-LIMIT] INFO: remaining requests: 60. Based on other closed issues, I got the impression that some of my monitors are being throttled.

But how do I determine what's getting throttled and creating this backlog?

The first thing I tried was to pause all monitors by pausing each monitoring group. However, even with all monitors paused, I'm still getting the log message every 10 seconds.

Does pausing freeze the backlogged queue of requests?

Is there any way to see some counters/stats that would give an overview of what might be queued or backlogged?

Currently I have 44 monitors, all pretty generic. The majority are just pings; there are about 6 HTTP and 6 DNS monitors.

Thanks for any insights.

📝 Error Message(s) or Log

Sep 20 15:10:19 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:19-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:29-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:39 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:39-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:49 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:49-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:10:59 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:10:59-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:11:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:11:09-04:00 [RATE-LIMIT] INFO: remaining requests: 60

🐻 Uptime-Kuma Version

1.23.13

💻 Operating System and Arch

Home Assistant Add On 0.12.2

🌐 Browser

Firefox 130.0.1

🖥️ Deployment Environment

  • Runtime: Docker version 26.1.4, build 26.1.4, alpine_3_19, NodeJS 18.20.4,
  • Database: SQLite 3.41.1, db version 10
  • Filesystem used to store the database on:
  • number of monitors: 44
@rct rct added the help label Sep 20, 2024

rct commented Sep 20, 2024

Before posting, I had referenced this previous issue and explanation: #3157 (comment)

It doesn't seem to me that I'm spamming a particular endpoint, since the requests all go to different destinations. The DNS monitors have some overlap.

Also, I restarted the container with all monitors paused and got RATE-LIMIT messages logged within 5 seconds of the process starting:

Sep 20 15:23:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:09-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:09 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:09-04:00 [RATE-LIMIT] INFO: remaining requests: 59.01
Sep 20 15:23:18 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:18-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:19 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:19-04:00 [RATE-LIMIT] INFO: remaining requests: 59.007
Sep 20 15:23:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:29-04:00 [RATE-LIMIT] INFO: remaining requests: 60
Sep 20 15:23:29 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:29-04:00 [RATE-LIMIT] INFO: remaining requests: 59.009
Sep 20 15:23:39 hasstst addon_a0d7b954_uptime-kuma[506]: 2024-09-20T11:23:39-04:00 [RATE-LIMIT] INFO: remaining requests: 60

So I guess the backlog of requests must be persisted in the database?

Is there any way to clear the backlog?
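The fractional values in the log above (59.01, 59.007) hint that the counter is not a persisted queue but a continuously refilling limit. A minimal token-bucket sketch illustrates this behavior; this is a hypothetical illustration, not Uptime Kuma's actual rate-limiter implementation, and the class and parameter names are invented for the example:

```javascript
// Hypothetical token-bucket sketch -- NOT Uptime Kuma's actual rate limiter.
// It illustrates why the counter shows fractional values such as 59.007:
// tokens refill continuously with elapsed time, so reading the bucket just
// after a request yields a value slightly above the last whole number.
class TokenBucket {
    constructor(capacity, refillPerSecond) {
        this.capacity = capacity;            // e.g. 60 "remaining requests"
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefill = Date.now();
    }

    // Add tokens for the time elapsed since the last refill, capped at capacity.
    refill() {
        const now = Date.now();
        const elapsedSeconds = (now - this.lastRefill) / 1000;
        this.tokens = Math.min(this.capacity,
            this.tokens + elapsedSeconds * this.refillPerSecond);
        this.lastRefill = now;
    }

    // Consume one token if available; log the remainder like the message above.
    tryRemove() {
        this.refill();
        if (this.tokens >= 1) {
            this.tokens -= 1;
            console.log(`[RATE-LIMIT] INFO: remaining requests: ${this.tokens.toFixed(3)}`);
            return true;
        }
        return false;
    }
}

const bucket = new TokenBucket(60, 1);
bucket.tryRemove(); // logs a value just under 60
```

If a limiter like this logs its remaining count on every periodic check, the message keeps appearing even with all monitors paused, and nothing needs to be persisted in the database: an idle bucket simply sits at its refilled maximum (60).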

github-actions bot commented

We are clearing up our old help-issues and your issue has been open for 60 days with no activity.
If no comment is made and the stale label is not removed, this issue will be closed in 7 days.

@github-actions github-actions bot added the Stale label Nov 19, 2024

rct commented Nov 19, 2024

I still have this issue

@CommanderStorm added the feature-request, area:core, and good first issue labels and removed the help and Stale labels Nov 19, 2024
louislam (Owner) commented

Based on the number 60, it should be the API rate limiter. I think it should be changed to a debug message (hidden by default).
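The proposed fix amounts to demoting the message from info to debug level so it is filtered out at the default verbosity. A minimal sketch of level-filtered logging, using a hypothetical makeLogger helper (this is not Uptime Kuma's actual logger API):

```javascript
// Minimal level-filtered logger sketch (hypothetical helper, not Uptime
// Kuma's actual logger). Messages below the configured threshold are dropped.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

function makeLogger(minLevel = "info") {
    const threshold = LEVELS[minLevel];
    // Returns true if the message was emitted, false if filtered out.
    const emit = (level, module, msg) => {
        if (LEVELS[level] < threshold) {
            return false;
        }
        console.log(`[${module.toUpperCase()}] ${level.toUpperCase()}: ${msg}`);
        return true;
    };
    return {
        debug: (module, msg) => emit("debug", module, msg),
        info: (module, msg) => emit("info", module, msg),
    };
}

const log = makeLogger("info");
// Before the change: visible at the default level, every 10 seconds.
log.info("rate-limit", "remaining requests: 60");
// After the change: hidden unless debug logging is enabled.
log.debug("rate-limit", "remaining requests: 60");
```

With this shape of change, the periodic rate-limit message would only appear when the operator explicitly opts into debug-level output.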

@Backdraft007 Backdraft007 linked a pull request Dec 3, 2024 that will close this issue