feat: rate limit error caching #2448
```diff
@@ -373,11 +373,21 @@ const noop = () => {};
 const logger =
   process.env.NODE_ENV !== "test" ? console : { log: noop, error: noop };
 
+// Cache settings.
+const CARD_CACHE_SECONDS = 14400;
+const ERROR_CACHE_SECONDS = 600;
+
 const CONSTANTS = {
+  ONE_MINUTE: 60,
+  FIVE_MINUTES: 300,
+  TEN_MINUTES: 600,
+  FIFTEEN_MINUTES: 900,
   THIRTY_MINUTES: 1800,
   TWO_HOURS: 7200,
   FOUR_HOURS: 14400,
   ONE_DAY: 86400,
+  CARD_CACHE_SECONDS,
+  ERROR_CACHE_SECONDS,
 };
 
 const SECONDARY_ERROR_MESSAGES = {
```

**Review thread on `ERROR_CACHE_SECONDS`:**

> Let's update this to 30 min? Currently it's 10. If PATs are exhausted, that means Vercel will serve fresh responses every 10 minutes, but PATs will only restore after 30 minutes or 1 hour. Changing it to 30 min will reduce the load on Vercel.

> @anuraghazra GitHub does restore the individual PATs after 1 hour. However, the frequency at which PATs are refreshed depends on when each PAT was depleted. As a result, the 10-minute window could catch a refreshed token. That said, if you think the load on Vercel should be decreased, I am happy to increase it.

> Okay, I think we can keep it as is; load on Vercel is not a huge deal right now.
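The thread above is about error responses getting a shorter cache window than successful cards. A minimal sketch of that idea, assuming a hypothetical `cacheSecondsFor` helper (the PR's actual handler code is not shown in this hunk):

```javascript
// Cache settings mirroring the diff above.
const CARD_CACHE_SECONDS = 14400;
const ERROR_CACHE_SECONDS = 600;

const CONSTANTS = {
  ONE_DAY: 86400,
  CARD_CACHE_SECONDS,
  ERROR_CACHE_SECONDS,
};

// Hypothetical helper: pick a shorter cache window for error responses
// so that exhausted PATs are retried sooner than successful cards refresh.
function cacheSecondsFor(isError) {
  return isError
    ? CONSTANTS.ERROR_CACHE_SECONDS
    : CONSTANTS.CARD_CACHE_SECONDS;
}

console.log(cacheSecondsFor(true)); // 600
console.log(cacheSecondsFor(false)); // 14400
```

With a 600-second error cache, an exhausted-PAT error page expires every 10 minutes, which is the load/freshness trade-off debated in the thread.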
**Review thread on the error `Cache-Control` header:**

> What's happening here? `CONSTANTS.ERROR_CACHE_SECONDS / 2` in `max-age`, `CONSTANTS.ERROR_CACHE_SECONDS` in `s-maxage`, and `CONSTANTS.ONE_DAY` in `revalidate`?

> Wait, this code is on master lol. I have no idea why it is like this 😆

> Okay, this was the change #2124

> Yea, this was when I would sometimes merge PRs without your review since they had been open for a long time. Sorry for the confusion 😅. Feel free to change it to any caching behavior you like. The reasoning for the current caching code can be found at #2124 (comment).

> @anuraghazra, we can also remove these lines and keep the same behavior as for the case when no error is thrown. Both are fine with me.

> Let's keep the same behaviour as the old one. Not really confident with how this header will behave in prod.
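The three directives questioned above split caching duties between clients and shared caches: `max-age` limits browser caching, `s-maxage` limits shared caches (e.g. Vercel's edge), and the `revalidate` mentioned is presumably `stale-while-revalidate`, which lets a stale copy be served while a refresh happens in the background. A sketch of assembling such a header value (the exact directive split follows the comment above, not necessarily the repo's current source):

```javascript
// Assumed values, taken from the diff in this PR.
const ERROR_CACHE_SECONDS = 600;
const ONE_DAY = 86400;

// Build a Cache-Control value like the one discussed above:
// - max-age: how long browsers may cache the response
// - s-maxage: how long shared caches (CDN/edge) may cache it
// - stale-while-revalidate: how long a stale copy may be served
//   while the cache refreshes it in the background
function buildCacheControl(seconds) {
  return [
    `max-age=${seconds / 2}`,
    `s-maxage=${seconds}`,
    `stale-while-revalidate=${ONE_DAY}`,
  ].join(", ");
}

console.log(buildCacheControl(ERROR_CACHE_SECONDS));
// max-age=300, s-maxage=600, stale-while-revalidate=86400
```

Halving `max-age` relative to `s-maxage` biases browsers toward revalidating more often than the edge cache does, which matches the pattern the reviewer is asking about.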