Add asyncio.Lock mutex to _request #227
Conversation
With this change, if there are simultaneous requests to the same URL at the same time, the URL is only fetched once and the rest are served from the cache.
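A minimal sketch of the idea, using a toy in-memory cache rather than the library's real session and backends (`SketchSession`, `_request`, and `fetch_count` below are illustrative names, not the actual API): one `asyncio.Lock` per cache key, so concurrent requests for the same URL queue behind the first one and then read the cached response.

```python
import asyncio


class SketchSession:
    """Toy stand-in for a cached HTTP session; not the library's actual class."""

    def __init__(self):
        self._cache: dict[str, str] = {}
        self._locks: dict[str, asyncio.Lock] = {}
        self.fetch_count = 0  # counts how many times the "network" is hit

    async def _request(self, url: str) -> str:
        # One lock per cache key: concurrent requests for the same URL queue up here
        lock = self._locks.setdefault(url, asyncio.Lock())
        async with lock:
            if url in self._cache:
                return self._cache[url]  # later callers are served from the cache
            self.fetch_count += 1
            await asyncio.sleep(0.1)     # stand-in for the real network round trip
            body = f"response body for {url}"
            self._cache[url] = body
            return body


async def main():
    session = SketchSession()
    # Five simultaneous requests to the same URL: only the first hits the network
    results = await asyncio.gather(
        *(session._request("https://example.com") for _ in range(5))
    )
    assert session.fetch_count == 1
    assert all(body == results[0] for body in results)


asyncio.run(main())
```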
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #227      +/-   ##
==========================================
- Coverage   97.75%   97.31%   -0.44%
==========================================
  Files          10       10
  Lines        1025     1044      +19
  Branches      173      177       +4
==========================================
+ Hits         1002     1016      +14
- Misses         16       20       +4
- Partials        7        8       +1

☔ View full report in Codecov by Sentry.
Another thing to consider is lock cleanup. For a long-running cache with a large number of unique requests, the number of cache keys in memory could start to add up (roughly 1MB per 6K unique requests). I don't think that needs to be solved in this PR, though. I'll create a separate issue for that (#228).
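One possible direction for that cleanup, sketched here only as an assumption about how #228 might be approached (not the library's implementation): keep the per-key locks in a `weakref.WeakValueDictionary`, so each lock is garbage-collected as soon as no in-flight request holds a reference to it and the registry does not grow with the number of unique keys.

```python
import asyncio
import weakref


class LockRegistry:
    """Hypothetical helper: one lock per cache key, without keeping idle locks forever."""

    def __init__(self):
        # Entries vanish automatically once no in-flight request references the lock
        self._locks = weakref.WeakValueDictionary()

    def get(self, key: str) -> asyncio.Lock:
        lock = self._locks.get(key)
        if lock is None:
            lock = asyncio.Lock()
            self._locks[key] = lock  # the caller's strong reference keeps it alive
        return lock

    def __len__(self) -> int:
        return len(self._locks)


async def request(registry: LockRegistry, key: str) -> None:
    lock = registry.get(key)       # hold a strong reference while the request is in flight
    async with lock:
        await asyncio.sleep(0.01)  # stand-in for "fetch or read from cache"


async def main():
    registry = LockRegistry()
    await asyncio.gather(*(request(registry, "https://example.com") for _ in range(5)))
    # Once no request holds the lock, it can be collected, so the registry does not
    # grow with the number of unique cache keys (this typically prints 0 on CPython).
    print(len(registry))


asyncio.run(main())
```

Whether a weak-reference registry, an LRU bound, or periodic pruning is the better trade-off is the kind of question that issue would cover.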
Yes, that looks good to me!
That's fine, it looks like that's just because test coverage is only run for Python 3.11 right now, so it thinks that
Merged. Thanks for the contribution @rudcode!
@olk-m @JWCook
I created a test like this; is it enough?
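For reference, a self-contained sketch of the kind of test being described (not the actual test from this PR; the inline `request` helper and its counters are hypothetical stand-ins for the real session): fire many simultaneous requests at one URL and assert that only a single fetch happened.

```python
import asyncio

import pytest


@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_simultaneous_requests_fetch_url_only_once():
    cache: dict[str, str] = {}
    locks: dict[str, asyncio.Lock] = {}
    fetches = 0

    async def request(url: str) -> str:
        nonlocal fetches
        # One lock per URL: only the first caller performs the (simulated) fetch
        async with locks.setdefault(url, asyncio.Lock()):
            if url not in cache:
                fetches += 1
                await asyncio.sleep(0.05)  # simulated network latency
                cache[url] = f"body of {url}"
            return cache[url]

    results = await asyncio.gather(
        *(request("https://example.com") for _ in range(20))
    )

    assert fetches == 1  # the URL was fetched exactly once
    assert all(body == results[0] for body in results)
```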