-
Good question! The simplest way to handle this would be to subclass `CachedSession`:

```python
import threading

from requests_cache import CachedSession


class CustomSession(CachedSession):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.lock = threading.RLock()  # or multiprocessing.Lock()

    def send(self, *args, **kwargs):
        with self.lock:
            return super().send(*args, **kwargs)
```

I might consider adding that as optional behavior. The reason that's not currently the default behavior is that it's a tradeoff that really depends on the use case.
There could be a more thorough solution that only locks duplicate requests: something like pushing requests onto a queue to be handled by worker threads, which then acquire locks based on the cache key. I haven't fully thought that through yet, so there may be a simpler option.

Edit: Taking a closer look at your link, it looks like aiocache's RedLock class is doing something along those lines. I'll need to read up on this a bit more. Let me know if you have any other thoughts or suggestions on this.
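To make the per-key locking idea a bit more concrete, here's a minimal sketch. The `CoalescingSession` name and the key derivation are made up for illustration; a real implementation should reuse the cache backend's own key function:

```python
import threading
from collections import defaultdict

from requests_cache import CachedSession


class CoalescingSession(CachedSession):
    """Sketch: one lock per cache key, so only duplicate requests serialize
    while distinct requests still run concurrently."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._locks_guard = threading.Lock()  # protects the lock registry
        self._key_locks = defaultdict(threading.Lock)

    def send(self, request, **kwargs):
        # Hypothetical key: method + URL + body; real cache keys may also
        # include headers, params, etc.
        key = (request.method, request.url, request.body)
        with self._locks_guard:
            lock = self._key_locks[key]
        with lock:
            # A second identical request blocks here until the first response
            # is cached, then gets a cache hit from super().send()
            return super().send(request, **kwargs)
```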
-
This is a really cool project, by the way! It looks like something I would potentially use myself. Link for others' reference: https://github.com/snarfed/bridgy-fed
-
My use case is calling a single API and avoiding duplicate calls to the same endpoint (with the same POST request body). A very simplified example where the same URL is called twice (the cache can't help until the first response arrives):

```python
import asyncio
import logging

from aiohttp_client_cache import CachedSession

logging.basicConfig(level=logging.DEBUG)


async def main():
    async with CachedSession() as session:
        await asyncio.gather(
            session.get('http://httpbin.org/delay/5'),
            session.get('http://httpbin.org/delay/5'),
        )


asyncio.run(main())
```

@JWCook Have you started, stopped, or rejected work on this feature? I can give it a try if I have some time.
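For reference, here's roughly what I have in mind: a rough, untested sketch with a per-key `asyncio.Lock` around the cached request path. The `CoalescingCachedSession` name and the key derivation are made up, and overriding `_request` leans on an aiohttp implementation detail:

```python
import asyncio
from collections import defaultdict

from aiohttp_client_cache import CachedSession


class CoalescingCachedSession(CachedSession):
    """Sketch: serialize concurrent requests that share a cache key, so the
    second request becomes a cache hit instead of a duplicate network call."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._key_locks = defaultdict(asyncio.Lock)

    async def _request(self, method, str_or_url, **kwargs):
        # Hypothetical cache key: method + URL + body; a real implementation
        # should reuse the library's own key function
        key = (method, str(str_or_url),
               repr(kwargs.get('data')), repr(kwargs.get('json')))
        async with self._key_locks[key]:
            # A second identical request waits here; once the first response
            # is cached, this call returns the cached response
            return await super()._request(method, str_or_url, **kwargs)
```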
-
In concurrent usage, for example with threads, can we coalesce requests for the same resource, to prevent dogpiling?
That is, if multiple requests for the same URL arrive before the first one has completed and its result is in the cache, the actual request should be made only once. The first request fetches the result and puts it in the cache, while later requests just wait until the result is available and then return it from the cache.
One way to do this is with locking, as in the aiocache library: https://github.com/aio-libs/aiocache/blob/b83a02f6f225dadd4069b8cba23798eb144a3291/tests/acceptance/test_lock.py#L55
I've used that in async code, but we're now looking into using requests-cache in bridgy-fed, which is sync code but uses threaded workers. A rough sketch of the pattern is below.
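For concreteness, the pattern I mean, as a minimal double-checked-locking sketch with a single global lock and a dict-like cache (aiocache's RedLock does this per key; these names are illustrative, not the requests-cache API):

```python
import threading

_lock = threading.Lock()


def get_or_fetch(cache, key, fetch):
    """Only the first caller runs fetch(); later callers block on the lock,
    then find the result already in the cache."""
    result = cache.get(key)
    if result is not None:
        return result
    with _lock:
        # Re-check: another thread may have cached the result while we waited
        result = cache.get(key)
        if result is None:
            result = fetch()
            cache[key] = result
        return result
```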