Following up on the caching work in #174 and #235. The cache supports request coalescing (also known as query deduplication): if a value is not in the cache, we run the underlying query, but if a query for that value is already in flight, we wait for it to complete instead of executing the same query multiple times in parallel. That way we get the result faster and reduce the amount of work.
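For reference, here is a minimal sketch of what request coalescing can look like, assuming a tokio runtime. The `Coalescer` type, the `get`/`fetch` names, and the broadcast-channel wait map are illustrative choices for this sketch, not the router's actual implementation.

```rust
// Minimal request-coalescing sketch (assumption: tokio with the "sync" feature).
use std::collections::HashMap;
use std::future::Future;
use std::hash::Hash;

use tokio::sync::{broadcast, Mutex};

/// Deduplicates concurrent calls for the same key: the first caller runs the
/// underlying query, later callers wait for its result instead of re-running it.
pub struct Coalescer<K, V> {
    wait_map: Mutex<HashMap<K, broadcast::Sender<V>>>,
}

impl<K, V> Coalescer<K, V>
where
    K: Eq + Hash + Clone,
    V: Clone,
{
    pub fn new() -> Self {
        Self {
            wait_map: Mutex::new(HashMap::new()),
        }
    }

    pub async fn get<F, Fut>(&self, key: K, fetch: F) -> V
    where
        F: FnOnce() -> Fut,
        Fut: Future<Output = V>,
    {
        let mut rx = {
            // The lock is only held while inspecting or updating the wait map,
            // never across the fetch itself.
            let mut map = self.wait_map.lock().await;
            match map.get(&key) {
                // A query for this key is already in flight: subscribe to its result.
                Some(tx) => tx.subscribe(),
                None => {
                    // We are the first caller: register ourselves, release the
                    // lock, run the query, then publish the result.
                    let (tx, _rx) = broadcast::channel(1);
                    map.insert(key.clone(), tx);
                    drop(map);

                    let value = fetch().await;

                    let mut map = self.wait_map.lock().await;
                    if let Some(tx) = map.remove(&key) {
                        let _ = tx.send(value.clone()); // ignore "no receivers"
                    }
                    return value;
                }
            }
        };

        // Wait for the in-flight query to publish its result.
        rx.recv().await.expect("first caller dropped before sending")
    }
}
```

The property that matters for the deadlock question is that the wait-map mutex is never held across an `.await` on the fetch itself; waiters block on the broadcast channel, not on the lock. A real implementation also has to handle the first caller being cancelled or its query failing, otherwise waiters would hang on `recv()`.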
I think this feature would make a lot of sense on subgraph queries: it would reduce traffic to backend servers and unlock some of the benefits of caching without tackling the issues of cache invalidation. With a query taking 40ms, it would cap identical queries at 1000 / 40 = 25rps on the subgraph, instead of one subgraph query per client query. And it would stay compatible with caching once we get there.
Risk: we are not yet sure the mutex design we use is safe against deadlocks, so I'll work on modeling and testing it so we can reuse it.