Follow-up from #227.
For a long-running cache with a large number of unique requests, the number of cache keys in memory could start to add up. With the current hash function (sha256), that's roughly 1 MB per 6K unique requests (~113 B for the hex digest string + ~56 B for each lock object).
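For reference, a quick way to sanity-check those numbers on 64-bit CPython (exact sizes vary by Python version and platform; the request string and the assumption of one `threading.Lock` per key are just for illustration):

```python
import hashlib
import sys
import threading

digest = hashlib.sha256(b"GET https://example.com").hexdigest()
print(sys.getsizeof(digest))            # 113 on 64-bit CPython: a 64-char ASCII str
print(sys.getsizeof(threading.Lock()))  # per-lock size; varies by version/platform

# Using the figures above: (113 + 56) bytes * 6000 keys ≈ 1 MB
```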
Possible solutions would include:
- Adding a fixed TTL to each lock and running cleanup after every request (see the sketch after this list)
- Something like `lru-dict`, but ideally without adding another dependency
- A wrapper function + `functools.lru_cache` might be sufficient (also sketched below)
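A minimal sketch of the TTL idea, assuming one `threading.Lock` per cache key; the names here (`LockRegistry`, `get_lock`) are illustrative, not from the library:

```python
import threading
import time

class LockRegistry:
    """Per-key locks with a fixed TTL, swept after each lookup (sketch only)."""

    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._locks: dict[str, tuple[threading.Lock, float]] = {}
        self._registry_lock = threading.Lock()

    def get_lock(self, key: str) -> threading.Lock:
        now = time.monotonic()
        with self._registry_lock:
            entry = self._locks.get(key)
            lock = entry[0] if entry else threading.Lock()
            self._locks[key] = (lock, now)  # refresh the last-used timestamp
            self._cleanup(now)
            return lock

    def _cleanup(self, now: float) -> None:
        # Drop locks that haven't been requested within the TTL and aren't
        # currently held. A caller may still hold a reference to an evicted
        # lock, so the TTL should exceed the longest expected request.
        expired = [
            k for k, (lock, ts) in self._locks.items()
            if now - ts > self.ttl and not lock.locked()
        ]
        for k in expired:
            del self._locks[k]
```

Sweeping on every request is O(n) in the number of live locks, so in practice it might make sense to sweep only every N requests, or once the dict passes a size threshold.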
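And the `functools.lru_cache` approach, which bounds the number of live locks without a new dependency (again a sketch; `maxsize` is an arbitrary choice):

```python
import functools
import threading

# Memoize a lock factory: at most `maxsize` locks stay alive, and
# least-recently-used entries are evicted and garbage collected once no
# in-flight request still references them.
@functools.lru_cache(maxsize=1024)
def get_lock(cache_key: str) -> threading.Lock:
    return threading.Lock()
```

One caveat: on a cache miss, `lru_cache` calls the wrapped function outside its internal lock, so two threads racing on the same new key can briefly receive two distinct `Lock` objects; similarly, a key that is evicted and re-requested gets a fresh lock that doesn't serialize with one still held elsewhere. Both are probably acceptable if eviction only affects cold keys, but worth noting.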