Unbound memory usage #40
Comments
@kamiyaa what do you think?
Code relay task created for this: https://github.com/code-relay-io/joshuto/blob/master/README.md. So it should get done.
I looked into this a bit and am not convinced there is even a problem that needs solving here. Yes, memory usage does grow for a while during use, but I'm not convinced it is actually "unbounded". It grows as it sees and caches new directory structures. You can make it grow for a while by browsing new trees. Once you run out of file system trees that haven't been seen, just leaving it open, or even in use, doesn't grow the memory usage.

There is also a mechanism to notice outdated cache entries and deprecate them, causing a refresh of that part of the cache. This seems to protect against any possible runaway condition caching directories that have come and gone.

I did look into using something like transient-hashmap, but I don't think this makes sense for two reasons: time isn't really a metric the user would care about (if a directory has been seen, it should be cached until it is changed, not until some arbitrary timeout), and the current design expects the map to at least be populated down to the root, so some potentially older directories must be guarded from being pruned. Transient maps don't lend themselves to that sort of thing.

The only other approach I could think of is some garbage collector that periodically verifies that all files in the cache currently exist and prunes them if not, but this could be a very expensive operation (relatively speaking) and would only have a minor effect on the size of the hash map (it would catch deletions in places you are no longer actively browsing). I just can't make a case for this being worthwhile. What problem are we actually trying to solve here?
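The "notice outdated cache entries" mechanism described above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not joshuto's actual code: it assumes a cache entry stores the directory's modification time observed at load, and an entry is considered stale (and refreshed) when the current mtime differs.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hypothetical cache entry: the directory listing plus the mtime
// observed when the listing was loaded.
struct DirCacheEntry {
    modified: SystemTime,
    entries: Vec<String>,
}

// An entry is stale when the directory's current mtime no longer
// matches the mtime recorded at cache time; stale entries would be
// reloaded rather than served from the map.
fn is_stale(cached: &DirCacheEntry, current_mtime: SystemTime) -> bool {
    cached.modified != current_mtime
}
```

With this check, entries for directories that changed on disk get replaced on the next visit, so the cache tracks the live file system without any time-based expiry.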
I think we can safely close this one XD |
It looks like the application loads files/directories into a hashmap; however, this operation should be bounded to reduce memory usage when using `joshuto` for a long time.

Acceptance tests:
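One way the bounding requested above could look is a fixed-capacity cache that evicts the oldest entry once full. This is only a sketch under assumptions: `BoundedDirCache`, `insert`, and `contains` are illustrative names, not joshuto's actual API, and a real LRU would also reorder entries on access rather than evicting strictly first-in-first-out.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical bounded directory cache: at most `capacity` directory
// listings are kept; inserting beyond that evicts the oldest path.
struct BoundedDirCache {
    capacity: usize,
    order: VecDeque<String>,           // insertion order, oldest first
    map: HashMap<String, Vec<String>>, // path -> directory listing
}

impl BoundedDirCache {
    fn new(capacity: usize) -> Self {
        BoundedDirCache {
            capacity,
            order: VecDeque::new(),
            map: HashMap::new(),
        }
    }

    fn insert(&mut self, path: String, entries: Vec<String>) {
        if !self.map.contains_key(&path) {
            // Evict the oldest cached directory once the cap is reached.
            if self.order.len() == self.capacity {
                if let Some(oldest) = self.order.pop_front() {
                    self.map.remove(&oldest);
                }
            }
            self.order.push_back(path.clone());
        }
        self.map.insert(path, entries);
    }

    fn contains(&self, path: &str) -> bool {
        self.map.contains_key(path)
    }

    fn len(&self) -> usize {
        self.map.len()
    }
}
```

As the comment above notes, the trade-off is that eviction can discard entries (such as ancestors of the current directory) that the design expects to remain populated, so any real bound would need to pin those.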