
Unbound memory usage #40

Closed
1 task
H-Richard opened this issue Mar 16, 2021 · 4 comments
Labels: bug (Something isn't working), enhancement (New feature or request), help wanted (Extra attention is needed)

Comments


H-Richard commented Mar 16, 2021

It looks like the application loads files/directories into a hashmap, but this cache is never bounded. It should be, to limit memory usage when joshuto runs for a long time.

Acceptance tests:

  • the hashmap is continuously pruned
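One way to satisfy that acceptance test would be eviction by recency: cap the map at a fixed size and drop the least recently accessed entries when it overflows. This is only a sketch of the idea, not joshuto's actual code; the `DirCache` and `Entry` names, the `last_access` logical clock, and the `max_entries` cap are all hypothetical.

```rust
use std::collections::HashMap;

// Hypothetical cache entry: in practice it would also hold the cached
// directory contents; here only the recency tick matters.
struct Entry {
    last_access: u64, // logical clock tick of the last access
}

// Hypothetical bounded directory cache.
struct DirCache {
    map: HashMap<String, Entry>,
    max_entries: usize,
}

impl DirCache {
    // Evict the least recently accessed entries until the map fits
    // within `max_entries`.
    fn prune(&mut self) {
        if self.map.len() <= self.max_entries {
            return;
        }
        // Collect (path, last_access) pairs and sort oldest first.
        let mut paths: Vec<(String, u64)> = self
            .map
            .iter()
            .map(|(path, entry)| (path.clone(), entry.last_access))
            .collect();
        paths.sort_by_key(|&(_, tick)| tick);
        let excess = self.map.len() - self.max_entries;
        for (path, _) in paths.into_iter().take(excess) {
            self.map.remove(&path);
        }
    }
}
```

Calling `prune()` after each insertion would keep the map's size, and therefore memory usage, bounded regardless of how many directories are visited.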
H-Richard (Author)

@kamiyaa, what do you think?

kamiyaa added the bug (Something isn't working), enhancement (New feature or request), and help wanted (Extra attention is needed) labels on Mar 16, 2021

mmulet commented Mar 19, 2021

A Code Relay task has been created for this: https://github.com/code-relay-io/joshuto/blob/master/README.md, so it should get done.


alerque commented Mar 22, 2021

I looked into this a bit and am not convinced there is even a problem that needs solving here. Yes, memory usage does grow for a while during use, but I'm not convinced it is actually "unbounded". It grows as joshuto sees and caches new directory structures, so you can make it grow for a while by browsing new trees. Once you run out of file system trees that haven't been seen yet, simply leaving the application open, or even continuing to use it, no longer grows memory usage.

There is also a mechanism that notices outdated cache entries and invalidates them, causing a refresh of that part of the cache. This seems to protect against any possible runaway condition of caching directories that have come and gone.
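The staleness check described above can be sketched as a comparison of modification times: a cached directory is outdated when its mtime on disk is newer than the mtime recorded when it was cached. The `CachedDir` type and its field are illustrative assumptions, not joshuto's real types.

```rust
use std::time::SystemTime;

// Hypothetical cached entry: records the directory's modification time
// as observed when the entry was created.
struct CachedDir {
    mtime: SystemTime,
}

// Returns true when the on-disk directory has changed since caching,
// i.e. the entry should be invalidated and refreshed.
fn is_stale(entry: &CachedDir, current_mtime: SystemTime) -> bool {
    current_mtime > entry.mtime
}
```

In practice `current_mtime` would come from something like `std::fs::metadata(path)?.modified()?` at the moment the directory is revisited.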

I did look into using something like the transient-hashmap crate, but I don't think it makes sense here, for two reasons. First, time isn't really a metric the user cares about: if a directory has been seen, it should stay cached until it changes, not until some arbitrary timeout. Second, the current design expects the map to be populated at least up to the root, so some potentially old directories must be guarded from pruning. Transient maps don't lend themselves to that sort of thing.

The only other approach I can think of is a garbage collector that periodically verifies that every file in the cache still exists and prunes the entries that don't. But that could be a relatively expensive operation, and it would have only a minor effect on the size of the hash map (it would catch deletions in places you were no longer actively browsing). I just can't make a case for it being worthwhile.
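For concreteness, that garbage-collector idea amounts to a single sweep over the map. The sketch below is hypothetical: the existence check is injected as a predicate so the sweep can be shown without touching the filesystem, but in joshuto it would be something like `std::path::Path::new(path).exists()`, and that per-entry stat call is exactly the expense discussed above.

```rust
use std::collections::HashMap;

// Hypothetical sweep: drop every cache entry whose path no longer
// exists. `retain` visits each entry exactly once, so the cost is one
// existence check (a filesystem stat, in the real case) per entry.
fn sweep_cache<V, F>(cache: &mut HashMap<String, V>, path_exists: F)
where
    F: Fn(&str) -> bool,
{
    cache.retain(|path, _| path_exists(path));
}
```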

What problem are we actually trying to solve here?

kamiyaa (Owner) commented Mar 22, 2021

I think we can safely close this one XD
