Why?

Based on the recent benchmarks of storing MTs on disk vs. in memory, encoding time is still the dominant cost, which leaves room for (batched) disk operations.
Cache size is not small; if I remember correctly it is roughly 4 sector sizes.
The parents cache gives a considerable speed improvement; storing it on disk would let it be enabled by default without any major memory impact, and the disk penalty would be greatly compensated by the cache benefits.
The cache access pattern is extremely regular because by definition we encode sequentially (the only difference is the forward/reverse direction), so grouping entries in blocks (similar to MT generation) would minimize I/O; see the sketch below.
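To make the access pattern concrete, here is a minimal sketch of reading a sequentially consumed parents cache from disk in fixed-size blocks. This is not the actual rust-fil-proofs implementation; the entry layout, degree, and block size are assumptions for illustration only:

```rust
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};

// Assumed layout: for each node, DEGREE parent indices stored as
// little-endian u32 values, with node entries laid out consecutively.
const DEGREE: usize = 14;              // parents per node (assumed)
const ENTRY_SIZE: usize = DEGREE * 4;  // bytes per node entry
const BLOCK_ENTRIES: usize = 4096;     // entries fetched per disk read

struct OnDiskParentsCache {
    file: File,
    block: Vec<u8>,     // currently loaded block of entries
    block_start: usize, // index of the first entry held in `block`
}

impl OnDiskParentsCache {
    fn open(path: &str) -> io::Result<Self> {
        Ok(Self {
            file: File::open(path)?,
            block: vec![0u8; BLOCK_ENTRIES * ENTRY_SIZE],
            block_start: usize::MAX, // nothing loaded yet
        })
    }

    /// Returns the parents of `node`, hitting the disk only when the
    /// sequential walk crosses a block boundary.
    fn parents(&mut self, node: usize) -> io::Result<[u32; DEGREE]> {
        let block_start = (node / BLOCK_ENTRIES) * BLOCK_ENTRIES;
        if self.block_start != block_start {
            self.file
                .seek(SeekFrom::Start((block_start * ENTRY_SIZE) as u64))?;
            // A real implementation would handle a short final block.
            self.file.read_exact(&mut self.block)?;
            self.block_start = block_start;
        }
        let start = (node - self.block_start) * ENTRY_SIZE;
        let mut parents = [0u32; DEGREE];
        for (i, p) in parents.iter_mut().enumerate() {
            let b = &self.block[start + i * 4..start + (i + 1) * 4];
            *p = u32::from_le_bytes([b[0], b[1], b[2], b[3]]);
        }
        Ok(parents)
    }
}
```

Because consecutive nodes fall in the same block, a sequential encode (in either direction) touches the disk only once every BLOCK_ENTRIES nodes, so the per-node disk cost stays well below the encoding cost.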
How?
There could be an option (feature) to still keep it in RAM to optimize for speed in extreme cases if needed.
We can leverage the new DiskStore to reduce implementation time.
The disk-trees feature (soon to be made the default) and whatever feature we end up using here should eventually converge into a more general, simple profile option that the user is aware of, conveying an idea along the lines of "optimize for memory at the cost of speed and disk usage"; see the sketch after this list.
If this turns out to actually be useful and the Pedersen cache (#697) also exhibits regular access patterns, something similar could be done there as well.
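As a rough illustration of that convergence, here is a hedged sketch of a single user-facing profile setting that selects RAM or disk backing for the parents cache (names are hypothetical, not the existing rust-fil-proofs API):

```rust
use std::path::PathBuf;

/// The single knob the user should be aware of: trade speed for memory.
pub enum Profile {
    OptimizeForSpeed,  // parents cache (and trees) kept in RAM
    OptimizeForMemory, // parents cache (and trees) backed by disk
}

/// Where the parents cache lives for a given profile.
pub enum ParentsCacheBacking {
    Ram,
    Disk(PathBuf),
}

pub fn parents_cache_backing(profile: &Profile, cache_path: PathBuf) -> ParentsCacheBacking {
    match profile {
        Profile::OptimizeForSpeed => ParentsCacheBacking::Ram,
        Profile::OptimizeForMemory => ParentsCacheBacking::Disk(cache_path),
    }
}
```

The same switch could later gate the disk-trees behaviour as well, so the user chooses one profile instead of juggling individual features.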
Yes, they are pretty much the same; sorry I missed that issue.
Please coordinate with @DrPeterVanNostrand if you think that one should be implemented instead; the idea was to reduce the memory consumption of the parents to pave the way for #827.