zcash_client_backend: Use knowledge of inserted treestates to reduce time to spendability #982
Do we even need to modify the scan ranges? This feels like it is trying to shoe-horn additional semantics or signals into them. The need to obtain a tree state at the anchor height is associated with a change to the chain tip height. We could therefore instead modify …
We'll have to keep track somewhere of the fact that we've been given the tree state, and combine the information about that latest-provided tree state with information about scan ranges. We'll need this whichever way we do it, so I guess that just keeping …
Yeah, when a treestate at a specific height is provided, we could look at the height, and if it intersects a …
A couple more insights: this change will mean that we can no longer use whether or not a subtree is completely scanned as a condition for determining whether a note is spendable. Instead, we will likely want to store an extra piece of shard metadata: the height within the shard above which it's possible to build a witness.
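A minimal sketch of what that extra piece of shard metadata could look like; the type and field names here (`ShardMetadata`, `witnessable_above`) are hypothetical illustrations, not existing zcash_client_backend items:

```rust
/// Hypothetical per-shard metadata; the names are illustrative only.
struct ShardMetadata {
    /// Block height within the shard above which witnesses can be built:
    /// a frontier at this height has been inserted into the note commitment
    /// tree, so notes mined in later (scanned) blocks are witnessable even
    /// though earlier parts of the shard remain unscanned.
    witnessable_above: Option<u32>,
}

/// Spendability check driven by the shard metadata, rather than by whether
/// the whole subtree has been completely scanned.
fn note_is_witnessable(shard: &ShardMetadata, note_height: u32) -> bool {
    shard
        .witnessable_above
        .map(|h| note_height > h)
        .unwrap_or(false)
}
```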
Is scanning work at the chain tip actually a bottleneck? I would have thought that it's desirable to always scan the chain tip when we see new blocks, and that it takes very little time in the steady state. (If the database reopening on each FFI call is why it's taking a significant time, then that's what we should address, rather than adding complexity elsewhere.)
In my experience with Zashi, it can take more than a minute, sometimes even several minutes, to scan enough blocks to fill up the 2^16 notes in the tip shard; 2^16 notes is still ~0.1% of the chain, and in sparse block periods that can be weeks' worth of blocks.
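(For scale, using the figures above: a shard holds 2^16 = 65,536 note commitments, so if that is roughly 0.1% of the chain, the tree as a whole holds on the order of 65 million commitments.)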
I want to generalize this task to "the wallet downloads the tree state corresponding to the previous block for every one of the user's notes that is discovered, and inserts this tree state into the note commitment tree". This doesn't reveal any information to the lightwalletd server that fetching the memos doesn't already reveal. This would make it so that blocks prior to the discovered note could be downgraded to …
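A rough sketch of that flow, under stated assumptions: `fetch_tree_state` stands in for lightwalletd's `GetTreeState` RPC, and the `WalletTreeStore` trait methods stand in for to-be-defined wallet backend methods; none of these signatures are taken from the actual crates:

```rust
/// Hypothetical stand-ins for lightwalletd / wallet-backend APIs.
struct TreeState { /* serialized frontier as of some block height */ }

trait LightClient {
    /// Stand-in for lightwalletd's GetTreeState RPC.
    fn fetch_tree_state(&self, height: u32) -> Result<TreeState, String>;
}

trait WalletTreeStore {
    /// Insert the frontier from a tree state directly into the note
    /// commitment tree (stand-in for a to-be-defined method).
    fn insert_frontier(&mut self, state: &TreeState) -> Result<(), String>;
    /// Downgrade the priority of scan ranges ending at or below `height`,
    /// now that a frontier covers them (also a stand-in).
    fn deprioritize_ranges_below(&mut self, height: u32);
}

/// When one of the wallet's notes is discovered in `block_height`, fetch the
/// tree state of the previous block and insert its frontier, so that blocks
/// before the note are no longer needed for witness construction.
fn on_note_discovered<C: LightClient, W: WalletTreeStore>(
    client: &C,
    wallet: &mut W,
    block_height: u32,
) -> Result<(), String> {
    let prior = block_height.saturating_sub(1);
    let state = client.fetch_tree_state(prior)?;
    wallet.insert_frontier(&state)?;
    wallet.deprioritize_ranges_below(prior);
    Ok(())
}
```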
After #1262 we have the treestates downloaded and inserted for every block range, so we just need to ensure that the scan queue logic can take advantage of them by deprioritizing the range prior to the first tree state for one of our notes in a block.
A relevant change for this issue is to alter the …
In order to minimize the amount of scanning work required at the chain tip, we should implement the following:

- Add a new scan range priority above `ChainTip`, call it `AnchorRange` or something of the sort. The semantics of an `AnchorRange` scanning range are that when a wallet encounters such a range, it immediately downloads the treestate corresponding to the last block prior to the start of the range and inserts that frontier directly into the note commitment tree (using a to-be-defined `WalletCommitmentTrees` method).
- An `AnchorRange` scanning range is at most `PRUNING_DEPTH` blocks long and has its end equal to `chain_tip + 1`.
- An `AnchorRange` range with a starting height < `stable_height` has its priority reduced to `Historic`.
- If the wallet, as of `stable_height`, has no unscanned ranges below `stable_height`, no `AnchorRange` must exist, because in this case it's unnecessary to download or insert any additional tree state.

It might be possible to repurpose the `ChainTip` priority to have these semantics, but I think it's better to be explicit about the circumstance where the tree state should be downloaded, and we should continue to support wallets that don't want to download tree states; such wallets will treat `AnchorRange` ranges in the same fashion as they currently treat `ChainTip` ranges.
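A compact sketch of how those rules could be expressed; the simplified `ScanPriority` enum (the real one has more variants), the `PRUNING_DEPTH` value, and the helper functions are illustrative assumptions rather than the actual zcash_client_backend definitions:

```rust
/// Illustrative scan-range priorities; `AnchorRange` is the proposed new
/// variant ordered above `ChainTip`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum ScanPriority {
    Historic,
    ChainTip,
    AnchorRange,
}

struct ScanRange {
    /// Half-open block range [start, end).
    start: u32,
    end: u32,
    priority: ScanPriority,
}

/// Assumed pruning depth, for illustration only.
const PRUNING_DEPTH: u32 = 100;

/// The AnchorRange-priority range for the current chain tip: at most
/// PRUNING_DEPTH blocks long, with its end equal to chain_tip + 1.
fn anchor_range(chain_tip: u32) -> ScanRange {
    let end = chain_tip + 1;
    ScanRange {
        start: end.saturating_sub(PRUNING_DEPTH),
        end,
        priority: ScanPriority::AnchorRange,
    }
}

/// Any AnchorRange range whose starting height is below the stable height
/// is downgraded to Historic, per the rules above.
fn downgrade_stale_anchor_ranges(queue: &mut [ScanRange], stable_height: u32) {
    for r in queue.iter_mut() {
        if r.priority == ScanPriority::AnchorRange && r.start < stable_height {
            r.priority = ScanPriority::Historic;
        }
    }
}
```

A wallet that doesn't want to download tree states can simply treat `AnchorRange` the same way it treats `ChainTip`, which is one reason to keep the two priorities distinct rather than overloading `ChainTip`.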