Description
As reported on Slack by user "chenjianye", there is a bug related to past PRs that is still alive in the code today. The relevant PRs are:
filecoin-project/merkletree#88
and
#1165
PR filecoin-project/merkletree#88 was not merged, but would have been a possible solution to this issue, and while #1165 was merged, it missed the buggy instance in question:
This "missed" instance (i.e. a failure in grep by the author, @cryptonemo ) allows
post
to over build partial trees during cached proof generation by only using a subset of the cached data, rather than all of it. The current code believes that the cached data had discarded 3 rows, rather than 2 rows, meaning the logic rebuilds a single row extra than it needs to due to the 'default' discrepancy. This is because the default value is chosen not by the re-worked default logic inrust-fil-proofs
, but by the now outdated default logic inmerkle_light
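To make the cost concrete, here is a minimal sketch of how the 2-vs-3 discrepancy turns into extra work. This is hypothetical illustration code, not the actual rust-fil-proofs implementation, and it assumes a binary tree where the discarded rows sit directly above the base layer:

```rust
/// Number of nodes that must be rebuilt for a partial tree when we assume
/// `rows_to_discard` rows directly above the base layer are missing from the
/// cache (binary tree assumed; illustrative only).
fn nodes_to_rebuild(leafs: usize, rows_to_discard: usize) -> usize {
    (1..=rows_to_discard).map(|row| leafs >> row).sum()
}

fn main() {
    let leafs = 1 << 30; // illustrative tree width
    let actually_discarded = 2; // rows discarded when the cache was written
    let assumed_discarded = 3; // rows the PoSt code believes were discarded

    let needed = nodes_to_rebuild(leafs, actually_discarded);
    let done = nodes_to_rebuild(leafs, assumed_discarded);
    println!(
        "rebuilt {} nodes, only {} were needed ({} extra)",
        done,
        needed,
        done - needed
    );
}
```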
While there are many checks that will not allow a level cache store to be instantiated with an incorrect value for rows_to_discard, in this case the tree is already instantiated and we are later passing an incorrect value specifying what it needs to use. Technically this is correct behaviour and it's working as intended; however, in this case it is not at all desired or intended.
Acceptance criteria
The preference is to no longer overbuild partial trees from cached tree data during proof generation, nor rely on defaults from merkle_light.
Risks + pitfalls
There are some complications regarding tests, written up below.
Where to begin
Simply changing the default in merkle_light as PR filecoin-project/merkletree#88 does would solve this particular issue, and it tests well locally. However, a different proposal is to use the re-worked local default for consistency in the proofs code, bypassing merkle_light's default altogether. While this also solves the issue, it unfortunately causes some tests to fail, where proofs are attempted to be built with a setting that is too large for test-sized trees. In particular, merkle_light has checks that return failures when the number of rows in the tree is smaller than (or equal to) the value of rows_to_discard. Note that this only happens during testing, where trees are small; the issue does not affect production tree widths.
The combination of correcting the default usage in rust-fil-proofs with the additional row checks in merkle_light is a proposed solution to this. Previously this wasn't needed because the tests passed, but they apparently relied on merkle_light's default value, which was never too large for the tree.
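One possible shape for the proofs-side part of that fix is sketched below. The helper name and the cap value are assumptions for illustration, not the actual rust-fil-proofs or merkle_light API: compute rows_to_discard locally and cap it by the tree height, so the same value is used for caching and for later proof generation, and small test trees never receive a value they cannot satisfy.

```rust
/// Hypothetical proofs-side default: a fixed cap (assumed here to be 2, the
/// value the proofs code expects), limited so that fewer rows are discarded
/// than the tree actually has.
fn proofs_rows_to_discard(tree_rows: usize, cap: usize) -> usize {
    cap.min(tree_rows.saturating_sub(1))
}

fn main() {
    // Production-sized tree: the cap wins, matching the cached layout.
    assert_eq!(proofs_rows_to_discard(31, 2), 2);
    // Tiny test tree with only 2 rows: the tree height wins, so the later
    // row checks in the tree library can still pass.
    assert_eq!(proofs_rows_to_discard(2, 2), 1);
}
```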
tl;dr: Proofs code during PoSt can pass a rows_to_discard value that is larger than the proofs-enforced default value, meaning that more work is done to produce the partial merkle trees for proof generation. This can be fixed by using the proofs default, but since this breaks some tests and some checks in merkle_light for "small trees", we should also enforce via merkle_light that fewer rows are discarded than the tree has.
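For the merkle_light side of that enforcement, the guard could look roughly like the following sketch (hypothetical code, not the crate's actual API): reject any request that would discard as many rows as the tree has.

```rust
/// Error for an over-large `rows_to_discard` request (illustrative only).
#[derive(Debug)]
struct TooManyRowsDiscarded {
    tree_rows: usize,
    rows_to_discard: usize,
}

/// Guard sketched from the tl;dr: fewer rows must be discarded than the tree has.
fn check_rows_to_discard(
    tree_rows: usize,
    rows_to_discard: usize,
) -> Result<(), TooManyRowsDiscarded> {
    if rows_to_discard >= tree_rows {
        return Err(TooManyRowsDiscarded {
            tree_rows,
            rows_to_discard,
        });
    }
    Ok(())
}

fn main() {
    // A small test-sized tree with 3 rows can discard at most 2 of them.
    assert!(check_rows_to_discard(3, 2).is_ok());
    assert!(check_rows_to_discard(3, 3).is_err());
}
```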