Merkle Mountain Range for efficient Grandpa ancestry proofs #263
This will intersect heavily with the changes we are planning for the Ethereum bridge. We are thinking of having validators do BLS signatures outside of GRANDPA on finalised blocks. This would avoid the need for simplifying GRANDPA justifications, but not the need to work backwards from one thing signed by the validators. So maybe the validators should be signing the MMR roots with BLS. For Polkadot and Kusama, we also want a cheap way of proving that a parachain block happened directly, which might mean using more than relay chain block hashes in something like this. That would really help bridges on parachains.
Extensions on top of this that we would like to see:
Rough pseudo-code:

```rust
trait MMR {
    type LeafData: Encode;
    const IS_ENABLED: bool;

    fn mmr_leaf_data_for_current_block() -> Self::LeafData;
}

struct ParachainMmr;

impl MMR for ParachainMmr {
    type LeafData = (ParachainId, ParachainHeadData);
    const IS_ENABLED: bool = true;

    fn mmr_leaf_data_for_current_block() -> Self::LeafData { todo!() }
}

impl frame_system::Trait for PolkadotRuntime {
    type MMR = ParachainMmr;
}
```

Ideally the hash function should be configurable, to make MMR proofs easy to verify on chains with limited crypto support.
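As a sketch of that configurability, one could thread a hash type through the trait via an associated type. Everything below — the `Hasher` trait shape, the toy `XorHasher`, and the leaf-data stand-ins — is a hypothetical illustration, not the actual FRAME API:

```rust
// Illustrative only: a pluggable hasher abstraction for the MMR trait.
trait Hasher {
    type Output;
    fn hash(data: &[u8]) -> Self::Output;
}

// Toy stand-in for a real hash (e.g. Keccak for Ethereum-friendly proofs).
// NOT cryptographic; it just XORs the bytes together.
struct XorHasher;
impl Hasher for XorHasher {
    type Output = u8;
    fn hash(data: &[u8]) -> u8 {
        data.iter().fold(0, |acc, b| acc ^ b)
    }
}

trait Mmr {
    // The hash function becomes a runtime-configurable choice.
    type Hashing: Hasher;
    type LeafData;
    const IS_ENABLED: bool;
    fn mmr_leaf_data_for_current_block() -> Self::LeafData;
}

struct ParachainMmr;
impl Mmr for ParachainMmr {
    type Hashing = XorHasher;
    type LeafData = (u32, Vec<u8>); // stand-ins for (ParachainId, head data)
    const IS_ENABLED: bool = true;
    fn mmr_leaf_data_for_current_block() -> Self::LeafData {
        (100, Vec::new())
    }
}
```

A chain targeting Ethereum verification would plug in a Keccak-based hasher here instead of the toy one.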
The crypto for the signature is not yet decided; ideally it would be something that is cheap to verify on Ethereum and supports signature aggregation, perhaps BLS over some eth-supported curve. An open question remains how frequently such commitments should be prepared, and how validators should agree on a single block to collect the signatures on. One option would be to take the block: `block_to_sign_on = last_block_with_signed_mmr_root + NextPowerOfTwo((last_finalized_block - last_block_with_signed_mmr_root) / 2)`
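The selection rule above can be sketched as follows (a minimal illustration, not actual BEEFY code; the function and parameter names are assumptions):

```rust
// Picks a block roughly halfway between the last block with a signed MMR root
// and the latest finalized block, rounded up to a power-of-two offset so that
// all validators independently converge on the same block number.
fn block_to_sign_on(last_block_with_signed_mmr_root: u64, last_finalized_block: u64) -> u64 {
    let half_gap = (last_finalized_block - last_block_with_signed_mmr_root) / 2;
    // `next_power_of_two` rounds up to the nearest power of two (0 maps to 1).
    last_block_with_signed_mmr_root + half_gap.next_power_of_two()
}
```

For example, with the last signed root at block 8 and finalization at block 21, the rule yields block 16; it would keep yielding 16 until finality advances far enough to push the rounded offset higher.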
Potentially useful:
Still todo in substrate repo:
Hi @tomusdrw, I have noticed the Merkle Mountain Range pallet (paritytech/substrate#7312) has been accepted into Substrate. Based upon this, is it possible to integrate MMR to
@AsceticBear hey! Yes, we have a plan to create a different Sub<>Sub bridge pallet and actually use MMR + BEEFY instead of GRANDPA.
@tomusdrw We have a plan to do this MMR optimization based on the current sub<>sub bridge in this repo, like what you said above. However, a problem we encountered is: how do we know the block with the flag
@AsceticBear sorry for the late answer. Please take a look at the BEEFY protocol: currently it's using secp256k1 signatures for Ethereum compatibility, but the plan is to have BEEFY authorities create an aggregated signature (e.g. BLS) for even faster verification. If I understood your question correctly, with GRANDPA + MMR your client might need to have the session length hardcoded, so that with every set transition it requires the header with
Closing, since:
Imagine a chain like this:
Why?
Light client implementations have a hard time deciding whether headers 6-9 should be imported if header 10 (`0xA`) has not yet been seen.

Our current approach (Solidity for PoA, and most likely the upcoming Substrate light client) is to simply import headers as we see them, verifying only their parent hash to make sure they extend the right (finalized) fork, and to mark them finalized at the very end, when we see justification data for block number 10.
This approach has some drawbacks:
For many chains, where transaction costs (both computation and storage) are high, this approach might be suboptimal.
How?
The idea is to be able to import header number 10 directly, without requiring 6-9 to be imported first (or at all). While we could in theory simply accept header 10 if it's signed correctly by the current validator set, we might run into two issues:
In FRAME-based Substrate runtimes, the `frame_system` pallet is actually storing block hashes of recent blocks, so it might be possible to use this data to prove ancestry (you simply present a storage proof at block 10). However, this data is pruned (`MaxBlockHashes`), and while in theory, if finality data was on-chain as well, we could extend this period, there are more efficient ways of doing it, namely paritytech/substrate#2053, or even better, Merkle Mountain Ranges: paritytech/substrate#3722
Details
We should extend `frame_system` to store MMR peaks and use the Indexing API to write all the nodes to the Offchain Database. Every node with indexing enabled will then be able to construct MMR ancestry proofs that can be efficiently verified against the on-chain data.

MMR offchain data can also be re-constructed from the header chain (perhaps within an Offchain Worker), but reconstructing the structure on demand is not feasible (it's an `O(n)` process).

So to circle back to our example, the light client could import header 10 directly, but would require extra proof data:
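To make the peaks idea concrete, here is a self-contained toy sketch of the accumulator: appending a leaf merges equal-height peaks like carrying in binary addition, so the chain only ever stores `popcount(n)` peaks while a full node keeps every interior node (here a `Vec` standing in for the offchain database). The hash is a deliberately fake placeholder; nothing below is substrate code:

```rust
// Toy hash over two u64s; NOT cryptographic, illustration only.
fn hash2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b).wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

struct SimpleMmr {
    // All nodes, as a full node would persist them offchain via the Indexing API.
    nodes: Vec<u64>,
    // (height, hash) of each peak — the only data that would live on-chain.
    peaks: Vec<(u32, u64)>,
}

impl SimpleMmr {
    fn new() -> Self {
        SimpleMmr { nodes: Vec::new(), peaks: Vec::new() }
    }

    fn append(&mut self, leaf: u64) {
        self.nodes.push(leaf);
        let mut height = 0u32;
        let mut hash = leaf;
        // Merge equal-height peaks, like carrying in binary addition.
        while let Some(&(h, p)) = self.peaks.last() {
            if h != height {
                break;
            }
            self.peaks.pop();
            hash = hash2(p, hash);
            self.nodes.push(hash); // interior node, also kept offchain
            height += 1;
        }
        self.peaks.push((height, hash));
    }

    fn root(&self) -> u64 {
        // "Bag" the peaks into a single commitment the light client can check.
        self.peaks.iter().rev().fold(0, |acc, &(_, p)| hash2(acc, p))
    }
}
```

After `n` appends the structure holds `2n - popcount(n)` nodes in total, and a proof for any old header hashes a logarithmic path up to one peak plus the remaining bagged peaks — exactly the extra proof data the light client would present alongside header 10.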
Also, any application built on top of the header chain could verify that, say, header 7 is part of the chain by providing exactly the same kind of proof.
This issue needs to be implemented in Substrate repo.