Finality Verifier Pallet #628
LGTM! There are some concerns with the long_proof/chunk based API, but we shouldn't be focusing on it initially. I really hope that something like #263 will resolve that kind of issue for us. I can imagine that headers could be retroactively imported if that's desired, i.e. we have the finality proof + ancestry check via MMR, and only then do we allow importing the missing headers before finality.
Reposting @svyatonik's comment from #629 here since this is a better place for this discussion.
Now to answer some of your questions.
This is correct, those calls would be removed. The only way to write to the pallet would be through the Finality Verifier.
This is a good point. Initially I think it makes sense to have two separate pallets. This allows us to explore a new approach without removing any functionality from the system as a whole. If we do deem this approach to be successful we can look into refactoring the pallets.
Yes, long term it should be a single pallet; right now let's try to minimize the amount of changes, especially given the looming audit.
For the 2-phase fallback process, it's possible to jump 250 headers at once in ancestry even without using MMR (see #263). We can use...
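For illustration, here is a minimal sketch of such a jump check, with the storage-proof machinery mocked out. `read_proven_block_hash` just decodes the claimed value; a real implementation would verify a Merkle trie proof against the new header's state root. All types and names here are hypothetical stand-ins, not the pallet's actual API.

```rust
// Hypothetical sketch of the "jump" check: a storage proof against the new
// header's state shows the value of frame_system::block_hash at our best
// finalized height, and we compare it with the hash we already trust.

struct Header {
    number: u64,
    hash: u64,       // stand-in for a real block hash
    state_root: u64, // unused by the mock, kept for shape
}

/// Mock: treats `proof` as the little-endian encoding of the proven value.
/// A real implementation verifies a trie proof against `state_root`.
fn read_proven_block_hash(_new_header: &Header, proof: &[u8], _at_number: u64) -> Option<u64> {
    if proof.len() != 8 {
        return None;
    }
    let mut bytes = [0u8; 8];
    bytes.copy_from_slice(proof);
    Some(u64::from_le_bytes(bytes))
}

/// Accepts the new header only if its chain recorded our best finalized
/// hash at the right height, i.e. it descends from `best`.
fn check_sparse_ancestry(best: &Header, new_header: &Header, proof: &[u8]) -> bool {
    match read_proven_block_hash(new_header, proof, best.number) {
        Some(hash) => hash == best.hash,
        None => false,
    }
}
```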
So I'm trying to make sure I understand you correctly here. What you're saying is that... Is this the right idea?
Yup, it's exactly that.
One of the problems we've been running into with the current design is handling long ancestry proofs.

The first way we thought of handling this was with the multi-stage protocol proposed in the issue description. The next solution was to use sparse ancestry proofs. This can be done either with MMRs or storage proofs.

Consider the following scenario: the source chain has gone through several authority set changes. A set of malicious set 1 authorities could collude and finalize a fork of the chain. Since the authorities are not bonded on the bridge, and there's no way to report this misbehaviour, the bridge could be tricked into accepting that fork.

To mitigate this we will limit the length of the ancestry proofs we accept. We do need to make some changes to the pallet to support this.
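As a rough illustration of that mitigation, here is a minimal sketch of a bounded header-chain ancestry check. The types are simplified stand-ins, and `MAX_PROOF_LEN` is a hypothetical limit, not a value taken from the actual pallet.

```rust
// Minimal sketch of a bounded header-chain ancestry check; all names and
// the length limit are illustrative, not the real pallet's.

#[derive(Clone, PartialEq, Debug)]
struct Header {
    number: u64,
    hash: u64,        // stand-in for a real block hash
    parent_hash: u64,
}

/// Maximum number of headers a single ancestry proof may contain, so the
/// call's weight stays bounded.
const MAX_PROOF_LEN: usize = 256;

/// Checks that `chain` links our current best finalized header to the new
/// header being imported: each entry must be the direct child of the one
/// before it.
fn verify_ancestry(current_best: &Header, chain: &[Header]) -> bool {
    if chain.is_empty() || chain.len() > MAX_PROOF_LEN {
        return false;
    }
    let mut prev = current_best;
    for header in chain {
        if header.parent_hash != prev.hash || header.number != prev.number + 1 {
            return false;
        }
        prev = header;
    }
    true
}
```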
The final API should look like this:

```rust
enum AncestryProof {
    /// Full header-chain between the current best finalized header (`current_best`) and the new one.
    Chain(Vec<Header>),
    /// A runtime storage proof of `frame_system::block_hash(current_best.number)`.
    ///
    /// The storage proof is verified against the new header that is being imported.
    /// If the storage value matches the hash of our best finalized header, we can be sure
    /// that we are extending the right chain.
    StorageProof(Vec<u8>),
    /// Merkle Mountain Range proof.
    ///
    /// Reserved for future use when MMR comes along.
    MMRProof(Vec<u8>),
}

trait GrandpaLightClient {
    fn append_header(
        header: Header,
        justification: GrandpaJustification,
        ancestry_proof: AncestryProof,
    );
}
```

I've depicted this as an enum just to make it easier to understand and to show that the API stays the same regardless of the proof variant being used, but the pallet will most likely only support one variant at a time (this simplifies weights significantly). The pallet logic should basically:
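A rough sketch of how that logic could look, using simplified placeholder types (`u64` hashes, a unit `GrandpaJustification`, and only the `Chain` variant actually implemented). `verify_justification` is stubbed out here; the real check validates GRANDPA pre-commits against the current authority set.

```rust
// Sketch of the verify-first, write-later flow: check finality, then
// ancestry, and only return headers for storage once both pass.

#[derive(Clone)]
struct Header {
    number: u64,
    hash: u64,
    parent_hash: u64,
}

struct GrandpaJustification; // placeholder

enum AncestryProof {
    Chain(Vec<Header>),
    StorageProof(Vec<u8>),
    MmrProof(Vec<u8>),
}

fn verify_justification(_header: &Header, _justification: &GrandpaJustification) -> bool {
    true // stubbed out for the sketch
}

/// Returns the headers that may now be written to the base pallet, or an
/// error if either the finality or the ancestry check fails.
fn append_header(
    best: &Header,
    header: Header,
    justification: GrandpaJustification,
    ancestry_proof: AncestryProof,
) -> Result<Vec<Header>, &'static str> {
    if !verify_justification(&header, &justification) {
        return Err("bad justification");
    }
    match ancestry_proof {
        AncestryProof::Chain(chain) => {
            // The chain must start right after our best finalized header
            // and end at the header carrying the justification.
            let mut prev_hash = best.hash;
            for h in &chain {
                if h.parent_hash != prev_hash {
                    return Err("broken ancestry");
                }
                prev_hash = h.hash;
            }
            if prev_hash != header.hash {
                return Err("proof does not end at the new header");
            }
            Ok(chain)
        }
        // Only the full-chain variant is implemented in this sketch; the
        // pallet would likely support exactly one variant anyway.
        AncestryProof::StorageProof(_) | AncestryProof::MmrProof(_) => {
            Err("unsupported proof variant")
        }
    }
}
```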
Since GRANDPA is guaranteed to always create a justification when the authority set changes, a justified header is always available at set boundaries. Later on, we would additionally verify that the validator-set-id is incrementing once Andre implements those changes, but note that we can easily implement everything already and it will work just fine.
Closing, since it's already implemented.
The current Substrate header sync pallet is susceptible to several different attacks. These attacks involve being able to write to storage indefinitely (like with the case of #454), or requiring potentially unbounded iteration in order to make progress (like in the case of #367).
To work around these issues the following is being proposed. Instead of accepting headers and finality proofs directly into the existing pallet, we should instead go through a "middleware" pallet. The role of this new pallet will be to verify finality and ancestry proofs for incoming headers and, if successful, write the headers to the base Substrate pallet.
This essentially flips the flow of the current pallet, in which we first import headers and later verify if they're actually part of the canonical chain (through the finality proof verification). By verifying finality first and writing headers later we can ensure that we only write "known good" headers to the base pallet.
Additionally, by keeping the base pallet as is (more or less), we are able to ensure that applications building on the bridge can continue to have access to all headers from the canonical chain.
The proposed API for the pallet is as follows:
If this call is successful, the headers from the ancestry proof would be written to the base pallet.
Optionally, if the proofs don't fit into a single block, we can also have a "chunk" based API to allow relayers to submit proofs across several blocks:
After a call to `signal_long_proof()` you'd be able to call `submit_chunk()` as many times as required to complete the proof. We may want to restrict others from submitting proofs during this period, and add some timeouts in case we don't receive a chunk, or don't complete a proof, within a certain amount of time. In the case where we time out, or the proof is invalid, the submitter will lose their `deposit`.
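A minimal sketch of how this chunked flow could work, using a simple block-number deadline. The names `signal_long_proof` and `submit_chunk` come from the proposal above; every type, constant, and rule in the sketch (single in-flight proof, per-chunk deadline, deposit forfeited on timeout) is illustrative, not a finalized design.

```rust
// Hypothetical chunked-proof state machine: one long proof in flight at a
// time, each chunk must arrive before a deadline, and missing it forfeits
// the submitter's deposit.

struct PendingProof {
    submitter: u64, // account id stand-in
    deposit: u64,   // forfeited if the proof times out
    chunks: Vec<Vec<u8>>,
    expected_chunks: usize,
    deadline: u64,  // block number by which the next chunk must arrive
}

struct Pallet {
    pending: Option<PendingProof>,
    now: u64, // current block number, advanced externally
}

/// Blocks allowed between consecutive chunks (illustrative value).
const CHUNK_TIMEOUT: u64 = 10;

impl Pallet {
    fn signal_long_proof(&mut self, submitter: u64, deposit: u64, expected_chunks: usize) -> Result<(), &'static str> {
        // Only one long proof may be in flight; others are locked out.
        if self.pending.is_some() {
            return Err("a long proof is already in progress");
        }
        self.pending = Some(PendingProof {
            submitter,
            deposit,
            chunks: Vec::new(),
            expected_chunks,
            deadline: self.now + CHUNK_TIMEOUT,
        });
        Ok(())
    }

    /// Returns Ok(true) once all chunks have arrived and verification can run.
    fn submit_chunk(&mut self, submitter: u64, chunk: Vec<u8>) -> Result<bool, &'static str> {
        let mut pending = self.pending.take().ok_or("no long proof in progress")?;
        if pending.submitter != submitter {
            self.pending = Some(pending); // put it back untouched
            return Err("only the original submitter may add chunks");
        }
        if self.now > pending.deadline {
            // Timed out: drop the pending proof; the deposit is forfeited.
            return Err("timed out; deposit is lost");
        }
        pending.chunks.push(chunk);
        pending.deadline = self.now + CHUNK_TIMEOUT;
        let complete = pending.chunks.len() == pending.expected_chunks;
        self.pending = Some(pending);
        Ok(complete)
    }
}
```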