Grandpa Super Light Client in Solidity #323
Comments
From Alistair:
hmm based on the above, sounds like we will need to have a 2-step process for updating the ethereum on-chain light client each time we have a new "thing" for it to update from instead of a single-transaction fully fire-and-forget process, ie:
this also means 100+ block confirmation times, ie, up to an hour? is this correct?
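For illustration, here is a minimal Solidity sketch of the kind of two-step ("submit, then complete") flow described in the quote above. The contract name, fields and delay constant are all hypothetical, not an agreed design:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Hypothetical skeleton of the two-step update flow. All names, fields and
// the delay constant are illustrative, not part of any agreed design.
contract TwoStepLightClientSketch {
    struct Candidate {
        bytes32 commitment;      // e.g. an MMR root claimed to be finalised
        uint256 signerBitfield;  // relayer-claimed set of signing authorities
        uint256 submittedAt;     // Ethereum block number of step 1
        address relayer;
    }

    uint256 public constant COMPLETION_DELAY = 100; // blocks; the thread suggests 10 may suffice

    mapping(bytes32 => Candidate) public candidates;
    bytes32 public latestCommitment;

    // Step 1: a relayer posts the candidate commitment and the bitfield of
    // authorities whose signatures it claims to hold.
    function submitCandidate(bytes32 commitment, uint256 signerBitfield) external {
        candidates[commitment] = Candidate(commitment, signerBitfield, block.number, msg.sender);
    }

    // Step 2: after the delay, a relayer reveals signatures for a randomly
    // sampled subset of the claimed signers (verification elided here).
    function completeCandidate(bytes32 commitment, bytes[] calldata sampledSignatures) external {
        Candidate storage c = candidates[commitment];
        require(c.submittedAt != 0, "unknown candidate");
        require(block.number >= c.submittedAt + COMPLETION_DELAY, "too early");
        // ...derive the sampled authority indices from a blockhash-based seed,
        // check sampledSignatures against them, then accept the candidate...
        latestCommitment = commitment;
    }
}
```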
That's the way I understand this as well; however, I feel these two things can be taken care of by two different parties:
Also, 100 blocks is ~27 minutes with a 16s block time. When I was writing this I realised there is one thing I don't understand: relayers will have full control over the bitvec of validators whose signatures they have, since this bitvec is not agreed on by the Grandpa authorities.
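As a rough illustration of the bitvec in question, a hypothetical one-bit-per-authority bitfield helper; note that nothing in such a helper constrains which bits the relayer sets, which is exactly the concern raised above:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Illustrative only: a one-bit-per-authority bitfield of the kind a relayer
// would submit, plus a naive set-bit count the contract could use to check
// that more than 2/3 of the authorities are claimed.
library BitfieldSketch {
    function isSet(uint256 bitfield, uint256 index) internal pure returns (bool) {
        return (bitfield >> index) & 1 == 1;
    }

    function countSetBits(uint256 bitfield) internal pure returns (uint256 count) {
        while (bitfield != 0) {
            count += bitfield & 1;
            bitfield >>= 1;
        }
    }
}
```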
@tomusdrw yep, those make sense. I have some additional concerns that these ideas/designs impact UX with confirmation time, and also add a lot of complexity to the protocol which does not exist in a version with a real Grandpa light client, but of course if the real light client is totally infeasible then this may be the best option.

And yes, I agree that security-wise this idea doesn't seem as sound as a real light client. I think in general, in any proposed protocol, if the relayer is doing any transformation/creation/modification of bytes that are not 1-to-1 exactly as they have been posted on-chain on Polkadot, then either this transformation/creation/modification needs to be verified on-chain on Ethereum to have been done correctly, or the relayer has an opportunity to be malicious. This applies to specifying the bitfield of validators and/or specifying the count of validators that have provided signatures. Ideally we want a bitfield/count that has been signed by validators too, though I'm not sure if that is possible.

Also, I'm not sure what kind of collateral/slashing is being referred to here. If it's a slashable offence to not participate in the second round, is this slashing happening within Polkadot from staked DOTs? Or are you suggesting some kind of collateral/slashing on the Ethereum side? The former makes sense to me and sounds secure, but involves riskier modifications to Polkadot. The latter could work but introduces a whole lot of extra complexity, security assumptions and running capital costs to the bridge that may be problematic.
I have some more information at https://hackmd.io/ohOt4jAPT8uu-soJXHUq0Q . The slashable offence on Polkadot is to vote on a block in the extra round when a different block of the same height is finalised. Again, safety is more important than liveness. If liveness is a problem, it can be fixed with the right incentives, but we really want to be sure that the system is expensive to attack. 100 blocks is likely overkill; we can probably get away with 10.

I think the scheme is cheaper than passing in 700 public keys and signatures for BLS, never mind verifying them. And without certain EIPs, the latter isn't feasible. I suspect it would also be cheaper than using NEAR's challenge design with 700 Ethereum signatures.

The main difficulty I see applies to any option we have considered: how do we incentivise relayers to send the data-heavy second transaction? Such a transaction has a high gas cost even if the smart contract returns straight away, which means that we don't want several relayers trying to post it at the same time, as they will all pay a lot.
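A minimal sketch of the sampling step only, assuming authorities are indexed 0..numAuthorities-1 and the seed is a blockhash that was still in the future when the candidate was submitted; this is an illustration of the idea, not the actual W3F scheme:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Sketch of deterministic random sampling of authority indices from a seed.
contract SamplingSketch {
    function sampleAuthorities(
        bytes32 seed,
        uint256 numAuthorities,
        uint256 numSamples
    ) public pure returns (uint256[] memory indices) {
        require(numAuthorities > 0, "no authorities");
        indices = new uint256[](numSamples);
        for (uint256 i = 0; i < numSamples; i++) {
            // Each sampled index is derived deterministically from the seed;
            // the relayer must then provide a valid signature for every
            // sampled authority whose bit it claimed in step 1.
            indices[i] = uint256(keccak256(abi.encodePacked(seed, i))) % numAuthorities;
        }
    }
}
```

One practical caveat (a general EVM fact, not specific to this design): `blockhash` only returns a non-zero value for the 256 most recent blocks, so a delay-derived seed has to be read within that window.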
@AlistairStewart an alternative idea:
This would be more gas-expensive as we still need to provide all signatures, but most can be ignored/skipped, so we still save on signature-check gas costs. I also haven't thought about the extra security risk of using the current blockhash rather than a future blockhash - it definitely adds an attack vector in terms of miner-driven attacks, but I'm not sure if it has any impact on potential attacks from the relayer/signers, as the current blockhash should also not be predictable.
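A sketch of this single-transaction variant, again with hypothetical names: all signatures are passed in, but only a fixed number of indices derived from the previous blockhash are actually checked:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Sketch of the single-transaction variant: the relayer passes in every
// signature, but the contract only verifies a sampled subset of them.
contract CurrentBlockhashSamplingSketch {
    uint256 public constant NUM_SAMPLES = 20; // illustrative

    function submit(bytes32 commitment, bytes[] calldata allSignatures)
        external
        view
        returns (uint256[] memory sampled)
    {
        // Known only at submission time: miner-influenceable, but not
        // predictable by the relayer/signers when preparing the signatures.
        bytes32 seed = blockhash(block.number - 1);
        uint256 n = allSignatures.length;
        require(n > 0, "no signatures");
        sampled = new uint256[](NUM_SAMPLES);
        for (uint256 i = 0; i < NUM_SAMPLES; i++) {
            sampled[i] = uint256(keccak256(abi.encodePacked(seed, commitment, i))) % n;
            // ...recover allSignatures[sampled[i]] over `commitment` here and
            // check it matches the authority at that index (elided)...
        }
    }
}
```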
Closing, since the ETH bridge is being worked on as a W3F Grant in the polkadot-ethereum repository. Happy to re-open if the outlook changes.
The idea is to implement a Solidity contract which will serve as a Grandpa Super Light Client. The current contract we have for PoA (https://github.com/svyatonik/substrate-bridge-sol/) requires a bunch of builtin contracts to handle SCALE encoding, Grandpa signature verification, etc. It also requires all headers to be imported sequentially and stores quite a lot of them, which results in quite high gas costs.
To create a Grandpa Light Client that could be deployed to Ethereum Mainnet right now, we need to change the design in a way that:
The initial proposal is to build a light client that imports only MMR root hashes (most likely sha256; see #263) that are signed by the Grandpa authorities. One way is to use the BLS signature scheme, but that would require EIP 2357 to be implemented.
Alistair from W3F is working on another scheme, based on random sampling, that would not require an extra builtin.
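For comparison, a naive sketch of importing a signed MMR root with no sampling and no BLS, assuming - purely for illustration - that every authority also controls a secp256k1 key registered in the contract. Real Grandpa signatures are ed25519, which the EVM cannot verify natively, which is why builtins/precompiles come up at all:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Naive per-authority ECDSA verification of a signed MMR root. This is the
// expensive baseline the proposals above are trying to avoid.
contract MmrRootImportSketch {
    address[] public authorityEthKeys; // hypothetical ECDSA keys, one per authority
    bytes32 public latestMmrRoot;

    function importMmrRoot(
        bytes32 mmrRoot,
        uint8[] calldata v,
        bytes32[] calldata r,
        bytes32[] calldata s
    ) external {
        uint256 total = authorityEthKeys.length;
        require(v.length == total && r.length == total && s.length == total, "bad input length");
        bytes32 digest = keccak256(abi.encodePacked(mmrRoot));
        uint256 valid = 0;
        // One ecrecover per authority - hundreds of signature checks per update.
        for (uint256 i = 0; i < total; i++) {
            address recovered = ecrecover(digest, v[i], r[i], s[i]);
            if (recovered != address(0) && recovered == authorityEthKeys[i]) {
                valid++;
            }
        }
        require(valid * 3 > total * 2, "need > 2/3 of authorities");
        latestMmrRoot = mmrRoot;
    }
}
```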
Verifying any Substrate merkle proofs requires Blake2, so it's questionable whether that is feasible. Ideally, any data that we might want to verify should use a sha256 merkle proof. There is a way to create a custom message delivery protocol which would put some data into the MMR itself for efficient verification (see #327).
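A minimal sketch of sha256 Merkle-proof verification on the EVM, to show why sha256-based proofs are attractive here; the leaf/node encoding is illustrative and not the actual Substrate trie/MMR format:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.0;

// Minimal sha256 Merkle-proof check. sha256 is an existing, cheap precompile,
// unlike Blake2-based hashing of Substrate's own proofs.
library Sha256MerkleProofSketch {
    function verify(
        bytes32 root,
        bytes32 leaf,
        bytes32[] memory proof,
        uint256 leafIndex
    ) internal pure returns (bool) {
        bytes32 computed = leaf;
        for (uint256 i = 0; i < proof.length; i++) {
            if (leafIndex & 1 == 0) {
                // current node is a left child
                computed = sha256(abi.encodePacked(computed, proof[i]));
            } else {
                // current node is a right child
                computed = sha256(abi.encodePacked(proof[i], computed));
            }
            leafIndex >>= 1;
        }
        return computed == root;
    }
}
```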