Local Pessimistic Proofs and Cross-Chain Consistency Checks #108
Replies: 11 comments 4 replies
-
Great writeup, and this matches my intuition based on the discussions with Carlos, Jesus, and the shared sequencing teams. Question: what DA guarantees do we need for the agglayer to verify that the …
-
PP pseudo-code
// public inputs
const oldLER
const oldLBR
const oldNullifierRoot
const hashBridgeExits
const hashImportedBridgeExits
const selectedGER
const networkID
// public outputs
const newLER
const newLBR
const newNullifierRoot
// Section A: Verify `hashBridgeExits`
const bridgeExits = [bridgeExit_0, bridgeExit_1, ..., bridgeExit_N-1]; // private input
assert(keccak256(bridgeExits) == hashBridgeExits);
// Section B: Apply `bridgeExits` to `oldLER`
let currentLER = oldLER;
for (let i = 0; i < bridgeExits.length; i++) {
currentLER = smt.add(currentLER, bridgeExits[i]); // returns the new root
}
assert(newLER == currentLER);
// Section C: apply `bridgeExits` to the LBR and store the balances to check at the very end
const checkBalances = []; // entries uniquely identified by (originTokenAddress, originTokenNetworkID)
let currentLBR = oldLBR;
for (let i = 0; i < bridgeExits.length; i++) {
const singleBridgeExit = bridgeExits[i];
if (singleBridgeExit.originNetworkID == networkID) {
continue; // skip since the asset originates from this chain
}
if (singleBridgeExit.leafType == MESSAGE_TYPE) {
continue; // skip since it is message type, not asset type
}
// add the pair (originTokenAddress, originTokenNetworkID)
checkBalances.push([singleBridgeExit.originTokenAddress, singleBridgeExit.originTokenNetworkID]);
// extract value from the LBR
currentLBR = smt.updateLeafBridgeExits(currentLBR, singleBridgeExit); // this function decreases the token balance based on (originNetworkID, originAddress, amount)
}
// Section D: Verify `hashImportedBridgeExits`
const importedBridgeExits = [importedBridgeExit_0, importedBridgeExit_1, ..., importedBridgeExit_N-1]; // private input
assert(keccak256(importedBridgeExits) == hashImportedBridgeExits);
// Section E: group importedBridgeExits by their originNetworkID
const mappingIBByNetworkID = {}; // IB = Imported Bridges
for (let i = 0; i < importedBridgeExits.length; i++) {
const singleImportedBridgeExit = importedBridgeExits[i];
if (mappingIBByNetworkID[singleImportedBridgeExit.originNetworkID] == undefined) {
mappingIBByNetworkID[singleImportedBridgeExit.originNetworkID] = [];
}
mappingIBByNetworkID[singleImportedBridgeExit.originNetworkID].push(singleImportedBridgeExit);
}
// Section F: for each networkID, verify the LER that is going to be used against the `selectedGER`
const networkIDLERMap = { // private input
"0": "0xAA...BB",
"1": "0xCC...DD",
...
"N-1": "0xEE...FF",
};
for (let i = 0; i < Object.keys(mappingIBByNetworkID).length; i++) {
const originNetworkID = Object.keys(mappingIBByNetworkID)[i]; // named to avoid shadowing the public input `networkID`
const singleLER = networkIDLERMap[originNetworkID];
const smtProof = [0x00...01, 0x00...02, ...]; // private input
const isIncluded = verifySMTProof(selectedGER, singleLER, smtProof); // returns true/false
assert(isIncluded == true);
}
// Section G: verify that each importedBridgeExit exists in its corresponding LER
for (let i = 0; i < Object.keys(mappingIBByNetworkID).length; i++) {
const originNetworkID = Object.keys(mappingIBByNetworkID)[i]; // named to avoid shadowing the public input `networkID`
const LERNetworkID = networkIDLERMap[originNetworkID];
// get all importedBridgeExits for that networkID and verify them
for (let j = 0; j < mappingIBByNetworkID[originNetworkID].length; j++) {
const singleImportedBridgeExit = mappingIBByNetworkID[originNetworkID][j];
const smtProof = [0x00...01, 0x00...02, ...]; // private input
const isIncluded = verifySMTProof(LERNetworkID, singleImportedBridgeExit, smtProof); // returns true/false
assert(isIncluded == true);
}
}
// Section H: add importedBridgeExits to the LBR
let currentNullifierRoot = oldNullifierRoot; // `let` since it is updated in the loop below
for (let i = 0; i < importedBridgeExits.length; i++) {
const singleImportedBridgeExit = importedBridgeExits[i];
if (singleImportedBridgeExit.leafType == MESSAGE_TYPE) {
continue; // skip since it is message type, not asset type
}
if (singleImportedBridgeExit.destinationNetworkID != networkID) {
continue; // skip since it does not affect this networkID
}
// verify it has not been nullified
const isClaimed = smt.readValue(oldNullifierRoot, singleImportedBridgeExit); // uses index & networkID from the bridgeExit
if (isClaimed == true) {
continue; // jump to the next one, since it has been claimed
}
// add it to the LBR
// extract value from the LBR
currentLBR = smt.updateLeafBridgeExits(currentLBR, singleImportedBridgeExit); // this function adds the token balance based on (originNetworkID, originAddress, amount)
// update nullifierRoot
currentNullifierRoot = smt.updateNullifierTree(currentNullifierRoot, singleImportedBridgeExit); // uses index & networkID from the bridgeExit
}
assert(newLBR == currentLBR);
assert(newNullifierRoot == currentNullifierRoot);
// Section I: verify there are no negative values among the decreased balances in the LBR
for (let i = 0; i < checkBalances.length; i++) {
const balance = smt.readValue(currentLBR, checkBalances[i]); // gets the value for a given (originTokenAddress, originTokenNetworkID) pair
assert(balance >= 0);
}
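To make the balance-accounting invariant of Sections C, H, and I concrete, here is a runnable sketch that replaces the sparse Merkle tree with a plain map. All names (`keyOf`, `applyExits`) are illustrative, and message-type leaves are omitted for brevity:

```javascript
// Flat-map model of the LBR accounting from Sections C, H, and I.
// A real implementation walks a sparse Merkle tree; a Map stands in here so
// the non-negativity invariant is easy to see. All names are illustrative.
const keyOf = (e) => `${e.originNetworkID}:${e.originTokenAddress}`;

function applyExits(balances, bridgeExits, importedBridgeExits, networkID) {
  // Section H (simplified): claims of foreign assets credit the balance.
  for (const imp of importedBridgeExits) {
    if (imp.destinationNetworkID !== networkID) continue; // not for this chain
    const k = keyOf(imp);
    balances.set(k, (balances.get(k) ?? 0n) + imp.amount);
  }
  // Section C: outgoing exits of foreign assets debit the balance;
  // exits of the chain's own native asset are skipped.
  const touched = new Set();
  for (const exit of bridgeExits) {
    if (exit.originNetworkID === networkID) continue;
    const k = keyOf(exit);
    balances.set(k, (balances.get(k) ?? 0n) - exit.amount);
    touched.add(k);
  }
  // Section I: every debited balance must end up non-negative.
  for (const k of touched) {
    if (balances.get(k) < 0n) throw new Error(`negative balance for ${k}`);
  }
  return balances;
}
```

The order of credits and debits does not matter here because the non-negativity check runs once at the end, mirroring Section I of the pseudo-code.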
There are several options to do that verification of the `claimed_local_exit_roots`, each one with its trade-offs. I will try to summarize all three approaches that I can think of:
-
There's no DA guarantee required: "claimed_local_exit_root is valid" <=> the local exit root is included in a previous GET or in the current GET being built.
-
Nice writeup @krlosMata !
What I would propose is a mixture of the first and the third approaches. For the happy path, where the chain doesn't need to force-include on L1, it can submit a vector of claimed_local_exit_roots to the AL, and the AL can prove inclusion in previous GERs or the current GER. This point (proving inclusion in the current GER) is really important, as it's what allows us to have low-latency interop (we can allow chains to claim from LETs before they're even included in a GET by the AL). The force-include path would allow chains to submit another proof to L1 showing that the claimed LERs are consistent with the L1InfoRoot.
-
Let me write down all the necessary steps, taking into account your 1-3 approach:
sequenceDiagram
participant chain_A
participant aggLayer
participant L1
Note over chain_A: All bridgeExits <br/> from oldLER to newLER
chain_A->>chain_A: Sign bridgeExits
Note over chain_A: All claims (importedBridgeExits) <br/> from oldLER to newLER
chain_A->>chain_A: Sign importedBridgeExits
chain_A->>aggLayer: Certificate (data needed to build the PP)
note over aggLayer: Build PP <br/> check PP consistency
aggLayer->>L1: PP
As far as I understand, this would be the data flow. Let me know if I'm missing something. Some comments: …
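The Certificate in the diagram above would carry roughly the data listed as PP public inputs. A hypothetical shape (field names are guesses, not a spec; `sign` is a stand-in for the chain's actual signing scheme):

```javascript
// Hypothetical shape of the Certificate chain_A sends to the aggLayer:
// everything the aggLayer needs to build and check the PP.
function buildCertificate(chain, sign) {
  return {
    networkID: chain.networkID,
    oldLER: chain.oldLER,
    newLER: chain.newLER,
    bridgeExits: chain.bridgeExits,                 // outgoing exits, signed
    importedBridgeExits: chain.importedBridgeExits, // claims, signed
    signature: sign(`${chain.oldLER}:${chain.newLER}`),
  };
}
```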
-
Thanks @krlosMata - few thoughts on this.
I would actually argue that the chain should maintain the LBT and Nullifier Set itself. It's not strictly required that the chain itself will generate a PP - it could be a service that we expose, but I think it makes sense to separate this from the AL itself, as we can remove the requirement that AL nodes store LBTs and NullifierSets.
The nice thing is that it doesn't need to add any complexity. If we use the approach where PPs don't specify a …
-
Somehow the chain needs to know the other chains' LERs, so some sort of communication is needed, and if we split the proofs it adds more complexity at this first stage.
-
Cool, I think that we mainly agree on everything.
I'm just wondering if we can separate this from the main agglayer repo, as it's functionality that we don't strictly need. We could have a separate PP-generator service that could be run by the chain itself or by some other party.
The chain will have all of the LERs though - they'll be provided by users or builders during the claims process. The chain doesn't really care if the LER is old or new (for fast interop) - it can handle them all the same. The main changes for fast interop would be in the zkEVM proof - instead of accepting a GER, chains could specify a set of LERs to process claims on.
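The "set of LERs instead of a GER" change sketched above might look like this on the claim path (all names illustrative; `verifyProof` is an assumed inclusion-proof checker):

```javascript
// Fast-interop sketch: the claim verifier receives a map of claimed LERs
// keyed by origin networkID, rather than a single GER, and does not care
// whether a given LER is old or new.
function processClaim(claimedLERs, claim, verifyProof) {
  const ler = claimedLERs.get(claim.originNetworkID);
  if (ler === undefined) {
    throw new Error(`no claimed LER for network ${claim.originNetworkID}`);
  }
  if (!verifyProof(ler, claim.leaf, claim.proof)) {
    throw new Error('claim not included in the claimed LER');
  }
  return { ...claim, processedAgainst: ler };
}
```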
-
I think it would make sense to split this out into a library with Go and Rust bindings, as well as a service that can run as a sidecar for EVM chains. We can then ask other folks to port this to SVM, MoveVM, etc.
-
I don't think that any contract or VM changes should be required for this. We should be able to have a single implementation of a service written in Rust that can query or receive local LET and claim/deposit data to generate the proof. The LBT and Nullifier Set are generated entirely from the LET and claims/deposits.
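The point that the LBT and Nullifier Set are pure functions of the LET and the claim/deposit history can be sketched directly, again with flat maps standing in for the real trees (names and record shapes are illustrative):

```javascript
// Rebuild the LBT and Nullifier Set from scratch given the deposit
// (bridgeExit) and claim histories. Being a pure function of its inputs,
// this is exactly what a stateless sidecar service could recompute on demand.
function rebuildState(deposits, claims, networkID) {
  const lbt = new Map();        // "originNetworkID:token" -> balance
  const nullifiers = new Set(); // "originNetworkID:index" of consumed claims
  for (const c of claims) {
    const n = `${c.originNetworkID}:${c.index}`;
    if (nullifiers.has(n)) continue; // double-claim: ignored
    nullifiers.add(n);
    const k = `${c.originNetworkID}:${c.originTokenAddress}`;
    lbt.set(k, (lbt.get(k) ?? 0n) + c.amount);
  }
  for (const d of deposits) {
    if (d.originNetworkID === networkID) continue; // native asset, no debit
    const k = `${d.originNetworkID}:${d.originTokenAddress}`;
    lbt.set(k, (lbt.get(k) ?? 0n) - d.amount);
  }
  return { lbt, nullifiers };
}
```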
-
yep, it can be separated. This is convenient to compute proofs in a stateless server (the service does not know anything about the …)
true.
That is a huge change in the zkEVM proofs as well as in the Bridge L2 SC. It would need every chain to update its SC and its Verifier (forkID), although not all chains need to update at the same time, which is good.
-
[DRAFT]
I'd like to propose a change to how we calculate the pessimistic proof (this was largely inspired by Carlos Matallana's idea to generate local pessimistic proofs) that is designed to achieve the following:
(1) is very important because it removes complexity from the agglayer. The agglayer no longer needs to compute local balance trees for all chains at each epoch. We can calculate pessimistic proofs per chain and simplify the recursive aggregation step to combine these proofs before settlement to Ethereum. Generating per-chain pessimistic proofs without relying on global state also makes individually settling or force-including chains on L1 much simpler.
The proposed scheme was described in more detail here: https://hackmd.io/YZ-AeAR2QXS4e_wOYD6NBQ but the basic intuition is that right now we're proving the statement: for each chain `X`, the pessimistic proof guarantees that after applying every asset transfer in every LET to `X`, its token balances are `{b_i}`, where `b_i` are `(token_address, balance)` pairs.

This is actually a stronger statement than we need to prove. It suffices to show that for all chains `X_j`, the balances `b_{j,i}` are non-negative after applying some subset of all asset transfers with destination chain `X_j`.

To give an example, suppose that there's some shitchain spamming tokens and making giant LETs depositing to many chains. Right now, the pessimistic proof needs to calculate the impact on every single chain, even when these tokens won't be claimed on nearly any of these chains, because the spammer cannot pay tx fees. Instead, the pessimistic proof should only apply asset transfers to destination chains when those transfers are claimed. Since each destination chain has the claim data available, pessimistic proofs can be computed entirely locally.
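Numerically, the difference is just which transfers get applied. A toy sketch (names illustrative): the spammer's unclaimed deposits simply never enter the destination chain's balance computation.

```javascript
// Only transfers that were actually claimed affect the destination chain's
// balance; unclaimed spam deposits are never applied.
function localBalanceDelta(incomingTransfers, claimedIds) {
  let delta = 0n;
  for (const t of incomingTransfers) {
    if (!claimedIds.has(t.id)) continue; // never claimed: never applied
    delta += t.amount;
  }
  return delta;
}
```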
Here's some pseudo-code for the local version of the pessimistic proof (forgive me, I haven't written code in like 3 years, please don't hesitate to change while implementing):
Going to clean this up later, but for now, the idea is that the pessimistic proof first applies all claims that are provided by the PP chain (the chain generating the proof) to the Local Balance Tree, i.e. something like (I've forgotten how to write Rust): …
From here, we can infer from the batch the previous frontier of the PP chain's local exit tree, and then extract all new asset_transfers included in the PP chain's local exit tree. We then deduct each asset transfer from the chain's local balance tree.
Now we have a locally-generated pessimistic proof, but we still have to verify that the claimed_local_exit_roots are valid. This can be done by the agglayer: it can recursively aggregate all locally-generated pessimistic proofs and merge the sets of claimed_local_exit_roots, before verifying that each one is valid as part of the final proof step.
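The merge-then-verify step above can be sketched as follows, with `isValidLER` standing in for the real inclusion check against previous or current GERs (all names hypothetical):

```javascript
// Flattened view of the recursive aggregation: merge and dedupe every
// chain's claimed_local_exit_roots, then validate each exactly once.
function mergeClaimedLERs(localProofs, isValidLER) {
  const claimed = new Set();
  for (const proof of localProofs) {
    for (const ler of proof.claimedLocalExitRoots) claimed.add(ler);
  }
  for (const ler of claimed) {
    if (!isValidLER(ler)) throw new Error(`invalid claimed LER: ${ler}`);
  }
  return [...claimed];
}
```

Deduplicating before validation means each distinct LER is checked once, no matter how many chains claimed it.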
The nice thing is that this is the cross-chain consistency / interop proof. We allow chains to cryptographically guarantee that they cannot depend on invalid or inconsistent LERs. To guarantee this property for synchronous message passing, we can modify the bridge contract, as described here: https://hackmd.io/Cn5NqzC2TJ6dVg9AtmFOUw