# feat(blobs): Integrate beacon chain client/web2 blob getter #9101
*MirandaWood changed the title from "Integrate beacon chain client" to "Integrate beacon chain client/web2 blob getter" on Oct 9, 2024.*
*just-mitch added the labels **C-node** (Component: Aztec Node) and **C-l1-contracts** (Component: contracts deployed to L1) on Nov 7, 2024.*
*just-mitch changed the title from "Integrate beacon chain client/web2 blob getter" to "feat: Integrate beacon chain client/web2 blob getter" on Nov 7, 2024.*
*MirandaWood added a commit referencing this issue on Nov 25, 2024:*
## The Blobbening

It's happening and I can only apologise.

![image](https://github.com/user-attachments/assets/5592b2ad-55a6-459d-a838-4084b310ee93)

Follows #8955.

### Intro

More detailed stuff below, but the major changes are:
- Publish DA through blobs rather than calldata. This means:
  - No more txs effects hashing
  - Proving that our effects are included in a blob (see below for details) using base -> root rollup circuits
  - Accumulating tx effects in a blob sponge (`SpongeBlob`)
- Building blocks by first passing a hint of how many tx effects the block will have
- Updating forge to handle blobs
- Tests for all the above

### Major Issues

Things that we should resolve before merging:
- Run times massively increased:
  - This is largely because the `nr` code for `blob`s is written with the BigNum lib, which uses a lot of unconstrained code then a small amount of constrained code to verify results. Unfortunately this means we cannot simulate circuits containing blobs (currently `block-root`) using wasm, or set `nr` tests to `unconstrained`, because that causes a `stack overflow` in brillig.
  - To avoid straight up failures, I've created nr tests which are not `unconstrained` (meaning `rollup-lib` tests take 10 minutes or so to run) and I've forced circuit simulation to run in native ACVM rather than wasm (adding around 1 minute to any tests that simulate `block-root`).
  - Yes, there's more! All the above is happening while we only create _one blob per block_. This is definitely not enough space (we aim for 3 per block), but I imagine tripling the blob `nr` code would only cause more runtime issues.
- ~Data retrieval~ The below will be done in #9101, and for now we use calldata just to keep the archiver working:
  - The current (interim) solution is to still publish the same block body calldata as before, just so the archiver actually runs. This calldata is no longer verified with the txs effects hash, but is checked (in ts) against the known blob hash, so a mismatch will still throw.
  - The actual blob contents will look different to the body calldata, since we will be tightly packing effects and adding length markers before each section (like how log lengths work). I've added to/from methods in `data-retrieval` to aid conversion.
- ~Blob verification precompile gas~ Batching blob KZG proofs is being thought about (see #8955 for progression):
  - The current approach to verify that the published blob matches the tx effects coming from the rollup is to call the point evaluation precompile _for each blob_. This costs 50k gas each time, so is not sustainable.
  - We think it's possible to accumulate the KZG proofs used to validate blobs into one. Mike is thinking about this and whether it's doable using `nr`, so we can call the precompile once per epoch rather than 3 times per block.

### General TODOs

Things I'm working on:
- Moving from 1 to 3 blobs per block:
  - This will slow everything down massively, so I'd prefer to solve the runtime issues before tackling this.
  - It's also going to be relatively complex, because the base rollup will need code to fill from one of three SpongeBlob instances and will need to know how to 'jump' from one full blob to the next at any possible index. Hopefully this does not lead to a jump in gates.

### Description

The general maths in nr, replicated across `foundation/blob`, is described [here](https://github.com/AztecProtocol/engineering-designs/blob/3362f6ddf62cba5eda605ab4203069b2b77a777c/in-progress/8403-blobbity-boo.md#implementation).
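To make that maths concrete, here is a minimal ts sketch of the barycentric evaluation the design doc describes, over a toy prime field with a domain of 4 roots of unity (the real code works over the BLS12-381 scalar field with a 4096-element domain; all names and parameters here are illustrative):

```ts
// Toy barycentric evaluation of a blob polynomial at a challenge z:
//   p(z) = (z^N - 1)/N * sum_i d_i * w^i / (z - w^i)
// where d_i are the blob's field elements over the roots-of-unity domain.
const P = 97n; // toy prime (stand-in for the BLS12-381 scalar field modulus)
const N = 4n; // toy domain size (stand-in for 4096)
const OMEGA = 22n; // primitive 4th root of unity mod 97 (22^2 = -1, 22^4 = 1)

const mod = (a: bigint): bigint => ((a % P) + P) % P;

function powMod(base: bigint, exp: bigint): bigint {
  let result = 1n;
  let b = mod(base);
  let e = exp;
  while (e > 0n) {
    if (e & 1n) result = mod(result * b);
    b = mod(b * b);
    e >>= 1n;
  }
  return result;
}

// Modular inverse via Fermat's little theorem (P is prime).
const invMod = (a: bigint): bigint => powMod(a, P - 2n);

// Assumes z is not itself a domain point (the real code handles that case).
function evaluateBlobAt(z: bigint, data: bigint[]): bigint {
  let sum = 0n;
  for (let i = 0n; i < N; i++) {
    const root = powMod(OMEGA, i);
    sum = mod(sum + mod(data[Number(i)] * root) * invMod(mod(z - root)));
  }
  const factor = mod(mod(powMod(z, N) - 1n) * invMod(N));
  return mod(factor * sum);
}

// Sanity check: a constant polynomial (all evaluations 7) gives p(z) = 7.
console.log(evaluateBlobAt(5n, [7n, 7n, 7n, 7n])); // 7n
```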
#### Old DA Flow

From the base rollup to L1, the previous flow for publishing DA was:

Nr:
- In the `base` rollup, take in all tx effects we wish to publish and `sha256` hash them to a single value: `tx_effects_hash`
- This value is propagated up the rollup to the next `merge` (or `block-root`) circuit
- Each `merge` or `block-root` circuit simply `sha256` hashes each 'child' `tx_effects_hash` from its left and right inputs
- Eventually, at `block-root`, we have one value, `txs_effects_hash`, which becomes part of the header's content commitment

Ts:
- The `txs_effects_hash` is checked and propagated through the orchestrator and becomes part of the ts class `L2Block` in the header
- The actual tx effects to publish become the `L2Block`'s `.body`
- The `publisher` sends the serialised block `body` and `header` to the L1 block `propose` function

Sol:
- In `propose`, we decode the block `body` and `header`
- The `body` is deconstructed per tx into its tx effects and then hashed using `sha256`, until we have `N` `tx_effects_hash`es (mimicking the calculation in the `base` rollup)
- Each `tx_effects_hash` is then input as a leaf to a wonky tree and hashed up to the root (mimicking the calculation from `base` to `block-root`), forming the final `txs_effects_hash`
- This final value is checked to be equal to the one in the header's content commitment, then stored to be checked against for validating data availability
- Later, when verifying a rollup proof, we use the above header values as public inputs. If they do not match what came from the circuit, the verification fails.

*NB: With batch rollups, I've lost touch with what currently happens at verification and how we ensure the `txs_effects_hash` matches the one calculated in the rollup, so this might not be accurate.*
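As a rough illustration of that old hashing flow, a ts sketch (names are illustrative, and carrying an odd leftover node up unchanged is a simplification; the real wonky tree handles uneven layers differently):

```ts
import { createHash } from "crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// One tx_effects_hash per tx, as computed in the base rollup.
// `serialisedEffects` stands in for the actual per-tx effects encoding.
const hashTxEffects = (serialisedEffects: Buffer): Buffer =>
  sha256(serialisedEffects);

// Hash leaves pairwise up to a single root, as the merge/block-root
// circuits (and the old Sol decoding) did to form txs_effects_hash.
function computeTxsEffectsHash(txEffects: Buffer[]): Buffer {
  if (txEffects.length === 0) throw new Error("no tx effects");
  let layer = txEffects.map(hashTxEffects);
  while (layer.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i + 1 < layer.length; i += 2) {
      next.push(sha256(Buffer.concat([layer[i], layer[i + 1]])));
    }
    // Simplification: an odd node is carried up to the next layer unchanged.
    if (layer.length % 2 === 1) next.push(layer[layer.length - 1]);
    layer = next;
  }
  // This single value ended up in the header's content commitment.
  return layer[0];
}
```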
#### New DA Flow

The new flow for publishing DA is:

Nr:
- In the `base` rollup, we treat tx effects as we treat `PartialStateReference`s - injecting a hint of the `start` and `end` state we expect from processing this `base`'s transaction
- We take all the tx effects to publish and `absorb` them into the given `start` `SpongeBlob` state. We then check the result is the same as the given `end` state
- Like with `PartialStateReference`s, each `merge` or `block-root` checks that the left input's `end` blob state is equal to the right input's `start` blob state
- Eventually, at `block-root`, we check the above _and_ that the left's `start` blob state was empty. Now we have a sponge which has absorbed, as a flat array, all the tx effects in the block we wish to publish
- We inject the flat array of effects as a private input, along with the ts calculated blob commitment, and pass them and the sponge to the blob function
- The blob function:
  - Poseidon hashes the flat array of effects, and checks this matches the accumulated sponge when squeezed (this confirms that the flat array is indeed the same array of tx effects propagated from each `base`)
  - Computes the challenge `z` by hashing the above hash with the blob commitment
  - Evaluates the blob polynomial at `z` using the flat array of effects in the barycentric formula (more details in the engineering design linked above), returning `y`
- The `block-root` adds this triple (`z`, `y`, and commitment `C`) to a new array of `BlobPublicInputs`
- Like how we handle `fees`, each `block-merge` and `root` merges the left and right input arrays, so we end up with an array of each block's blob info

*NB: this will likely change to accumulating to a single set of values, rather than one per block, and is being worked on by Mike. The above also describes what happens for one blob per block for simplicity (it will actually be 3).*

Ts:
- The `BlobPublicInputs` are checked against the ts calculated blob for each block in the orchestrator
- They form part of a serialised array of bytes called `blobInput` (plus the expected L1 `blobHash` and a ts generated KZG proof) sent to the L1 `propose` function
- The `propose` transaction is now a special 'blob transaction' where all the tx effects (the same flat array as dealt with in the rollup) are sent as a sidecar
- We also send the serialised block `body`, so the archiver can still read the data back until #9101

*NB: this will change once we can read the blobs themselves from the beacon chain/some web2 client.*

Sol:
- In `propose`, instead of recalculating the `txs_effects_hash`, we send the `blobInput` to a new `validateBlob` function. This function:
  - Gathers the real `blobHash` from the EVM and checks it against the one in `blobInput`
  - Calls the [point evaluation precompile](https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile) and checks that our `z`, `y`, and `C` indeed correspond to the blob we claim (see the input-layout sketch after this commit message)
- We now have a verified link between the published blob and our `blobInput`, but still need to link this to our rollup circuit:
  - Each set of `BlobPublicInputs` is extracted from the bytes array and stored against its block number
  - When the `root` proof is verified, we reconstruct the array of `BlobPublicInputs` from the above stored values and use them in proof verification
  - If any of the `BlobPublicInputs` are incorrect (equivalently, if any of the published blobs were incorrect), the proof verification will fail
- To aid users/the archiver in checking their blob data matches a certain block, the EVM `blobHash` is added to `BlockLog` once it has been verified by the precompile

*NB: As above, we will eventually call the precompile just once for many blobs with one set of `BlobPublicInputs`. This will still be used in verifying the `root` proof to ensure the tx effects match those from each `base`.*

---------

Co-authored-by: ludamad <adam.domurad@gmail.com>
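For reference, a sketch of the input the point evaluation precompile expects, per EIP-4844 (written in ts for illustration; `pointEvaluationInput` and `blobVersionedHash` are illustrative helper names, but the 192-byte layout and versioned-hash rule come from the EIP itself - the real check lives in the Sol rollup contract):

```ts
import { createHash } from "crypto";

// EIP-4844 versioned hash: 0x01 || sha256(commitment)[1..31].
function blobVersionedHash(commitment: Uint8Array): Uint8Array {
  const h = createHash("sha256").update(commitment).digest();
  h[0] = 0x01; // VERSIONED_HASH_VERSION_KZG
  return new Uint8Array(h);
}

// The point evaluation precompile lives at address 0x0A and takes a
// 192-byte input: versioned_hash (32) || z (32) || y (32) ||
// commitment C (48) || kzg_proof (48). It succeeds iff the proof shows
// the blob committed to by C evaluates to y at z.
function pointEvaluationInput(
  blobHash: Uint8Array, // versioned hash, as read via the BLOBHASH opcode
  z: Uint8Array,
  y: Uint8Array,
  commitment: Uint8Array,
  kzgProof: Uint8Array,
): Uint8Array {
  if (blobHash.length !== 32 || z.length !== 32 || y.length !== 32) {
    throw new Error("hash and field elements must be 32 bytes");
  }
  if (commitment.length !== 48 || kzgProof.length !== 48) {
    throw new Error("commitment and proof must be 48 bytes");
  }
  const input = new Uint8Array(192);
  input.set(blobHash, 0);
  input.set(z, 32);
  input.set(y, 64);
  input.set(commitment, 96);
  input.set(kzgProof, 144);
  return input;
}
```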
*MirandaWood changed the title from "feat: Integrate beacon chain client/web2 blob getter" to "feat(blobs): Integrate beacon chain client/web2 blob getter" on Nov 30, 2024.*
Currently we manage DA by publishing all tx effects in calldata and verifying their correctness by hashing them to the `TxsEffectsHash` and ensuring it matches the one coming from the rollup root proof's public inputs. We are moving to using blobs, which is cheaper than publishing calldata. Since the actual blob contents are not accessible in the EVM, we verify that the block's tx effects have been published by providing a proof of opening of the blob commitment, using an evaluation calculated in the rollup root (more info here).
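For intuition, that opening check in one call as it might look off-chain (a sketch: the binding names are an assumption from the c-kzg-4844 node package, and the setup path is a placeholder):

```ts
// Verify that commitment C opens to value y at point z - the same check
// the point evaluation precompile performs on L1.
import { loadTrustedSetup, verifyKzgProof } from "c-kzg";

// Path to the standard KZG ceremony trusted setup file (placeholder).
loadTrustedSetup("trusted_setup.txt");

function checkOpening(
  commitment: Uint8Array, // 48-byte KZG commitment C
  z: Uint8Array, // 32-byte challenge point
  y: Uint8Array, // 32-byte claimed evaluation p(z)
  proof: Uint8Array, // 48-byte KZG opening proof
): boolean {
  return verifyKzgProof(commitment, z, y, proof);
}
```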
This means we can no longer extract the block contents from calldata, as we do in the archiver's `data_retrieval.ts`. Instead, blobs are published to Ethereum's beacon chain. Unfortunately `foundry` does not seem to support any test beacon chain, and `viem` doesn't have any methods to connect to it. From some (limited) research, it seems like we would need to integrate a new client to extract the blob contents from the beacon chain.

EDIT: Since we can verify a blob of data is a) our data and b) valid (by checking the calculated blob hash against one stored in Rollup.sol, which we would only store once a blob's KZG proof has been verified), we could use a web2 blob store and just read the information in the archiver.
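A sketch of how that archiver-side check could look (everything here is hypothetical: `fetchBlobFromStore` and `getVerifiedBlobHash` are stand-ins for a web2 store client and a Rollup.sol read, and the c-kzg call is an assumed binding):

```ts
import { createHash } from "crypto";
// Assumed binding from c-kzg-4844; loadTrustedSetup must be called first.
import { blobToKzgCommitment } from "c-kzg";

// Hypothetical accessors: a web2 blob store, and the blobHash that
// Rollup.sol stored once the blob's KZG proof was verified on L1.
declare function fetchBlobFromStore(blockNumber: number): Promise<Uint8Array>;
declare function getVerifiedBlobHash(blockNumber: number): Promise<Uint8Array>;

// EIP-4844 versioned hash: 0x01 || sha256(commitment)[1..31].
function blobVersionedHash(commitment: Uint8Array): Uint8Array {
  const h = createHash("sha256").update(commitment).digest();
  h[0] = 0x01;
  return new Uint8Array(h);
}

// The store is untrusted: recompute the commitment from the fetched bytes
// and compare its versioned hash against the one verified on chain.
async function readBlobForBlock(blockNumber: number): Promise<Uint8Array> {
  const blob = await fetchBlobFromStore(blockNumber);
  const versionedHash = blobVersionedHash(blobToKzgCommitment(blob));
  const onChainHash = await getVerifiedBlobHash(blockNumber);
  if (!Buffer.from(versionedHash).equals(Buffer.from(onChainHash))) {
    throw new Error(`Blob for block ${blockNumber} does not match on-chain blobHash`);
  }
  return blob; // safe for the archiver to decode tx effects from
}
```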
The interim approach will be to continue to publish calldata alongside the blob and extract that calldata in the same way as we do now. This calldata will not be verified against the rollup proof in the contract, but in ts it will be checked that it matches the blob we are publishing. Then, once we can extract the blob from the beacon chain, the calldata can simply be removed as an input to `propose`. I will mark all the interim things that need to be removed with `//TODO(#9101)`.
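Purely illustrative to/from helpers in the spirit of the conversion methods mentioned in the commit above (the real packing format in `data-retrieval` differs; a leading section count is added here just to make decoding unambiguous):

```ts
// Tightly pack effect sections into a flat field array, with a length
// marker before each section (like how log lengths work).
function encodeEffects(sections: bigint[][]): bigint[] {
  const out: bigint[] = [BigInt(sections.length)]; // section count (illustrative)
  for (const section of sections) {
    out.push(BigInt(section.length)); // length marker
    out.push(...section);
  }
  return out;
}

// Inverse of the above; trailing blob padding (zeros) is simply ignored.
function decodeEffects(fields: bigint[]): bigint[][] {
  const sections: bigint[][] = [];
  let i = 0;
  const numSections = Number(fields[i++]);
  for (let s = 0; s < numSections; s++) {
    const len = Number(fields[i++]);
    sections.push(fields.slice(i, i + len));
    i += len;
  }
  return sections;
}

// Round trip: decodeEffects(encodeEffects(sections)) recovers sections.
```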
Some hopefully useful links:
- mw/blob-circuit