feat: blobs. #9302

Merged · 90 commits into master from mw/blob-circuit · Nov 25, 2024

Conversation

@MirandaWood (Contributor) commented Oct 21, 2024

The Blobbening

It's happening and I can only apologise.

Follows #8955.

Intro

More detailed stuff below, but the major changes are:

  • Publish DA through blobs rather than calldata. This means:
    • No more txs effects hashing
    • Proving that our effects are included in a blob (see below for details) using base -> root rollup circuits
    • Accumulating tx effects in a blob sponge (SpongeBlob); a short sketch follows this list
    • Building blocks by first passing a hint of how many tx effects the block will have
    • Updating forge to handle blobs
    • Tests for all the above
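To make the sponge mechanics concrete, here is a minimal ts sketch of the pattern. The field modulus, `mix`, and all names are illustrative stand-ins, not the real SpongeBlob API (the real sponge is Poseidon2 over the native field):

```typescript
// Toy model of the SpongeBlob pattern: each base rollup absorbs its tx
// effects into a running sponge and proves the hinted start -> end
// transition; merges only check end(left) == start(right).
type Fr = bigint;
const P = (1n << 64n) - 59n; // stand-in prime, NOT the real field

// Stand-in for the Poseidon2 permutation.
const mix = (state: Fr, input: Fr): Fr => (state * 31n + input + 7n) % P;

interface SpongeBlob {
  state: Fr;              // accumulated sponge state
  fields: number;         // tx-effect fields absorbed so far
  expectedFields: number; // the block-builder's hint of the final count
}

function absorb(start: SpongeBlob, effects: Fr[]): SpongeBlob {
  let state = start.state;
  for (const e of effects) state = mix(state, e);
  return { ...start, state, fields: start.fields + effects.length };
}

// What a base rollup effectively constrains, given hinted start/end states:
function checkTransition(start: SpongeBlob, end: SpongeBlob, effects: Fr[]) {
  const got = absorb(start, effects);
  if (got.state !== end.state || got.fields !== end.fields) {
    throw new Error('hinted end sponge does not match absorbed tx effects');
  }
}
```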

Major Issues

Things that we should resolve before merging:

  • Run times massively increased:

    • This is largely because the nr code for blobs is written with the BigNum lib, which uses a lot of unconstrained code followed by a small amount of constrained code to verify the results. Unfortunately this means we cannot simulate circuits containing blobs (currently block-root) using wasm, or set nr tests to unconstrained, because that causes a stack overflow in brillig.
    • To avoid straight-up failures, I've created nr tests which are not unconstrained (meaning rollup-lib tests take 10 mins or so to run) and I've forced circuit simulation to run in native ACVM rather than wasm (adding around 1 min to any tests that simulate block-root).
    • Yes, there's more! All the above is happening while we only create one blob per block. This is definitely not enough space (we aim for 3 per block), but I imagine tripling the blob nr code would only cause more runtime issues.
  • Data retrieval: the below will be done in feat(blobs): Integrate beacon chain client/web2 blob getter #9101, and for now we use calldata just to keep the archiver working:

    • The current (interim) solution is to still publish the same block body calldata as before, just so the archiver actually runs. This calldata is no longer verified with the txs effects hash, but is checked (in ts) against the known blob hash, so a mismatch will still throw.
    • The actual blob contents will look different to the body calldata, since we will be tightly packing effects and adding length markers before each section (like how log lengths work). I've added to/from conversion methods for data-retrieval to use.
  • Blob verification precompile gas: batching blob KZG proofs is being thought about (see Epic: Blobs #8955 for progress):

    • The current approach to verify that the published blob matches the tx effects coming from the rollup is to call the point evaluation precompile for each blob. This costs 50k gas each time, so is not sustainable.
    • We think it's possible to accumulate the KZG proofs used to validate blobs into one. Mike is thinking about this and whether it's doable using nr, so we can call the precompile once per epoch rather than 3 times per block (rough numbers below).
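To put rough numbers on it (the 32-block epoch here is an illustrative assumption, not a protocol constant): at 3 blobs per block, per-block verification costs 3 × 50k = 150k gas, i.e. 96 precompile calls (~4.8M gas) over a 32-block epoch, versus a single 50k call if the proofs can be accumulated.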

General TODOs

Things I'm working on:

  • Moving from 1 to 3 blobs per block
    • This will slow everything down massively so I'd prefer to solve the runtime issues before tackling this.
    • It's also going to be relatively complex, because the base rollup will need code to fill from one of three SpongeBlob instances and will need to know how to 'jump' from one full blob to the next at any possible index. Hopefully this does not lead to a jump in gates.

Description

The general maths in nr, replicated across foundation/blob, is described here.

Old DA Flow

From the base rollup to L1, the previous flow for publishing DA was:

Nr:

  • In the base rollup, take in all tx effects we wish to publish and sha256 hash them to a single value: tx_effects_hash
  • This value is propagated up the rollup to the next merge (or block-root) circuit
  • Each merge or block-root circuit simply sha256 hashes each 'child' tx_effects_hash from its left and right inputs
  • Eventually, at block-root, we have one value, txs_effects_hash, which becomes part of the header's content commitment (this hashing is sketched below)
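For reference, a minimal ts sketch of this old scheme using node's crypto. Byte layouts are simplified, and the handling of unpaired wonky-tree nodes is an assumption on my part:

```typescript
import { createHash } from 'crypto';

const sha256 = (data: Buffer): Buffer => createHash('sha256').update(data).digest();

// Base rollup: hash one tx's (serialised) effects down to a single value.
const txEffectsHash = (effects: Buffer): Buffer => sha256(effects);

// Merge / block-root: combine the two children's hashes.
const mergeHashes = (left: Buffer, right: Buffer): Buffer =>
  sha256(Buffer.concat([left, right]));

// Hash leaves up a (possibly wonky) binary tree to the final txs_effects_hash.
function txsEffectsHash(leaves: Buffer[]): Buffer {
  let layer = leaves;
  while (layer.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < layer.length; i += 2) {
      // Assumption: an unpaired node in a wonky layer is carried up unchanged.
      next.push(i + 1 < layer.length ? mergeHashes(layer[i], layer[i + 1]) : layer[i]);
    }
    layer = next;
  }
  return layer[0];
}
```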

Ts:

  • The txs_effects_hash is checked and propagated through the orchestrator and becomes part of the ts class L2Block in the header
  • The actual tx effects to publish become the L2Block's .body
  • The publisher sends the serialised block body and header to the L1 block propose function

Sol:

  • In propose, we decode the block body and header
  • The body is deconstructed per tx into its tx effects and then hashed using sha256, until we have N tx_effects_hashes (mimicking the calculation in the base rollup)
  • Each tx_effects_hash is then input as a leaf of a wonky tree and hashed up to the root (mimicking the calculation from base to block-root), forming the final txs_effects_hash
  • This final value is checked to be equal to the one in the header's content commitment, then stored to be checked against for validating data availability
  • *Later, when verifying a rollup proof, we use the above header values as public inputs. If they do not match what came from the circuit, the verification fails.

*NB: With batch rollups, I've lost touch with what currently happens at verification and how we ensure the txs_effects_hash matches the one calculated in the rollup, so this might not be accurate.

New DA Flow

The new flow for publishing DA is:

Nr:

  • In the base rollup, we treat tx effects as we treat PartialStateReferences - injecting a hint to the start and end state we expect from processing this base's transaction
  • We take all the tx effects to publish and absorb them into the given start SpongeBlob state. We then check the result is the same as the given end state
  • Like with PartialStateReferences, each merge or block-root checks that the left input's end blob state is equal to the right input's start blob state
  • Eventually, at block-root, we check the above and that the left's start blob state was empty. Now we have a sponge which has absorbed, as a flat array, all the tx effects in the block we wish to publish
  • We inject the flat array of effects as a private input, along with the ts calculated blob commitment, and pass them and the sponge to the blob function
  • The blob function:
    • Poseidon hashes the flat array of effects, and checks this matches the accumulated sponge when squeezed (this confirms that the flat array is indeed the same array of tx effects propagated from each base)
    • Computes the challenge z by hashing this ^ hash with the blob commitment
    • Evaluates the blob polynomial at z using the flat array of effects in the barycentric formula (more details on the engineering design link above; a toy sketch follows after this list), to return y
  • The block-root adds this triple (z, y, and commitment C) to a new array of BlobPublicInputs
  • *Like how we handle fees, each block-merge and root merges the left and right input arrays, so we end up with an array of each block's blob info

*NB: this will likely change to accumulating to a single set of values, rather than one per block, and is being worked on by Mike. The above also describes what happens for one blob per block for simplicity (it will actually be 3).
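To illustrate the evaluation step, here is a toy ts sketch of the barycentric formula over a small prime field. The real circuit works over the BLS12-381 scalar field with N = 4096 using BigNum in nr; the field, domain size, and constants here are scaled-down stand-ins:

```typescript
// Modular helpers over a toy prime field.
const modpow = (b: bigint, e: bigint, m: bigint): bigint => {
  let r = 1n;
  b %= m;
  while (e > 0n) {
    if (e & 1n) r = (r * b) % m;
    b = (b * b) % m;
    e >>= 1n;
  }
  return r;
};
const inv = (a: bigint, m: bigint): bigint => modpow(((a % m) + m) % m, m - 2n, m);

const Q = 257n; // toy prime; the real field is the BLS12-381 scalar field
const N = 8n;   // toy domain size; real blobs hold 4096 field elements
const OMEGA = modpow(3n, (Q - 1n) / N, Q); // primitive N-th root of unity (3 generates F_257*)

// Barycentric evaluation of the polynomial whose evaluations over the
// roots-of-unity domain are d, at a point z outside the domain:
//   p(z) = (z^N - 1)/N * sum_i d_i * w^i / (z - w^i)
function evalBarycentric(d: bigint[], z: bigint): bigint {
  let sum = 0n;
  let wi = 1n; // w^i
  for (const di of d) {
    sum = (sum + di * wi % Q * inv((z - wi + Q) % Q, Q)) % Q;
    wi = (wi * OMEGA) % Q;
  }
  const factor = ((modpow(z, N, Q) - 1n + Q) % Q) * inv(N, Q) % Q;
  return (factor * sum) % Q;
}

// In the circuit, z is not random but a Fiat-Shamir style challenge binding
// the effects hash to the commitment: z = hash(poseidon(effects), C).
```

The barycentric form matters because the circuit only ever holds the blob's evaluations (the tx effects), never its coefficients, so evaluating at the challenge point directly from the evaluations avoids an interpolation. A quick sanity check: a constant array d_i = c evaluates to c at any z outside the domain, since the Lagrange basis sums to one.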

Ts:

  • The BlobPublicInputs are checked against the ts calculated blob for each block in the orchestrator
  • They form part of a serialised array of bytes called blobInput (plus the expected L1 blobHash and a ts generated KZG proof) sent to the L1 propose function (packing sketched below)
  • The propose transaction is now a special 'blob transaction' where all the tx effects (the same flat array as dealt with in the rollup) are sent as a sidecar
  • *We also send the serialised block body, so the archiver can still read the data back until feat(blobs): Integrate beacon chain client/web2 blob getter #9101

*NB: this will change once we can read the blobs themselves from the beacon chain/some web2 client.
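A hedged sketch of the packing: the 32/48-byte sizes follow KZG/EIP-4844 conventions, but the concrete blobInput ordering in the publisher is an assumption here and may differ:

```typescript
// Illustrative packing of one blob's public inputs for the L1 propose call.
interface BlobPublicInputs {
  z: Buffer;          // 32-byte challenge point
  y: Buffer;          // 32-byte claimed evaluation
  commitment: Buffer; // 48-byte KZG commitment C
}

function encodeBlobInput(
  expectedBlobHash: Buffer, // 32-byte EVM blob hash we expect L1 to see
  pis: BlobPublicInputs,
  kzgProof: Buffer,         // 48-byte KZG opening proof generated in ts
): Buffer {
  // Assumed ordering: [blobHash | z | y | C | proof]
  return Buffer.concat([expectedBlobHash, pis.z, pis.y, pis.commitment, kzgProof]);
}
```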

Sol:

  • In propose, instead of recalculating the txs_effects_hash, we send the blobInput to a new validateBlob function. *This function:
    • Gathers the real blobHash from the EVM and checks it against the one in blobInput
    • Calls the point evaluation precompile and checks that our z, y, and C indeed correspond to the blob we claim (the precompile's input layout is sketched below)
  • We now have a verified link between the published blob and our blobInput, but still need to link this to our rollup circuit:
    • Each set of BlobPublicInputs is extracted from the bytes array and stored against its block number
    • When the root proof is verified, we reconstruct the array of BlobPublicInputs from the above stored values and use them in proof verification
    • If any of the BlobPublicInputs are incorrect (equivalently, if any of the published blobs were incorrect), the proof verification will fail
  • To aid users/the archiver in checking their blob data matches a certain block, the EVM blobHash has been added to BlockLog once it has been verified by the precompile

*NB: As above, we will eventually call the precompile just once for many blobs with one set of BlobPublicInputs. This will still be used in verifying the root proof to ensure the tx effects match those from each base.
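The precompile interface itself is fixed by EIP-4844. Here is a ts sketch of what validateBlob effectively assembles and checks; the contract's actual code differs, but the 192-byte layout and the versioned-hash rule come from the EIP:

```typescript
import { createHash } from 'crypto';

// EIP-4844 point evaluation precompile address.
const POINT_EVALUATION_PRECOMPILE = '0x000000000000000000000000000000000000000A';

// versioned_hash = 0x01 || sha256(commitment)[1..32] (per EIP-4844).
function blobVersionedHash(commitment: Buffer): Buffer {
  const h = createHash('sha256').update(commitment).digest();
  h[0] = 0x01; // VERSIONED_HASH_VERSION_KZG
  return h;
}

// Precompile input: versioned_hash ++ z ++ y ++ commitment ++ proof (192 bytes).
function pointEvaluationInput(z: Buffer, y: Buffer, commitment: Buffer, proof: Buffer): Buffer {
  const input = Buffer.concat([blobVersionedHash(commitment), z, y, commitment, proof]);
  if (input.length !== 192) throw new Error('malformed precompile input');
  // The contract staticcalls the precompile with this input, checks success,
  // and separately checks blobhash(i) against the hash claimed in blobInput.
  return input;
}
```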

offset += 1;

// TX FEE
tx_effects_hash_input[offset] = transaction_fee;
// TODO(Miranda): how many bytes do we expect tx fee to be? Using 29 for now
MirandaWood (author):

@ TMNT (or any) team: is there a restriction anywhere on the size of the tx fee?

Contributor:

Don't think there really is a limit. There is probably some sensible value, but as base fees might go up a lot during congestion, for now I would just keep it as is. You can essentially see it as just a more domain-specific public state update.

@iAmMichaelConnor (Contributor) left a comment:

Amazing!
I think this is mergeable. @LeilaWang, @LHerskind what do you think?

The circuits look good (with well documented todos) (and we'll be updating them for batching of blobs soon anyway).
The smart contracts look good (with well documented todos) (and we know they're not optimised, because we'll be moving to batching of blobs soon anyway).
The typescript is passing tests.
The "boxes" tests have been failing on master for a while, so no need to fix those here.

Ship?

@LHerskind (Contributor) left a comment:

There is a bit of naming that should probably be updated around single or multiple blobs, e.g., blobHash vs blobsHash. For the cases where there are multiple, I'd prefer blobsHash to be used consistently to make it simpler to follow.

The TxsDecoder should also be pretty much unused after this, so maybe it can be deleted as well.

@@ -94,6 +98,9 @@ contract Rollup is EIP712("Aztec Rollup", "1"), Leonidas, IRollup, ITestRollup {
// Testing only. This should be removed eventually.
uint256 private assumeProvenThroughBlockNumber;

// @note Always true, exists to override to false for testing only
Contributor:

Is there an issue for getting rid of this again?

MirandaWood (author):

Not yet - made one here: #10147

@@ -24,7 +24,7 @@ import {Errors} from "@aztec/core/libraries/Errors.sol";
* | 0x0020 | 0x04 | lastArchive.nextAvailableLeafIndex
* | | | ContentCommitment {
* | 0x0024 | 0x20 | numTxs
- * | 0x0044 | 0x20 | txsEffectsHash
+ * | 0x0044 | 0x20 | blobHash
Contributor:

Thanks.
Quick question: should this not be blobsHash (multiple blobs)?

* @param _proof - The proof to verify
*/
function submitEpochRootProof(
uint256 _epochSize,
bytes32[7] calldata _args,
bytes32[] calldata _fees,
-  bytes calldata _aggregationObject,
+  bytes calldata _blobPublicInputsAndAggregationObject, // having separate inputs here caused stack too deep
Contributor:

It is possible to get around this by using something similar to the ProposeArgs. Might be worth considering for the sake of your sanity 🫡

@@ -311,6 +321,10 @@ contract Rollup is EIP712("Aztec Rollup", "1"), Leonidas, IRollup, ITestRollup {
}
}

for (uint256 i = 0; i < _epochSize; i++) {
// free up gas (hopefully)
Contributor:

Unless you are submitting the proof in the same transaction as the one where you set the values originally, you will be paying extra to do this. Cleaning it here is essentially doing what gas tokens did back in the day, before the pricing was changed.
Would remove.

// Which we are checking in the `_validateHeader` call below.
bytes32 txsEffectsHash = TxsDecoder.decode(_body);
// Since an invalid blob hash here would fail the consensus checks of
// the header, the `blobInput` is implicitly accepted by consensus as well.
Contributor:

🫡

*/
function getEpochProofPublicInputs(
uint256 _epochSize,
bytes32[7] calldata _args,
bytes32[] calldata _fees,
-  bytes calldata _aggregationObject
+  bytes calldata _blobPublicInputsAndAggregationObject // having separate inputs here caused stack too deep
Contributor:

As earlier, a struct can be used to work around this while keeping the readability.

bytes32(_blobPublicInputsAndAggregationObject[blobOffset:blobOffset += 32])
);
// To fit into 2 fields, the commitment is split into 31 and 17 byte numbers
// TODO: The below left pads, possibly inefficiently
Contributor:

Is there an issue for this? You don't need to do the work, but if you can add an issue and throw it into #7820 that would be neat 👍

MirandaWood (author):

Made one, thanks! #10148

_signatures: _signatures,
_digest: digest,
_currentTime: Timestamp.wrap(block.timestamp),
_blobHash: blobsHash,
Contributor:

The _blobHash should probably also be blobsHash (multiple)

@@ -856,14 +934,12 @@ contract Rollup is EIP712("Aztec Rollup", "1"), Leonidas, IRollup, ITestRollup {
SignatureLib.Signature[] memory _signatures,
bytes32 _digest,
Timestamp _currentTime,
-  bytes32 _txEffectsHash,
+  bytes32 _blobHash,
Contributor:

Same down here, there is a bunch of blobHash that should probably be blobsHash.

@LHerskind (Contributor) left a comment:

Let's get it merged before the fee changes go in.

* @param _args - Array of public inputs to the proof (previousArchive, endArchive, previousBlockHash, endBlockHash, endTimestamp, outHash, proverId)
* @param _fees - Array of recipient-value pairs with fees to be distributed for the epoch
* @param _blobPublicInputsAndAggregationObject - The aggregation object and blob PIs for the proof
* @param _submitArgs - Struct for constructing PIs which has:
Contributor:

What is PIs? Public Inputs? Personal Information?

@@ -402,19 +397,19 @@ contract Rollup is EIP712("Aztec Rollup", "1"), Leonidas, IRollup, ITestRollup {
* @param _signatures - The signatures to validate
* @param _digest - The digest to validate
* @param _currentTime - The current time
- * @param _blobHash - The EVM blob hash for this block
+ * @param _blobsHash - The EVM blob hash for this block
Contributor:

The EVM blob hash for this block is not correct anymore

* @param _args - Array of public inputs to the proof (previousArchive, endArchive, previousBlockHash, endBlockHash, endTimestamp, outHash, proverId)
* @param _fees - Array of recipient-value pairs with fees to be distributed for the epoch
* @param _blobPublicInputsAndAggregationObject - The aggregation object and blob PIs for the proof
* @param _submitArgs - Struct for constructing PIs which has:
Contributor:

Another PIs; I think it is public inputs, but it would be easier to follow if written out.

-  uint256 endBlockNumber = previousBlockNumber + _epochSize;
+  uint256 endBlockNumber = previousBlockNumber + _submitArgs.epochSize;
+  bytes32[7] memory args = _submitArgs.args;
Contributor:

Why this?

@MirandaWood MirandaWood merged commit 03b7e0e into master Nov 25, 2024
101 checks passed
@MirandaWood MirandaWood deleted the mw/blob-circuit branch November 25, 2024 19:15
TomAFrench added a commit that referenced this pull request Nov 25, 2024
* master: (106 commits)
  feat: blobs. (#9302)
  chore(avm): operands reordering (#10182)
  feat: UltraRollupRecursiveFlavor (#10088)
  feat: one liner for nodes to join rough-rhino (#10168)
  feat!: rename sharedimmutable methods (#10164)
  chore(master): Release 0.64.0 (#10043)
  feat: e2e metrics reporting (#9776)
  chore: fix pool metrics (#9652)
  chore: Initial draft of testnet-runbook (#10085)
  feat: Improved data storage metrics (#10020)
  chore: Remove handling of duplicates from the note hash tree (#10016)
  fix: add curl to aztec nargo container (#10173)
  fix: Revert "feat: integrate base fee computation into rollup" (#10166)
  feat!: rename SharedMutable methods (#10165)
  git subrepo push --branch=master noir-projects/aztec-nr
  git_subrepo.sh: Fix parent in .gitrepo file. [skip ci]
  chore: replace relative paths to noir-protocol-circuits
  git subrepo push --branch=master barretenberg
  feat: sync tags as sender (#10071)
  feat: integrate base fee computation into rollup (#10076)
  ...
MirandaWood added a commit that referenced this pull request Nov 25, 2024
ludamad pushed a commit that referenced this pull request Nov 25, 2024
This reverts commit 03b7e0e.

(womp womp)
MirandaWood added a commit that referenced this pull request Nov 25, 2024
@MirandaWood MirandaWood mentioned this pull request Nov 25, 2024
TomAFrench added a commit that referenced this pull request Nov 26, 2024
* master: (64 commits)
  fix: docker compose aztec up fix (#10197)
  fix: aztec-nargo curl in the earthfile also (#10199)
  chore: fix devbox (#10201)
  chore: misc cleanup (#10194)
  fix: release l1-contracts (#10095)
  git subrepo push --branch=master noir-projects/aztec-nr
  git_subrepo.sh: Fix parent in .gitrepo file. [skip ci]
  chore: replace relative paths to noir-protocol-circuits
  git subrepo push --branch=master barretenberg
  feat: Origin tags implemented in biggroup (#10002)
  fix: Revert "feat: blobs. (#9302)" (#10187)
  feat!: remove SharedImmutable (#10183)
  fix(bb.js): don't minify bb.js - webpack config (#10170)
  feat: blobs. (#9302)
  chore(avm): operands reordering (#10182)
  feat: UltraRollupRecursiveFlavor (#10088)
  feat: one liner for nodes to join rough-rhino (#10168)
  feat!: rename sharedimmutable methods (#10164)
  chore(master): Release 0.64.0 (#10043)
  feat: e2e metrics reporting (#9776)
  ...
critesjosh pushed a commit that referenced this pull request Nov 26, 2024
🤖 I have created a release *beep* *boop*
---


<details><summary>aztec-package: 0.65.0</summary>

##
[0.65.0](aztec-package-v0.64.0...aztec-package-v0.65.0)
(2024-11-26)


### Features

* **avm:** New public inputs witgen
([#10179](#10179))
([ac8f13e](ac8f13e))
</details>

<details><summary>barretenberg.js: 0.65.0</summary>

##
[0.65.0](barretenberg.js-v0.64.0...barretenberg.js-v0.65.0)
(2024-11-26)


### Bug Fixes

* **bb.js:** Don't minify bb.js - webpack config
([#10170](#10170))
([6e7fae7](6e7fae7))
</details>

<details><summary>aztec-packages: 0.65.0</summary>

##
[0.65.0](aztec-packages-v0.64.0...aztec-packages-v0.65.0)
(2024-11-26)


### ⚠ BREAKING CHANGES

* remove SharedImmutable
([#10183](#10183))
* rename sharedimmutable methods
([#10164](#10164))

### Features

* **avm:** New public inputs witgen
([#10179](#10179))
([ac8f13e](ac8f13e))
* Blobs.
([#9302](#9302))
([03b7e0e](03b7e0e))
* One liner for nodes to join rough-rhino
([#10168](#10168))
([3a425e9](3a425e9))
* Origin tags implemented in biggroup
([#10002](#10002))
([c8696b1](c8696b1))
* Remove SharedImmutable
([#10183](#10183))
([a9f3b5f](a9f3b5f))
* Rename sharedimmutable methods
([#10164](#10164))
([ef7cd86](ef7cd86))
* UltraRollupRecursiveFlavor
([#10088](#10088))
([4418ef2](4418ef2))


### Bug Fixes

* Aztec-nargo curl in the earthfile also
([#10199](#10199))
([985a678](985a678))
* **bb.js:** Don't minify bb.js - webpack config
([#10170](#10170))
([6e7fae7](6e7fae7))
* Docker compose aztec up fix
([#10197](#10197))
([d7ae959](d7ae959))
* Increase test timeouts
([#10205](#10205))
([195aa3d](195aa3d))
* Release l1-contracts
([#10095](#10095))
([29f0d7a](29f0d7a))
* Revert "feat: blobs.
([#9302](#9302))"
([#10187](#10187))
([a415f65](a415f65))


### Miscellaneous

* Added ref to env variables
([#10193](#10193))
([b51fc43](b51fc43))
* **avm:** Operands reordering
([#10182](#10182))
([69bdf4f](69bdf4f)),
closes
[#10136](#10136)
* Fix devbox
([#10201](#10201))
([323eaee](323eaee))
* Misc cleanup
([#10194](#10194))
([dd01417](dd01417))
* Reinstate docs-preview, fix doc publish
([#10213](#10213))
([ed9a0e3](ed9a0e3))
* Replace relative paths to noir-protocol-circuits
([1650446](1650446))
</details>

<details><summary>barretenberg: 0.65.0</summary>

##
[0.65.0](barretenberg-v0.64.0...barretenberg-v0.65.0)
(2024-11-26)


### Features

* **avm:** New public inputs witgen
([#10179](#10179))
([ac8f13e](ac8f13e))
* Origin tags implemented in biggroup
([#10002](#10002))
([c8696b1](c8696b1))
* UltraRollupRecursiveFlavor
([#10088](#10088))
([4418ef2](4418ef2))


### Miscellaneous

* **avm:** Operands reordering
([#10182](#10182))
([69bdf4f](69bdf4f)),
closes
[#10136](#10136)
</details>

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
AztecBot added a commit to AztecProtocol/barretenberg that referenced this pull request Nov 27, 2024
charlielye pushed a commit that referenced this pull request Dec 16, 2024
Electric boogaloo (cont. of #9302)

This reverts commit a415f65.

---------

Co-authored-by: Tom French <15848336+TomAFrench@users.noreply.github.com>
Labels: e2e-all (CI: Enables this CI job), team-turing (Leila's team)
8 participants