
Initial commit for Native Account Abstraction EIP #1

Open

wants to merge 10 commits into base: origin_master
Conversation


@forshtat forshtat commented Sep 29, 2023

EIP draft to support a native account abstraction transaction type. This is one of three EIPs: AA transaction, aggregation, validation rules. See https://hackmd.io/@alexforshtat/native-account-abstraction-roadmap
and https://vitalik.eth.limo/general/2023/09/30/enshrinement.html

This is an internal PR. The public PR to the RIP repo is located here:
ethereum/RIPs#3


@lightclient lightclient left a comment


Some initial thoughts. There is a lot here, still don't fully follow the entire flow. I think a diagram of the stages of execution could help a bit. Maybe you already have this.

--

The rest of my feedback is uninteresting and predictable. This is a very complex change to the protocol. I think you are going to have a hard time getting EL teams interested in this change for this reason. I think the strongest motivator for this is point 3) in the motivation: 4337 not benefiting as much from crLists. Otherwise, I don't think the other motivating factors are strong enough to take on the risk and complexity of this EIP.

Additionally, I don't think it is a good idea to enshrine application layer standards into the protocol (solidity abi, func selector, etc). I think it is best to keep the two worlds separate.

Anyways, nice work on this proposal. A lot of work has gone into it.


If `paymasterData` is specified, its first 20 bytes contain the address of a `paymaster` contract.

If `deployerData` is specified, its first 20 bytes contain the address of a `deployer` contract.


why not have deployerAddr and deployerData (same for paymaster) instead?



why not have deployerAddr and deployerData (same for paymaster) instead?

One reason we combined the address and the data passed to it is that this is what ERC-4337 does.
"Packing" these four fields into two saved some calldata for the bundlers.
This is not as relevant for the native TransactionType4.
However, many transactions don't have a Paymaster, and almost all of the transactions don't have a deployerData as only the first transaction by an address has it.
This optimization remains a somewhat valid concern for the native TransactionType4 as loading the TransactionType4 into memory and passing it to a call will be a little more expensive.
If only 1% of transactions have a Deployer, there is no reason to waste space on another address field.
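
For illustration, here is a minimal sketch of how the packed field can be split on-chain, mirroring how the ERC-4337 EntryPoint handles `paymasterAndData`; the library and function names are illustrative, not part of the EIP:

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.12;

// Sketch only: split the packed `paymasterData` field into the paymaster
// address (first 20 bytes) and the data passed to it. The same pattern
// applies to `deployerData`.
library PackedDataLib {
    function splitPaymasterData(bytes calldata paymasterData)
        internal
        pure
        returns (address paymaster, bytes calldata data)
    {
        require(paymasterData.length >= 20, "paymasterData too short");
        paymaster = address(bytes20(paymasterData[:20]));
        data = paymasterData[20:];
    }
}
```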

This approach guarantees a unique transaction nonce and hash but removes the requirement that nonces be sequential numbers.

This `nonce` is exposed to the EVM in a `NonceManager` pre-deployed contract located at the AA_NONCE_MANAGER address.


There is a small amount of precedent with 4788 to not have "pre-deploys" (i.e. irregular state transitions). You may consider deploying the nonce manager as a normal contract and "enshrine" its address instead.



There is a small amount of precedent with 4788 to not have "pre-deploys" (i.e. irregular state transitions). You may consider deploying the nonce manager as a normal contract and "enshrine" its address instead.

We were planning to deploy it and force a certain deployment address, which is what EIP-4788 used to do with HISTORY_STORAGE_ADDRESS, but it was changed on Aug 24 to "A special synthetic address is generated by working backward from the desired deployment transaction". We could do that, but there's a downside that EIP-4788 doesn't have to deal with. This EIP is going to be used on different chains, whereas EIP-4788 only lives on L1 mainnet. Some chains can't generate the same deployment address, such as zkSync. It means we'll have different addresses on different chains.

The wallet needs to read the account's current nonce for the given key from this contract, and we wouldn't want wallets to maintain a chain->NonceManager list. We can solve it in two ways:

  1. Use a fixed address as the EIP suggests, which wallets always read from.
  2. RPC nodes of each network will abstract the address by adding an eth_getNonceTxType4(account,key) method.

Which one is better in your opinion?
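
For illustration, option 1 would look roughly like the sketch below on the contract side; the interface name and `getNonce` signature are assumptions modeled on ERC-4337's NonceManager rather than definitions from this EIP:

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Hypothetical sketch: a wallet or contract reads the current nonce for a
// given two-dimensional (account, key) pair from the contract enshrined at
// the fixed AA_NONCE_MANAGER address.
interface IAANonceManager {
    function getNonce(address account, uint192 key) external view returns (uint256 nonce);
}
```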



this is a nice and concise description of validation


The validation frames of the AA_TX_TYPE transaction are represented as individual virtual transactions by the clients.
They are assigned their own sequential `transactionIndex`, and their `transactionHash` is defined as
(`AA_TX_TYPE transaction hash + 1`).


does this affect the tx root?


also why does this need to happen? why can't it just be related to the original transaction hash - e.g. extend eth_getTransactionByHash to return an object stages or something which has info about the validation frames



also why does this need to happen? why can't it just be related to the original transaction hash - e.g. extend eth_getTransactionByHash to return an object stages or something which has info about the validation frames

The reason we added the artificial 'validation tx hash' is so that clients can observe the "outcomes" for the validation phase separately.

We believe that we do need to provide access to this part of the transaction rather than hiding state changes that happened during validation. One of the L2 AA implementations seems to do this, and it makes debugging hard when a state change during validation has an effect on execution. There is a state change that just magically happens.

We can't concatenate it into one transaction because it is not atomic, and there may be state transitions between the two that would affect the execution transaction. E.g. tx1.validation and tx2.validation both use the same Paymaster, and then tx1.execution accesses the Paymaster balance, which is changed in tx2.validation.
From the EVM's perspective these are two separate transactions. The consensus just enforces a dependency between them.

At the API level, we could extend eth_getTransactionByHash to return two objects as you have suggested, but we can't merge it into one object.

At the block level, it looks like two transactions but at the API level, we can abstract it away.



### RPC methods (eth namespace)


these changes should be made to ethereum/execution-apis



Right, they belong there, but since this EIP currently targets L2 networks, would it be appropriate to add RPC methods to the ethereum doc despite not being supported by L1? Or should we clone ethereum/execution-apis to a new execution-apis-l2 repo?


I would love if each L2 maintained their own fork of ethereum/execution-apis that highlights the differences vs the L1 API.

I think it'd be useful to see these RPC methods on a fork.



But until they do, where should this API live if not in the EIP? Should we start a fork of ethereum/execution-apis ourselves for this?

```
func validateAccountAbstractionTransaction(tx *Transaction) {
```


assert ! (sender.code.length > 0 && deployerData.length > 0)



Right. We should add that.



assert ! (sender.code.length > 0 && deployerData.length > 0)

That's right, added this check. Thanks!

Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>

@hazim-j hazim-j left a comment


First off, amazing effort on this initial draft for native account abstraction! A lot of thought has clearly been put into this. I read it a couple of times over the weekend and had some initial thoughts.

Backwards compatibility

It would be great to have more emphasis on what a realistic migration path is going to look like for various stakeholders. On a high level the EIP has a very similar flow to ERC-4337, but there seem to be enough differences to make it not entirely backwards compatible (although do correct me if I'm wrong).

This allows the same client code and smart contracts to be used in both systems with minimal to no modifications

From my understanding, this does not seem to be the case. Current smart contracts would still need to be upgraded to support new interfaces. AFAIK, this is the equivalent mapping of 4337 to AA_TX_TYPE:

  • validateUserOp --> validateTransaction
  • validatePaymasterUserOp --> validatePaymasterTransaction
  • postOp --> postPaymasterTransaction
  • UserOperation --> TransactionType4

It looks like the current migration path for CAs is to expand and contract. i.e. Upgrade to support both 4337 and AA_TX_TYPE and then reduce to just the latter post transition period. In this case, what would be the rationale for CAs to upgrade their EP to an adapter contract? Is it purely to help re-route AA_TX_TYPE methods to their 4337 counterparts?

On the client side, there would still be some effort required to migrate from UOs to an AA_TX_TYPE since the types aren't identical and both are required to be routed to their own RPC methods. A client's POV for building a UO vs AA_TX_TYPE (high level):

  • Building a UO: eth_estimateUserOperationGas --> eth_sendUserOperation --> poll eth_getLog for UserOperationEvent or eth_getUserOperationReceipt
  • Building an AA_TX_TYPE: eth_estimateGasAccountAbstraction --> eth_sendTransaction --> poll eth_getTransactionReceipt

The participating RPC nodes MAY be able to handle the eth_sendUserOperation call and convert it into an eth_sendTransaction call if necessary.

How is this possible if a userOpHash is signed based on different values to a TransactionType4?

I think more details/diagrams on the migration path would be helpful here. Especially for teams who are currently building production applications on 4337 and need to understand what exactly is required to transition to native AA.

Gas estimation

Not much to say on this part yet, but based on how complex gas estimation has been for UserOperations, I'd be interested to see how similar the process for estimating an AA_TX_TYPE would be.

paymasterGasLimit

For instance, I see that we now have paymasterGasLimit as opposed to using validationGasLimit like in 4337. What was the rationale behind this as opposed to using validationGasLimit for the paymaster validation frame and callGasLimit for the paymaster post-transaction frame?

eth_estimateGasAccountAbstraction

What was the rationale behind a new RPC method rather than extending eth_estimateGas?

Also since fee value isn't returned by eth_estimateGas, why does this method return builderFee instead of having something similar to eth_maxPriorityFeePerGas for AA_TX_TYPE? I presume there could be some sort of coupling between builderFee and gas limits during estimation, in which case it would be worth explicitly mentioning it?

@sherifahmed990

Just finished reading the full EIP, great work and congratulations on this milestone.

@yoavw

yoavw commented Oct 4, 2023

Thanks for reviewing, @lightclient !

Some initial thoughts. There is a lot here, still don't fully follow the entire flow. I think a diagram of the stages of execution could help a bit. Maybe you already have this.

Good idea, we will add a diagram.

This is a very complex change to the protocol. I think you are going to have a hard time getting EL teams interested in this change for this reason.

Initially we're not aiming for L1 mainnet. The early adopters are likely to be L2 networks that are already considering some form of native account abstraction. Some networks already enshrined modified versions of ERC-4337 in different ways. This EIP is an attempt to standardize a safe way of doing this, to ensure protocol safety and to avoid ecosystem fragmentation, e.g. the need for chain-specific wallets.

Having said that, we do need feedback from L1 devs. After gaining traction on multiple L2s, there may be user demand for backporting it to L1, so we need to ensure that design choices don't conflict with L1 plans and make this harder than it needs to be.

As for the level of complexity, we tried to make it as simple as possible while retaining the safety properties of ERC-4337. For that reason we moved the complexity of the ERC-4337 validation rules to a separate EIP which can be handled at the mempool level (and is not mandatory for the use of type-4 transactions unless they use that mempool). If you see ways to further simplify this EIP without losing these properties, we're very interested to discuss them.

I think the strongest motivator for this is point 3) in the motivation: 4337 not benefiting as much from crLists. Otherwise, I don't think the other motivating factors are strong enough to take on the risk and complexity of this EIP.

crLists are indeed one of the primary motivations, but Vitalik pointed out a few others - see the "Enshrining ERC-4337" section of his recent post. Gas efficiency, for example, is an important factor. The network currently subsidizes EOAs, loading and executing their validation code for free (the 21000 gas, which AA accounts also need to pay in addition to loading and executing their own validation code). This is a major hindrance to the adoption of AA.

Additionally, I don't think it is a good idea to enshrine application layer standards into the protocol (solidity abi, func selector, etc). I think it is best to keep the two worlds separate.

I share that concern. However, smart accounts are implemented as a contract, and the protocol needs a way to communicate with it in order to separate validation from execution, which is essential for AA if we want a DoS resistant mempool. My initial thought was to try to achieve this with EIP-2938 PAYGAS instead of enshrining an interface, but it became harder to retain some of the safety properties of ERC-4337. For example, running all validations before any execution is essential for protecting block builders against DoS, and can't be done if the separation happens during transaction execution.

Another option would be to use EOF and add multiple code entrypoints to the contract, so it would have a separate entrypoint for validation (which only smart accounts will populate, while other contracts won't). But adding a dependency on EOF would significantly delay adoption, which may result in more L2 networks enshrining an incompatible solution. Therefore we chose to favor a solution that doesn't add dependencies on other future EIPs. When EOF is finally merged, it'll be possible to have a modified version of this EIP that uses code sections, and combine it with an ERC that makes it compatible with the current EIP.

Smart account developers are typically solidity developers, and are comfortable with using interfaces. If we come up with a different scheme we'd probably have to wrap it in a way that is compatible with how ERC-4337 interfaces work.

Do you see another way to avoid enshrining an interface?

As a side note, Ethereum already enshrined a solidity ABI in DepositContract. CL has an implicit dependency on an EL contract, even though the ABI may not be a part of the protocol specs.

Another question I wanted to get your input on, is SSZ vs. RLP. The EIP uses RLP encoding and adds an EIP-2718 transaction type, because it's how wallets currently work. However, there's a plan to move things to SSZ. My initial preference was to use RLP in order not to add unnecessary friction to existing wallets that want to support the new type. WDYT?

Anyways, nice work on this proposal. A lot of work has gone into it.

Thanks! We really appreciate your feedback and look forward to collaborating further.

@yoavw

yoavw commented Oct 4, 2023

Thanks @hazim-j !

First off, amazing effort on this initial draft for native account abstraction!

Thanks!

the EIP has a very similar flow to ERC-4337, but there seem to be enough differences to make it not entirely backwards compatible

One of the goals is actually to make the flows equivalent, making migration as easy as possible.

From my understanding, this does not seem to be the case. Current smart contracts would still need to be upgraded to support new interfaces.

Yes, contracts will need to be upgraded to support the new standard, but the different phases, the validation rules, etc. remain the same. The struct is different so it requires a different function, but validateUserOp and validateTransaction could just parse the struct and wrap a unified implementation of the account’s validation. We will provide a template for this.
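To make the "wrap a unified implementation" idea concrete, here is a rough, non-normative sketch of what such a template might look like, assuming a 4337 v0.6-style UserOperation struct; the contract and helper names are illustrative only:

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Illustrative UserOperation layout (ERC-4337 v0.6 style); not normative.
struct UserOperation {
    address sender;
    uint256 nonce;
    bytes initCode;
    bytes callData;
    uint256 callGasLimit;
    uint256 verificationGasLimit;
    uint256 preVerificationGas;
    uint256 maxFeePerGas;
    uint256 maxPriorityFeePerGas;
    bytes paymasterAndData;
    bytes signature;
}

// Sketch of an account that supports both flows by funneling them into one
// internal validation routine (prefund handling etc. omitted for brevity).
abstract contract DualModeAccount {
    // ERC-4337 flow: called by the 4337 EntryPoint.
    function validateUserOp(
        UserOperation calldata userOp,
        bytes32 userOpHash,
        uint256 /* missingAccountFunds */
    ) external virtual returns (uint256 validationData) {
        return _validateSignature(userOpHash, userOp.signature);
    }

    // Native AA flow: called in the AA_TX_TYPE validation frame.
    function validateTransaction(
        uint256 /* version */,
        bytes32 txHash,
        bytes calldata transaction
    ) external virtual returns (uint256 validationData) {
        return _validateSignature(txHash, _extractSignature(transaction));
    }

    // Single, account-specific validation routine shared by both flows.
    function _validateSignature(bytes32 hash, bytes memory signature)
        internal
        virtual
        returns (uint256 validationData);

    // How the signature is encoded inside the serialized transaction is
    // left account-specific in this sketch.
    function _extractSignature(bytes calldata transaction)
        internal
        pure
        virtual
        returns (bytes memory signature);
}
```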

and then reduce to just the latter post transition period.

Not necessarily reduce it after transition. The account can be deployed on different chains. Ones that don’t use native AA and continue to use ERC-4337, ones that use native AA from the start, and ones that go through a transition process. I think it’ll make sense for accounts to support both interfaces as long as there are chains without native AA, which will continue to be the case at least for a number of years.

In this case, what would be the rationale for CAs to upgrade their EP to an adapter contract? Is it purely to help re-route AA_TX_TYPE methods to their 4337 counterparts?

An adapter is just the easiest way for an account to migrate to native AA with as few changes as possible. But in the long run I think it’ll make more sense to directly support the new interface in the account and not add the overhead of the adapter. The EIP is not opinionated about this. It’ll be up to each account developer, and we’d like to offer the tools to make the transition as easy as possible.

On the client side, there would still be some effort required to migrate from UOs to an AA_TX_TYPE

Yes, wallets will need to support the new transaction type and become familiar with the associated RPC. But if the account supports both modes, it’ll remain usable with UserOps as well so it’ll be able to continue using wallets that are still unaware. It’s similar to the transition to EIP-1559. Legacy transactions remain possible if you’re using a legacy wallet - they’re just less efficient.

How is this possible if a userOpHash is signed based on different values to a TransactionType4?

Right. It would require the account to convert the TransactionType4 back to a UserOperation and hash it (on-chain). I’m not sure this would make sense, because if the account is already aware of both structs, it could just accept both UserOps from the 4337 EntryPoint and TransactionType4s from the native EntryPoint. Maybe we'll just remove it, and the account will instead support both modes directly.

I think more details/diagrams on the migration path would be helpful here. Especially for teams who are currently building production applications on 4337 and need to understand what exactly is required to transition to native AA.

Yes, we should have a separate document outside of the EIP that focuses on migration. We’re still far from finalizing the EIP, so we may only be able to write it when the details are more or less known. Our goal, however, is to ensure that projects built on 4337 will have an easy migration path. You can safely continue building for 4337 and assume that there will be a migration path that doesn’t require a major rewrite :) You are also welcome to join the effort and help us make sure things are done that way.

Not much to say on this part yet, but based on how complex gas estimation has been for UserOperations, I'd be interested to see how similar the process for estimating an AA_TX_TYPE would be.

It should actually be quite similar.

I see that we now have paymasterGasLimit as opposed to using validationGasLimit like in 4337. What was the rationale behind this as opposed to using validationGasLimit for the paymaster validation frame and callGasLimit for the paymaster post-transaction frame?

The rationale has to do with some corner cases we’ve discussed lately, where staked entities could affect each other’s reputation. For example, a rogue deployer could use its own state to detect on-chain execution (as opposed to single simulation), and waste more validation gas in order to cause the paymaster to revert at its own reputation expense. After discussing a few such cases we came to the conclusion that it’s best to separate it into two limits.

It will be compatible with 4337 since we intend to propose this change to 4337 as well. The same may be true for other ways in which the EIP differs from 4337. By the time we finalize both the ERC and the EIP, they’ll be mostly equivalent and therefore migration will be trivial.

What was the rationale behind a new RPC method rather than extending eth_estimateGas?

Different return values. eth_estimateGas returns one value, whereas the new RPC returns something more complex. It doesn’t seem right to change the return value type of an existing RPC method and possibly break existing tools that are unaware of it.

Also since fee value isn't returned by eth_estimateGas, why does this method return builderFee instead of having something similar to eth_maxPriorityFeePerGas for AA_TX_TYPE?

builderFee serves a different purpose. Note that it is denominated in wei rather than gas. It’s a generalization of EIP-4337’s preVerificationGas. A fee that is paid to the bundler/builder in addition to the gas fees. It’ll be used in different ways on different chains. In 4337 it’s things like the 21000, calldata cost, etc. In native AA it may be things like signature aggregation fee, ZK proving cost, L1 calldata gas for rollups, etc. The node will know how to suggest a number that enables inclusion. The user may choose to use a smaller number but it may delay inclusion (or even prevent it if it’s too low). E.g. if the account uses an aggregator, a lower number will mean that the transaction will only be included when there are a lot more transactions with the same aggregator in the mempool. Or if it’s a sequencer that writes calldata to L1, setting it too low may delay inclusion until L1 gas cost drops. These things are not directly associated with the amount of gas consumed by the transaction, so it required a wei-denominated fee.

@yoavw

yoavw commented Oct 4, 2023

Just finished reading the full EIP, great work and congratulations on this milestone.

Thanks!


@dancoombs dancoombs left a comment


Very excited about this. Great work team!

Left some initial thoughts. Will probably read this a few more times and add more. Apologies for nits, left comments on things that were unclear to me coming from a 4337 background.

From my perspective, the hardest things to get right as an ERC-4337 bundler, aside from the validation rules, have involved gas estimation. My most pressing comments are around the builderFee definition and changes to the gas estimation RPC.

As it seems like the function names are changing for the smart contracts, we should ensure that ERC-6900 can handle both definitions as users may want to use both 4337 and enshrined-AA.

As we need to account for additional off-chain work that block builders have to perform to
include `AA_TX_TYPE` transactions in their blocks, as well as a potential L1 gas cost for builders
operating on L2 rollups, and given that this work does not correspond to the amount of gas spent on
validation and is not linked to the gas price, the `sender` may decide
to pay an extra `builderFee` as a "tip" to the block builder.

This value is denominated in wei and is passed from the `sender` to the `coinbase` as part of the gas pre-charge.


Why is this needed? We already have priority fees for tips. Builders can just charge higher priority fees for type 4 transactions? L2s already have their own ways of charging for transaction calldata, that they could just apply to type 4 transactions.

In general, the preVerificationGas field has been a nuisance in 4337 because there isn't a way for a user to signal the maximum amount of gas they want to spend and be refunded if the actual cost is lower than that. Have we considered a limit-based field here instead? Or just applying these costs directly to validationGasLimit?


Thought about this more, I understand why this concept is needed for full gas abstraction on L2s.

However, I'd like to know what costs can contribute to this field that are not better charged by the VM itself and modeled as a limit. Also, callGasLimit would be a better place to subtract these costs from, so that validationGasLimit can stay relatively static for a particular account's validation path.

On Arbitrum, for example, my understanding is that the VM charges the L1 calldata costs by subtracting from the initial gasLimit. That same concept could be applied here, and across L2s that already have ways of charging for this built into their VMs.

It's pretty important to model this as a limit, especially when it comes to calldata. We've found that users who want to ensure that their preVerificationGas is always valid while L2 fees are changing quickly need to increase their preVerificationGas by up to 25%. Since L1 calldata costs are the majority of the cost of a L2 transaction, this basically makes using 4337 20%+ more expensive.



builderFee is a generalization of preVerificationGas. It's deliberately denominated in wei rather than gas, because it accounts for costs not directly associated with the amount of gas consumed by the transaction.

One example is aggregation fees, especially when using a Snark aggregator with heavy proving costs - a cost that differs between types of aggregators. Another is L1 calldata gas for rollups, which may or may not be possible to calculate via the L2's gas oracle, depending on each L2 architecture. Yet another one is ZK rollup proving costs involving the number of keccaks in the transaction.

Some L2 networks are actually considering a new transaction standard for multidimensional gas to address their individual cost dependencies.

On chains where everything can be calculated as part of the transaction gas, such as on Ethereum mainnet, this value may always be zero. But on other chains there will always be costs not associated with gas, that the user may not be able to calculate. The sequencer (or RPC node) will have to propose a sensible value based on the information it has (which the user might not have, such as the GPU cost of proving the transaction, combined with the number of transactions in the next batch to be proven).

I hope this value can be zero in as many cases as possible, but I think we should have it in the standard to support the cases where it's needed.

Do you see a way to model this as a limit, while still supporting every such case of off-chain costs associated with inclusion of the transaction? The above are just a few examples. We need this value unless we can come up with a limit-based model that would work with every possible future off-chain cost.


I think it's very important to model costs that may be time-variable in a limit field. Else users are stuck paying extra to achieve reliable time to mine.

The solution here may vary from network to network. On Arbitrum or Optimism the protocol could easily be modified to account for L1 calldata costs in the validationGasLimit field. Any network that charges variable off-chain fees to their users can just convert those fees to gas by dividing by the base fee, and then subtract that gas from the validationGasLimit.

I could see builderFee being used to charge a static fee for offchain signature aggregator proving costs. Then validationGasLimit can be used to cover the variable aggregated signature onchain validation cost (which will reduce with the number of aggregated signatures).

I'm not against the builderFee field existing, but I hope as this EIP is adopted onto chains they utilize limit fields to charge for variable costs.


@yoavw - what is the reason not to model builderFee as a limit? with erc4337, we have a contract to apply the value, and thus it has to be fixed. With native code, we can define its usage differently for different chains - while still letting the account put a limit on it.



@drortirosh I'm not sure I understand how it would work, when it comes to offchain costs such as ZK proving costs.

For certain use cases it could work, if they are a part of that chain's consensus. E.g. the chain's proving cost could take into account things like the number of keccaks in a transaction or the number of transactions in a batch. This will be a part of the proof since it's a part of that chain's consensus.

But for use cases involving a 3rd party, how would it work? For example, when we add the aggregated native AA EIP, the signature prover may be a 3rd party service that uses an aggregation method the network is unaware of. The consensus won't be aware of the off-chain costs associated with every possible aggregation algorithm. Do you see a way to model this as a limit?

However, even if the answer is no, I agree that it would be best to model as many things as possible as a limit, and only resort to a static fee when there's absolutely no other way - which should be quite rare.

@dancoombs @drortirosh do you think we should add two dimensions here, enabling both a builderFeeLimit and a builderStaticFee, or do you see a way we could completely avoid static fees for every possible case?


@dancoombs dancoombs Nov 16, 2023


I think it is very important to allow users to express a limit value for values that can vary (quickly relative to block time) and are part of consensus. The two main use cases being L2->L1 calldata costs and dynamic portions of aggregation validation. I think at a minimum there should be a builderGasLimit (or something) field denominated in gas.

If we are convinced that there are values outside of consensus that are needed (your explanation makes sense, but I haven't seen any of these use cases on a live implementation), I'd prefer the two dimensions.

We should make it very clear to implementors that using builderStaticFee for time-varying fees is bad practice and will force the users to incur extra costs in order to ensure their transaction gets mined.

When processing a transaction of type `AA_TX_TYPE`, however, multiple execution frames will be created.
The full list of possible frames tries to replicate the ERC-4337 flow:

1. Validation Phase


Where is the nonce validation frame?



See validate_increment() in line 145. It is invoked in the Nonce validation frame described in line 238. But it should be in this list as well.

@forshtat shouldn't it be referenced here, along with the other frames in the Validation Phase?


```solidity
function validateTransaction(uint256 version, bytes32 txHash, bytes transaction) external returns (uint256 validationData);
```


Is this the signature-less txHash that can be used for signature validation? Should be defined below.


@yoavw yoavw Oct 6, 2023


Yes, it's the equivalent of 4337 userOpHash.

@forshtat Dan is right that we don't define it anywhere in the EIP.


yes, need to add it (also, in ERC-4337 it is not written in the ERC itself, only in the EntryPoint implementation).


```solidity
function validatePaymasterTransaction(uint256 version, bytes32 txHash, bytes transaction) external returns (bytes context, uint256 validationData);
```


Same question on txHash here.



ditto.

@forshtat in ERC-4337 we included this definition:
The userOpHash is a hash over the userOp (except signature), entryPoint and chainId.

Let's add a similar one here for txHash.
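
For illustration only, the kind of definition being asked for might look like the sketch below, by analogy with ERC-4337's userOpHash; the exact field list and encoding for AA_TX_TYPE are open questions in this draft, so treat every name here as an assumption:

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Sketch: hash of the signature-less transaction payload combined with the
// chain id. Whether an entry-point-like address is also mixed in (as in
// ERC-4337) is up to the EIP to define.
function exampleTxHash(bytes memory unsignedTxPayload, uint256 chainId)
    pure
    returns (bytes32)
{
    return keccak256(abi.encode(keccak256(unsignedTxPayload), chainId));
}
```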


The `sender` address is invoked with `callData` input.

The gas limit of this frame is set to `callGasLimit`.


Should this be callGasLimit - calldataCost?



Yes, in the pseudocode it's called with callGasLimit - calldataCost (line 570). @forshtat I think the correct one is the one in the pseudocode rather than this one, isn't it?


#### `eth_estimateGasAccountAbstraction`

Accepts Transaction Type `AA_TX_TYPE` with fields `validationGasLimit`, `paymasterGasLimit`, `callGasLimit` optional.


Should maxPriorityFeePerGas and maxFeePerGas also be optional?



What would the node use during simulation if they are not provided?

For the gas limit values, it can set to max and then estimate them. For fees we can't set it to max because the account (or paymaster) doesn't have sufficient funds to pay it, so the call would fail. Besides, paymaster's validation (or account's validation if it's multi-tenant) might enforce a limit to prevent unreasonable fees from being used to drain its balance. Setting it to an arbitrarily high value may trigger a validation failure in this case.


They could default to the existing base fee for maxFeePerGas and 0 for maxPriorityFeePerGas, or really whatever the RPC node wants to default to. Having to do fee estimation before gas estimation increases the latency of the full transaction submission pipeline. We could, however, leave this complexity to the clients.


@yoavw What do you think about temporarily "adding" the gasLimit * maxFeePerGas to the balance whenever the estimation is done? That's what we do at zkSync and it has worked well so far.

I think that most wallets won't limit the maximal maxFeePerGas and if they do, they should supply the limited maxFeePerGas themselves.



@dancoombs I think it's fine to let the RPC node put something there if it's missing. It might fail validation if the transaction uses a paymaster or an account that enforces some policy on fees, but in that case the wallet should be aware that it's using something of that sort and provide its own values. In the common case it'll work fine.

It just means that the RPC will have to make a distinction between how it checks transactions during estimation and during sending. In the latter case it must verify that the fees are sufficiently high and that the account is sufficiently funded to pay them.

@StanislavBreadless that's a nice usability hack. We'll have to add it either to the account's balance, or to the paymaster's if the paymaster is non-zero.

@forshtat @drortirosh WDYT about both things? Making the fees optional in the RPC, and injecting a balance to the account or the paymaster during estimation?


If any of the frames reverts, the call returns the revert data of each reverted frame.

#### `eth_estimateGasAccountAbstraction`


Is there no way to extend eth_estimateGas to accommodate this new transaction type?



We deliberately avoided it, in order not to break existing code. Changing the return value type of an existing RPC, from a single value to an array, seems risky. Seems safer to make it explicit by using a new name, that only aware code will call.


Accepts Transaction Type `AA_TX_TYPE` with fields `validationGasLimit`, `paymasterGasLimit`, `callGasLimit` optional.

Returns `{validationGasLimit, paymasterGasLimit, callGasLimit, builderFee}` object.


One of the most complicated portions of ERC-4337 gas estimation is dealing with ERC-20 (or similar) paymasters that perform a transfer during validation, requiring the user to have enough of the ERC-20 token so that it doesn't revert.

However, a common use case is to first estimate gas, then transfer extra funds for gas, then transact.

This could be overcome by allowing state overrides during gas estimation, where the estimator just overrides the ERC-20 balance of the user to something high for valid gas estimation.

Thoughts on adding state overrides to this gas estimation endpoint?



Yes, I think it'll be useful to add state override here. @forshtat @drortirosh @shahafn any reason not to?


I agreed to add stateOverride to estimate.



Yes, let's do that. But also see @StanislavBreadless 's suggestion above, re an automatic stateOverride to seamlessly solve this particular use case?


Returns `{validationGasLimit, paymasterGasLimit, callGasLimit, builderFee}` object.

Note that the `deployerData` and `paymasterData` fields are required for a consistent result.


Might make sense to define what a "dummy signature" is, and its requirements here.



We can't define its requirements, as they are set by the account itself. But we should mention that the account needs to support one, in order to support gas estimation.


#### `eth_estimateGasAccountAbstraction`

Accepts Transaction Type `AA_TX_TYPE` with fields `validationGasLimit`, `paymasterGasLimit`, `callGasLimit` optional.


Designing smart contracts that have complicated validation functions but have an easily calculable "dummy signature" has proven to be difficult for many account devs. I've seen instances of this not being possible, and devs just determining a max value for validationGasLimit that is consistent with their account types. This makes sense as validation gas consumption shouldn't change much.

I think we should add a switch to this method that lets the node skip these estimations if the user doesn't want them to run.

Possibly if the user supplies any of these 3 limit fields the RPC will skip estimation for that portion and just return the provided value?



We can't skip running validation, as it may change the state in ways that affect the transaction. Besides, validation may check more things besides the signature, so it may revert on chain on one of these checks after being skipped in gas estimation.

The node has to run it for completeness, but the user is free to override the estimated value and use something different. The RPC could get a number from the user and return that number instead of the actual result, but it seems like just syntactic sugar. If the user provides the numbers, the user can also use them instead of the returned ones.


Looking to save nodes some processing cycles if the user knows how much gas to use for validation. Depending on the account implementation it may be difficult for the user to supply a valid dummy signature that doesn't revert. I think in those cases the node should still be able to return a callGasLimit.



The problem is that the node has no way to know whether some meaningful state change (or more likely a transient state change) happens during validation, which would alter the execution flow of the call itself. The user may also not know it.

Consider an ERC-6900 account for example:

A 6900 plugin may have both a validation hook and a post-execution hook. Let's say it's a plugin that supports two signature types: 1. Mobile signature. 2. Ledger signature. And it enforces different spending limits on the account in post-execution, depending on whether the transaction was signed by the mobile phone or by the ledger. The behavior of the plugin is undefined if its post-execution hook is called without going through its validation.

Similarly, if an account supports session keys associated with different operations, it has no way to determine how a call should behave without knowing which session key signed the transaction.


Understood. However, this is a suggestion for simple wallets that have very static validation methods and only for gas estimation. For example it doesn't really make sense to estimate the validation gas for SimpleAccount each time. Users could provide a value to the gas estimation RPC and save some compute cycles (and potentially RPC costs)


wallets can cache validation estimation but need to take care, e.g., to invalidate the cache if plugins are modified.
We can only suggest on the non-cached mode.

and the sending node will be blocked as a spammer.
They may be propagated in the alternative mempool that allows them explicitly as defined in EIP-9999.

### All validation state changes apply before all execution ones

@dancoombs dancoombs Oct 5, 2023


I don't fully understand the motivation behind this requirement

Each AA_TX_TYPE txn in the mempool gets simulated beforehand and each piece of state read during validation can be stored alongside the txn. During block building:

  • Builder can gather AA_TX_TYPE txns, and (in parallel) check each of their associated validation state reads against the state after the last block, dropping those where state has changed (similar to conditional txns).
  • Builder sequentially builds the block, keeping track of state changes. When including an AA_TX_TYPE txn the builder checks that the state reads haven't changed in the block. If no collisions, apply the txn (validation and execution in sequence)
  • Validation rules and mempool reputation should protect the builder from too many drops due to changed state and failed validation frames.

Are there other building algorithms that this is attempting to accommodate?



In step 1, it shouldn't drop them. It needs to retest. It's different from conditional txns because those are used by bundlers, which are aware of their tx getting dropped in this case. Users don't expect their transaction to drop, and would have no control over it. For example, the transaction uses a paymaster so validation accesses the paymaster's balance or storage. The transaction waits a couple of blocks in the mempool while other transactions use the paymaster and change its state. The transaction shouldn't be dropped as it's still valid.

Due to the above (not being able to drop transactions based on changed state), step 2 becomes exponentially hard. The builder may keep running into transactions that were valid at the beginning of the block but invalidated by the last added transaction.

The builder may try to avoid the issue by deferring transactions whose validation reads slots that were changed by an earlier transaction in the block. But that introduces a censorship vector. An attacker would find a slot that you access in validation, such as your ERC20 balance in a token you wish to pay with, and send you 1 token to write to that state, paying a slightly higher fee than your transaction. Your transaction gets pushed off every block as long as the attacker keeps doing this, so the cost of censoring you is as low as sending one transaction per block.

To avoid censorship, the builder would have to resimulate validation of each transaction just before including it, which re-introduces the DoS vector.


This makes sense, thanks for the explanation!

@yoavw

yoavw commented Oct 6, 2023

Very excited about this. Great work team!

Thanks @dancoombs , for so many insightful comments.

Left some initial thoughts. Will probably read this a few more times and add more. Apologies for nits, left comments on things that were unclear to me coming from a 4337 background.

Much appreciated. I think I replied to all of them.

From my perspective, the hardest things to get right as an ERC-4337 bundler, aside from the validation rules, have involved gas estimation. My most pressing comments are around the builderFee definition and changes to the gas estimation RPC.

See my explanation of builderFee above. I don't think we can avoid it without losing important use cases, although it could be set to zero in all other cases. See separate comment.

As it seems like the function names are changing for the smart contracts, we should ensure that ERC-6900 can handle both definitions as users may want to use both 4337 and enshrined-AA.

Yes, after this EIP is finalized we should extend ERC-6900 to support this as well. It should be straightforward because the flows remain the same, so it's just different names.

* Issue 1: Add flow and block diagrams

* Issue 2: Remove claim 'eth_sendUserOperation' can be supported

* Issue 3: Explicitly define the signature-less transaction hash value

* Issue 5: Explicitly mention the nonce validation frame in the frames list

* Issue 7: Replace vague "cleanup" with more generic "post-transaction logic"

* Issue 8: Explicitly allow passing a "state override" to AA gas estimation RPC

* Issue 9: Include description of the "dummy signature" pattern

* Issue 10: Fix incorrect execution gas limit description

```solidity
abi.encodePacked(MAGIC_VALUE_SENDER, validBefore, validAfter)
```

@juniset juniset Oct 12, 2023


I believe the type and the usage of validBefore and validAfter should be defined somewhere even if it is derived from EIP4337.



@forshtat looks like we missed defining the types. Also, we used validBefore and validUntil interchangeably. In 4337 there is only validUntil so let's just call it that.

@juniset the usage is described in the "Transaction validity time range parameters" section (line 521) and shown in the pseudocode (line 442,426,400), but we should add the type definition.
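
As a non-normative illustration of the missing definition, the sketch below assumes, following ERC-4337, that the time range values are uint48 unix timestamps (0 meaning "no limit"); the type and value of MAGIC_VALUE_SENDER are whatever the EIP ends up defining, so it is taken as a parameter here:

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Sketch only: pack the validation return value as the quoted snippet shows,
// assuming uint48 timestamps for the time range.
function packSenderValidationReturn(
    bytes32 magicValueSender,
    uint48 validUntil,
    uint48 validAfter
) pure returns (bytes memory) {
    return abi.encodePacked(magicValueSender, validUntil, validAfter);
}
```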


In order to allow a gas estimation to determine the amount of gas that this frame
requires to complete successfully while not having the actual `signature` value, this function
should avoid reverting on invalid signature, and should return a value different from `MAGIC_VALUE_SENDER`.

@juniset juniset Oct 12, 2023


Should this also specify that the function should continue its execution until the end on an invalid signature to allow for a correct gas estimation? Returning prematurely for an invalid signature would lead to lower gas estimations.



Yes, it should consume the same amount of gas as it would with a correct signature. It might not even be just to continue execution, but actually to use the same amount of gas. For example, a multisig that needs to validate 3 signatures and received none, fails the signature check immediately but should waste the gas of validating three signatures.
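
To make that concrete, here is a hedged sketch (not code from the EIP) assuming a plain ecrecover-based 3-owner multisig and a dummy signature that is a well-formed 65-byte blob: the loop deliberately avoids any early exit, so the dummy burns roughly the same gas as a real validation, and the caller can then return a value different from MAGIC_VALUE_SENDER.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

contract MultisigValidationSketch {
    address[3] public owners;

    // Checks all three signatures without short-circuiting, so gas usage
    // with invalid (dummy) 65-byte signatures matches the real path.
    function _validateSignatures(bytes32 txHash, bytes[3] memory sigs)
        internal
        view
        returns (bool ok)
    {
        ok = true;
        for (uint256 i = 0; i < 3; i++) {
            // evaluate _checkOne first so it always runs, even after a failure
            ok = _checkOne(txHash, sigs[i], owners[i]) && ok;
        }
    }

    // ecrecover-based check that never reverts on malformed input.
    function _checkOne(bytes32 hash, bytes memory sig, address owner)
        internal
        pure
        returns (bool)
    {
        if (sig.length != 65) return false;
        bytes32 r;
        bytes32 s;
        uint8 v;
        assembly {
            r := mload(add(sig, 32))
            s := mload(add(sig, 64))
            v := byte(0, mload(add(sig, 96)))
        }
        return ecrecover(hash, v, r, s) == owner;
    }
}
```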


the wording should be that it MUST return the MAGIC_VALUE to mark success, and SHOULD continue execution on a signature error.



The `sender` address is invoked with `callData` input.

The gas limit of this frame is set to `callGasLimit - calldataCost`.


why not cover calldataCost during validation? It would simplify execution (e.g., when calculating the callDataCost, it is strange that it depends on whether this is the 1st transaction (which also deploys) or not).



If we give the entire callGasLimit to this frame, what happens if it actually uses more than callGasLimit - calldataCost, e.g. uses exactly callGasLimit? It means that the total cost of this call is now callGasLimit + calldataCost.


@StanislavBreadless StanislavBreadless left a comment


Thank you for your hard work!!! This is a really good path forward towards native AA adoption

contracts to be used in both systems with minimal to no modifications, while providing significant UX improvements.

Existing contracts are not significantly affected by the change.
The assumption that `tx.origin` is guaranteed to be an EOA is no longer valid.


One of the things that people expect when deploying on L2s is that their contracts will continue working as they did on L1.

There has been a study by the Dedaub firm on how `msg.sender == tx.origin` checks will be impacted: https://github.com/Dedaub/audits/blob/main/Ethereum%20Foundation/EIP-3074%20Impact%20Study.pdf

The result is that, while the impact is generally not large, it is not negligible. I am not sure it is OK to potentially make existing large protocols vulnerable: even though using tx.origin is certainly a bad practice, there might be a difference between "bad" and "completely broken". For instance, if it is used for flash loan protection, a protocol that would require billions of USD to break may be "relatively" safe under tx.origin protection, but might become totally REKT the moment this EIP goes live.

Thanks. I'm familiar with that report, but I think we need to consider the pros and the cons here. If we keep the existing tx.origin behavior, we're effectively enshrining EOAs and negating the value of native AA. tx.origin checks have been the biggest obstacle to wide AA adoption (along with the lack of EIP-1271 support, which is improving rapidly). @juniset may want to weigh in on this one, being one of the people who had to deal with this problem for a few years.

IMHO the community has been given sufficient time to phase tx.origin out. There have been discussions about deprecating it since 2016, and any tutorial from the past few years warns against using it.

tx.origin also gives a false sense of security and doesn't actually prevent attacks. In the flash loan example, a block builder may include three transactions bundled together, emulating the behavior of the flash loan from an EOA. It would fund the EOA with a large amount, perform the attack, and move the funds back to safety.

If this change is controversial, we could move it to a separate RIP, letting each L2 decide for itself whether it wants to include it in addition to the native AA one. Personally I think that not including it would reduce usability, but that's up to each L2 to decide (and up to @juniset to decide whether Argent will support an L2 that doesn't 😉).

If `paymaster` is not specified, the `maxPossibleGasCost` is charged up-front, before any computation is done in any
execution frame, from the balance of the `sender` address.

Charging directly from the contract's balance opens a new attack vector (albeit a rarely exploitable one, for sure):

A contract has a function with the same selector as validateTransaction and can somehow be tricked into returning the same magic value. Then the contract will have its funds directly stolen.

A slightly artificial example would be the following:

```solidity
contract X {
    fallback() external {
        // Echo the 4-byte selector of whatever function was called, left-aligned in a
        // 32-byte word - the same encoding a genuine bytes4 return value would have.
        assembly {
            calldatacopy(0, 0, 4)
            return(0, 32)
        }
    }
}
```

I think it would be a lot more secure if things like nonce marking, fee payment, etc. were done directly by the contract itself.

This way we could avoid both:

- Impacting existing contracts (the chance of which is low even now)
- Potential phishing attacks, where users could be tricked into believing that their contract is some normal smart contract, while it can actually invoke transactions in its name

We considered this attack vector. That's why we require validation functions to return magic values rather than just true:

| MAGIC_VALUE_SENDER    | 0xbf45c166  // bytes4(keccak256("validateTransaction(uint256,bytes32,bytes)"))          |
| MAGIC_VALUE_PAYMASTER | 0xe0e6183a  // bytes4(keccak256("validatePaymasterTransaction(uint256,bytes32,bytes)")) |

An auditor that looks at a contract and sees that it returns 0xbf45c166 will likely google it and see what it means for the contract.

Your artificial example bypasses that, by making the return value a user-provided value. I can imagine a rugpull scenario where someone plants such a fallback into some contract in order to drain its eth in the future. @forshtat check this out - we discussed the need to either transfer the eth explicitly or to use magic values, but see the fallback above.

As for nonce marking, nonces are handled by the protocol itself rather than the contract, due to an issue you (or maybe it was someone else from zksync) raised in the past. We need replay protection to be enforced by the protocol to avoid infrastructure problems when the same txhash appears on-chain twice. The contract only enforces a policy on the nonce (e.g. require(key==0) for linear nonce). But if a contract is not really an account, and has the fallback above, then there's little effect if someone uses it to increment a nonce associated with that contract in the AA_NONCE_MANAGER address. Unlike the case of value, these nonces are meaningless for contracts that aren't really accounts.

@StanislavBreadless Nov 7, 2023

We need replay protection to be enforced by the protocol to avoid infrastructure problems when the same txhash appears on-chain twice

That is totally true. What I would suggest is that the contract works with the nonce on its own, and then the system just double-checks that the user has used it correctly (i.e. the nonce was marked as used by the user). But using the native approach is fine as long as we are 100% sure that it is indeed an account that starts a transaction.

NonceManager behaves exactly the same way ERC-4337 does (after we changed it to accommodate zksync's requirement for guaranteed txhash uniqueness). It's handled by the protocol, and the account can only set restrictions on it - but not prevent the nonce from being updated.

The protocol guarantees that the nonce won't change unless a transaction has passed validation. The attack you described above, passing validation from a contract that isn't a smart account, would make it possible to update a nonce in NonceManager for that contract. However, I don't see how it would affect the contract in any way. The contract never uses NonceManager if it's not used as a smart account. Do you see a way that this could be exploited?

Keep in mind that this is not the account's nonce (in the account's own trie) but a separate mapping of nonces in NonceManager. So it has no effect on things like CREATE addresses of that account. It is only used by AA transactions sent for that contract.
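
For readers following along, a minimal sketch of this protocol-managed two-dimensional nonce is shown below, assuming the ERC-4337-style split of the 256-bit nonce into a 192-bit key and a 64-bit sequence; this is not the normative NonceManager code.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Sketch only: a protocol-managed nonce mapping, separate from the account's own trie nonce.
// The account cannot prevent the increment; it can only enforce a policy on the nonce it signs
// (e.g. require(key == 0) for a purely sequential nonce).
contract NonceManagerSketch {
    // sender => key => next expected sequence number
    mapping(address => mapping(uint192 => uint64)) public nonceSequence;

    // Called by the protocol once a transaction has passed validation.
    // Returns true if the supplied nonce matches the next sequence number for its key.
    function validateAndIncrementNonce(address sender, uint256 nonce) external returns (bool) {
        uint192 key = uint192(nonce >> 64);
        uint64 seq = uint64(nonce);
        return nonceSequence[sender][key]++ == seq;
    }
}
```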

Do you see a way that this could be exploited?

As an example, it could in theory be used to tamper with the nonce of a contract that will only become an account in the future, i.e. a contract that wasn't an account but will migrate to being one later on might already have some non-zero nonces. This is indeed not a big issue, but I believe it is a thing to consider.

Having nonces managed by the contract rather than the system has its own drawbacks too, such as the constant need for developers to be aware that a user might accidentally start a transaction that amends its own nonce.

A non-account proxy contract that has such a fallback and later switches implementation to become an account, may indeed have to deal with non-zero nonces. But this can't lead to a replay attack, just inability to use these nonces. Wallets can't make assumptions on nonces anyway, because the user may be running the wallet on multiple devices and increment a nonce from there. So the wallet must read the nonce from the chain anyway.

Back in March we published this proposal and went through a lengthy community feedback process, to ensure that it works for everyone. By now, every existing ERC-4337 account depends on it. At this point, unless there's a strong reason to change it, I'm not sure we should break compatibility.


If `paymaster` is not specified, the `maxPossibleGasCost` is charged up-front, before any computation is done in any
execution frame, from the balance of the `sender` address.
If `paymaster` is specified, the gas cost is charged from its balance.

The same goes for the paymaster regarding forcefully charging from its balance.

Right. @forshtat - same as above. Remember the discussion we had about it in Prague? Should we switch back to explicit transfer?

The `deployer` address is invoked with the `deployerData[20:]` as call data input.
It is important that the `deployer` is **not** invoked from the `AA_ENTRY_POINT` but from the `AA_SENDER_CREATOR`.
This is necessary to guarantee that `AA_ENTRY_POINT` may never initiate a call to a `sender` execution function

I am not sure whether this is the appropriate document for it, but would it be possible to elaborate more on the exact issue here?

If the question is about msg.sender being equal to AA_ENTRY_POINT, then AA_ENTRY_POINT would only call the deployer (which cannot be the same as the sender). If the question is about tx.origin being equal to AA_ENTRY_POINT, then it may happen during paymaster validation, for instance.

The issue is that AA_ENTRY_POINT is trusted by the account to perform arbitrary calls, and the protocol guarantees that an account would only receive such calls after successful validation. Calling a deployer with arbitrary calldata from AA_ENTRY_POINT breaks that assumption.

Consider a scenario where the victim uses an account that supports batching via an executeBatch(calls[]) function, and an attacker wants to take over the account.

The attacker would send a transaction from a new sender (with no code) and deployerData pointing to the victim's account. The deployer data will be a call to executeBatch where calls[0] is a call to the real factory which will deploy the sender. And calls[1] is a victim.setOwner or some other privileged function in the victim's account.

The transaction will be valid because it properly deploys the sender, and then the sender approves the validity of the transaction. But it violates the victim's expectation, causing the account's executeBatch to be called by AA_ENTRY_POINT without passing validation.
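
To make the scenario concrete, here is a hypothetical sketch of how such a malicious `deployerData` payload could be constructed. The `IFactory` and `IBatchAccount` interfaces, and the `deploySender`, `executeBatch` and `setOwner` functions, are illustrative assumptions, not part of this proposal.

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.0;

// Hypothetical interfaces of a legitimate factory and of the victim's batching account.
interface IFactory {
    function deploySender(bytes32 salt, address owner) external returns (address);
}

interface IBatchAccount {
    struct Call { address to; uint256 value; bytes data; }
    function executeBatch(Call[] calldata calls) external;
    function setOwner(address newOwner) external;
}

library DeployerAttackSketch {
    // Builds deployerData = victimAccount (20 bytes) ++ executeBatch calldata.
    // If the protocol invoked the deployer from AA_ENTRY_POINT, the victim's account would
    // perform both calls as if they had been authorized through its own validation.
    function buildMaliciousDeployerData(
        address victimAccount,
        address realFactory,
        address attacker,
        bytes32 salt
    ) internal pure returns (bytes memory) {
        IBatchAccount.Call[] memory calls = new IBatchAccount.Call[](2);
        // calls[0]: actually deploy the new sender, so the transaction as a whole stays valid
        calls[0] = IBatchAccount.Call(
            realFactory,
            0,
            abi.encodeWithSelector(IFactory.deploySender.selector, salt, attacker)
        );
        // calls[1]: a privileged call on the victim's own account
        calls[1] = IBatchAccount.Call(
            victimAccount,
            0,
            abi.encodeWithSelector(IBatchAccount.setOwner.selector, attacker)
        );
        return abi.encodePacked(
            victimAccount,
            abi.encodeWithSelector(IBatchAccount.executeBatch.selector, calls)
        );
    }
}
```

Invoking the deployer from `AA_SENDER_CREATOR` instead means the victim's account sees `msg.sender == AA_SENDER_CREATOR`, which a typical account has no reason to trust, so the batch above would be rejected.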

Got it, thank you


### Execution layer block validation

When validating a block, the validity conditions for a block are extended as follows:

The validation rules here imply that the block will basically have the following structure:

<AA chunk>
<some non-zero number of EOA txs>
<AA chunk>
<some non-zero number of EOA txs>

Where each <AA chunk> is a set of N AA_TX_TYPE transactions, with strictly first N validations and then N executions. While this is definitely fine in the general case, if a builder wants to promise atomicity of AA transactions (i.e. each validation must necessarily be followed by its execution), the block would be invalid by the rules above (because a valid <AA chunk> is strictly of the form Val1, Val2, Ex1, Ex2, i.e. Val1, Ex1, Val2, Ex2 is not allowed). So if a builder wanted to have atomic AA transactions while keeping its block valid, it would have to separate these small <AA chunk>s by an EOA transaction.

Builders promising better UX for users is a common service, provided for instance by Flashbots. So IMHO it is important to keep more options available.

Right. Recently we've been discussing another reason this is needed: to mitigate a certain DoS attack against builders. We contemplated a NOP transaction (basically a separator transaction that is smaller than an EOA one), or a header transaction that specifies the size of the next AA chunk. We're thinking of going with the latter - see the discussion in #5. You're welcome to weigh in on it.

Note that a builder can skip bundling and create a block of val1, ex1, <sep>, val2, ex2.
However, this voids the tx isolation.
That is, the builder must verify that val2 wasn't invalidated by ex1. This can easily be an attack vector.

Right, and that's why we propose bundling as a general strategy. But if, for whatever reason, the builder wants to separate bundles (or even make bundles of 1), we should support it. We saw that it's necessary for mitigating unused-gas attacks.

yoavw and others added 4 commits November 7, 2023 06:14
Co-authored-by: Stanislav Bezkorovainyi <stanislavbezkor@gmail.com>
Co-authored-by: Stanislav Bezkorovainyi <stanislavbezkor@gmail.com>
…tive AA EIP (#5)

* Add both unused gas penalty and a no-op transaction subtype to the Native AA EIP

* Replace no-op transaction with an optional "header counter" transaction
…t' (#7)

* Move the 'calldataCost' from the 'callGasLimit' to 'validationGasLimit'

It is reasonable to assume the "inner" call is the first piece of data
known to the transaction sender.
This includes the gas limit needed for the "inner" call to happen.
However, the 'calldataCost' also depends on the validation and paymaster
data, creating a chicken-or-egg problem.

Moving the costs of calldata to validation defers the calculation of the
expected calldata cost and allows solving the paradox in most cases.