This article was co-authored by David Philipson.
In ERC-4337 Gas Estimation, we discussed how gas works in ERC-4337 and our method for gas estimation. In part 2, Dummy Signatures and Gas Token Transfers, we saw that estimating gas is not always straightforward and that edge cases must be accounted for. This post dives into a few more of the edge cases we encountered.
Part of the definition of an Ethereum Layer 2 rollup is that it “lets layer 1 handle security, data availability, and decentralization, while layer 2 handles scaling.”
To achieve this, L2s “roll up” many transactions into a single batch and post it onto the layer 1 blockchain. This isn’t free: L2s must pay for the calldata costs incurred when posting a large batch of data to the L1 chain.
L2s need a way to charge their users for these incurred L1 calldata costs. Rollup frameworks achieve this in different ways.
This article focuses on the two largest EVM rollups: Arbitrum and Optimism.
On Arbitrum, the L2 gas charged to cover the L1 gas cost is calculated using the following formula, where the size of the data is its size in bytes after Brotli compression:
L1 Cost (L1C) = L1 price per byte of data (L1P) * Size of data to be posted in bytes (L1S)
Gas (G) = L1 Cost (L1C) / L2 Gas Price (P)
This gas is charged before a transaction begins execution and counts towards the transaction’s gasLimit. Thus, it must be accounted for during transaction gas estimation.
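As a rough sketch of that math (all numbers below are made up; on a real node the per-byte price comes from Arbitrum's fee oracle and the size from the Brotli-compressed transaction):

```typescript
// Hypothetical inputs, for illustration only.
const l1PricePerByte = 1_500_000_000n; // L1P, in wei per byte
const compressedSizeBytes = 600n;      // L1S, bytes after Brotli compression
const l2GasPrice = 100_000_000n;       // P, in wei per L2 gas

// L1 Cost (L1C) = L1P * L1S
const l1Cost = l1PricePerByte * compressedSizeBytes;

// Gas (G) = L1C / P -- extra L2 gas charged before execution begins,
// which counts toward the transaction's gasLimit.
const extraL2Gas = l1Cost / l2GasPrice;

console.log(extraL2Gas); // 9000n extra L2 gas in this made-up example
```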
On Optimism, the L1 gas cost is calculated in a similar way: the size in bytes after compression is multiplied by an L1 fee. But instead of translating this value into L2 gas like Arbitrum, Optimism deducts the required ETH directly from the sender’s account.

Senders do not need to take this value into account during gas estimation, but they also have no way to set a limit on this spending. Optimism takes care to ensure this fee won’t spike.
In both cases a bundler submitting a bundle transaction on an L2 is charged for L1 fees. The bundler needs a way to charge the bundled user operations for this fee by increasing the L2 gas.
The effective impact on L2 gas can be determined by:
L2_gas = L1_gas * L1_fee / L2_fee
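In code, that conversion might look like the following sketch (the fee values are illustrative; in practice they are read from the current L1 and L2 fee markets):

```typescript
// Sketch of the L1 -> L2 gas conversion above. In practice l1Fee and
// l2Fee are the current base fees of the L1 and L2 chains.
function effectiveL2Gas(l1Gas: bigint, l1Fee: bigint, l2Fee: bigint): bigint {
  // L2_gas = L1_gas * L1_fee / L2_fee
  return (l1Gas * l1Fee) / l2Fee;
}

// e.g. 4,000 L1 gas at a 100x L1/L2 fee ratio costs 400,000 L2 gas
console.log(effectiveL2Gas(4_000n, 30_000_000_000n, 300_000_000n)); // 400000n
```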
verificationGasLimit and callGasLimit are metered by the entry point and thus can't be used by bundlers to charge for this extra gas. Bundlers need to rely on other methods.
Requiring a higher maxPriorityFeePerGas could allow the bundler to recoup these lost fees.
The calculation would look roughly like:

maxPriorityFeePerGas = network_priority_fee + (L1_gas * L1_fee) / L2_gas_used
This could work, but it has terrible UX.
To protect itself, the bundler must assume that the amount of call gas used will be 0 and charge the user as if verification gas is the only component. The user then overpays by the buffered priority fee multiplied by any call gas actually used. This isn’t great for the user.
That leaves preVerificationGas as the only reasonable field to manipulate. This fits well with the definition of preVerificationGas as “the gas field used to capture any gas usage that the entry point cannot measure.” Since this is gas that the entry point doesn’t meter, we would expect to be able to use this field to charge the user.
The calculation would look roughly like:

preVerificationGas = base_preVerificationGas + (L1_gas * L1_fee) / L2_fee
While this mechanism works, it has a very significant UX issue.
During estimation, the bundler must assume L1 and L2 base fee values to perform this gas calculation. Because base fees are dynamic, if the ratio of L1_fee / L2_fee increases between the gas estimation step and the submission/bundling step, a higher preVerificationGas will be required, and user operations that were estimated with a lower ratio will be rejected.
The best the user can do is assume that the ratio will increase between estimation and bundling and add an overhead to their preVerificationGas to improve their chances.

Since preVerificationGas is always charged in full (i.e., it’s not a limit field), the user pays for this overhead regardless of what prices actually do. The user is stuck choosing between potentially overpaying and having their operations rejected.
Rundler implements the preVerificationGas calculation above. We recommend that users of these L2s add a 25% buffer to the preVerificationGas returned by eth_estimateUserOperationGas to improve the chance that their operation is not rejected.
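Applying that recommendation on the client side might look like this sketch (the RPC response shape is simplified to just the field we need, with an illustrative value):

```typescript
// Apply the recommended 25% buffer to the preVerificationGas returned
// by eth_estimateUserOperationGas. Fetching the estimate is elided;
// `estimate` stands in for the parsed RPC response.
const estimate = { preVerificationGas: 1_200_000n }; // illustrative value

const bufferedPreVerificationGas =
  (estimate.preVerificationGas * 125n) / 100n; // +25% buffer

console.log(bufferedPreVerificationGas); // 1500000n
```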
Optimism’s base fees are incredibly low, often well below 100 wei (yes, wei). The preVerificationGas required to charge for the L1 gas fee is inversely proportional to the L2 gas fee, so the required preVerificationGas becomes extremely high (in the millions).
Optimism’s priority fee is often orders of magnitude higher than its base fee. A user must therefore be careful not to submit a priority fee in line with the network’s, as the entry point requires payment of preVerificationGas (very high) * priority fee (normal), causing massive overpayment. For this reason, Rundler requires the priority fee to be a static percentage of the base fee to incentivize bundling on Optimism.
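To get a feel for the scale of the problem, here is an illustrative comparison (every number below is made up, including the percentage; Rundler's actual parameters may differ):

```typescript
// Illustrative numbers only: Optimism base fees can be tiny while
// network priority fees are orders of magnitude larger.
const preVerificationGas = 5_000_000n; // very high, to cover L1 costs
const baseFee = 100n;                  // wei -- yes, wei
const networkPriorityFee = 1_000_000n; // wei, far above the base fee

// Charging the network priority fee against the inflated
// preVerificationGas causes massive overpayment:
const naiveCost = preVerificationGas * (baseFee + networkPriorityFee);

// Pinning the priority fee to a percentage of the base fee keeps the
// charge proportionate (the 25% here is illustrative):
const pinnedPriorityFee = (baseFee * 25n) / 100n;
const pinnedCost = preVerificationGas * (baseFee + pinnedPriorityFee);

console.log(naiveCost / pinnedCost); // ~8000x difference in this example
```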
Signature aggregation is a much-discussed feature of ERC-4337 for its ability to compress many signatures into a single aggregated one, saving both calldata and verification gas.
In the current version of the entry point, the call to the signature aggregator’s validateSignatures function is unmetered. This means the bundler must find a way to charge aggregated user operations for this gas, similar to the L2 problem above.
Like the L2 problem, the only reasonable way to do this is to increase preVerificationGas.
One way to do this is to estimate the gas used by the aggregator’s validateSignatures call, divide it across an assumed bundle size, and add each operation’s share to its preVerificationGas.

There are a few issues with this approach, chief among them that the bundler must assume a bundle size before the bundle is actually built.

Rundler currently doesn’t support signature aggregators due to these complications. It is likely that we will add support for the method above with some assumed (small, starting at 1) bundle size, which will make using signature aggregators very expensive.
The issues above are both due to a lack of metering in the entry point contract for significant gas usage by the user operation. Relying on preVerificationGas to charge for this gas usage has the significant UX problem of requiring users to pay more than they actually use in order to increase the chance of their UO landing onchain quickly.
A solution to these issues could be to modify the entry point contract to meter this extra gas usage and attribute it to limit-based gas fields.
This is a very rough outline of a solution to a tough problem. We would love to hear ideas from the community!
The L1 calldata gas cost can be metered onchain, letting user operations be charged for exactly the cost they incur, with no overhead. A limit-based field should be used.
A potential solution is to introduce a new field, daCallDataGasLimit (da for Data Availability, working title). This field would be native to an L2+ version of the entry point and used on chains where transaction calldata is posted onto a different system and thus must be charged for in a separate manner.
The entry point logic would be the following: meter the DA (calldata) gas attributable to each user operation onchain, verify that it does not exceed that operation’s daCallDataGasLimit, and charge the operation for exactly the gas used.
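Modeled off-chain for clarity, the charging step might look like the following sketch (daCallDataGasLimit and the metering source are hypothetical, per the proposal above):

```typescript
// Off-chain model of the proposed, hypothetical entry point behavior.
// On-chain, daGasUsed would be metered from the operation's share of
// the posted calldata.
interface UserOpGasFields {
  daCallDataGasLimit: bigint; // the proposed new limit field
  // ...existing fields: verificationGasLimit, callGasLimit, etc.
}

function chargeDaGas(op: UserOpGasFields, daGasUsed: bigint): bigint {
  // Reject operations whose measured DA gas exceeds their declared
  // limit, mirroring the other limit-based gas fields.
  if (daGasUsed > op.daCallDataGasLimit) {
    throw new Error("user operation exceeds daCallDataGasLimit");
  }
  // Charge for exactly the DA gas used -- no overhead padding.
  return daGasUsed;
}
```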
There are a few, pretty significant, downsides to this approach, notably that DA pricing differs per rollup (Arbitrum and Optimism already calculate it differently) and changes over time, so the entry point would need chain-specific logic to meter it.
The entry point contract can be modified to measure the amount of gas used during its call to aggregator.validateSignatures, divide that by the number of user operations aggregated, and attribute the gas usage evenly by deducting it from each operation’s verificationGasLimit.
Each whitelisted signature aggregator should be associated with a static value, the base cost to validate the aggregated signature, and a dynamic value, the per-operation cost increase. For example, a BLS signature aggregator could have the one-time signature verification cost as its static value and the per-op hashing cost as its dynamic value. Bundlers can then increase verificationGasLimit during eth_estimateUserOperationGas by the static value divided by a target bundle size, plus the dynamic value.
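As a sketch, the estimation-time bump might be computed like this (all values are illustrative per-aggregator configuration, not real BLS costs):

```typescript
// Sketch of the suggested verificationGasLimit bump during
// eth_estimateUserOperationGas. staticCost and dynamicCostPerOp are
// per-aggregator values the bundler is configured with.
function aggregatorGasBump(
  staticCost: bigint,       // one-time cost to verify the aggregated signature
  dynamicCostPerOp: bigint, // per-operation cost (e.g. per-op hashing)
  targetBundleSize: bigint,
): bigint {
  return staticCost / targetBundleSize + dynamicCostPerOp;
}

// e.g. a 400k static cost amortized over a target bundle of 8 ops,
// plus 5k per op:
console.log(aggregatorGasBump(400_000n, 5_000n, 8n)); // 55000n
```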
A potential solution here is to supplement the arguments to eth_estimateUserOperationGas with a minimumBundleSize corresponding to the smallest bundle size (thus highest gas) that the user wants to be included in. Users who are willing to pay more can decrease their minimum bundle size and improve their time to inclusion.
Bundlers then need logic to ensure that they only include a UO in a bundle that is at least as large as that UO’s minimum bundle size. This minimum can be calculated during simulation by tracking the amount of gas used by the account validation step, subtracting it from verificationGasLimit, and calculating the minimum size from what’s left. Bundlers can store this value alongside the UO in their mempool and use it as a hint during bundle building.
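A sketch of that calculation (the names are hypothetical; the derivation is that staticCost / N + dynamicCostPerOp must fit within the verification gas left over after account validation):

```typescript
// Leftover verification gas must cover staticCost / N + dynamicCostPerOp,
// so the smallest viable bundle size is:
//   N >= staticCost / (leftover - dynamicCostPerOp)
function minBundleSize(
  verificationGasLimit: bigint,
  accountValidationGas: bigint, // measured during simulation
  staticCost: bigint,
  dynamicCostPerOp: bigint,
): bigint {
  const leftover =
    verificationGasLimit - accountValidationGas - dynamicCostPerOp;
  if (leftover <= 0n) {
    throw new Error("cannot cover aggregator gas at any bundle size");
  }
  // Ceiling division: smallest N such that staticCost / N <= leftover.
  return (staticCost + leftover - 1n) / leftover;
}

// e.g. 400k static cost with 50k gas left over per op -> bundle of >= 8
console.log(minBundleSize(150_000n, 95_000n, 400_000n, 5_000n)); // 8n
```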
One downside to this approach is that it assumes gas usage by a signature aggregator is uniform per operation. If this isn’t the case, the metering could be pushed onto the aggregator contract and returned by its verification function. Bundlers would need a way to calculate these non-uniform values off-chain as well.
Signature aggregators will likely need to be directly whitelisted by bundlers, which must be provided with methods to compute the aggregated signature off-chain and with the aggregators’ static/dynamic gas cost components. The cold-start problem is going to be difficult, especially given that users have no way to “bid” to speed up their operations.
It’s important to understand the L2 distinctions described above, especially when it comes time to estimate gas fees. On Optimism, if you use the network-provided priority fee, you can massively overpay for a user operation, since this priority fee now also applies to L1 costs. If you’re using Alchemy’s bundler endpoint, refer to our documentation for tips on estimating fees.
🦀
The next article in this deep dive on ERC-4337 gas estimation provides a walkthrough of the user operation fee estimation process. If you missed part one or two, learn how ERC-4337 gas estimation, dummy values, and the token transfer problem work.