Commit: update specs

danyalprout committed Apr 26, 2024
1 parent aa515a7 commit 4f67a43
Showing 3 changed files with 37 additions and 85 deletions.
101 changes: 26 additions & 75 deletions specs/fjord/exec-engine.md
Fjord updates the L1 cost calculation function to use a FastLZ-based compression ratio.
The L1 cost is computed as:

```pseudocode
l1FeeScaled = baseFeeScalar*l1BaseFee*16 + blobFeeScalar*l1BlobBaseFee
estimatedSize = intercept + fastlzCoef*fastlzSize
l1Cost = max(minTransactionSize*1e6, estimatedSize) * l1FeeScaled / 1e12
```

The final `l1Cost` computation is an unlimited precision unsigned integer computation, with the result in Wei and
having `uint256` range. The values in this computation are as follows:

| Input arg | Type | Description | Value |
|----------------------|-----------|-------------------------------------------------------------------|--------------------------|
| `l1BaseFee` | `uint256` | L1 base fee of the latest L1 origin registered in the L2 chain | varies, L1 fee |
| `l1BlobBaseFee` | `uint256` | Blob gas price of the latest L1 origin registered in the L2 chain | varies, L1 fee |
| `fastlzSize` | `uint256` | Size of the FastLZ-compressed RLP-encoded signed tx | varies, per transaction |
| `baseFeeScalar` | `uint32` | L1 base fee scalar, scaled by `1e6` | varies, L2 configuration |
| `blobFeeScalar` | `uint32` | L1 blob fee scalar, scaled by `1e6` | varies, L2 configuration |
| `intercept` | `int32` | Intercept constant, scaled by `1e6` (can be negative) | -42_585_600 |
| `fastlzCoef` | `uint32` | FastLZ coefficient, scaled by `1e6` | 836_500 |
| `minTransactionSize` | `uint32` | A lower bound on transaction size | 71 |

Previously, `baseFeeScalar` and `blobFeeScalar` were used to encode the compression ratio, due to the inaccuracy of
the L1 cost function. However, the new cost function takes into account the compression ratio, so these scalars should
be adjusted to account for any previous compression ratio they encoded.
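As a non-normative sketch of the computation above, the fee function can be written in Python. The fee and scalar inputs are illustrative; `intercept`, `fastlzCoef`, and `minTransactionSize` are the hard-coded values from the table:

```python
# Illustrative sketch of the Fjord L1 cost function; not normative.
INTERCEPT = -42_585_600        # intercept, scaled by 1e6
FASTLZ_COEF = 836_500          # fastlzCoef, scaled by 1e6
MIN_TX_SIZE = 71               # minTransactionSize, in bytes

def l1_cost(l1_base_fee, l1_blob_base_fee, base_fee_scalar, blob_fee_scalar, fastlz_size):
    l1_fee_scaled = base_fee_scalar * l1_base_fee * 16 + blob_fee_scalar * l1_blob_base_fee
    # estimatedSize is in 1e6 fixed-point units of bytes.
    estimated_size = INTERCEPT + FASTLZ_COEF * fastlz_size
    clamped_size = max(MIN_TX_SIZE * 10**6, estimated_size)
    # Divide by 1e12: 1e6 from the fee scalars and 1e6 from the size estimate.
    return clamped_size * l1_fee_scaled // 10**12
```

A transaction whose estimated compressed size falls below `minTransactionSize` bytes is charged as if it were exactly that size.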

##### L1-Cost linear regression details

The `intercept` and `fastlzCoef` constants are calculated by linear regression using a dataset
of previous L2 transactions. The dataset is generated by iterating over all transactions in a given time range, and
performing the following actions. For each transaction:

1. Compress the RLP-encoded signed transaction payload using FastLZ. Record the size of the compressed payload as
   `fastlzSize`.
2. Emulate the change in batch size from adding the transaction to a batch compressed with Brotli level 10. Record the
   change in batch size as `bestEstimateSize`.
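A minimal sketch of step 2's "change in batch size" measurement, using the standard library's zlib streaming API as a stand-in for Brotli 10 (which requires a third-party binding); the helper name and inputs are illustrative only:

```python
import zlib

def batch_size_deltas(txs, level=9):
    # Append each tx to one ongoing compressed batch and record how many bytes
    # the compressed stream grows; a stand-in for the Brotli-10 emulation above.
    comp = zlib.compressobj(level)
    deltas = []
    for tx in txs:
        grown = comp.compress(tx) + comp.flush(zlib.Z_SYNC_FLUSH)
        deltas.append(len(grown))  # recorded as bestEstimateSize for this tx
    return deltas
```

Compressible payloads, and payloads similar to earlier transactions in the batch, yield deltas well below their raw sizes, which is exactly the effect the regression target captures.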

Once this dataset is generated, a linear regression can be calculated using the `bestEstimateSize` as
the dependent variable and `fastlzSize` as the independent variable.
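For illustration, a single-variable least-squares fit can be computed in closed form. The data here is synthetic and noise-free, chosen only to recover round numbers; it is not the dataset used to derive the spec constants:

```python
# Closed-form simple linear regression: bestEstimateSize ~ intercept + coef * fastlzSize.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    coef = cov / var
    intercept = mean_y - coef * mean_x
    return intercept, coef

# Synthetic, noise-free stand-in for (fastlzSize, bestEstimateSize) pairs.
fastlz_sizes = [120.0, 250.0, 400.0, 800.0, 1600.0]
best_estimates = [0.8365 * s - 42.59 for s in fastlz_sizes]

intercept, coef = fit_line(fastlz_sizes, best_estimates)
# Scaling intercept and coef by 1e6 yields spec-style fixed-point constants.
```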

We generated a dataset from two weeks of post-Ecotone transactions on Optimism Mainnet, as we found this to be the
most representative of performance across multiple chains and time periods. More details on the linear regression
and datasets used can be found in this [repository](https://github.com/roberto-bayardo/compression-analysis/tree/main).
2 changes: 1 addition & 1 deletion specs/fjord/overview.md
This document is not finalized and should be considered experimental.
## Execution Layer

- [RIP-7212: Precompile for secp256r1 Curve Support](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md)
- [FastLZ compression for L1 data fee calculation](./exec-engine.md#fees)

## Consensus Layer
19 changes: 10 additions & 9 deletions specs/fjord/predeploys.md
Following the Fjord upgrade, three additional values used for L1 fee computation are introduced:

- costIntercept
- costFastlzCoef
- minTransactionSize

These values are hard-coded constants in the `GasPriceOracle` contract. The
calculation follows the same formula outlined in the
[Fjord L1-Cost fee changes](./exec-engine.md#fees) section.

The new `getL1FeeUpperBound` function is intended for use on the write path, and is much more gas efficient than `getL1Fee`.
The upper limit overhead is assumed to be `original/255+16`, borrowed from LZ4. According to historical data, this
approach can encompass more than 99.99% of transactions.
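The bound from the paragraph above can be sketched as:

```python
def fastlz_upper_bound(size: int) -> int:
    # Practical worst-case FastLZ output size, borrowed from LZ4: size/255 + 16.
    return size + size // 255 + 16
```

For example, a 1,000-byte transaction gives a bound of 1,019 bytes.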

This is implemented as follows:

```solidity
function getL1FeeUpperBound(uint256 unsignedTxSize) external view returns (uint256) {
    // txSize / 255 + 16 is the practical FastLZ upper bound that covers 99.99% of txs.
    // Add 68 to account for the unsigned tx.
    int256 flzUpperBound = int256(unsignedTxSize) + int256(unsignedTxSize) / 255 + 16 + 68;
    uint256 feeScaled = baseFeeScalar() * 16 * l1BaseFee() + blobBaseFeeScalar() * blobBaseFee();
    int256 estimatedSize = costIntercept + costFastlzCoef * flzUpperBound;
    // minTransactionSize is scaled to the same 1e6 fixed-point units as estimatedSize.
    int256 minSizeScaled = int256(uint256(minTransactionSize) * 10 ** DECIMALS);
    if (estimatedSize < minSizeScaled) {
        estimatedSize = minSizeScaled;
    }
    return uint256(estimatedSize) * feeScaled / (10 ** (DECIMALS * 2));
}
}
```
