Commit

Merge branch 'dev' into eip-7495

etan-status committed Jul 22, 2024
2 parents d281ded + a42d670 commit 268166b
Showing 18 changed files with 841 additions and 145 deletions.
README.md (2 changes: 1 addition & 1 deletion)

@@ -26,7 +26,7 @@ Features are researched and developed in parallel, and then consolidated into se
### In-development Specifications
| Code Name or Topic | Specs | Notes |
| - | - | - |
| Electra | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/electra/beacon-chain.md)</li><li>[EIP-6110 fork](specs/electra/fork.md)</li></ul><li>Additions</li><ul><li>[Honest validator guide changes](specs/electra/validator.md)</li></ul></ul> |
| Electra | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/electra/beacon-chain.md)</li><li>[EIP-6110 fork](specs/electra/fork.md)</li></ul><li>Additions</li><ul><li>[Light client sync protocol changes](specs/electra/light-client/sync-protocol.md) ([fork](specs/electra/light-client/fork.md), [full node](specs/electra/light-client/full-node.md), [networking](specs/electra/light-client/p2p-interface.md))</li></ul><ul><li>[Honest validator guide changes](specs/electra/validator.md)</li></ul></ul> |
| Sharding (outdated) | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/_features/sharding/beacon-chain.md)</li></ul><li>Additions</li><ul><li>[P2P networking](specs/_features/sharding/p2p-interface.md)</li></ul></ul> |
| Custody Game (outdated) | <ul><li>Core</li><ul><li>[Beacon Chain changes](specs/_features/custody_game/beacon-chain.md)</li></ul><li>Additions</li><ul><li>[Honest validator guide changes](specs/_features/custody_game/validator.md)</li></ul></ul> | Dependent on sharding |
| Data Availability Sampling (outdated) | <ul><li>Core</li><ul><li>[Core types and functions](specs/_features/das/das-core.md)</li><li>[Fork choice changes](specs/_features/das/fork-choice.md)</li></ul><li>Additions</li><ul><li>[P2P Networking](specs/_features/das/p2p-interface.md)</li><li>[Sampling process](specs/_features/das/sampling.md)</li></ul></ul> | <ul><li> Dependent on sharding</li><li>[Technical explainer](https://hackmd.io/@HWeNw8hNRimMm2m2GH56Cw/B1YJPGkpD)</li></ul> |
pysetup/spec_builders/electra.py (8 changes: 3 additions & 5 deletions)

@@ -12,12 +12,10 @@ def imports(cls, preset_name: str):
from eth2spec.deneb import {preset_name} as deneb
'''

## TODO: deal with changed gindices

@classmethod
def hardcoded_ssz_dep_constants(cls) -> Dict[str, str]:
return {
'FINALIZED_ROOT_GINDEX': 'GeneralizedIndex(169)',
'CURRENT_SYNC_COMMITTEE_GINDEX': 'GeneralizedIndex(86)',
'NEXT_SYNC_COMMITTEE_GINDEX': 'GeneralizedIndex(87)',
'FINALIZED_ROOT_GINDEX_ELECTRA': 'GeneralizedIndex(169)',
'CURRENT_SYNC_COMMITTEE_GINDEX_ELECTRA': 'GeneralizedIndex(86)',
'NEXT_SYNC_COMMITTEE_GINDEX_ELECTRA': 'GeneralizedIndex(87)',
}
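
The renamed `*_GINDEX_ELECTRA` constants reflect that generalized indices shift once the `BeaconState` container gains enough fields to widen its Merkleization from 32 to 64 leaf chunks. Below is a minimal standalone sanity check of that arithmetic; the Electra field count (37) and the field positions of `finalized_checkpoint` (20), `current_sync_committee` (22), and `next_sync_committee` (23) are assumptions about the state layout, not values taken from this diff.

```python
def container_field_gindex(num_fields: int, field_index: int) -> int:
    """Generalized index of a top-level field in an SSZ container with `num_fields` fields."""
    leaves = 1
    while leaves < num_fields:  # round up to the next power of two
        leaves *= 2
    return leaves + field_index

# Any field count in (32, 64] widens the container to 64 leaf chunks.
assert container_field_gindex(37, 20) * 2 + 1 == 169  # finalized_checkpoint -> .root (child index 1)
assert container_field_gindex(37, 22) == 86           # current_sync_committee
assert container_field_gindex(37, 23) == 87           # next_sync_committee
```
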
specs/_features/eip7594/polynomial-commitments-sampling.md (97 changes: 41 additions & 56 deletions)

@@ -39,12 +39,13 @@
- [`coset_for_cell`](#coset_for_cell)
- [Cells](#cells-1)
- [Cell computation](#cell-computation)
- [`compute_cells_and_kzg_proofs_polynomialcoeff`](#compute_cells_and_kzg_proofs_polynomialcoeff)
- [`compute_cells_and_kzg_proofs`](#compute_cells_and_kzg_proofs)
- [Cell verification](#cell-verification)
- [`verify_cell_kzg_proof_batch`](#verify_cell_kzg_proof_batch)
- [Reconstruction](#reconstruction)
- [`construct_vanishing_polynomial`](#construct_vanishing_polynomial)
- [`recover_data`](#recover_data)
- [`recover_polynomialcoeff`](#recover_polynomialcoeff)
- [`recover_cells_and_kzg_proofs`](#recover_cells_and_kzg_proofs)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
@@ -555,6 +556,24 @@ def coset_for_cell(cell_index: CellIndex) -> Coset:

### Cell computation

#### `compute_cells_and_kzg_proofs_polynomialcoeff`

```python
def compute_cells_and_kzg_proofs_polynomialcoeff(polynomial_coeff: PolynomialCoeff) -> Tuple[
        Vector[Cell, CELLS_PER_EXT_BLOB],
        Vector[KZGProof, CELLS_PER_EXT_BLOB]]:
    """
    Helper function which computes cells/proofs for a polynomial in coefficient form.
    """
    cells, proofs = [], []
    for i in range(CELLS_PER_EXT_BLOB):
        coset = coset_for_cell(CellIndex(i))
        proof, ys = compute_kzg_proof_multi_impl(polynomial_coeff, coset)
        cells.append(coset_evals_to_cell(ys))
        proofs.append(proof)
    return cells, proofs
```
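
This helper factors the per-coset loop out of `compute_cells_and_kzg_proofs` so that `recover_cells_and_kzg_proofs` (further down in this file) can reuse it after recovering the polynomial in coefficient form.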

#### `compute_cells_and_kzg_proofs`

```python
@@ -572,17 +591,7 @@ def compute_cells_and_kzg_proofs(blob: Blob) -> Tuple[

polynomial = blob_to_polynomial(blob)
polynomial_coeff = polynomial_eval_to_coeff(polynomial)

cells = []
proofs = []

for i in range(CELLS_PER_EXT_BLOB):
coset = coset_for_cell(CellIndex(i))
proof, ys = compute_kzg_proof_multi_impl(polynomial_coeff, coset)
cells.append(coset_evals_to_cell(ys))
proofs.append(proof)

return cells, proofs
return compute_cells_and_kzg_proofs_polynomialcoeff(polynomial_coeff)
```

### Cell verification
@@ -668,29 +677,27 @@ def construct_vanishing_polynomial(missing_cell_indices: Sequence[CellIndex]) ->
return zero_poly_coeff
```

### `recover_data`
### `recover_polynomialcoeff`

```python
def recover_data(cell_indices: Sequence[CellIndex],
cells: Sequence[Cell],
) -> Sequence[BLSFieldElement]:
def recover_polynomialcoeff(cell_indices: Sequence[CellIndex],
cells: Sequence[Cell]) -> Sequence[BLSFieldElement]:
"""
Recover the missing evaluations for the extended blob, given at least half of the evaluations.
Recover the polynomial in coefficient form that when evaluated at the roots of unity will give the extended blob.
"""

# Get the extended domain. This will be referred to as the FFT domain.
# Get the extended domain. This will be referred to as the FFT domain
roots_of_unity_extended = compute_roots_of_unity(FIELD_ELEMENTS_PER_EXT_BLOB)

# Flatten the cells into evaluations.
# If a cell is missing, then its evaluation is zero.
# Flatten the cells into evaluations
# If a cell is missing, then its evaluation is zero
extended_evaluation_rbo = [0] * FIELD_ELEMENTS_PER_EXT_BLOB
for cell_index, cell in zip(cell_indices, cells):
start = cell_index * FIELD_ELEMENTS_PER_CELL
end = (cell_index + 1) * FIELD_ELEMENTS_PER_CELL
extended_evaluation_rbo[start:end] = cell
extended_evaluation = bit_reversal_permutation(extended_evaluation_rbo)

# Compute Z(x) in monomial form
# Compute Z(x) in coefficient form
# Z(x) is the polynomial which vanishes on all of the evaluations which are missing
missing_cell_indices = [CellIndex(cell_index) for cell_index in range(CELLS_PER_EXT_BLOB)
if cell_index not in cell_indices]
@@ -703,7 +710,7 @@ def recover_data(cell_indices: Sequence[CellIndex],
extended_evaluation_times_zero = [BLSFieldElement(int(a) * int(b) % BLS_MODULUS)
for a, b in zip(zero_poly_eval, extended_evaluation)]

# Convert (E*Z)(x) to monomial form
# Convert (E*Z)(x) to coefficient form
extended_evaluation_times_zero_coeffs = fft_field(extended_evaluation_times_zero, roots_of_unity_extended, inv=True)

# Convert (E*Z)(x) to evaluation form over a coset of the FFT domain
@@ -713,18 +720,12 @@ def recover_data(cell_indices: Sequence[CellIndex],
zero_poly_over_coset = coset_fft_field(zero_poly_coeff, roots_of_unity_extended)

# Compute Q_3(x) = (E*Z)(x) / Z(x) in evaluation form over a coset of the FFT domain
reconstructed_poly_over_coset = [
div(a, b)
for a, b in zip(extended_evaluations_over_coset, zero_poly_over_coset)
]
reconstructed_poly_over_coset = [div(a, b) for a, b in zip(extended_evaluations_over_coset, zero_poly_over_coset)]

# Convert Q_3(x) to monomial form
# Convert Q_3(x) to coefficient form
reconstructed_poly_coeff = coset_fft_field(reconstructed_poly_over_coset, roots_of_unity_extended, inv=True)

# Convert Q_3(x) to evaluation form over the FFT domain and bit reverse the result
reconstructed_data = bit_reversal_permutation(fft_field(reconstructed_poly_coeff, roots_of_unity_extended))

return reconstructed_data
return reconstructed_poly_coeff[:FIELD_ELEMENTS_PER_BLOB]
```

### `recover_cells_and_kzg_proofs`
@@ -735,7 +736,7 @@ def recover_cells_and_kzg_proofs(cell_indices: Sequence[CellIndex],
Vector[Cell, CELLS_PER_EXT_BLOB],
Vector[KZGProof, CELLS_PER_EXT_BLOB]]:
"""
Given at least 50% of cells/proofs for a blob, recover all the cells/proofs.
Given at least 50% of cells for a blob, recover all the cells/proofs.
This algorithm uses FFTs to recover cells faster than using Lagrange
implementation, as can be seen here:
https://ethresear.ch/t/reed-solomon-erasure-code-recovery-in-n-log-2-n-time-with-ffts/3039
@@ -745,6 +746,7 @@ def recover_cells_and_kzg_proofs(cell_indices: Sequence[CellIndex],
Public method.
"""
# Check we have the same number of cells and indices
assert len(cell_indices) == len(cells)
# Check we have enough cells to be able to perform the reconstruction
assert CELLS_PER_EXT_BLOB / 2 <= len(cell_indices) <= CELLS_PER_EXT_BLOB
@@ -757,29 +759,12 @@ def recover_cells_and_kzg_proofs(cell_indices: Sequence[CellIndex],
for cell in cells:
assert len(cell) == BYTES_PER_CELL

# Convert cells to coset evals
# Convert cells to coset evaluations
cosets_evals = [cell_to_coset_evals(cell) for cell in cells]

reconstructed_data = recover_data(cell_indices, cosets_evals)

for cell_index, coset_evals in zip(cell_indices, cosets_evals):
start = cell_index * FIELD_ELEMENTS_PER_CELL
end = (cell_index + 1) * FIELD_ELEMENTS_PER_CELL
assert reconstructed_data[start:end] == coset_evals

recovered_cells = [
coset_evals_to_cell(reconstructed_data[i * FIELD_ELEMENTS_PER_CELL:(i + 1) * FIELD_ELEMENTS_PER_CELL])
for i in range(CELLS_PER_EXT_BLOB)]

polynomial_eval = reconstructed_data[:FIELD_ELEMENTS_PER_BLOB]
polynomial_coeff = polynomial_eval_to_coeff(polynomial_eval)
recovered_proofs = [None] * CELLS_PER_EXT_BLOB

for i in range(CELLS_PER_EXT_BLOB):
coset = coset_for_cell(CellIndex(i))
proof, ys = compute_kzg_proof_multi_impl(polynomial_coeff, coset)
assert coset_evals_to_cell(ys) == recovered_cells[i]
recovered_proofs[i] = proof

return recovered_cells, recovered_proofs
# Given the coset evaluations, recover the polynomial in coefficient form
polynomial_coeff = recover_polynomialcoeff(cell_indices, cosets_evals)

# Recompute all cells/proofs
return compute_cells_and_kzg_proofs_polynomialcoeff(polynomial_coeff)
```
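
Taken together, a caller can drop up to half of the cells and still recompute the full set of cells and proofs. A hypothetical usage sketch, assuming the executable spec context above is loaded and `blob` is a valid `Blob`:

```python
# Compute the full extended set, keep only the even-indexed half, then recover everything.
cells, proofs = compute_cells_and_kzg_proofs(blob)
kept_indices = [CellIndex(i) for i in range(0, CELLS_PER_EXT_BLOB, 2)]
kept_cells = [cells[i] for i in kept_indices]

recovered_cells, recovered_proofs = recover_cells_and_kzg_proofs(kept_indices, kept_cells)
assert list(recovered_cells) == list(cells)
assert list(recovered_proofs) == list(proofs)
```
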
specs/altair/light-client/full-node.md (6 changes: 3 additions & 3 deletions)

@@ -76,7 +76,7 @@ def create_light_client_bootstrap(state: BeaconState,
header=block_to_light_client_header(block),
current_sync_committee=state.current_sync_committee,
current_sync_committee_branch=CurrentSyncCommitteeBranch(
compute_merkle_proof(state, CURRENT_SYNC_COMMITTEE_GINDEX)),
compute_merkle_proof(state, current_sync_committee_gindex_at_slot(state.slot))),
)
```

@@ -124,7 +124,7 @@ def create_light_client_update(state: BeaconState,
if update_attested_period == update_signature_period:
update.next_sync_committee = attested_state.next_sync_committee
update.next_sync_committee_branch = NextSyncCommitteeBranch(
compute_merkle_proof(attested_state, NEXT_SYNC_COMMITTEE_GINDEX))
compute_merkle_proof(attested_state, next_sync_committee_gindex_at_slot(attested_state.slot)))

# Indicate finality whenever possible
if finalized_block is not None:
@@ -134,7 +134,7 @@ def create_light_client_update(state: BeaconState,
else:
assert attested_state.finalized_checkpoint.root == Bytes32()
update.finality_branch = FinalityBranch(
compute_merkle_proof(attested_state, FINALIZED_ROOT_GINDEX))
compute_merkle_proof(attested_state, finalized_root_gindex_at_slot(attested_state.slot)))

update.sync_aggregate = block.message.body.sync_aggregate
update.signature_slot = block.message.slot
specs/altair/light-client/sync-protocol.md (59 changes: 50 additions & 9 deletions)

@@ -21,13 +21,17 @@
- [`LightClientOptimisticUpdate`](#lightclientoptimisticupdate)
- [`LightClientStore`](#lightclientstore)
- [Helper functions](#helper-functions)
- [`finalized_root_gindex_at_slot`](#finalized_root_gindex_at_slot)
- [`current_sync_committee_gindex_at_slot`](#current_sync_committee_gindex_at_slot)
- [`next_sync_committee_gindex_at_slot`](#next_sync_committee_gindex_at_slot)
- [`is_valid_light_client_header`](#is_valid_light_client_header)
- [`is_sync_committee_update`](#is_sync_committee_update)
- [`is_finality_update`](#is_finality_update)
- [`is_better_update`](#is_better_update)
- [`is_next_sync_committee_known`](#is_next_sync_committee_known)
- [`get_safety_threshold`](#get_safety_threshold)
- [`get_subtree_index`](#get_subtree_index)
- [`is_valid_normalized_merkle_branch`](#is_valid_normalized_merkle_branch)
- [`compute_sync_committee_period_at_slot`](#compute_sync_committee_period_at_slot)
- [Light client initialization](#light-client-initialization)
- [`initialize_light_client_store`](#initialize_light_client_store)
@@ -171,6 +175,30 @@ class LightClientStore(object):

## Helper functions

### `finalized_root_gindex_at_slot`

```python
def finalized_root_gindex_at_slot(slot: Slot) -> GeneralizedIndex:
    # pylint: disable=unused-argument
    return FINALIZED_ROOT_GINDEX
```

### `current_sync_committee_gindex_at_slot`

```python
def current_sync_committee_gindex_at_slot(slot: Slot) -> GeneralizedIndex:
    # pylint: disable=unused-argument
    return CURRENT_SYNC_COMMITTEE_GINDEX
```

### `next_sync_committee_gindex_at_slot`

```python
def next_sync_committee_gindex_at_slot(slot: Slot) -> GeneralizedIndex:
    # pylint: disable=unused-argument
    return NEXT_SYNC_COMMITTEE_GINDEX
```
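
These helpers exist so that later forks can serve different generalized indices for the same fields once the `BeaconState` layout changes (compare the `*_GINDEX_ELECTRA` constants earlier in this commit). A sketch of how a fork-aware override might look; the `ELECTRA_FORK_EPOCH` gating shown here is an assumption for illustration, not the Altair definition above:

```python
def finalized_root_gindex_at_slot(slot: Slot) -> GeneralizedIndex:
    epoch = compute_epoch_at_slot(slot)
    if epoch >= ELECTRA_FORK_EPOCH:  # assumed fork gate for illustration
        return FINALIZED_ROOT_GINDEX_ELECTRA
    return FINALIZED_ROOT_GINDEX
```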

### `is_valid_light_client_header`

```python
@@ -273,6 +301,22 @@ def get_subtree_index(generalized_index: GeneralizedIndex) -> uint64:
return uint64(generalized_index % 2**(floorlog2(generalized_index)))
```

### `is_valid_normalized_merkle_branch`

```python
def is_valid_normalized_merkle_branch(leaf: Bytes32,
                                      branch: Sequence[Bytes32],
                                      gindex: GeneralizedIndex,
                                      root: Root) -> bool:
    depth = floorlog2(gindex)
    index = get_subtree_index(gindex)
    num_extra = len(branch) - depth
    for i in range(num_extra):
        if branch[i] != Bytes32():
            return False
    return is_valid_merkle_branch(leaf, branch[num_extra:], depth, index, root)
```
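
The "normalized" form accepts a branch vector that is longer than the proof depth implied by `gindex`: the extra leading entries must be zero, and the remainder is checked as an ordinary Merkle branch. A standalone toy illustration of that check using plain `hashlib` (names here are illustrative, not spec code):

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

leaf = b"\x11" * 32
sibling = b"\x22" * 32
root = hash_pair(leaf, sibling)      # leaf sits at gindex 2: depth 1, subtree index 0

branch = [b"\x00" * 32, sibling]     # branch padded to a fixed length of 2
depth, index = 1, 0
num_extra = len(branch) - depth

assert all(node == b"\x00" * 32 for node in branch[:num_extra])  # extra entries must be zero
value = leaf
for i, node in enumerate(branch[num_extra:]):                    # ordinary Merkle branch check
    value = hash_pair(node, value) if (index >> i) & 1 else hash_pair(value, node)
assert value == root
```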

### `compute_sync_committee_period_at_slot`

```python
@@ -292,11 +336,10 @@ def initialize_light_client_store(trusted_block_root: Root,
assert is_valid_light_client_header(bootstrap.header)
assert hash_tree_root(bootstrap.header.beacon) == trusted_block_root

assert is_valid_merkle_branch(
assert is_valid_normalized_merkle_branch(
leaf=hash_tree_root(bootstrap.current_sync_committee),
branch=bootstrap.current_sync_committee_branch,
depth=floorlog2(CURRENT_SYNC_COMMITTEE_GINDEX),
index=get_subtree_index(CURRENT_SYNC_COMMITTEE_GINDEX),
gindex=current_sync_committee_gindex_at_slot(bootstrap.header.beacon.slot),
root=bootstrap.header.beacon.state_root,
)

@@ -364,11 +407,10 @@ def validate_light_client_update(store: LightClientStore,
else:
assert is_valid_light_client_header(update.finalized_header)
finalized_root = hash_tree_root(update.finalized_header.beacon)
assert is_valid_merkle_branch(
assert is_valid_normalized_merkle_branch(
leaf=finalized_root,
branch=update.finality_branch,
depth=floorlog2(FINALIZED_ROOT_GINDEX),
index=get_subtree_index(FINALIZED_ROOT_GINDEX),
gindex=finalized_root_gindex_at_slot(update.attested_header.beacon.slot),
root=update.attested_header.beacon.state_root,
)

@@ -379,11 +421,10 @@ def validate_light_client_update(store: LightClientStore,
else:
if update_attested_period == store_period and is_next_sync_committee_known(store):
assert update.next_sync_committee == store.next_sync_committee
assert is_valid_merkle_branch(
assert is_valid_normalized_merkle_branch(
leaf=hash_tree_root(update.next_sync_committee),
branch=update.next_sync_committee_branch,
depth=floorlog2(NEXT_SYNC_COMMITTEE_GINDEX),
index=get_subtree_index(NEXT_SYNC_COMMITTEE_GINDEX),
gindex=next_sync_committee_gindex_at_slot(update.attested_header.beacon.slot),
root=update.attested_header.beacon.state_root,
)

specs/capella/light-client/fork.md (8 changes: 4 additions & 4 deletions)

@@ -7,8 +7,8 @@
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Introduction](#introduction)
- [Upgrading light client data](#upgrading-light-client-data)
- [Upgrading the store](#upgrading-the-store)
- [Upgrading light client data](#upgrading-light-client-data)
- [Upgrading the store](#upgrading-the-store)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- /TOC -->
@@ -17,7 +17,7 @@

This document describes how to upgrade existing light client objects based on the [Altair specification](../../altair/light-client/sync-protocol.md) to Capella. This is necessary when processing pre-Capella data with a post-Capella `LightClientStore`. Note that the data being exchanged over the network protocols uses the original format.

### Upgrading light client data
## Upgrading light client data

A Capella `LightClientStore` can still process earlier light client data. In order to do so, that pre-Capella data needs to be locally upgraded to Capella before processing.

Expand Down Expand Up @@ -70,7 +70,7 @@ def upgrade_lc_optimistic_update_to_capella(pre: bellatrix.LightClientOptimistic
)
```

### Upgrading the store
## Upgrading the store

Existing `LightClientStore` objects based on Altair MUST be upgraded to Capella before Capella based light client data can be processed. The `LightClientStore` upgrade MAY be performed before `CAPELLA_FORK_EPOCH`.
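
A hypothetical wiring sketch, assuming upgrade helpers along the lines of `upgrade_lc_store_to_capella` and `upgrade_lc_update_to_capella` (their full definitions are not shown in this diff) together with the Altair `process_light_client_update` function:

```python
# Upgrade the store once, then upgrade any pre-Capella update before processing it.
store = upgrade_lc_store_to_capella(altair_store)
update = upgrade_lc_update_to_capella(altair_update)
process_light_client_update(store, update, current_slot, genesis_validators_root)
```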

Expand Down
Loading

0 comments on commit 268166b

Please sign in to comment.