Electra spec changes for v1.5.0-beta.0 (#6731)
* First pass

* Add restrictions to RuntimeVariableList api

* Use empty_uninitialized and fix warnings

* Fix some todos

* Merge branch 'unstable' into max-blobs-preset

* Fix take impl on RuntimeFixedList

* cleanup

* Fix test compilations

* Fix some more tests

* Fix test from unstable

* Merge branch 'unstable' into max-blobs-preset

* Implement "Bugfix and more withdrawal tests"

* Implement "Add missed exit checks to consolidation processing"

* Implement "Update initial earliest_exit_epoch calculation"

* Implement "Limit consolidating balance by validator.effective_balance"

* Implement "Use 16-bit random value in validator filter"

* Implement "Do not change creds type on consolidation"

* Rename PendingPartialWithdrawal index field to validator_index

* Skip slots to get test to pass and add TODO

* Implement "Synchronously check all transactions to have non-zero length"

* Merge remote-tracking branch 'origin/unstable' into max-blobs-preset

* Remove footgun function

* Minor simplifications

* Move from preset to config

* Fix typo

* Revert "Remove footgun function"

This reverts commit de01f92.

* Try fixing tests

* Implement "bump minimal preset MAX_BLOB_COMMITMENTS_PER_BLOCK and KZG_COMMITMENT_INCLUSION_PROOF_DEPTH"

* Thread through ChainSpec

* Fix release tests

* Move RuntimeFixedVector into module and rename

* Add test

* Implement "Remove post-altair `initialize_beacon_state_from_eth1` from specs"

* Update preset YAML

* Remove empty RuntimeVarList awfulness

* Make max_blobs_per_block a config parameter (#6329)

Squashed commit of the following:

commit 04b3743
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 17:36:58 2025 +1100

    Add test

commit 440e854
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 17:24:50 2025 +1100

    Move RuntimeFixedVector into module and rename

commit f66e179
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 17:17:17 2025 +1100

    Fix release tests

commit e4bfe71
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 17:05:30 2025 +1100

    Thread through ChainSpec

commit 063b79c
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 15:32:16 2025 +1100

    Try fixing tests

commit 88bedf0
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 15:04:37 2025 +1100

    Revert "Remove footgun function"

    This reverts commit de01f92.

commit 32483d3
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 15:04:32 2025 +1100

    Fix typo

commit 2e86585
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 15:04:15 2025 +1100

    Move from preset to config

commit 1095d60
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 14:38:40 2025 +1100

    Minor simplifications

commit de01f92
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 14:06:57 2025 +1100

    Remove footgun function

commit 0c2c8c4
Merge: 21ecb58 f51a292
Author: Michael Sproul <michael@sigmaprime.io>
Date:   Mon Jan 6 14:02:50 2025 +1100

    Merge remote-tracking branch 'origin/unstable' into max-blobs-preset

commit f51a292
Author: Daniel Knopik <107140945+dknopik@users.noreply.github.com>
Date:   Fri Jan 3 20:27:21 2025 +0100

    fully lint only explicitly to avoid unnecessary rebuilds (#6753)

    * fully lint only explicitly to avoid unnecessary rebuilds

commit 7e0cdde
Author: Akihito Nakano <sora.akatsuki@gmail.com>
Date:   Tue Dec 24 10:38:56 2024 +0900

    Make sure we have fanout peers when publish (#6738)

    * Ensure that `fanout_peers` is always non-empty if it's `Some`

commit 21ecb58
Merge: 2fcb293 9aefb55
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Mon Oct 21 14:46:00 2024 -0700

    Merge branch 'unstable' into max-blobs-preset

commit 2fcb293
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Fri Sep 6 18:28:31 2024 -0700

    Fix test from unstable

commit 12c6ef1
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Wed Sep 4 16:16:36 2024 -0700

    Fix some more tests

commit d37733b
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Wed Sep 4 12:47:36 2024 -0700

    Fix test compilations

commit 52bb581
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Tue Sep 3 18:38:19 2024 -0700

    cleanup

commit e71020e
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Tue Sep 3 17:16:10 2024 -0700

    Fix take impl on RuntimeFixedList

commit 13f9bba
Merge: 60100fc 4e675cf
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Tue Sep 3 16:08:59 2024 -0700

    Merge branch 'unstable' into max-blobs-preset

commit 60100fc
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Fri Aug 30 16:04:11 2024 -0700

    Fix some todos

commit a9cb329
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Fri Aug 30 15:54:00 2024 -0700

    Use empty_uninitialized and fix warnings

commit 4dc6e65
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Fri Aug 30 15:53:18 2024 -0700

    Add restrictions to RuntimeVariableList api

commit 25feedf
Author: Pawan Dhananjay <pawandhananjay@gmail.com>
Date:   Thu Aug 29 16:11:19 2024 -0700

    First pass

* Fix tests

* Implement max_blobs_per_block_electra

* Fix config issues

* Simplify BlobSidecarListFromRoot

* Disable PeerDAS tests

* Merge remote-tracking branch 'origin/unstable' into max-blobs-preset

* Bump quota to account for new target (6)

* Remove clone

* Fix issue from review

* Try to remove ugliness

* Merge branch 'unstable' into max-blobs-preset

* Merge remote-tracking branch 'origin/unstable' into electra-alpha10

* Merge commit '04b3743ec1e0b650269dd8e58b540c02430d1c0d' into electra-alpha10

* Merge remote-tracking branch 'pawan/max-blobs-preset' into electra-alpha10

* Update tests to v1.5.0-beta.0

* Resolve merge conflicts

* Linting

* fmt

* Fix test and add TODO

* Gracefully handle slashed proposers in fork choice tests

* Merge remote-tracking branch 'origin/unstable' into electra-alpha10

* Keep latest changes from max_blobs_per_block PR in codec.rs

* Revert a few more regressions and add a comment

* Disable more DAS tests

* Improve validator monitor test a little

* Make test more robust

* Fix sync test that didn't understand blobs

* Fill out cropped comment
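The headline change in this commit moves `MAX_BLOBS_PER_BLOCK` from a compile-time preset to a runtime config parameter, with a separate Electra value (`max_blobs_per_block_electra`). A minimal sketch of the idea follows; the field and method names only approximate Lighthouse's `ChainSpec`, and the mainnet values (6 for Deneb, 9 for Electra) are assumptions for illustration:

```rust
// Sketch: a fork-dependent runtime config value replacing a
// compile-time preset constant. Names approximate Lighthouse's
// `ChainSpec`; values are illustrative mainnet numbers.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ForkName {
    Deneb,
    Electra,
}

struct ChainSpec {
    max_blobs_per_block: u64,         // Deneb-era value
    max_blobs_per_block_electra: u64, // Electra-era value
}

impl ChainSpec {
    /// Look up the blob limit for the given fork at runtime,
    /// instead of reading a type-level preset constant.
    fn max_blobs_per_block(&self, fork: ForkName) -> u64 {
        match fork {
            ForkName::Deneb => self.max_blobs_per_block,
            ForkName::Electra => self.max_blobs_per_block_electra,
        }
    }
}

fn main() {
    let spec = ChainSpec {
        max_blobs_per_block: 6,
        max_blobs_per_block_electra: 9,
    };
    assert_eq!(spec.max_blobs_per_block(ForkName::Deneb), 6);
    assert_eq!(spec.max_blobs_per_block(ForkName::Electra), 9);
}
```

Because the limit is now a config value, code that previously used the `E::MaxBlobsPerBlock` type parameter has to thread a `&ChainSpec` through instead — which is why so many of the commits above touch `RuntimeVariableList`/`RuntimeFixedList` and "Thread through ChainSpec".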
michaelsproul authored Jan 13, 2025
1 parent c9747fb commit 06e4d22
Showing 29 changed files with 309 additions and 178 deletions.
82 changes: 33 additions & 49 deletions beacon_node/beacon_chain/tests/validator_monitor.rs
@@ -4,7 +4,7 @@ use beacon_chain::test_utils::{
use beacon_chain::validator_monitor::{ValidatorMonitorConfig, MISSED_BLOCK_LAG_SLOTS};
use logging::test_logger;
use std::sync::LazyLock;
use types::{Epoch, EthSpec, ForkName, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};
use types::{Epoch, EthSpec, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};

// Should ideally be divisible by 3.
pub const VALIDATOR_COUNT: usize = 48;
@@ -117,7 +117,7 @@ async fn missed_blocks_across_epochs() {
}

#[tokio::test]
async fn produces_missed_blocks() {
async fn missed_blocks_basic() {
let validator_count = 16;

let slots_per_epoch = E::slots_per_epoch();
@@ -127,13 +127,10 @@ async fn produces_missed_blocks() {
// Generate 63 slots (2 epochs * 32 slots per epoch - 1)
let initial_blocks = slots_per_epoch * nb_epoch_to_simulate.as_u64() - 1;

// The validator index of the validator that is 'supposed' to miss a block
let validator_index_to_monitor = 1;

// 1st scenario //
//
// Missed block happens when slot and prev_slot are in the same epoch
let harness1 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness1 = get_harness(validator_count, vec![]);
harness1
.extend_chain(
initial_blocks as usize,
@@ -153,7 +150,7 @@ async fn produces_missed_blocks() {
let mut prev_slot = Slot::new(idx - 1);
let mut duplicate_block_root = *_state.block_roots().get(idx as usize).unwrap();
let mut validator_indexes = _state.get_beacon_proposer_indices(&harness1.spec).unwrap();
let mut validator_index = validator_indexes[slot_in_epoch.as_usize()];
let mut missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
let mut proposer_shuffling_decision_root = _state
.proposer_shuffling_decision_root(duplicate_block_root)
.unwrap();
@@ -170,7 +167,7 @@ async fn produces_missed_blocks() {
beacon_proposer_cache.lock().insert(
epoch,
proposer_shuffling_decision_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes,
_state.fork()
),
Ok(())
@@ -187,12 +184,15 @@ async fn produces_missed_blocks() {
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor = harness1.chain.validator_monitor.write();

validator_monitor.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor.process_valid_state(nb_epoch_to_simulate, _state, &harness1.chain.spec);

// We should have one entry in the missed blocks map
assert_eq!(
validator_monitor.get_monitored_validator_missed_block_count(validator_index as u64),
1
validator_monitor
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
1,
);
}

@@ -201,23 +201,7 @@ async fn produces_missed_blocks() {
// Missed block happens when slot and prev_slot are not in the same epoch
// making sure that the cache reloads when the epoch changes
// in that scenario the slot that missed a block is the first slot of the epoch
// We are adding other validators to monitor as these ones will miss a block depending on
// the fork name specified when running the test as the proposer cache differs depending on
// the fork name (cf. seed)
//
// If you are adding a new fork and seeing errors, print
// `validator_indexes[slot_in_epoch.as_usize()]` and add it below.
let validator_index_to_monitor = match harness1.spec.fork_name_at_slot::<E>(Slot::new(0)) {
ForkName::Base => 7,
ForkName::Altair => 2,
ForkName::Bellatrix => 4,
ForkName::Capella => 11,
ForkName::Deneb => 3,
ForkName::Electra => 1,
ForkName::Fulu => 6,
};

let harness2 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness2 = get_harness(validator_count, vec![]);
let advance_slot_by = 9;
harness2
.extend_chain(
@@ -238,11 +222,7 @@ async fn produces_missed_blocks() {
slot_in_epoch = slot % slots_per_epoch;
duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
validator_index = validator_indexes[slot_in_epoch.as_usize()];
// If you are adding a new fork and seeing errors, it means the fork seed has changed the
// validator_index. Uncomment this line, run the test again and add the resulting index to the
// list above.
//eprintln!("new index which needs to be added => {:?}", validator_index);
missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];

let beacon_proposer_cache = harness2
.chain
@@ -256,7 +236,7 @@ async fn produces_missed_blocks() {
beacon_proposer_cache.lock().insert(
epoch,
duplicate_block_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes.clone(),
_state2.fork()
),
Ok(())
@@ -271,30 +251,33 @@ async fn produces_missed_blocks() {
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor2 = harness2.chain.validator_monitor.write();
validator_monitor2.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);
// We should have one entry in the missed blocks map
assert_eq!(
validator_monitor2.get_monitored_validator_missed_block_count(validator_index as u64),
validator_monitor2
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
1
);

// 3rd scenario //
//
// A missed block happens but the validator is not monitored
// it should not be flagged as a missed block
idx = initial_blocks + (advance_slot_by) - 7;
while validator_indexes[(idx % slots_per_epoch) as usize] == missed_block_proposer
&& idx / slots_per_epoch == epoch.as_u64()
{
idx += 1;
}
slot = Slot::new(idx);
prev_slot = Slot::new(idx - 1);
slot_in_epoch = slot % slots_per_epoch;
duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
let not_monitored_validator_index = validator_indexes[slot_in_epoch.as_usize()];
// This could do with a refactor: https://github.com/sigp/lighthouse/issues/6293
assert_ne!(
not_monitored_validator_index,
validator_index_to_monitor,
"this test has a fragile dependency on hardcoded indices. you need to tweak some settings or rewrite this"
);
let second_missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];

// This test may fail if we can't find another distinct proposer in the same epoch.
// However, this should be vanishingly unlikely: P ~= (1/16)^32 = 2e-39.
assert_ne!(missed_block_proposer, second_missed_block_proposer);

assert_eq!(
_state2.set_block_root(prev_slot, duplicate_block_root),
@@ -306,10 +289,9 @@ async fn produces_missed_blocks() {
validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);

// We shouldn't have any entry in the missed blocks map
assert_ne!(validator_index, not_monitored_validator_index);
assert_eq!(
validator_monitor2
.get_monitored_validator_missed_block_count(not_monitored_validator_index as u64),
.get_monitored_validator_missed_block_count(second_missed_block_proposer as u64),
0
);
}
@@ -318,7 +300,7 @@ async fn produces_missed_blocks() {
//
// A missed block happens at state.slot - LOG_SLOTS_PER_EPOCH
// it shouldn't be flagged as a missed block
let harness3 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness3 = get_harness(validator_count, vec![]);
harness3
.extend_chain(
slots_per_epoch as usize,
@@ -338,7 +320,7 @@ async fn produces_missed_blocks() {
prev_slot = Slot::new(idx - 1);
duplicate_block_root = *_state3.block_roots().get(idx as usize).unwrap();
validator_indexes = _state3.get_beacon_proposer_indices(&harness3.spec).unwrap();
validator_index = validator_indexes[slot_in_epoch.as_usize()];
missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
proposer_shuffling_decision_root = _state3
.proposer_shuffling_decision_root_at_epoch(epoch, duplicate_block_root)
.unwrap();
@@ -355,7 +337,7 @@ async fn produces_missed_blocks() {
beacon_proposer_cache.lock().insert(
epoch,
proposer_shuffling_decision_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes,
_state3.fork()
),
Ok(())
@@ -372,11 +354,13 @@ async fn produces_missed_blocks() {
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor3 = harness3.chain.validator_monitor.write();
validator_monitor3.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor3.process_valid_state(epoch, _state3, &harness3.chain.spec);

// We shouldn't have one entry in the missed blocks map
assert_eq!(
validator_monitor3.get_monitored_validator_missed_block_count(validator_index as u64),
validator_monitor3
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
0
);
}
@@ -128,6 +128,11 @@ impl<'block, E: EthSpec> NewPayloadRequest<'block, E> {

let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_VERIFY_BLOCK_HASH);

// Check that no transactions in the payload are zero length
if payload.transactions().iter().any(|slice| slice.is_empty()) {
return Err(Error::ZeroLengthTransaction);
}

let (header_hash, rlp_transactions_root) = calculate_execution_block_hash(
payload,
parent_beacon_block_root,
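The guard added above rejects any execution payload containing an empty transaction before block-hash verification, returning the new `Error::ZeroLengthTransaction` variant. The same check can be sketched standalone as follows; the error enum and transaction type here are simplified stand-ins, not Lighthouse's actual `NewPayloadRequest` API:

```rust
// Sketch of the zero-length transaction guard. In the real code the
// transactions are SSZ variable lists of bytes; plain `Vec<u8>` is
// used here as a stand-in.

#[derive(Debug, PartialEq)]
enum Error {
    ZeroLengthTransaction,
}

/// Fail fast if any transaction in the payload is empty, mirroring
/// the "synchronously check all transactions to have non-zero length"
/// spec change.
fn check_transactions(transactions: &[Vec<u8>]) -> Result<(), Error> {
    if transactions.iter().any(|tx| tx.is_empty()) {
        return Err(Error::ZeroLengthTransaction);
    }
    Ok(())
}

fn main() {
    assert_eq!(check_transactions(&[vec![0x01, 0x02]]), Ok(()));
    assert_eq!(
        check_transactions(&[vec![0x01], vec![]]),
        Err(Error::ZeroLengthTransaction)
    );
}
```

Doing this check synchronously (before the hash computation) means an invalid payload is rejected without waiting on the execution engine.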
1 change: 1 addition & 0 deletions beacon_node/execution_layer/src/lib.rs
@@ -157,6 +157,7 @@ pub enum Error {
payload: ExecutionBlockHash,
transactions_root: Hash256,
},
ZeroLengthTransaction,
PayloadBodiesByRangeNotSupported,
InvalidJWTSecret(String),
InvalidForkForPayload,
2 changes: 1 addition & 1 deletion beacon_node/lighthouse_network/src/rpc/protocol.rs
@@ -710,7 +710,7 @@ pub fn rpc_blob_limits<E: EthSpec>() -> RpcLimits {
}
}

// TODO(peerdas): fix hardcoded max here
// TODO(das): fix hardcoded max here
pub fn rpc_data_column_limits<E: EthSpec>(fork_name: ForkName) -> RpcLimits {
RpcLimits::new(
DataColumnSidecar::<E>::empty().as_ssz_bytes().len(),
53 changes: 39 additions & 14 deletions beacon_node/network/src/sync/tests/range.rs
@@ -4,12 +4,15 @@ use crate::sync::manager::SLOT_IMPORT_TOLERANCE;
use crate::sync::range_sync::RangeSyncType;
use crate::sync::SyncMessage;
use beacon_chain::test_utils::{AttestationStrategy, BlockStrategy};
use beacon_chain::EngineState;
use beacon_chain::{block_verification_types::RpcBlock, EngineState, NotifyExecutionLayer};
use lighthouse_network::rpc::{RequestType, StatusMessage};
use lighthouse_network::service::api_types::{AppRequestId, Id, SyncRequestId};
use lighthouse_network::{PeerId, SyncInfo};
use std::time::Duration;
use types::{EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock, Slot};
use types::{
BlobSidecarList, BlockImportSource, EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock,
SignedBeaconBlockHash, Slot,
};

const D: Duration = Duration::new(0, 0);

@@ -154,7 +157,9 @@ impl TestRig {
}
}

async fn create_canonical_block(&mut self) -> SignedBeaconBlock<E> {
async fn create_canonical_block(
&mut self,
) -> (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>) {
self.harness.advance_slot();

let block_root = self
Expand All @@ -165,19 +170,39 @@ impl TestRig {
AttestationStrategy::AllValidators,
)
.await;
self.harness
.chain
.store
.get_full_block(&block_root)
.unwrap()
.unwrap()
// TODO(das): this does not handle data columns yet
let store = &self.harness.chain.store;
let block = store.get_full_block(&block_root).unwrap().unwrap();
let blobs = if block.fork_name_unchecked().deneb_enabled() {
store.get_blobs(&block_root).unwrap().blobs()
} else {
None
};
(block, blobs)
}

async fn remember_block(&mut self, block: SignedBeaconBlock<E>) {
self.harness
.process_block(block.slot(), block.canonical_root(), (block.into(), None))
async fn remember_block(
&mut self,
(block, blob_sidecars): (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>),
) {
// This code is kind of duplicated from Harness::process_block, but takes sidecars directly.
let block_root = block.canonical_root();
self.harness.set_current_slot(block.slot());
let _: SignedBeaconBlockHash = self
.harness
.chain
.process_block(
block_root,
RpcBlock::new(Some(block_root), block.into(), blob_sidecars).unwrap(),
NotifyExecutionLayer::Yes,
BlockImportSource::RangeSync,
|| Ok(()),
)
.await
.unwrap()
.try_into()
.unwrap();
self.harness.chain.recompute_head_at_current_slot().await;
}
}

@@ -217,9 +242,9 @@ async fn state_update_while_purging() {
// Need to create blocks that can be inserted into the fork-choice and fit the "known
// conditions" below.
let head_peer_block = rig_2.create_canonical_block().await;
let head_peer_root = head_peer_block.canonical_root();
let head_peer_root = head_peer_block.0.canonical_root();
let finalized_peer_block = rig_2.create_canonical_block().await;
let finalized_peer_root = finalized_peer_block.canonical_root();
let finalized_peer_root = finalized_peer_block.0.canonical_root();

// Get a peer with an advanced head
let head_peer = rig.add_head_peer_with_root(head_peer_root);
