Commit 06e4d22
Electra spec changes for v1.5.0-beta.0 (#6731)
* First pass
* Add restrictions to RuntimeVariableList api
* Use empty_uninitialized and fix warnings
* Fix some todos
* Merge branch 'unstable' into max-blobs-preset
* Fix take impl on RuntimeFixedList
* cleanup
* Fix test compilations
* Fix some more tests
* Fix test from unstable
* Merge branch 'unstable' into max-blobs-preset
* Implement "Bugfix and more withdrawal tests"
* Implement "Add missed exit checks to consolidation processing"
* Implement "Update initial earliest_exit_epoch calculation"
* Implement "Limit consolidating balance by validator.effective_balance"
* Implement "Use 16-bit random value in validator filter"
* Implement "Do not change creds type on consolidation"
* Rename PendingPartialWithdraw index field to validator_index
* Skip slots to get test to pass and add TODO
* Implement "Synchronously check all transactions to have non-zero length"
* Merge remote-tracking branch 'origin/unstable' into max-blobs-preset
* Remove footgun function
* Minor simplifications
* Move from preset to config
* Fix typo
* Revert "Remove footgun function" (this reverts commit de01f92)
* Try fixing tests
* Implement "bump minimal preset MAX_BLOB_COMMITMENTS_PER_BLOCK and KZG_COMMITMENT_INCLUSION_PROOF_DEPTH"
* Thread through ChainSpec
* Fix release tests
* Move RuntimeFixedVector into module and rename
* Add test
* Implement "Remove post-altair `initialize_beacon_state_from_eth1` from specs"
* Update preset YAML
* Remove empty RuntimeVarList awfulness
* Make max_blobs_per_block a config parameter (#6329). Squashed commit of the following:
  - 04b3743 (Michael Sproul, Mon Jan 6 17:36:58 2025 +1100): Add test
  - 440e854 (Michael Sproul, Mon Jan 6 17:24:50 2025 +1100): Move RuntimeFixedVector into module and rename
  - f66e179 (Michael Sproul, Mon Jan 6 17:17:17 2025 +1100): Fix release tests
  - e4bfe71 (Michael Sproul, Mon Jan 6 17:05:30 2025 +1100): Thread through ChainSpec
  - 063b79c (Michael Sproul, Mon Jan 6 15:32:16 2025 +1100): Try fixing tests
  - 88bedf0 (Michael Sproul, Mon Jan 6 15:04:37 2025 +1100): Revert "Remove footgun function" (reverts commit de01f92)
  - 32483d3 (Michael Sproul, Mon Jan 6 15:04:32 2025 +1100): Fix typo
  - 2e86585 (Michael Sproul, Mon Jan 6 15:04:15 2025 +1100): Move from preset to config
  - 1095d60 (Michael Sproul, Mon Jan 6 14:38:40 2025 +1100): Minor simplifications
  - de01f92 (Michael Sproul, Mon Jan 6 14:06:57 2025 +1100): Remove footgun function
  - 0c2c8c4 (merge of 21ecb58 and f51a292, Michael Sproul, Mon Jan 6 14:02:50 2025 +1100): Merge remote-tracking branch 'origin/unstable' into max-blobs-preset
  - f51a292 (Daniel Knopik, Fri Jan 3 20:27:21 2025 +0100): fully lint only explicitly to avoid unnecessary rebuilds (#6753)
  - 7e0cdde (Akihito Nakano, Tue Dec 24 10:38:56 2024 +0900): Make sure we have fanout peers when publish (#6738); ensure that `fanout_peers` is always non-empty if it's `Some`
  - 21ecb58 (merge of 2fcb293 and 9aefb55, Pawan Dhananjay, Mon Oct 21 14:46:00 2024 -0700): Merge branch 'unstable' into max-blobs-preset
  - 2fcb293 (Pawan Dhananjay, Fri Sep 6 18:28:31 2024 -0700): Fix test from unstable
  - 12c6ef1 (Pawan Dhananjay, Wed Sep 4 16:16:36 2024 -0700): Fix some more tests
  - d37733b (Pawan Dhananjay, Wed Sep 4 12:47:36 2024 -0700): Fix test compilations
  - 52bb581 (Pawan Dhananjay, Tue Sep 3 18:38:19 2024 -0700): cleanup
  - e71020e (Pawan Dhananjay, Tue Sep 3 17:16:10 2024 -0700): Fix take impl on RuntimeFixedList
  - 13f9bba (merge of 60100fc and 4e675cf, Pawan Dhananjay, Tue Sep 3 16:08:59 2024 -0700): Merge branch 'unstable' into max-blobs-preset
  - 60100fc (Pawan Dhananjay, Fri Aug 30 16:04:11 2024 -0700): Fix some todos
  - a9cb329 (Pawan Dhananjay, Fri Aug 30 15:54:00 2024 -0700): Use empty_uninitialized and fix warnings
  - 4dc6e65 (Pawan Dhananjay, Fri Aug 30 15:53:18 2024 -0700): Add restrictions to RuntimeVariableList api
  - 25feedf (Pawan Dhananjay, Thu Aug 29 16:11:19 2024 -0700): First pass
* Fix tests
* Implement max_blobs_per_block_electra
* Fix config issues
* Simplify BlobSidecarListFromRoot
* Disable PeerDAS tests
* Merge remote-tracking branch 'origin/unstable' into max-blobs-preset
* Bump quota to account for new target (6)
* Remove clone
* Fix issue from review
* Try to remove ugliness
* Merge branch 'unstable' into max-blobs-preset
* Merge remote-tracking branch 'origin/unstable' into electra-alpha10
* Merge commit '04b3743ec1e0b650269dd8e58b540c02430d1c0d' into electra-alpha10
* Merge remote-tracking branch 'pawan/max-blobs-preset' into electra-alpha10
* Update tests to v1.5.0-beta.0
* Resolve merge conflicts
* Linting
* fmt
* Fix test and add TODO
* Gracefully handle slashed proposers in fork choice tests
* Merge remote-tracking branch 'origin/unstable' into electra-alpha10
* Keep latest changes from max_blobs_per_block PR in codec.rs
* Revert a few more regressions and add a comment
* Disable more DAS tests
* Improve validator monitor test a little
* Make test more robust
* Fix sync test that didn't understand blobs
* Fill out cropped comment
1 parent c9747fb commit 06e4d22
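Several items in the commit message above ("Move from preset to config", "Make max_blobs_per_block a config parameter (#6329)", "Implement max_blobs_per_block_electra") turn the blob-count limit from a compile-time preset into a runtime, fork-dependent configuration value. The Rust sketch below only illustrates that shape: the struct and method names are placeholders rather than Lighthouse's actual ChainSpec API, and the numbers are the usual spec values (6 for Deneb, 9 for Electra) used purely as examples.

// Illustrative sketch only: a fork-aware blob limit carried on a runtime spec/config
// object instead of a hard-coded preset constant. All names here are hypothetical.
struct SpecSketch {
    max_blobs_per_block: u64,         // e.g. 6 (Deneb)
    max_blobs_per_block_electra: u64, // e.g. 9 (Electra)
}

impl SpecSketch {
    // Callers ask the spec for the limit at a given fork instead of using a constant,
    // so a network can override it in its config without recompiling.
    fn max_blobs_for_fork(&self, electra_enabled: bool) -> u64 {
        if electra_enabled {
            self.max_blobs_per_block_electra
        } else {
            self.max_blobs_per_block
        }
    }
}

fn main() {
    let spec = SpecSketch { max_blobs_per_block: 6, max_blobs_per_block_electra: 9 };
    assert_eq!(spec.max_blobs_for_fork(true), 9);
    assert_eq!(spec.max_blobs_for_fork(false), 6);
}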

File tree: 29 files changed (+309, -178 lines)

beacon_node/beacon_chain/tests/validator_monitor.rs

Lines changed: 33 additions & 49 deletions
@@ -4,7 +4,7 @@ use beacon_chain::test_utils::{
 use beacon_chain::validator_monitor::{ValidatorMonitorConfig, MISSED_BLOCK_LAG_SLOTS};
 use logging::test_logger;
 use std::sync::LazyLock;
-use types::{Epoch, EthSpec, ForkName, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};
+use types::{Epoch, EthSpec, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};

 // Should ideally be divisible by 3.
 pub const VALIDATOR_COUNT: usize = 48;
@@ -117,7 +117,7 @@ async fn missed_blocks_across_epochs() {
 }

 #[tokio::test]
-async fn produces_missed_blocks() {
+async fn missed_blocks_basic() {
     let validator_count = 16;

     let slots_per_epoch = E::slots_per_epoch();
@@ -127,13 +127,10 @@ async fn produces_missed_blocks() {
     // Generate 63 slots (2 epochs * 32 slots per epoch - 1)
     let initial_blocks = slots_per_epoch * nb_epoch_to_simulate.as_u64() - 1;

-    // The validator index of the validator that is 'supposed' to miss a block
-    let validator_index_to_monitor = 1;
-
     // 1st scenario //
     //
     // Missed block happens when slot and prev_slot are in the same epoch
-    let harness1 = get_harness(validator_count, vec![validator_index_to_monitor]);
+    let harness1 = get_harness(validator_count, vec![]);
     harness1
         .extend_chain(
             initial_blocks as usize,
@@ -153,7 +150,7 @@ async fn produces_missed_blocks() {
     let mut prev_slot = Slot::new(idx - 1);
     let mut duplicate_block_root = *_state.block_roots().get(idx as usize).unwrap();
     let mut validator_indexes = _state.get_beacon_proposer_indices(&harness1.spec).unwrap();
-    let mut validator_index = validator_indexes[slot_in_epoch.as_usize()];
+    let mut missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
     let mut proposer_shuffling_decision_root = _state
         .proposer_shuffling_decision_root(duplicate_block_root)
         .unwrap();
@@ -170,7 +167,7 @@ async fn produces_missed_blocks() {
         beacon_proposer_cache.lock().insert(
             epoch,
             proposer_shuffling_decision_root,
-            validator_indexes.into_iter().collect::<Vec<usize>>(),
+            validator_indexes,
             _state.fork()
         ),
         Ok(())
@@ -187,12 +184,15 @@ async fn produces_missed_blocks() {
         // Let's validate the state which will call the function responsible for
         // adding the missed blocks to the validator monitor
         let mut validator_monitor = harness1.chain.validator_monitor.write();
+
+        validator_monitor.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
         validator_monitor.process_valid_state(nb_epoch_to_simulate, _state, &harness1.chain.spec);

         // We should have one entry in the missed blocks map
         assert_eq!(
-            validator_monitor.get_monitored_validator_missed_block_count(validator_index as u64),
-            1
+            validator_monitor
+                .get_monitored_validator_missed_block_count(missed_block_proposer as u64),
+            1,
         );
     }

@@ -201,23 +201,7 @@ async fn produces_missed_blocks() {
     // Missed block happens when slot and prev_slot are not in the same epoch
     // making sure that the cache reloads when the epoch changes
     // in that scenario the slot that missed a block is the first slot of the epoch
-    // We are adding other validators to monitor as these ones will miss a block depending on
-    // the fork name specified when running the test as the proposer cache differs depending on
-    // the fork name (cf. seed)
-    //
-    // If you are adding a new fork and seeing errors, print
-    // `validator_indexes[slot_in_epoch.as_usize()]` and add it below.
-    let validator_index_to_monitor = match harness1.spec.fork_name_at_slot::<E>(Slot::new(0)) {
-        ForkName::Base => 7,
-        ForkName::Altair => 2,
-        ForkName::Bellatrix => 4,
-        ForkName::Capella => 11,
-        ForkName::Deneb => 3,
-        ForkName::Electra => 1,
-        ForkName::Fulu => 6,
-    };
-
-    let harness2 = get_harness(validator_count, vec![validator_index_to_monitor]);
+    let harness2 = get_harness(validator_count, vec![]);
     let advance_slot_by = 9;
     harness2
         .extend_chain(
@@ -238,11 +222,7 @@ async fn produces_missed_blocks() {
     slot_in_epoch = slot % slots_per_epoch;
     duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
     validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
-    validator_index = validator_indexes[slot_in_epoch.as_usize()];
-    // If you are adding a new fork and seeing errors, it means the fork seed has changed the
-    // validator_index. Uncomment this line, run the test again and add the resulting index to the
-    // list above.
-    //eprintln!("new index which needs to be added => {:?}", validator_index);
+    missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];

     let beacon_proposer_cache = harness2
         .chain
@@ -256,7 +236,7 @@ async fn produces_missed_blocks() {
         beacon_proposer_cache.lock().insert(
             epoch,
             duplicate_block_root,
-            validator_indexes.into_iter().collect::<Vec<usize>>(),
+            validator_indexes.clone(),
             _state2.fork()
         ),
         Ok(())
@@ -271,30 +251,33 @@ async fn produces_missed_blocks() {
         // Let's validate the state which will call the function responsible for
         // adding the missed blocks to the validator monitor
        let mut validator_monitor2 = harness2.chain.validator_monitor.write();
+        validator_monitor2.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
         validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);
         // We should have one entry in the missed blocks map
         assert_eq!(
-            validator_monitor2.get_monitored_validator_missed_block_count(validator_index as u64),
+            validator_monitor2
+                .get_monitored_validator_missed_block_count(missed_block_proposer as u64),
             1
         );

         // 3rd scenario //
         //
         // A missed block happens but the validator is not monitored
         // it should not be flagged as a missed block
-        idx = initial_blocks + (advance_slot_by) - 7;
+        while validator_indexes[(idx % slots_per_epoch) as usize] == missed_block_proposer
+            && idx / slots_per_epoch == epoch.as_u64()
+        {
+            idx += 1;
+        }
         slot = Slot::new(idx);
         prev_slot = Slot::new(idx - 1);
         slot_in_epoch = slot % slots_per_epoch;
         duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
-        validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
-        let not_monitored_validator_index = validator_indexes[slot_in_epoch.as_usize()];
-        // This could do with a refactor: https://github.com/sigp/lighthouse/issues/6293
-        assert_ne!(
-            not_monitored_validator_index,
-            validator_index_to_monitor,
-            "this test has a fragile dependency on hardcoded indices. you need to tweak some settings or rewrite this"
-        );
+        let second_missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
+
+        // This test may fail if we can't find another distinct proposer in the same epoch.
+        // However, this should be vanishingly unlikely: P ~= (1/16)^32 = 2e-39.
+        assert_ne!(missed_block_proposer, second_missed_block_proposer);

         assert_eq!(
             _state2.set_block_root(prev_slot, duplicate_block_root),
@@ -306,10 +289,9 @@ async fn produces_missed_blocks() {
         validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);

         // We shouldn't have any entry in the missed blocks map
-        assert_ne!(validator_index, not_monitored_validator_index);
         assert_eq!(
             validator_monitor2
-                .get_monitored_validator_missed_block_count(not_monitored_validator_index as u64),
+                .get_monitored_validator_missed_block_count(second_missed_block_proposer as u64),
             0
         );
     }
@@ -318,7 +300,7 @@ async fn produces_missed_blocks() {
     //
     // A missed block happens at state.slot - LOG_SLOTS_PER_EPOCH
     // it shouldn't be flagged as a missed block
-    let harness3 = get_harness(validator_count, vec![validator_index_to_monitor]);
+    let harness3 = get_harness(validator_count, vec![]);
     harness3
         .extend_chain(
             slots_per_epoch as usize,
@@ -338,7 +320,7 @@ async fn produces_missed_blocks() {
     prev_slot = Slot::new(idx - 1);
     duplicate_block_root = *_state3.block_roots().get(idx as usize).unwrap();
     validator_indexes = _state3.get_beacon_proposer_indices(&harness3.spec).unwrap();
-    validator_index = validator_indexes[slot_in_epoch.as_usize()];
+    missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
     proposer_shuffling_decision_root = _state3
         .proposer_shuffling_decision_root_at_epoch(epoch, duplicate_block_root)
         .unwrap();
@@ -355,7 +337,7 @@ async fn produces_missed_blocks() {
         beacon_proposer_cache.lock().insert(
             epoch,
             proposer_shuffling_decision_root,
-            validator_indexes.into_iter().collect::<Vec<usize>>(),
+            validator_indexes,
             _state3.fork()
         ),
         Ok(())
@@ -372,11 +354,13 @@ async fn produces_missed_blocks() {
         // Let's validate the state which will call the function responsible for
         // adding the missed blocks to the validator monitor
         let mut validator_monitor3 = harness3.chain.validator_monitor.write();
+        validator_monitor3.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
         validator_monitor3.process_valid_state(epoch, _state3, &harness3.chain.spec);

         // We shouldn't have one entry in the missed blocks map
         assert_eq!(
-            validator_monitor3.get_monitored_validator_missed_block_count(validator_index as u64),
+            validator_monitor3
+                .get_monitored_validator_missed_block_count(missed_block_proposer as u64),
             0
         );
     }
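The rewrite above removes the hard-coded, fork-dependent "validator index to monitor" and instead derives the proposer of the target slot from the state's proposer shuffling, registering exactly that validator with the monitor via add_validator_pubkey. For the third scenario it scans forward within the epoch until it finds a slot whose proposer differs from the monitored one. Below is a standalone sketch of that scan with plain integer types; it illustrates the logic and is not the test's actual code.

// Advance `idx` until its slot's proposer differs from `monitored`, without leaving `epoch`.
// Mirrors the `while` loop added in the diff above.
fn find_slot_with_other_proposer(
    proposer_indices: &[usize], // one proposer index per slot of the epoch
    mut idx: u64,
    slots_per_epoch: u64,
    epoch: u64,
    monitored: usize,
) -> u64 {
    while proposer_indices[(idx % slots_per_epoch) as usize] == monitored
        && idx / slots_per_epoch == epoch
    {
        idx += 1;
    }
    idx
}

fn main() {
    // Epoch 1 of a toy chain with 4 slots per epoch; slots 4..8 have these proposers.
    let proposers = [3usize, 3, 5, 7];
    assert_eq!(find_slot_with_other_proposer(&proposers, 4, 4, 1, 3), 6);
}

As the new in-test comment notes, with 16 validators the chance that every candidate slot in the epoch has the same proposer is about (1/16)^32, roughly 2e-39, so the scan effectively always finds a distinct proposer.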

beacon_node/execution_layer/src/engine_api/new_payload_request.rs

Lines changed: 5 additions & 0 deletions
@@ -128,6 +128,11 @@ impl<'block, E: EthSpec> NewPayloadRequest<'block, E> {

         let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_VERIFY_BLOCK_HASH);

+        // Check that no transactions in the payload are zero length
+        if payload.transactions().iter().any(|slice| slice.is_empty()) {
+            return Err(Error::ZeroLengthTransaction);
+        }
+
         let (header_hash, rlp_transactions_root) = calculate_execution_block_hash(
             payload,
             parent_beacon_block_root,
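This hunk implements the "Synchronously check all transactions to have non-zero length" item from the commit message: a payload whose transaction list contains an empty byte string is rejected with the new Error::ZeroLengthTransaction before the block hash is verified. A minimal standalone sketch of the same rule, using plain byte slices instead of Lighthouse's payload types:

// An RLP-encoded transaction is never empty, so an empty byte string in the
// transactions list marks the payload as invalid.
fn has_zero_length_transaction(transactions: &[&[u8]]) -> bool {
    transactions.iter().any(|tx| tx.is_empty())
}

fn main() {
    let valid_tx: &[u8] = &[0x02, 0xf8, 0x6f]; // arbitrary non-empty bytes
    let empty_tx: &[u8] = &[];
    assert!(has_zero_length_transaction(&[valid_tx, empty_tx]));
    assert!(!has_zero_length_transaction(&[valid_tx]));
}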

beacon_node/execution_layer/src/lib.rs

Lines changed: 1 addition & 0 deletions
@@ -157,6 +157,7 @@ pub enum Error {
         payload: ExecutionBlockHash,
         transactions_root: Hash256,
     },
+    ZeroLengthTransaction,
     PayloadBodiesByRangeNotSupported,
     InvalidJWTSecret(String),
     InvalidForkForPayload,

beacon_node/lighthouse_network/src/rpc/protocol.rs

Lines changed: 1 addition & 1 deletion
@@ -710,7 +710,7 @@ pub fn rpc_blob_limits<E: EthSpec>() -> RpcLimits {
     }
 }

-// TODO(peerdas): fix hardcoded max here
+// TODO(das): fix hardcoded max here
 pub fn rpc_data_column_limits<E: EthSpec>(fork_name: ForkName) -> RpcLimits {
     RpcLimits::new(
         DataColumnSidecar::<E>::empty().as_ssz_bytes().len(),

beacon_node/network/src/sync/tests/range.rs

Lines changed: 39 additions & 14 deletions
@@ -4,12 +4,15 @@ use crate::sync::manager::SLOT_IMPORT_TOLERANCE;
 use crate::sync::range_sync::RangeSyncType;
 use crate::sync::SyncMessage;
 use beacon_chain::test_utils::{AttestationStrategy, BlockStrategy};
-use beacon_chain::EngineState;
+use beacon_chain::{block_verification_types::RpcBlock, EngineState, NotifyExecutionLayer};
 use lighthouse_network::rpc::{RequestType, StatusMessage};
 use lighthouse_network::service::api_types::{AppRequestId, Id, SyncRequestId};
 use lighthouse_network::{PeerId, SyncInfo};
 use std::time::Duration;
-use types::{EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock, Slot};
+use types::{
+    BlobSidecarList, BlockImportSource, EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock,
+    SignedBeaconBlockHash, Slot,
+};

 const D: Duration = Duration::new(0, 0);

@@ -154,7 +157,9 @@ impl TestRig {
         }
     }

-    async fn create_canonical_block(&mut self) -> SignedBeaconBlock<E> {
+    async fn create_canonical_block(
+        &mut self,
+    ) -> (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>) {
         self.harness.advance_slot();

         let block_root = self
@@ -165,19 +170,39 @@ impl TestRig {
             AttestationStrategy::AllValidators,
         )
         .await;
-        self.harness
-            .chain
-            .store
-            .get_full_block(&block_root)
-            .unwrap()
-            .unwrap()
+        // TODO(das): this does not handle data columns yet
+        let store = &self.harness.chain.store;
+        let block = store.get_full_block(&block_root).unwrap().unwrap();
+        let blobs = if block.fork_name_unchecked().deneb_enabled() {
+            store.get_blobs(&block_root).unwrap().blobs()
+        } else {
+            None
+        };
+        (block, blobs)
     }

-    async fn remember_block(&mut self, block: SignedBeaconBlock<E>) {
-        self.harness
-            .process_block(block.slot(), block.canonical_root(), (block.into(), None))
+    async fn remember_block(
+        &mut self,
+        (block, blob_sidecars): (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>),
+    ) {
+        // This code is kind of duplicated from Harness::process_block, but takes sidecars directly.
+        let block_root = block.canonical_root();
+        self.harness.set_current_slot(block.slot());
+        let _: SignedBeaconBlockHash = self
+            .harness
+            .chain
+            .process_block(
+                block_root,
+                RpcBlock::new(Some(block_root), block.into(), blob_sidecars).unwrap(),
+                NotifyExecutionLayer::Yes,
+                BlockImportSource::RangeSync,
+                || Ok(()),
+            )
             .await
+            .unwrap()
+            .try_into()
             .unwrap();
+        self.harness.chain.recompute_head_at_current_slot().await;
     }
 }

@@ -217,9 +242,9 @@ async fn state_update_while_purging() {
     // Need to create blocks that can be inserted into the fork-choice and fit the "known
     // conditions" below.
     let head_peer_block = rig_2.create_canonical_block().await;
-    let head_peer_root = head_peer_block.canonical_root();
+    let head_peer_root = head_peer_block.0.canonical_root();
     let finalized_peer_block = rig_2.create_canonical_block().await;
-    let finalized_peer_root = finalized_peer_block.canonical_root();
+    let finalized_peer_root = finalized_peer_block.0.canonical_root();

     // Get a peer with an advanced head
     let head_peer = rig.add_head_peer_with_root(head_peer_root);
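With these changes the sync test helper returns the block together with its blob sidecars (None before Deneb) and re-imports the pair through the block-processing pipeline as a single RpcBlock, so range-sync tests exercise the blob path instead of dropping sidecars. The compact sketch below shows that shape with stand-in types; it is an illustration of the design, not the rig's real API.

// `Block` and `Blobs` stand in for SignedBeaconBlock and BlobSidecarList.
struct Block {
    deneb_enabled: bool,
}
type Blobs = Vec<Vec<u8>>;

// Return the block together with its sidecars so callers cannot forget them.
fn create_canonical_block(block: Block, stored_blobs: Blobs) -> (Block, Option<Blobs>) {
    let blobs = if block.deneb_enabled {
        Some(stored_blobs)
    } else {
        None
    };
    (block, blobs)
}

// Accept the pair as a unit; the real helper feeds both pieces into block processing
// and then recomputes the head, which this sketch only stands in for.
fn remember_block((block, blobs): (Block, Option<Blobs>)) {
    let _ = (block, blobs);
}

fn main() {
    let pair = create_canonical_block(Block { deneb_enabled: true }, vec![vec![0u8; 4]]);
    remember_block(pair);
}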
