Avoid accidentally skipping blocks when switching to live sync #363


Open
wants to merge 3 commits into base: unstable

Conversation

@dknopik dknopik (Member) commented Jun 5, 2025

If a block arrives just as we switch from historic to live sync, we may miss it. This is highly unlikely, but possible.

This PR fetches logs for the range from the block after the last processed block up to the relevant block, instead of for the relevant block alone. In most cases this is equivalent, but it can't hurt to make sure.
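The intended range logic can be sketched as follows (a minimal standalone sketch with hypothetical names; the actual syncer code differs):

```rust
/// Sketch of the range fix (hypothetical helper, not the PR's code):
/// given the last block processed by historic sync and the newest block
/// we care about (the "relevant" block), return the inclusive range of
/// blocks whose logs live sync should fetch, or None if caught up.
fn blocks_to_fetch(last_processed_block: u64, relevant_block: u64) -> Option<(u64, u64)> {
    // Start one past the last processed block so it is not reprocessed,
    // but cover every block up to and including the relevant one.
    let from = last_processed_block + 1;
    if from > relevant_block {
        // The relevant block was already processed; nothing new to fetch.
        return None;
    }
    Some((from, relevant_block))
}

fn main() {
    // Normal case: no block slipped in during the switch.
    assert_eq!(blocks_to_fetch(99, 100), Some((100, 100)));
    // A block arrived during the switch: the range now covers it too.
    assert_eq!(blocks_to_fetch(98, 100), Some((99, 100)));
    // Already caught up: do not reprocess.
    assert_eq!(blocks_to_fetch(100, 100), None);
}
```

Passing only the relevant block, as before, corresponds to the first case and silently drops block 99 in the second.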

@diegomrsantos diegomrsantos requested a review from Copilot June 9, 2025 12:17
Copilot: This comment was marked as outdated.

@diegomrsantos diegomrsantos requested a review from Copilot June 23, 2025 20:06
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

This PR addresses a potential issue of skipping blocks during the transition from historic to live sync by adjusting the block range for fetching logs.

  • Introduces a variable (last_processed_block) to capture the last processed block.
  • Updates the fetch_logs call to start fetching from last_processed_block + 1 instead of the relevant block number.

@@ -661,7 +663,7 @@ impl SsvEventSyncer {

Copilot AI, Jun 23, 2025:

Ensure that adding 1 to the last_processed_block correctly handles the intended block range without causing an off-by-one error. Consider adding a comment to clarify why the increment is necessary.

Suggested change
// Fetch logs starting from the next block after the last processed block.
// The increment (+1) ensures we do not reprocess the last processed block.


@@ -641,7 +641,9 @@ impl SsvEventSyncer {

// If the relevant block was already processed, do not process it again. This can
Contributor:

Should this be updated? What does a relevant block mean?

dknopik (Member, Author):

No, this comment still applies.

The relevant block is the newest block we want to get. It is 8 blocks before the head of the chain.
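Lagging the head by a fixed distance can be sketched like this (the constant name and helper are hypothetical; only the distance of 8 blocks comes from the comment above):

```rust
/// Hypothetical sketch: the newest block we process (the "relevant"
/// block) trails the chain head by a fixed follow distance of 8 blocks.
const FOLLOW_DISTANCE: u64 = 8;

fn relevant_block(head: u64) -> u64 {
    // saturating_sub avoids underflow near genesis, where fewer than
    // 8 blocks exist behind the head.
    head.saturating_sub(FOLLOW_DISTANCE)
}

fn main() {
    assert_eq!(relevant_block(100), 92);
    assert_eq!(relevant_block(5), 0);
}
```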

@@ -661,7 +663,7 @@ impl SsvEventSyncer {

let mut logs = self
.fetch_logs(
relevant_block,
last_processed_block + 1,
Contributor:

Was passing relevant_block twice a bug?

dknopik (Member, Author):

Yes. Before, we were not guaranteed to fetch all logs: when switching from historic to live sync, a block may slip in, and passing relevant_block twice returns logs for that single block only. By requesting the range from the block after the last processed block up to the relevant block, we avoid this.

@dknopik dknopik added the ready-for-review This PR is ready to be reviewed label Jun 27, 2025
@dknopik dknopik requested a review from diegomrsantos July 2, 2025 09:30
Labels: database, ready-for-review (This PR is ready to be reviewed)

2 participants