
Daemon RPC: add max_block_count field to /getblocks.bin [v0.18] #9901


Open · @j-berman wants to merge 1 commit into release-v0.18

Conversation


@selsta added the daemon label on Apr 11, 2025
@selsta (Collaborator) commented Apr 11, 2025

What is the use case for having this and the other things in the release branch?

@j-berman (Collaborator, Author) commented Apr 11, 2025

For this PR in particular:

  1. The async scanner (or anyone developing something similar) could be released at a later time, pointing to a daemon already running this change. The more daemons running this change, the more likely it would be for wallets to connect to a daemon that supports the async scanner's improved scanning UX.

  2. Wallet devs can tweak params in their wallets to fetch a smaller number of blocks if they want shorter intervals for scanning progress feedback to users today. @sethforprivacy has mentioned an interest in this for Cake Wallet in the past.

The first point also applies to the other PRs (#9902, #9903).

The second applies only to this PR.

At this rate with FCMP++, I don't know if I will be able to complete the async scanner in time for a release before the fork, so I don't have a strong opinion on merging the other PRs before the fork. This PR, though, I think has a reasonable case for being included sooner.
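
To make the second point concrete, here is a minimal sketch of what a capped request could look like from the wallet side. Only max_block_count comes from this PR; the struct name, the other fields, and the example height are illustrative, not the exact monerod definitions.

```cpp
// Hedged sketch: a wallet that wants shorter scanning-progress intervals can
// cap each /getblocks.bin request at a smaller block count. Only
// max_block_count is from this PR; everything else here is illustrative.
#include <cstdint>
#include <list>
#include <string>

struct getblocks_request_sketch
{
  std::list<std::string> block_ids;     // short chain history so the daemon can find the fork point
  uint64_t start_height    = 0;         // or an explicit start height
  bool     prune           = true;      // request pruned transaction data
  uint64_t max_block_count = 0;         // new field; 0 = fall back to the daemon's default cap
};

int main()
{
  getblocks_request_sketch req;
  req.start_height    = 3000000;        // arbitrary example height
  req.max_block_count = 100;            // e.g. ~100 blocks per round trip for frequent progress updates
  // serialize with epee and POST to /getblocks.bin (omitted here)
  return 0;
}
```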

@spirobel commented:

This is very useful, please merge. Knowing exactly how many blocks will be returned also means we don't need to parse the epee response before sending the next request.

It means we can decouple the fetching from the response processing and send requests in parallel, potentially to multiple nodes, in case one node throttles or times out.

It would be nice if the response was completely deterministic so clients could try different nodes and compare hashes (of the complete response) and detect if a node acts sus.
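
As a rough illustration of the decoupling idea, here is a sketch of pre-planning request start heights, under the assumption that each request is capped at max_block_count blocks (which, as the next comment notes, is not always exact). Function and variable names are illustrative.

```cpp
// Sketch: with a known per-request cap, start heights for a whole sync range
// can be planned up front, so each request can be dispatched independently
// (and in parallel, possibly to different nodes) without parsing the previous
// response first. Names are illustrative.
#include <cstdint>
#include <vector>

std::vector<uint64_t> plan_request_starts(uint64_t start_height,
                                          uint64_t target_height,
                                          uint64_t max_block_count)
{
  std::vector<uint64_t> starts;
  for (uint64_t h = start_height; h < target_height; h += max_block_count)
    starts.push_back(h);  // each entry becomes one /getblocks.bin request
  return starts;
}
```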

@j-berman (Collaborator, Author) commented:

> It would be nice if the response was completely deterministic so clients could try different nodes and compare hashes (of the complete response) and detect if a node acts sus.

Agreed, this would be nice.

> knowing exactly how many blocks will be returned

Worth pointing out that technically it doesn't enable this, since it's possible to request a starting block > tip - max_block_count, or for the other limits on the response to trip before max_block_count is reached (max tx count or max byte size).

One idea I had in the past is to re-structure to support fetching byte ranges instead of using blocks. Such an approach could enable the smoothest download experience, but would be a pretty hefty lift and likely warrant a new endpoint.
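
To spell out the caveat, here is a sketch of why the returned count can fall short of max_block_count even when the field is honored. All names and the specific limits are illustrative, not monerod's actual handler.

```cpp
// Sketch: the response is also bounded by the remaining chain length and by
// daemon-side size/transaction limits, so fewer than max_block_count blocks
// may come back. Illustrative logic, not the actual /getblocks.bin handler.
#include <algorithm>
#include <cstdint>
#include <vector>

struct limits_sketch
{
  uint64_t chain_height;     // tip height + 1
  uint64_t max_block_count;  // client-requested cap from this PR
  uint64_t max_bytes;        // daemon-side response size limit
  uint64_t max_txs;          // daemon-side transaction count limit
};

uint64_t blocks_returned(const limits_sketch& l, uint64_t start_height,
                         const std::vector<uint64_t>& block_bytes,
                         const std::vector<uint64_t>& block_txs)
{
  if (start_height >= l.chain_height)
    return 0;
  const uint64_t cap = std::min(l.chain_height - start_height, l.max_block_count);
  uint64_t count = 0, bytes = 0, txs = 0;
  for (uint64_t i = 0; i < cap && i < block_bytes.size(); ++i)
  {
    bytes += block_bytes[i];
    txs   += block_txs[i];
    if (count > 0 && (bytes > l.max_bytes || txs > l.max_txs))
      break;  // size or tx limits can trip before max_block_count is reached
    ++count;
  }
  return count;  // may be < max_block_count near the tip or with large blocks
}
```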

@spirobel commented:

> One idea I had in the past is to re-structure to support fetching byte ranges instead of using blocks. Such an approach could enable the smoothest download experience, but would be a pretty hefty lift and likely warrant a new endpoint.

How about we take the output of getblocks.bin for every 100 blocks and hash it? Then we serve the hashes from a new endpoint together with a link to where to fetch them. This could be the node itself, S3-compatible storage like Storj, IPFS (or a torrent magnet?).

It is still not quite like byte ranges, and 100 might not even be the right size, but as long as the package is roughly a few MB, the download experience will still be smooth.

Having deterministic chunks of a few MB that can be offloaded to blob storage and cross-checked with nodes is useful. S3 supports byte-range fetches, so technically by using it you would get that as a side effect (and there are compatible open-source alternatives like SeaweedFS and MinIO).

The caveat mentioned about max_block_count is true, but practically it won't be a big problem if the size is picked reasonably. It's definitely a step up from the roughly 100 MB that comes back when the max byte size is reached.
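
A minimal sketch of the chunk-hash idea, assuming the serialized /getblocks.bin response bytes for each fixed 100-block range are already available as blobs. SHA-256 stands in for whatever hash would actually be standardized, and nothing here is an existing endpoint.

```cpp
// Sketch: hash each deterministic ~100-block chunk of serialized
// /getblocks.bin output so clients can cross-check downloads from blob
// storage (S3/Storj/IPFS) against digests served by a node. The chunking
// convention and hash choice are assumptions.
#include <string>
#include <vector>
#include <openssl/sha.h>

// chunk_blobs[i] holds the serialized response bytes for blocks
// [i * 100, (i + 1) * 100) of the range being published.
std::vector<std::string> chunk_hashes(const std::vector<std::string>& chunk_blobs)
{
  std::vector<std::string> digests;
  for (const std::string& blob : chunk_blobs)
  {
    unsigned char out[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>(blob.data()), blob.size(), out);
    digests.emplace_back(reinterpret_cast<const char*>(out), SHA256_DIGEST_LENGTH);
  }
  return digests;  // publish alongside a link to where the chunks are hosted
}
```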
