Daemon RPC: add max_block_count field to /getblocks.bin [v0.18] #9901
base: release-v0.18
Conversation
What is the use case for having this and the other things in the release branch?
For this PR in particular:
The first applies to the other PRs also (#9902, #9903). The second only applies to this PR. At this rate with FCMP++, I don't know if I will be able to complete the async scanner in time for a release before the fork. So, I don't have a strong opinion on merging the other PRs before the fork. This PR, I think, has a reasonable case to be included sooner though.
This is very useful, please merge. Knowing exactly how many blocks will be returned also means we don't need to parse the epee response before sending the next request: we can decouple the fetching from the response processing and send requests in parallel to potentially multiple nodes if one node throttles or times out. It would also be nice if the response were completely deterministic, so clients could try different nodes, compare hashes (of the complete response), and detect if a node acts sus.
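Roughly, that decoupling could look like the sketch below on the client side. To be clear, this is just an illustration: the node URLs, the `build_getblocks_body` helper, and the exact field names are assumptions, and the real request body would be epee portable-storage, which is elided here.

```python
import concurrent.futures
import hashlib

import requests

NODES = ["http://node-a:18081", "http://node-b:18081"]  # hypothetical node URLs
CHUNK = 100  # assumed value sent as max_block_count


def build_getblocks_body(start_height: int, max_block_count: int) -> bytes:
    """Hypothetical helper: a real client must serialize an epee
    portable-storage request body here; the encoding is elided."""
    raise NotImplementedError


def fetch_chunk(node: str, start_height: int) -> bytes:
    """Fetch exactly CHUNK blocks starting at start_height from one node."""
    body = build_getblocks_body(start_height, CHUNK)
    resp = requests.post(f"{node}/getblocks.bin", data=body, timeout=30)
    resp.raise_for_status()
    return resp.content


def fetch_range(node: str, start_height: int, n_chunks: int) -> list:
    """Because each response covers a known number of blocks, all request
    heights can be computed up front and issued concurrently, without
    parsing one response before sending the next request."""
    heights = [start_height + i * CHUNK for i in range(n_chunks)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda h: fetch_chunk(node, h), heights))


def cross_check(start_height: int) -> bool:
    """If responses were fully deterministic, the raw-response hashes from
    different nodes for the same range would match."""
    digests = {hashlib.sha256(fetch_chunk(n, start_height)).hexdigest()
               for n in NODES}
    return len(digests) == 1
```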
Agree this would be nice
Worth pointing out that technically it doesn't enable this, since it's possible to request a

One idea I had in the past is to re-structure to support fetching byte ranges instead of blocks. Such an approach could enable the smoothest download experience, but would be a pretty hefty lift and likely warrant a new endpoint.
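For reference, a byte-range fetch over plain HTTP could look something like this minimal sketch; the blob URL is purely hypothetical, since no such daemon endpoint exists today.

```python
import requests

BLOB_URL = "https://example.org/monero-chain.bin"  # hypothetical flat chain export


def fetch_bytes(offset: int, length: int) -> bytes:
    """Fetch [offset, offset + length) using a standard HTTP Range request."""
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(BLOB_URL, headers=headers, timeout=30)
    resp.raise_for_status()  # a range-aware server replies 206 Partial Content
    return resp.content
```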
How about we take the output of getblocks.bin for every 100 blocks and hash it. Then we serve the hashes from a new endpoint, together with a link to where to fetch them. This could be the node itself, S3-compatible storage like Storj, or IPFS (a torrent magnet?).

It is still not quite like byte ranges, and 100 might not even be the right size. But as long as the package is roughly a few MB, the download experience will still be smooth. Having deterministic chunks of a few MB that can be offloaded to blob storage and cross-checked with nodes is useful.

S3 supports byte-range fetches, so technically by using it you would get that as a side effect (and there are compatible open-source alternatives like SeaweedFS or MinIO).

The caveat mentioned about
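Client-side, the cross-check could be as simple as the sketch below; the manifest shape and the endpoint serving it are assumptions, and SHA-256 just stands in for whatever hash the node would actually serve.

```python
import hashlib

import requests

CHUNK_BLOCKS = 100  # chunk size floated above; might not be the right number


def chunk_digest(chunk_bytes: bytes) -> str:
    """Hash the raw getblocks.bin output for one chunk of blocks."""
    return hashlib.sha256(chunk_bytes).hexdigest()


def verify_chunk(manifest_entry: dict) -> bool:
    """Download a chunk from wherever the manifest points (node, S3, IPFS, ...)
    and cross-check it against the node-served hash.

    manifest_entry is assumed to look like:
        {"start_height": 3000000, "url": "https://...", "sha256": "..."}
    """
    resp = requests.get(manifest_entry["url"], timeout=30)
    resp.raise_for_status()
    return chunk_digest(resp.content) == manifest_entry["sha256"]
```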
#9381