feat: `FallbackLayer` transport #2135
Conversation
Mostly questions. I think overall this makes sense and I was able to follow, but I have some concerns about the `make_request` impl: if there are multiple concurrent requests, could this currently result in an empty-transports error?
```rust
let mut transports = self.transports.lock().expect("Lock poisoned");
for _ in 0..self.active_transport_count.min(transports.len()) {
    if let Some(transport) = transports.pop() {
        top_transports.push(transport);
        // … (excerpt from the diff)
```
Not exactly sure I fully understood this, but it seems like this pops the transports and adds them back on success?
What happens if there are no available transports?
Yes, I've updated this to use a `Vec` instead of a `BinaryHeap` and re-sort it whenever a score changes. This way I don't need to take the transports from the list on each request anymore.
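For illustration only, here is a minimal sketch of that approach, assuming hypothetical `ScoredTransport`/`FallbackState` types and a plain `Mutex` (this is not the PR's actual code): the list stays in place, requests only clone the current top entries, and a score change triggers a re-sort.

```rust
use std::sync::{Arc, Mutex};

/// Hypothetical wrapper pairing a transport with its running score
/// (in the PR the score is derived from latency and success-rate metrics).
struct ScoredTransport<S> {
    id: usize,
    transport: S,
    score: f64,
}

/// Shared state: the full transport list stays in place; requests only
/// clone the current top entries instead of popping them.
struct FallbackState<S> {
    transports: Arc<Mutex<Vec<ScoredTransport<S>>>>,
    active_transport_count: usize,
}

impl<S: Clone> FallbackState<S> {
    /// Snapshot the top-ranked transports without removing them, so
    /// concurrent requests can never observe an empty list.
    fn top_transports(&self) -> Vec<S> {
        let transports = self.transports.lock().expect("lock poisoned");
        transports
            .iter()
            .take(self.active_transport_count.min(transports.len()))
            .map(|entry| entry.transport.clone())
            .collect()
    }

    /// Record a new score for one transport and re-sort best-first.
    fn update_score(&self, id: usize, new_score: f64) {
        let mut transports = self.transports.lock().expect("lock poisoned");
        if let Some(entry) = transports.iter_mut().find(|entry| entry.id == id) {
            entry.score = new_score;
        }
        transports.sort_by(|a, b| b.score.total_cmp(&a.score));
    }
}
```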
Hi @mattsse, can I have another round of review please?
a few more suggestions
lgtm!
* feat: initial impl of fallback layer and service
* chore: released trait bounds
* chore: use random ping id
* chore: update fn name
* chore: update fn name
* chore: rm health check task and sample count
* fix: lock scope
* chore: adjust logs
* chore: small cleanup
* chore: cargo fmt
* chore: cargo fmt again
* chore: addressed review, dont take transports from the list on each request
* chore: rm private doc
* chore: update mod doc comment ordering
* chore: addressed review

Co-authored-by: Jennifer <[email protected]>
Motivation
Having a provider consume a list of transports can increase both availability and performance, which are desirable for many applications that need a connection to Ethereum.
Several issues, such as #1938 and #1760, outline a feature like this for Alloy.
Other libraries such as Viem and Ethers.js also offer something similar.
Solution
This PR implements `FallbackLayer` and `FallbackService`, following the usual `tower` abstractions. Most notably, the layer here works with a `Vec<S>` because this service is meant for managing a list of transports.

When sending a request to a provider using this layer, it queries the top-ranked `N` transports (configurable with the `active_transport_count` option) concurrently, returning the first successful result. Whenever a result arrives, metrics are updated to keep track of the performance of each transport, using a simple combination of average latency and success rate.

I tried to keep this simple, but there is potential for future work if someone finds it useful.
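As a rough sketch of that dispatch pattern (not the PR's actual `FallbackService`; the `Attempt` alias and `String` error type are simplifying assumptions), the fan-out can be expressed with `futures::future::select_ok`, which resolves with the first success and only errors once every attempt has failed:

```rust
use futures::future::select_ok;
use std::future::Future;
use std::pin::Pin;

/// Simplified error type standing in for the real transport error.
type TransportError = String;

/// One in-flight attempt against a single transport, boxed so the
/// attempts can be raced as a homogeneous collection.
type Attempt<R> = Pin<Box<dyn Future<Output = Result<R, TransportError>> + Send>>;

/// Race the attempts and return the first successful response; an error is
/// only returned once every transport has failed.
async fn race_first_ok<R>(attempts: Vec<Attempt<R>>) -> Result<R, TransportError> {
    if attempts.is_empty() {
        return Err("no transports available".to_string());
    }
    // `select_ok` resolves with the first `Ok` and drops the remaining
    // in-flight futures; the caller would record latency and outcome for
    // each completed attempt to keep the per-transport scores current.
    let (response, _still_pending) = select_ok(attempts).await?;
    Ok(response)
}
```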
Usage
I've added an example usage binary here: merklefruit/alloy-examples#1. In short, here is how it works:
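A rough sketch of the intended wiring is below; every builder and constructor name in it (`FallbackLayer::new`, `with_active_transport_count`, the client/provider builders) is an assumption for illustration, not the confirmed API; see the linked example for the real usage.

```rust
// Sketch only: the calls named in the comments are assumed, not the
// confirmed Alloy API; see merklefruit/alloy-examples#1 for the real code.
fn main() {
    // A list of RPC endpoints; the fallback service ranks them at runtime
    // by observed latency and success rate.
    let endpoints = [
        "https://rpc-a.example.com",
        "https://rpc-b.example.com",
        "https://rpc-c.example.com",
    ];

    // Assumed shape of the wiring (names hypothetical):
    //   let layer = FallbackLayer::new(transports_for(&endpoints))
    //       .with_active_transport_count(2);
    //   let client = ClientBuilder::default().layer(layer).http(url);
    //   let provider = ProviderBuilder::new().on_client(client);
    //
    // Each request is then raced across the top-ranked 2 transports and the
    // first successful response is returned.
    let _ = endpoints;
}
```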
PR Checklist
Tested on a live example, as I couldn't add comprehensive unit tests without running into cyclic dependencies in alloy itself.