
feat: Implement RelayDatagramsQueue #2998


Merged 4 commits into main on Dec 4, 2024

Conversation

@matheus23 (Member) commented Dec 3, 2024

Based on #2986

Description

Replaces RelayRecvReceiver and RelayRecvSender with the (cloneable & shared) RelayDatagramsQueue.
This queue wraps a ConcurrentQueue (from smol's concurrent-queue crate) and an AtomicWaker.
It should only be polled from one task: if polled from multiple tasks, the tasks will overwrite each other's wakers.

Unfortunately we can't make poll_recv take &mut self, because quinn expects AsyncUdpSocket's poll_recv interface to take &self.
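
For illustration only, here is a minimal sketch of what such a queue can look like, assuming smol's concurrent-queue crate and the futures AtomicWaker; the names and details are assumptions, not necessarily the actual code in this PR:

```rust
use std::task::{Context, Poll};

use concurrent_queue::ConcurrentQueue;
use futures::task::AtomicWaker; // the standalone atomic-waker crate works too

/// Sketch of a shared datagram queue: any number of producers,
/// exactly one consumer task. (Hypothetical names throughout.)
struct RelayDatagramsQueue<T> {
    queue: ConcurrentQueue<T>,
    waker: AtomicWaker,
}

impl<T> RelayDatagramsQueue<T> {
    fn new() -> Self {
        Self {
            queue: ConcurrentQueue::unbounded(),
            waker: AtomicWaker::new(),
        }
    }

    /// Producers push an item and wake the consumer.
    fn try_send(&self, item: T) -> Result<(), concurrent_queue::PushError<T>> {
        self.queue.push(item)?;
        self.waker.wake();
        Ok(())
    }

    /// Takes `&self`, matching quinn's `AsyncUdpSocket::poll_recv`.
    /// Must only be polled from one task: a second poller would
    /// overwrite the first task's waker in `register`.
    fn poll_recv(&self, cx: &mut Context<'_>) -> Poll<T> {
        if let Ok(item) = self.queue.pop() {
            // Returning Ready: clear any stale registration (see the
            // review discussion below about spurious wakeups).
            self.waker.take();
            return Poll::Ready(item);
        }
        // Register first, then pop again, so a push that races with
        // the registration cannot be lost.
        self.waker.register(cx.waker());
        match self.queue.pop() {
            Ok(item) => {
                self.waker.take();
                Poll::Ready(item)
            }
            Err(_) => Poll::Pending,
        }
    }
}
```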

This (un)fortunately doesn't have an effect on performance for me.

(The benchmark is completely broken half the time for some reason, but when it runs it produces normal numbers:)

$ DEV_RELAY_ONLY=true cargo run -p iroh-net-bench --release --features=local-relay -- iroh --with-relay --download-size=100M
      │  Throughput   │ Duration 
──────┼───────────────┼──────────
 AVG  │   55.05 MiB/s │     1.82s
 P0   │   55.03 MiB/s │     1.82s
 P10  │   55.06 MiB/s │     1.82s
 P50  │   55.06 MiB/s │     1.82s
 P90  │   55.06 MiB/s │     1.82s
 P100 │   55.06 MiB/s │     1.82s

The PR this is based on shows basically exactly the same times.

Breaking Changes

Notes & open questions

Todo

  • Rename variables to e.g. relay_datagrams_queue instead of relay_recv_sender or relay_recv_channel etc.
  • Add documentation about multiple tasks polling, etc.

Change checklist

  • Self-review.
  • Documentation updates following the style guide, if relevant.
  • Tests if relevant.
  • All breaking changes documented.

@matheus23 matheus23 self-assigned this Dec 3, 2024

github-actions bot commented Dec 3, 2024

Documentation for this PR has been generated and is available at: https://n0-computer.github.io/iroh/pr/2998/docs/iroh/

Last updated: 2024-12-04T14:15:20Z


github-actions bot commented Dec 3, 2024

Netsim report & logs for this PR have been generated and are available at: LOGS
This report will remain available for 3 days.

Last updated for commit: b8d633f


match self.queue.pop() {
    Ok(value) => {
        // Returning Ready below: drop any registered waker, since a
        // wakeup for this item is no longer needed.
        self.waker.take();
flub (Contributor):

FWIW, the Flag example in the AtomicWaker docs doesn't bother removing the waker when it is no longer needed. I really don't know anymore how much this matters. It depends on what spurious wakeups might happen, or how concurrently this is called, I guess.

This should be called reasonably non-concurrently IIUC, but then I don't understand why things weren't waking up in the broken version.

matheus23 (Member Author):

Well, yeah, not removing the waker here means you're generating another spurious wakeup AFAIU. Not terrible, but also not great.

flub (Contributor):

Will it result in spurious wakeups though? You're yielding an item instead of Pending, so the task does not suspend, and the runtime might(?)/should(?) now drop this waker and create a new one for the next poll?

flub (Contributor):

Of course, custom poll functions could mean some other poll decides to still return Pending for other reasons. In that case you'll get spurious wakeups, I guess.

matheus23 (Member Author):

Yes, it's exactly the case you describe. If another poll fn decides to return Pending for other reasons, then there's no need to wake up the future when another item lands in the queue. The only reason to wake up the future is the other poll fn's own reasons.
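
To make that concrete, here is a small self-contained experiment (hypothetical scaffolding, not code from this PR) showing that a registration left behind on the Ready path turns the next wake() into an extra wakeup, while take() prevents it:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::task::{Wake, Waker};

use futures::task::AtomicWaker;

// Hypothetical test scaffolding: a waker that counts how often it fires.
struct CountingWaker(AtomicUsize);

impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let counter = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let waker = Waker::from(counter.clone());
    let atomic = AtomicWaker::new();

    // Consumer registered, then returned Ready *without* take(): the
    // registration stays behind, so the next producer-side wake()
    // fires a wakeup the consumer no longer needs.
    atomic.register(&waker);
    atomic.wake();
    assert_eq!(counter.0.load(Ordering::SeqCst), 1);

    // With take() on the Ready path the registration is cleared,
    // and the same wake() reaches nobody.
    atomic.register(&waker);
    let _ = atomic.take();
    atomic.wake();
    assert_eq!(counter.0.load(Ordering::SeqCst), 1); // unchanged
}
```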

Base automatically changed from flub/relay-recv-channel to main December 4, 2024 09:55
@matheus23 matheus23 force-pushed the matheus23/relay-datagrams-queue branch from edf5bb1 to 19a8f27 on December 4, 2024 14:12
@matheus23 matheus23 marked this pull request as ready for review December 4, 2024 14:23
@flub (Contributor) left a comment:

I'm a bit surprised that you joined the sender & receiver back into a single object, after complaining on my PR that they were like this. But... I don't mind.

Good naming though!

matheus23 (Member Author):

😂 Sorry about that.

It's not that I want the sender & receiver to be separate necessarily.
It's just that sticking both a tokio::mpsc::Sender and tokio::mpsc::Receiver into the same object reads weird to me.

@matheus23 matheus23 added this pull request to the merge queue Dec 4, 2024
Merged via the queue into main with commit b76500d Dec 4, 2024
30 checks passed
@matheus23 matheus23 deleted the matheus23/relay-datagrams-queue branch December 4, 2024 16:05
@flub (Contributor) commented Dec 4, 2024

> Good naming though!

I take this part back. This is only the RelayRecvDatagramsQueue. We'll probably want a send version too. No big deal though.

matheus23 (Member Author):

> > Good naming though!
>
> I take this part back. This is only the RelayRecvDatagramsQueue. We'll probably want a send version too. No big deal though.

What if we re-use the same implementation for the send side 👀 (and only rename the variables to e.g. relay_recv_queue, not the struct name)?
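
For illustration, a hypothetical sketch of that reuse, building on the sketch type from the description above (Datagram is a stand-in payload type, not from this PR):

```rust
type Datagram = Vec<u8>; // stand-in payload type

fn main() {
    // Same struct for both directions; only the binding names differ.
    let relay_recv_queue = RelayDatagramsQueue::<Datagram>::new();
    let relay_send_queue = RelayDatagramsQueue::<Datagram>::new();
    // Each queue gets exactly one consumer task calling poll_recv;
    // any number of producers may call try_send.
    let _ = (relay_recv_queue, relay_send_queue);
}
```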
