Reliable data channel #737
```swift
private static let reliableLowThreshold: UInt64 = 2 * 1024 * 1024 // 2 MB
private static let lossyLowThreshold: UInt64 = reliableLowThreshold

// Keep at least N reliable packets while draining webrtc buffers.
// Should prevent losing packets - requesting retry from lastSeq that's already dequeued.
private static let reliableRetryCapacity = 32
```
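To illustrate how a low threshold like `reliableLowThreshold` typically gates sending, here is a minimal sketch. All names here (`ChannelGate`, `didBuffer`, `didDrain`, `canSend`) are illustrative assumptions, not the PR's actual API: sending pauses while the webrtc buffered amount sits above the threshold and resumes once it drains below.

```swift
import Foundation

// Hypothetical sketch: mirrors the reliableLowThreshold constant above and
// gates sends on a tracked buffered amount, the way a
// bufferedAmountLowThreshold-style callback would be used.
struct ChannelGate {
    static let reliableLowThreshold: UInt64 = 2 * 1024 * 1024 // 2 MB
    var bufferedAmount: UInt64 = 0

    // True once the channel has drained enough to resume sending.
    var canSend: Bool { bufferedAmount <= Self.reliableLowThreshold }

    mutating func didBuffer(_ bytes: UInt64) { bufferedAmount += bytes }
    mutating func didDrain(_ bytes: UInt64) {
        bufferedAmount = bufferedAmount >= bytes ? bufferedAmount - bytes : 0
    }
}
```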
This is kinda conservative, but during device testing I discovered that a gap in the history may occur: when we drain our buffer to 0, that doesn't mean the RTC part has sent everything yet... So drained != sent 😭
We could also keep a fixed-size queue of, let's say, 1 MB - the RAM sacrifice is negligible if we want real consistency here.
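A sketch of the "keep the last N packets" idea behind `reliableRetryCapacity` (the type and method names here are illustrative, not the PR's actual declarations): retain the most recent `capacity` packets even after webrtc reports them drained, so a retry request for an already-dequeued sequence can still be served.

```swift
// Hypothetical ring of recently sent packets, trimmed to a fixed capacity.
struct RetryRing {
    let capacity: Int
    private var packets: [(seq: UInt64, data: [UInt8])] = []

    init(capacity: Int) { self.capacity = capacity }

    mutating func append(seq: UInt64, data: [UInt8]) {
        packets.append((seq, data))
        // Drop the oldest entries once we exceed the retained capacity.
        if packets.count > capacity {
            packets.removeFirst(packets.count - capacity)
        }
    }

    // All retained packets at or after `fromSeq`, for replay on a retry request.
    func replay(fromSeq: UInt64) -> [(seq: UInt64, data: [UInt8])] {
        packets.filter { $0.seq >= fromSeq }
    }
}
```

If the retry asks for a sequence older than what the ring retains, the gap described above still occurs; that is the motivation for sizing the ring conservatively.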
in what test cases did this happen?
curious if you have repro steps so that we can ensure we don't have the same problem on JS
It happened with real n/w (real latency). Local server (used for unit tests) is so fast that the reconnection delay may not even interfere with the transfer, so it mostly checks the "duplicate logic".
To test it "live" I'm just sending a large packet indefinitely, firing a Quick reconnect in the meantime, then the server requests retry...

```swift
while !Task.isCancelled {
    try? await room.localParticipant.streamText(for: "asd").write(String(repeating: "a", count: 1024 * 100))
    try? await Task.sleep(100 * NSEC_PER_MSEC)
}
```
Maybe still specific to the webrtc internals or even iOS n/w stack, but worth checking 💣
@lukasIO here's an example for bigger test data 4f55e6f
However, keep in mind that:
- everything may depend on timing here, from `yield`ing the cooperative concurrency stuff to n/w delays, chunk size, etc.
- cli requests retry from 0 - looks like it does not respect the "receive state" part yet

So this is just an attempt to replicate the real-world condition.
looks good!
(force-pushed 5d99fc4 → 6c3b038, then ba7be42 → d6d72b1)
@cnderrauber @lukasIO for the record: I did a lot of semi-automated monkey-testing on iOS and macOS (like reconnecting every N s while streaming data). It turns out that it depends on how you test it, aka how hard you stress it. If you keep streaming continuously, you gotta buffer at least the full webrtc capacity (backpressure amount) + a few packets (I just did 2x backpressure); otherwise, you get a gap in the history. So in my case, it looks like what entered webrtc can be lost (as well as the in-flight data, those few packets). As I said, it may depend on everything: the webrtc stack, Apple's own POSIX impl, etc.
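The "2x backpressure" rule above can be sketched as a byte-budgeted retain buffer (all names here are illustrative assumptions, not the PR's code): retained packets are trimmed oldest-first once the total exceeds the budget, so the buffer always covers at least the full webrtc capacity plus the in-flight packets.

```swift
// Hypothetical buffer trimmed by a byte budget, e.g. 2x the webrtc
// backpressure amount, rather than by a packet count.
struct ByteBudgetBuffer {
    let budget: UInt64
    private(set) var totalBytes: UInt64 = 0
    private var chunks: [(seq: UInt64, bytes: UInt64)] = []

    init(budget: UInt64) { self.budget = budget }

    mutating func append(seq: UInt64, bytes: UInt64) {
        chunks.append((seq, bytes))
        totalBytes += bytes
        // Trim the oldest entries once over budget, but never below one chunk.
        while totalBytes > budget, chunks.count > 1 {
            totalBytes -= chunks.removeFirst().bytes
        }
    }

    // The earliest sequence a retry can still be served from.
    var oldestRetainedSeq: UInt64? { chunks.first?.seq }
}
```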
(force-pushed 5948773 → 98a206d)
Ports basic concepts from livekit/client-sdk-js#1546
The differences are mostly syntactic, due to how backpressure is handled via events in Swift. Thus, I decided to separate `SendBuffer`, which is tightly coupled to RTC, from `RetryBuffer`, which has slightly different semantics.
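The split could look roughly like this. Only the two type names come from the PR; the bodies below are an illustrative sketch (hence the `Sketch` suffix), not the actual implementation: one structure queues what is about to enter RTC, the other retains what has already been handed off, for replay on retry.

```swift
// Hypothetical sketch of the SendBuffer/RetryBuffer separation: queueing for
// RTC is kept apart from retaining already-sent packets for retry replay.
struct SendBufferSketch {
    private var queue: [(seq: UInt64, data: [UInt8])] = []

    init() {}

    mutating func enqueue(seq: UInt64, data: [UInt8]) { queue.append((seq, data)) }

    // Pops the next packet to hand off to the RTC data channel.
    mutating func dequeueForSend() -> (seq: UInt64, data: [UInt8])? {
        queue.isEmpty ? nil : queue.removeFirst()
    }
}

struct RetryBufferSketch {
    private var sent: [(seq: UInt64, data: [UInt8])] = []

    init() {}

    mutating func retain(seq: UInt64, data: [UInt8]) { sent.append((seq, data)) }

    // Replays everything at or after the sequence a retry asks for.
    func replay(fromSeq: UInt64) -> [(seq: UInt64, data: [UInt8])] {
        sent.filter { $0.seq >= fromSeq }
    }
}
```

Keeping the two apart means RTC backpressure events only ever touch the send side, while retry requests only ever read the retained side.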