Description
When I use the lk load-test CLI in UDP mux mode (port 7882), I’m unable to reach 3,000 participants with a single publisher. Once the subscriber count reaches around 2,200, errors start appearing: "could not dial signal connection."
I’ve tried increasing the ulimit and adjusting related kernel network settings to raise the system limits, but the issue persists.
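For reference, a minimal Go sketch (assuming Linux; not part of the lk tool, just an illustration) of reading and raising RLIMIT_NOFILE from inside a process, to confirm which limit the process actually inherits:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rlim syscall.Rlimit
	// The limit this process actually inherited (what `ulimit -n` resolved to).
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlim); err != nil {
		panic(err)
	}
	fmt.Printf("RLIMIT_NOFILE: soft=%d hard=%d\n", rlim.Cur, rlim.Max)

	// Raise the soft limit up to the hard cap; going past the hard cap needs
	// elevated privileges or a change to /etc/security/limits.conf.
	rlim.Cur = rlim.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rlim); err != nil {
		panic(err)
	}
}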
I suspect the root cause might be port exhaustion from too many UDP sockets being opened. When I ran the following command:
lk load-test --url wss://some-domain.com --api-key devkey --api-secret secret --room blah-room --video-publishers 1 --video-resolution low --num-per-second 50 --subscribers 1
and monitored open sockets with netstat -anp, I observed a large number of UDP sockets even though there were only two participants. The output looked like this:
udp 0 0 0.0.0.0:58351 0.0.0.0:* 112424/lk
udp 0 0 172.17.0.1:43664 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:60334 0.0.0.0:* 112424/lk
udp 0 0 172.17.0.1:60700 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:44340 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:44789 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:45743 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:45794 0.0.0.0:* 112424/lk
udp 0 0 172.31.31.181:46976 0.0.0.0:* 112424/lk
udp 0 0 172.18.0.1:47424 0.0.0.0:* 112424/lk
udp 0 0 172.31.31.181:47732 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:33310 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:49716 0.0.0.0:* 112424/lk
udp 0 0 172.17.0.1:33593 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:33724 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:34506 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:51999 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:35877 0.0.0.0:* 112424/lk
udp 0 0 172.31.31.181:35878 0.0.0.0:* 112424/lk
udp 0 0 172.18.0.1:35916 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:52497 0.0.0.0:* 112424/lk
udp 0 0 172.17.0.1:53627 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:37289 0.0.0.0:* 112424/lk
udp 0 0 172.18.0.1:37371 0.0.0.0:* 112424/lk
udp 0 0 172.18.0.1:37910 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:5353 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:5353 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:5353 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:5353 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:38312 0.0.0.0:* 112424/lk
udp 0 0 172.31.31.181:40889 0.0.0.0:* 112424/lk
udp 0 0 0.0.0.0:41151 0.0.0.0:* 112424/lk
udp6 0 0 fe80::43c:4bff:fe:58249 :::* 112424/lk
udp6 0 0 :::43912 :::* 112424/lk
udp6 0 0 fe80::682b:40ff:f:44210 :::* 112424/lk
udp6 0 0 fe80::28fb:7bff:f:44759 :::* 112424/lk
udp6 0 0 fe80::8c93:8eff:f:48349 :::* 112424/lk
udp6 0 0 fe80::43c:4bff:fe:33019 :::* 112424/lk
udp6 0 0 fe80::28fb:7bff:f:49648 :::* 112424/lk
udp6 0 0 fe80::682b:40ff:f:34176 :::* 112424/lk
udp6 0 0 :::51761 :::* 112424/lk
udp6 0 0 fe80::8c93:8eff:f:51772 :::* 112424/lk
udp6 0 0 fe80::28fb:7bff:f:52343 :::* 112424/lk
udp6 0 0 fe80::43c:4bff:fe:52416 :::* 112424/lk
udp6 0 0 fe80::682b:40ff:f:36889 :::* 112424/lk
udp6 0 0 :::5353 :::* 112424/lk
udp6 0 0 :::5353 :::* 112424/lk
udp6 0 0 :::5353 :::* 112424/lk
udp6 0 0 :::5353 :::* 112424/lk
udp6 0 0 :::54925 :::* 112424/lk
udp6 0 0 fe80::43c:4bff:fe:38627 :::* 112424/lk
udp6 0 0 fe80::682b:40ff:f:39211 :::* 112424/lk
udp6 0 0 fe80::5c33:72ff:f:55686 :::* 112424/lk
udp6 0 0 fe80::8c93:8eff:f:39352 :::* 112424/lk
udp6 0 0 :::55952 :::* 112424/lk
udp6 0 0 fe80::28fb:7bff:f:56026 :::* 112424/lk
udp6 0 0 fe80::5c33:72ff:f:56426 :::* 112424/lk
udp6 0 0 fe80::5c33:72ff:f:56929 :::* 112424/lk
udp6 0 0 fe80::5c33:72ff:f:57146 :::* 112424/lk
udp6 0 0 fe80::8c93:8eff:f:57710 :::* 112424/lk
I’m trying to understand why so many UDP sockets are being opened in this case. Could it be related to how WebRTC handles ICE candidate gathering or media stream allocation?
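For reference, the port 5353 entries in the output above are mDNS candidate listeners, and the remaining bindings look like per-interface host candidates, which would point at each peer connection’s ICE agent opening its own set of UDP sockets. Below is a rough sketch, assuming pion/webrtc v3 (which I believe the Go SDK is built on), of what collapsing that onto one shared socket per process would look like via a SettingEngine; I don’t know whether the load tester exposes anything like this, so the API usage here is an assumption on my part, not a confirmed fix:

package main

import (
	"net"

	"github.com/pion/ice/v2"
	"github.com/pion/webrtc/v3"
)

func main() {
	// One UDP socket shared by every PeerConnection built from this API object.
	udpListener, err := net.ListenUDP("udp4", &net.UDPAddr{IP: net.IPv4zero, Port: 0})
	if err != nil {
		panic(err)
	}

	se := webrtc.SettingEngine{}
	// Gather only IPv4 candidates, skipping the per-interface udp6 bindings seen above.
	se.SetNetworkTypes([]webrtc.NetworkType{webrtc.NetworkTypeUDP4})
	// Drop the mDNS listeners that show up on port 5353.
	se.SetICEMulticastDNSMode(ice.MulticastDNSModeDisabled)
	// Multiplex all ICE traffic for these PeerConnections over the shared socket.
	se.SetICEUDPMux(webrtc.NewICEUDPMux(nil, udpListener))

	api := webrtc.NewAPI(webrtc.WithSettingEngine(se))
	pc, err := api.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		panic(err)
	}
	defer pc.Close()
	// Without a mux, each PeerConnection gathers its own host (and mDNS) candidates
	// and binds fresh UDP sockets for them on every interface.
}

My reading is that the UDP mux on 7882 only changes the server side, and each client-side peer connection still gathers and binds its own candidates, which would explain the socket growth as subscribers are added; I’d appreciate confirmation either way. Thank you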