restorer: Add a lock around cgroupd communication.
Threads are put into cgroups through the cgroupd thread, which
communicates with other threads using a socketpair.
Previously, each thread received a dup'd copy of the socket and did
the following:

    sendmsg(socket_dup_fd, my_cgroup_set);
    // wait for ack.
    while (1) {
        recvmsg(socket_dup_fd, &h, MSG_PEEK);
        if (h.pid != my_pid)
            continue;
        recvmsg(socket_dup_fd, &h, 0);
        break;
    }
    close(socket_dup_fd);
When restoring many threads, every thread would spin in the above
loop waiting for its own PID to appear.
In my test case, restoring a process with an 11.5G heap and 491 threads
could take anywhere between 10 and 60 seconds to complete.
To avoid the spinning, we drop the loop and MSG_PEEK, and add a lock
around the above code. This does not decrease parallelism, as the
cgroupd daemon uses a single thread anyway.
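As a rough illustration, the per-thread exchange then reduces to
something like the sketch below. This is a minimal sketch using a plain
pthread mutex and invented names (cgroupd_lock, cgroupd_sk,
do_cgroupd_req); the actual restorer uses CRIU's own locking
primitives in shared memory.

    #include <pthread.h>
    #include <sys/socket.h>

    static pthread_mutex_t cgroupd_lock = PTHREAD_MUTEX_INITIALIZER;

    static int do_cgroupd_req(int cgroupd_sk, struct msghdr *my_cgroup_set)
    {
        struct msghdr h = { 0 };
        int ret = 0;

        /* Serialize the request/ack exchange with the single-threaded
         * cgroupd daemon. */
        pthread_mutex_lock(&cgroupd_lock);

        /* Only one request is ever in flight, so the next message on
         * the socket is our ack; no MSG_PEEK or PID check needed. */
        if (sendmsg(cgroupd_sk, my_cgroup_set, 0) < 0 ||
            recvmsg(cgroupd_sk, &h, 0) < 0)
            ret = -1;

        pthread_mutex_unlock(&cgroupd_lock);
        return ret;
    }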
With the lock in place, the same restore consistently takes around 10
seconds on my machine (Thinkpad P14s, AMD Ryzen 8840HS).
There is a similar "daemon" thread for user namespaces. That path is
already protected by a comparable lock, userns_sync_lock, in
__userns_call().
Fixes #2614
Signed-off-by: Han-Wen Nienhuys <[email protected]>