Hi! We are very grateful that we were able to migrate from the Redis adapter to this Socket.IO adapter and drop one dependency, since Mongo is our DB anyway. However, we are currently investigating an issue where our backends behave very strangely after a so-called rolling update in our K8s cluster (new backend instances are spawned, and the old ones are shut down once the new ones are ready): some backends are unable to deliver any socket messages to clients connected to other backends. Unfortunately, for a couple of days it has also been happening without any deployments.

**What we observed**

The backends seem to be sending heartbeat signals (though we could not see them in the DB collection, presumably because our capped collection was quite small: 1 MB). With 6 backends, every socket request requires the 5 other backends to respond. However, even after tweaking the `requestsTimeout`, we still see **timeout reached: only 0 responses received out of 5** (or 4 out of 5) in our logs. And today we noticed the message kept showing up even though the backends themselves were all running perfectly fine.

**Questions**

- Does anyone have any recommendations, or is it simply not possible to use this adapter in a K8s cluster with dynamic scaling, where a node can go offline at any minute and new ones can come up?
- Is it an issue if the capped collection size limit is reached, e.g. due to large socket objects or a large number of connected sockets?

I am not sure there is an issue in the adapter, so this is not just a potential bug report but also a request for a pointer in the right direction. We (3 senior devs) have been speculating and pondering about this all day and have not come up with a solution (yet).

Thank you :)
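
For reference, this is roughly how we wire up the adapter (a minimal sketch; the connection string, DB/collection names, the 1 MB cap, and the timeout values below are illustrative placeholders, not our exact production config):

```ts
import { MongoClient } from "mongodb";
import { Server } from "socket.io";
import { createAdapter } from "@socket.io/mongo-adapter";

const mongoClient = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");
await mongoClient.connect();

const db = mongoClient.db("mydb");

// The adapter tails a capped collection for inter-server events;
// ours is capped at 1 MB, which may be why we never saw the heartbeats.
try {
  await db.createCollection("socket.io-adapter-events", {
    capped: true,
    size: 1e6, // bytes
  });
} catch (e) {
  // collection already exists
}

const io = new Server(3000);
io.adapter(
  createAdapter(db.collection("socket.io-adapter-events"), {
    heartbeatInterval: 5000, // default: 5000 ms
    heartbeatTimeout: 10000, // default: 10000 ms
    requestsTimeout: 10000,  // default: 5000 ms; this is what we tweaked
  })
);
```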
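Regarding the second question: one thing we are considering trying, based on the adapter's README, is replacing the capped collection with a regular collection plus a TTL index, so the size cap cannot be hit at all and events expire by age instead. A sketch, reusing the `db` and `io` handles from the snippet above:

```ts
// Regular collection + TTL index instead of a capped collection:
// documents expire one hour after creation rather than being evicted
// when a byte-size cap is reached.
const collection = db.collection("socket.io-adapter-events");
await collection.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 3600 }
);

io.adapter(
  createAdapter(collection, {
    addCreatedAtField: true, // required so each document gets the indexed field
  })
);
```

We have not verified yet whether this changes the timeout behavior, but it would at least rule out eviction from the capped collection as a cause.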