Commit c6f7fdf

type-context: revert client disconnect workaround
This reverts commit 196e6fc from #118, which added a workaround that is no longer needed after 315ff53 from #160.

When the workaround was introduced, it prevented segfaults that happened when an IPC client disconnected during a long-running asynchronous call. But it prevented them by leaking server objects, and these leaks now sometimes cause the new "disconnecting and blocking" unit test introduced in #160 to hang. Since the workaround is no longer necessary, revert it now to fix the test hangs.

The problem with test hangs was reported:

- Sjors/bitcoin#90 (comment)
- https://github.com/Sjors/bitcoin/actions/runs/15966265407/job/45027248310?pr=90
- Sjors/bitcoin#90 (comment)
- https://cirrus-ci.com/task/4999408900636672

And the fix was posted:

- Sjors/bitcoin#90 (comment)
1 parent e09143d commit c6f7fdf

1 file changed: 2 additions, 17 deletions


include/mp/type-context.h (2 additions, 17 deletions)

```diff
--- a/include/mp/type-context.h
+++ b/include/mp/type-context.h
@@ -69,7 +69,6 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
             const auto& params = call_context.getParams();
             Context::Reader context_arg = Accessor::get(params);
             ServerContext server_context{server, call_context, req};
-            bool disconnected{false};
             {
                 // Before invoking the function, store a reference to the
                 // callbackThread provided by the client in the
@@ -101,7 +100,7 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                 // recursive call (IPC call calling back to the caller which
                 // makes another IPC call), so avoid modifying the map.
                 const bool erase_thread{inserted};
-                KJ_DEFER({
+                KJ_DEFER(if (erase_thread) {
                     std::unique_lock<std::mutex> lock(thread_context.waiter->m_mutex);
                     // Call erase here with a Connection* argument instead
                     // of an iterator argument, because the `request_thread`
@@ -112,24 +111,10 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                     // erases the thread from the map, and also because the
                     // ProxyServer<Thread> destructor calls
                     // request_threads.clear().
-                    if (erase_thread) {
-                        disconnected = !request_threads.erase(server.m_context.connection);
-                    } else {
-                        disconnected = !request_threads.count(server.m_context.connection);
-                    }
+                    request_threads.erase(server.m_context.connection);
                 });
                 fn.invoke(server_context, args...);
             }
-            if (disconnected) {
-                // If disconnected is true, the Connection object was
-                // destroyed during the method call. Deal with this by
-                // returning without ever fulfilling the promise, which will
-                // cause the ProxyServer object to leak. This is not ideal,
-                // but fixing the leak will require nontrivial code changes
-                // because there is a lot of code assuming ProxyServer
-                // objects are destroyed before Connection objects.
-                return;
-            }
             KJ_IF_MAYBE(exception, kj::runCatchingExceptions([&]() {
                 server.m_context.loop->sync([&] {
                     auto fulfiller_dispose = kj::mv(fulfiller);
```
