revert the highwatermark increase #52805
Comments
cc @ronag @nodejs/tsc
Alternatively, we might want to set it to 16 KB for sockets and keep the default higher for in-memory processing.
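A rough user-land sketch of that split (my assumption, not something in this PR: `net.Socket` forwards its constructor options to `stream.Duplex`, so a client socket can be given a 16 KiB buffer while in-memory streams keep the raised default):

```js
import stream from 'stream';
import net from 'net';

// In-memory streams keep the raised default (64 KiB on v22).
console.log('default highWaterMark:', stream.getDefaultHighWaterMark(false));

// Hypothetical socket-specific override: assumes net.Socket hands its options
// to stream.Duplex, so a smaller highWaterMark can be requested per socket.
const socket = net.createConnection({ port: 8000, highWaterMark: 16 * 1024 });
socket.on('connect', () => {
  console.log('socket writable highWaterMark:', socket.writableHighWaterMark);
});
```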
Does changing the default high water mark fix it? As you know, we have other changes in that release, like V8 updates.
Yes.
I'm fine with reverting as I don't like the increased memory usage, but did you try with two different processes? One for the server and one for the client? Also,
Note that there isn't any memory increase in this example, as the chunks arrive with a length of 65535 even on Node 20. My theory is that the outgoing buffer is smaller than 65535, causing some data to be re-sent over libuv, effectively slowing things down.
I think we should fix this in the specific case, not the general one. If someone sets setDefaultHighWaterMark, they should not suffer for specific cases.
Yes, everything is written synchronously and the consumer is faster than the producer. There is a single socket and not much is buffered. Chunks of 65535 bytes are read from a socket even on older versions of Node.js. That is why I am a bit skeptical about this regression, at least in the form of the benchmark in the issue description.
The regression is that we are actually not reading chunks of 65535 bytes from the other side, but smaller ones.
Is the regression on the reading or the writing side?
Writing side; setting the following "fixes" it: socket._writableState.highWaterMark = 16 * 1024
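Applied to a server that pipes into its sockets, the workaround would look roughly like this (a minimal sketch, not a tested patch; 16 KiB mirrors the pre-v22 default of 16384):

```js
import net from 'net';
import { Readable } from 'stream';

const server = net.createServer((socket) => {
  // Workaround sketch: shrink only this socket's writable buffer back to
  // 16 KiB before piping, instead of lowering the global default for
  // every stream.
  socket._writableState.highWaterMark = 16 * 1024;

  Readable.from(['Hello']).pipe(socket);
});

server.listen(8000);
```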
I propose we do that then, with a comment.
It would still be interesting to understand why it's slower.
Unfortunately it's not enough; the issue is also partly on the reading side. The same benchmark completes in roughly 7.7 s with 65535 (with the highWaterMark overridden on the socket) vs 7.3 s with a default of 16384. I'll try your patch.
FWIW I can't reproduce the issue:

```js
// server.mjs
import stream from 'stream';
import net from 'net';

// stream.setDefaultHighWaterMark(false, num);

let count = 0;

const readableStream = new stream.Readable({
  read() {
    this.push('Hello');
    if (count++ === 10000000) {
      this.push(null);
    }
  }
});

const server = net.createServer(function (socket) {
  readableStream.pipe(socket);
});

server.listen(8000, function () {
  console.log('Server listening on port 8000');
});
```

```js
// client.mjs
import stream from 'stream';
import net from 'net';

// stream.setDefaultHighWaterMark(false, num);

let chunks = 0;

const writableStream = new stream.Writable({
  write(chunk, encoding, callback) {
    // console.log(chunk.length);
    chunks++;
    callback();
  }
});

writableStream.on('finish', () => {
  console.log('Chunks:', chunks);
  console.timeEnd('benchmark');
  console.log('All data has been written');
  socket.end();
});

const socket = net.createConnection(8000);

socket.on('connect', function () {
  console.time('benchmark');
});

socket.pipe(writableStream);
```
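To compare versions, run node server.mjs in one terminal and node client.mjs in another, once per Node.js release under test; the commented-out setDefaultHighWaterMark(false, num) lines can be enabled with a concrete value to try different watermarks.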
On Windows, here is the result I get with your code, @lpinca:
Very interestingly, this seems to affect only the case of the client and server running in the same process. I think we can close this, then.
@mcollina did my patch do anything? |
Nope, I'm going to close this. Sorry for the false alarm; however, it was good to investigate.
Which is the case for some tests, but still worth it IMO.
Should we reopen?
I think we should have a more comprehensive benchmark if we want to revert it. Also, note that if we revert, it will be flagged as semver-major anyway.
In v22, we increased the highWaterMark to 64k, but it has worsened the performance of socket operations on my Mac and Linux boxes.
Consider the following example:
Node v22 on Mac:
Node v20 on Mac:
Node v22 on Linux:
Node v20 on Linux:
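For reference, the default in question can be inspected directly on each version with stream.getDefaultHighWaterMark (a small illustrative check; the values match the 16384 vs 64 KiB figures discussed above):

```js
// check-hwm.mjs (illustrative)
import stream from 'stream';

// Prints 65536 on Node 22 and 16384 on Node 20 (non-object-mode streams).
console.log(stream.getDefaultHighWaterMark(false));
```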