Default nodelay value needs to be set for all sockets. #9235
Disabling Nagle's algorithm improves latency at the expense of throughput.
@reqshark That's acceptable for Node's use case. Generally speaking, users will be writing out a complete message, so delaying the final packet isn't what we want.
@mdawsonibm my interpretation of the docs leads me to the opposite conclusion:
.. that means that nodelay is off by default (according to the docs). What defaults to true is the nodelay argument if no arguments are passed to setNoDelay(). Do you agree with this interpretation? I don't have an opinion on what the best default would be perf-wise, but it would seem reasonable to stick to what the docs say (which also appears to be the default on most OSes, afaict) for the sake of compatibility. But whatever default we pick, I think we should strive to be consistent across platforms. Windows certainly has Nagle on / nodelay off by default.
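To make that reading concrete, here is a small illustration using the documented net.Socket API (the host and port are hypothetical, for illustration only):

```js
const net = require('net');

// Hypothetical host/port, for illustration only.
const socket = net.connect(8080, 'example.com', () => {
  // What defaults to true is the *argument*, not the socket's state:
  socket.setNoDelay();      // same as socket.setNoDelay(true): disables Nagle
  socket.setNoDelay(false); // re-enables Nagle's algorithm

  // A socket that never calls setNoDelay() at all is left with the OS
  // default, which on most platforms means Nagle on / nodelay off.
  socket.end('hello');
});
```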
@mdawsonibm I'm not sure I understand how the use or absence of TCP_NODELAY affects the outcome of that test. On another topic, do you have one or more AIX setups that we can use to investigate/reproduce such issues in the future?
The test fails because it depends on the data arriving together. Internally, the Node functions use two writes to send the data over, and without TCP_NODELAY the data arrives such that it is received on two different iterations of the event loop, with the end result that the test fails. Changing the TCP_NODELAY behaviour does not actually fix the test, but it makes the behaviour consistent with other platforms such that the test passes. We see the same thing with test-http-default-encoding.js. In terms of getting an externally available AIX setup, I'll have to look into that.
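For illustration, a minimal self-contained sketch (not the actual test, and the message contents are made up) of the kind of assertion that becomes timing-dependent when two writes reach the peer separately:

```js
const net = require('net');

const server = net.createServer((conn) => {
  // A test that implicitly assumes both writes show up in a single 'data'
  // event passes or fails depending on how the segments are coalesced,
  // which is exactly what TCP_NODELAY and the platform defaults influence.
  conn.on('data', (chunk) => {
    console.log('received chunk:', chunk.toString());
  });
  conn.on('end', () => server.close());
});

server.listen(0, () => {
  const client = net.connect(server.address().port, () => {
    // Two separate writes, analogous to the two internal writes described above.
    client.write('hello ');
    client.end('world');
  });
});
```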
With respect to the comment by @orangemocha: I see what you mean, it could just mean that the default is true if you don't specify it in the call. Given that context, I've looked to see why we thought it was on by default for Linux, other than what we saw with the tests and thinking it was consistent with the API doc. Since I can't find anything to support that, and I can see that I likely misread the doc, I agree we should leave it as is, and we'll go back to looking at whether we can make the tests tolerate the different timing we see on AIX.
@mdawsonibm Ok, thank you for the clarification. I have the same interpretation of the documentation as @orangemocha. So if I understand correctly we have two distinct issues here:
1. The test failures on AIX, caused by the data arriving on different iterations of the event loop because TCP_NODELAY is off by default there.
2. The documentation suggests that nodelay is enabled by default, while the actual default is left to the OS and therefore differs between platforms.
So I guess the question is whether we consider 2) a documentation bug or inconsistent behavior that needs to be fixed. It seems that for v0.12, fixing the documentation would be safer, and for v0.13, having more consistent behavior would be better. What do you think?
@mdawsonibm I hadn't seen your latest comment before posting mine, but it sounds good to me.
Ok, I'm going to close this issue, and once we have investigated the tests further we will open new issues/pull requests as appropriate.
The API indicates that the nodelay option is enabled by default: http://nodejs.org/api/net.html#net_socket_setnodelay_nodelay
However, when debugging failures in test/simple/test-http-request-end.js for 0.12, we discovered that the failures occurred because nodelay was not set for the sockets on AIX.
For Linux, the default at the OS level seems to be to enable TCP_NODELAY, so all sockets have TCP_NODELAY set. The default for AIX is for TCP_NODELAY to be false.
Since the Node API specifies that the default for nodelay is true, Node should make the required calls to ensure that nodelay is set to true, regardless of the default for the OS.
I looked at the libuv code and I don't see anything in the OS-specific files that would adjust this default on any platform.
This patch to net.js adjusts the creation of TCP objects so that nodelay is true by default for all sockets.
If this is the right fix I can create a pull request.
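For reference, a rough sketch of the kind of change described above, not the actual patch; the helper name and code path are illustrative assumptions rather than the real lib/net.js structure:

```js
// Illustrative only: enable TCP_NODELAY whenever a new TCP handle is created,
// so the documented default holds regardless of the OS default.
const TCP = process.binding('tcp_wrap').TCP;

function createTCPHandle() {
  const handle = new TCP();
  // Force nodelay on instead of relying on the OS default, which this issue
  // describes as differing between platforms (e.g. Linux vs AIX).
  handle.setNoDelay(true);
  return handle;
}
```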