
agentkeepalive #8013

Closed
tony-gutierrez opened this issue May 2, 2018 — with docs.microsoft.com · 8 comments

The max sockets setting for agentkeepalive is PER HOST, so the example doesn't make much sense. Also, there is no explanation of where the recommended 160 sockets per VM number comes from.



@BryanTrach-MSFT
Member

@tony-gutierrez Thanks for the feedback! We are currently investigating and will update you shortly.

@BryanTrach-MSFT
Member

@tony-gutierrez Each individual VM hosting the application is limited to 160 SNAT sockets, which is where this number comes from. This is an intentional platform design, and I am not aware of any upcoming changes to this quota. In regards to the sample provided, I have asked the doc author to investigate this and provide an update as necessary.

@rramachand21 Can you please review the feedback about agentkeepalive and update the doc as necessary? Thank you.

@BryanTrach-MSFT
Member

BryanTrach-MSFT commented May 3, 2018

@Tysonn I am having issues assigning the doc author to this issue. The doc lists ranjithr, but the people site lists rramachand21 as the GitHub profile. Neither profile appears under the assignees menu. Can you please provide input on how to proceed?

@BryanTrach-MSFT BryanTrach-MSFT self-assigned this May 3, 2018
@tony-gutierrez
Author

The 160 is only a pre-allocation, not a limit.
https://www.theregister.co.uk/2018/02/27/microsoft_rewrites_source_network_address_translation/
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections

I am experiencing port exhaustion in almost all my Azure Node instances. Using keep-alive (native or the recommended library) helps, but I have the situation of many connections to few hosts, as described here: https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#pat

I am having a really hard time finding a socket count that gives me decent performance and avoids timeouts. This has never been an issue for me on any AWS VMs, even without keep-alive. I had never experienced "port exhaustion" until trying to deploy Node on Azure.

@tony-gutierrez
Author

Also, it would be helpful if there were a way to inspect a given VM to see whether it is using the old 160 preallocation or the new 1,024.

@rramachand21-zz
Contributor

Hi Tony, just FYI: it's never 1,024, because our pool is larger. And we are reverting to the old 160 preallocation value, so the document is still relevant and up to date.

@tony-gutierrez
Author

You might want to stop recommending that module and just recommend using native Node keep-alive. The options are almost identical, and that module might have a race condition. Native will probably always be faster as well. I replaced the module with native keep-alive with good results, other than the whole 160 issue.

@BryanTrach-MSFT
Member

@tony-gutierrez We will now proceed to close this thread. If there are further questions regarding this matter, please reopen it and tag me in your reply. We will gladly continue the discussion.
