
Support Istio #169

Closed
rsnj opened this issue Jan 10, 2019 · 3 comments

@rsnj

rsnj commented Jan 10, 2019

Using this library with Istio does not work. Istio seems to be blocking the internal calls to the AKS Kubernetes management server because they are seen as external to the cluster. I opened a ticket with Istio a while back but have had no luck finding a solution. Is there a common URL or IP address I can add to my Istio configuration?

More information can be found here:
istio/istio#8696

@xiaomi7732
Member

@lisaguthrie by chance, do you know whom to contact for AKS configurations? Thanks in advance.

@m1o1

m1o1 commented Jan 23, 2019

We had a similar problem with kubernetes/client-go#527. We worked with Microsoft to get some information, and this information may be relevant to your case too:

  • If you use "includeIPRanges" so that the Envoy proxies ignore egress traffic, the problem no longer happens, but you lose some Istio functionality for egress traffic. If you don't use it, the rest of the following applies:
  • Since the master node is accessed via an FQDN, you need a ServiceEntry and VirtualService to access it (as mentioned in the client-go issue). The FQDN can be found with az aks show -n ${CLUSTER} -g ${CLUSTER_RG} --query fqdn --output tsv. Once you set this up, though, you'll probably still have issues with connections abruptly closing, so more on that:
  • The load balancer sitting between your AKS cluster and the master node watches for idle connections and closes any connection that has been idle for 5 minutes.
  • The load balancer does not send a TCP RESET packet to applications to signal the closed connection. There are plans to support a standard load balancer with AKS, which will eventually support TCP RESETs; at that point, applications will know to reconnect to the master node.
  • Applications can keep the connection alive by sending TCP KeepAlive packets, but the Envoy proxy currently does not support that: Feature request: support TCP keepalive for downstream sockets envoyproxy/envoy#3634. It's possible that the application can configure this through something like Go's TCPConn but I don't know if the Envoy proxy will forward these packets (we decided to go with option 1 before I tested this)
    EDIT: According to this comment, "TCP keepalives are not sufficient to update the idle timers in common NAT setups, so they won't prevent teardown of the session state for an idle connection at a NAT router.". In that case, I guess this would not be a solution.
  • You can tell Istio to only use one request per connection in the envoy proxies using a DestinationRule, and when combined with disabling HTTP KeepAlive, that should work. This might not be particularly performance-friendly.
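For anyone trying the second and last bullets, the resources described there might look roughly like the sketch below. This is only an illustration: mycluster-abc123.hcp.eastus.azmk8s.io is a placeholder for your cluster's FQDN (from az aks show), and the networking.istio.io/v1alpha3 API version reflects the Istio releases current at the time.

```yaml
# ServiceEntry: make the external AKS API server FQDN known to the mesh.
# The hostname below is a placeholder -- substitute your cluster's FQDN.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aks-apiserver
spec:
  hosts:
  - mycluster-abc123.hcp.eastus.azmk8s.io
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
---
# VirtualService: route TLS traffic for the API server FQDN by SNI.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: aks-apiserver
spec:
  hosts:
  - mycluster-abc123.hcp.eastus.azmk8s.io
  tls:
  - match:
    - port: 443
      sniHosts:
      - mycluster-abc123.hcp.eastus.azmk8s.io
    route:
    - destination:
        host: mycluster-abc123.hcp.eastus.azmk8s.io
        port:
          number: 443
---
# DestinationRule: limit each connection to a single request, per the
# workaround in the last bullet above (not performance-friendly).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: aks-apiserver
spec:
  host: mycluster-abc123.hcp.eastus.azmk8s.io
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
```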

We decided to go with the "includeIPRanges" approach because:

  • The envoy proxy does not send TCP KeepAlive packets
  • We can't go through every application we use that interacts with the API server to make sure that it does (or disables HTTP keepalive)
  • We don't need any Istio functionality on egress traffic.
  • Ultimately, I think the best solution is for the LB to send TCP RESETs; the next best thing is for the Envoy proxies to send TCP keepalives. Both of these were out of our control.
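If it helps anyone going the same route, the per-pod form of the "includeIPRanges" approach is the sidecar traffic annotation. The sketch below is illustrative only: my-app is a hypothetical workload name, and 10.0.0.0/16 is a placeholder for your cluster's service CIDR (which does not cover the API server's external address, so that traffic bypasses Envoy).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Only intercept outbound traffic destined for this CIDR (placeholder
        # for your service CIDR); traffic to the API server's external FQDN
        # then bypasses the Envoy sidecar entirely.
        traffic.sidecar.istio.io/includeOutboundIPRanges: "10.0.0.0/16"
    spec:
      containers:
      - name: my-app
        image: my-app:latest
```

The same effect can be applied mesh-wide via Istio's global proxy includeIPRanges setting at install time instead of annotating each workload.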

@xiaomi7732
Member

Closing this issue since it is tracked by istio/istio#8696.
