Proposal: A way to control idle connection timeouts #70
Comments
I like the feature and would love to have a flag that enables it. There are times when I don't care which Pod I am connected to and am only concerned with the connection at the service level. At this point, …
Yeah, I didn't think this would be an easy task by any stretch; I just thought it was necessary to voice what could be a very, VERY cool feature. I can't promise I can do anything about it, but I'll surf through the code trying to understand how …

At least for the use case I'm handling right now, my team of developers doesn't really care which pod they're connecting to, as long as they get an endpoint to the deployed service that they can reference in their local development (having all of our setup running locally really upsets our laptops, hehe).

What I've noticed is that the timeout is precisely 5 minutes and has to do with the …
I'll have to explore the timeout issue you are having. I typically run …
That's interesting. I think this "bug" only shows up on managed k8s instances (GKE, AKS, or EKS), where one doesn't have control over how that's set up. I haven't tried GKE or EKS yet, but I'd imagine they would behave similarly. I'm jealous of your k8s instance running …
I'm also experiencing the disconnection after 5 minutes of idle time on Azure AKS. I use kubefwd extensively, but it does become a bit annoying having to restart kubefwd and enter my password each time (so it can update the hosts file). I haven't yet looked in any depth at whether I can configure the kubelet timeout setting.
The 5-minute timeout looks to be a kubelet setting (feature). I use kubeadm to build custom clusters, which must set this timeout to infinity. I plan to explore some options for re-connecting soon.
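For clusters where you control the kubelet (e.g. kubeadm-built ones, as mentioned above), the relevant knob is the kubelet's `streaming-connection-idle-timeout`. A minimal sketch of a `KubeletConfiguration` fragment, assuming the `v1beta1` kubelet config API (per the kubelet documentation, a value of `0` means no timeout):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "0" disables the idle timeout entirely; a duration like "4h" also works.
streamingConnectionIdleTimeout: "0"
```

Managed offerings (GKE, AKS, EKS) generally don't expose this setting, which is the root of the problem discussed in this thread.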
Release 1.11.0 should resolve this. I have not done extensive testing and will re-open the issue if needed.
I'm afraid it doesn't reconnect automatically on timeout. I'm using Azure AKS, which has the 5-minute window. I have seen it automatically reconnect when a pod is deployed, though.
I can confirm it does not reconnect on the 5-minute timeout; it reconnects only on pod refresh.
I can actually see it reconnecting now, after updating to …
I think this is probably what #153 is trying to solve.
@cjimti Hi. Is this fixed? I still find port forwarding timing out in my GKE cluster every few minutes, and it does not reconnect with v1.17.3. If I stop it and then start it again, it works. But the annoying thing is that I have to do this every 5 minutes or so 😢
I run version 1.17.3 and forward ~30 services (pods) for ~6-8 hours a day. I only drop connections if I suspend my computer or lose my internet connection completely, which is rare. There was some work a few versions ago to check connections and re-connect, but it caused more problems than it solved. I'm not an expert on socat or on the networking between you and GKE, so I can't say why you would lose the connection so often. Let me know if you have tried version 1.17.3. It makes no attempt to check connections and re-connect, yet in my experience it has been more stable.
@cjimti Hi. Thanks a lot for your response. I am using …

Actually, I had similar problems when forwarding manually like this (without kubefwd):

```sh
#!/bin/sh
kubectl --context <mycontext> --namespace <mynamespace> port-forward $(kubectl get pods --context <mycontext> --namespace <mynamespace> --selector "statefulset.kubernetes.io/pod-name=dgraph-dgraph-alpha-0,release=dgraph" --output jsonpath="{.items[0].metadata.name}") 9999:8080
```

But to get over this problem and reconnect, I wrapped it in a while loop like this, and it reconnects without issues:

```sh
#!/bin/sh
while true; do kubectl --context <mycontext> --namespace <mynamespace> port-forward $(kubectl get pods --context <mycontext> --namespace <mynamespace> --selector "statefulset.kubernetes.io/pod-name=dgraph-dgraph-alpha-0,release=dgraph" --output jsonpath="{.items[0].metadata.name}") 9999:8080; done
```

I was wondering how to do the same with kubefwd, since I can't wrap it in a while loop (it does not exit on forwarding failures), or whether there is a built-in way for kubefwd to reconnect periodically, rather than me having to modify the kubelet flag to increase the timeout, which is not recommended. Really love kubefwd.

Btw, while not related to this issue, is there any way to use kubefwd over remote SSH? I have my setup like this, where I use a remote VM to do all my development: …

Now, while kubefwd works well in the remote VM, I was wondering if there was a way to forward the same ports to my local PC as well, so that I need not open an RDP session to access the browser remotely when needed. Thanks.
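The while-loop trick above can be factored into a small POSIX-sh helper that restarts any command whenever it exits. This is only a sketch: `keep_forwarding` is a hypothetical name, and the commented-out `kubectl` invocation is a placeholder you would replace with your own context, namespace, and selector.

```shell
#!/bin/sh
# Hypothetical helper: rerun a command whenever it exits, with a small
# backoff, up to MAX_RESTARTS times (0 = restart forever).
keep_forwarding() {
  max=$1; shift        # max restarts (0 = unlimited), then the command
  n=0
  while :; do
    "$@"               # run the forwarder; blocks until it exits
    status=$?
    echo "forwarder exited with status $status" >&2
    n=$((n + 1))
    # Stop once the restart budget is exhausted (when max > 0).
    [ "$max" -gt 0 ] && [ "$n" -ge "$max" ] && return "$status"
    sleep 2            # brief backoff before reconnecting
  done
}

# Example usage (placeholder command):
# keep_forwarding 0 kubectl --context <mycontext> --namespace <mynamespace> \
#   port-forward pod/dgraph-dgraph-alpha-0 9999:8080
```

As the comment above notes, this only helps with `kubectl port-forward`-style commands that actually exit on failure; it would not help with a kubefwd process that hangs without exiting.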
@cjimti Re-opening an old thread, as we're also constantly affected by this. Would it be possible to have kubefwd fail (and terminate the process) on a forwarding failure? What we see is:

…

and then kubefwd just hangs forever.
This is an AWESOME tool for giving devs an easy and secure way of using our development environment on our k8s cluster.
**The issue**

The idle connection timeout is very short, and we can't change the `streaming-connection-idle-timeout` setting in our managed Kubernetes cluster.

**Proposal**

Some kind of flag to either control the port-forwarding tunnel's idle timeout (not sure how feasible that would be, though), or one that tells the tool to reconnect the service upon disconnecting.

I'm not a fan of either of those two options, but perhaps this is a good conversation starter to find a solution.