[occm] PROXY protocol breaks connections from inside the cluster #1287
Comments
Digital Ocean experienced the same issue in its own cloud provider. Their workaround was to introduce a new annotation. I propose the OpenStack provider should have a similar escape hatch: a new annotation that switches the controller from publishing the LB's address in the `.IP` field to publishing it in the `.Hostname` field. Some considerations for the design:
KEP-1860, once implemented, will introduce an official way for the cloud provider to inform kube-proxy of the desired "ingress mode". With this, the annotation could eventually be deprecated.
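For reference, here is a minimal sketch of the direction KEP-1860 points in, using the `ipMode` field that eventually landed in `k8s.io/api/core/v1` (the exact shape was still in flux when this thread was written, so treat the names as illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: with ipMode set to "Proxy", kube-proxy keeps
	// routing in-cluster traffic through the load balancer instead of
	// installing the direct-to-pod iptables bypass.
	proxy := corev1.LoadBalancerIPModeProxy
	ingress := corev1.LoadBalancerIngress{
		IP:     "203.0.113.10", // hypothetical LB address
		IPMode: &proxy,
	}
	fmt.Println(ingress.IP, *ingress.IPMode)
}
```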
Good suggestion. Should we consider waiting for the KEP? It seems reasonable to implement something first...
@lingxiankong do you think we can add this?
Any PoC available for OCCM? If it proves to work, we could have it.
I can try to do this later today.
OK, here it goes. Starting situation: we have an ingress controller exposed (via Octavia) using the PROXY protocol. Traffic from outside the cluster to the ingress works fine:
However, when trying from inside the cluster it does not work, just as this issue describes:
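Roughly, the check looks like this (a sketch with a hypothetical ingress IP, not the exact commands used):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical probe against the LB's external IP. From outside the
	// cluster this returns normally; from inside, kube-proxy's iptables
	// shortcut delivers the request straight to the ingress pod, which
	// expects a PROXY header first and so resets or hangs the connection.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://203.0.113.10/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```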
First I tried simply copying the IP address into the hostname field, but there is validation that rejects it:
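For context, the rejection comes from the API server, which validates the hostname as a DNS-1123 subdomain and additionally refuses anything that parses as an IP address. A sketch paraphrasing that rule (not OCCM code; `validateIngressHostname` is a made-up helper name):

```go
package main

import (
	"fmt"
	"net"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateIngressHostname paraphrases the API server's rule for
// status.loadBalancer.ingress[].hostname: the value must be a valid
// DNS-1123 subdomain and must not parse as an IP address.
func validateIngressHostname(hostname string) []string {
	msgs := validation.IsDNS1123Subdomain(hostname)
	if net.ParseIP(hostname) != nil {
		msgs = append(msgs, "must be a DNS name, not an IP address")
	}
	return msgs
}

func main() {
	fmt.Println(validateIngressHostname("203.0.113.10"))        // rejected: bare IP
	fmt.Println(validateIngressHostname("203.0.113.10.nip.io")) // accepted: empty slice
}
```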
The next thing I tried was those "magic" IPs (hostnames like nip.io's that embed the address), which do actually work!
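"Magic" here means a wildcard DNS service such as nip.io, where the name itself encodes the address. A quick sketch of why such a hostname always points back at the LB (hypothetical IP):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// nip.io resolves any name of the form <ip>.nip.io back to <ip>,
	// so a hostname built this way is guaranteed to point at the
	// load balancer's address (hypothetical IP below).
	addrs, err := net.LookupHost("203.0.113.10.nip.io")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs) // expected: [203.0.113.10]
}
```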
Then, trying again from inside the cluster:
The only question that now remains: can we really create these things using magic IPs?
I think KEP-1860, mentioned above, will solve the problem.
I should have mentioned this earlier... we have a working patch in our internal fork of this project. We've been running it in production for a couple of weeks now with success. It checks whether the PROXY protocol is enabled for the load balancer, and then changes the status object based on that:

```go
// If the load balancer is using the PROXY protocol, expose its IP address via
// the Hostname field to prevent kube-proxy from injecting an iptables bypass.
// This is a workaround for kubernetes#66607 that imitates a similar patch done
// on the Digital Ocean cloud provider.
if useProxyProtocol {
	// By default, the universal nip.io service is used to provide a valid domain
	// that is guaranteed to resolve to the LB's IP address. This can be changed
	// via the following (optional) annotation.
	hostnameSuffix := getStringFromServiceAnnotation(apiService,
		ServiceAnnotationLoadBalancerProxyHostnameSuffix, defaultProxyHostnameSuffix)
	hostnameSuffix = strings.TrimPrefix(hostnameSuffix, ".")

	// Validate the suffix before using it to build the hostname.
	if errs := validation.IsDNS1123Subdomain(hostnameSuffix); len(errs) > 0 {
		return status, fmt.Errorf("invalid value %q for annotation %s: %s",
			hostnameSuffix, ServiceAnnotationLoadBalancerProxyHostnameSuffix,
			strings.Join(errs, ","))
	}

	hostname := fmt.Sprintf("%s.%s", status.Ingress[0].IP, hostnameSuffix)
	status.Ingress = []corev1.LoadBalancerIngress{{Hostname: hostname}}
}
```

I just wanted to validate the design with the community before submitting a PR. There are a few improvements I can think of. Depending on community feedback on the points above, I can submit the PR as is or make those improvements first.
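To make the effect concrete, here is a minimal sketch (hypothetical values) of the status the patch publishes; with only a hostname present, kube-proxy has no IP to short-circuit:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical result for an LB at 203.0.113.10 with the default
	// nip.io suffix: the service status carries only a hostname, so
	// kube-proxy cannot install the iptables bypass for the LB's IP.
	status := corev1.LoadBalancerStatus{
		Ingress: []corev1.LoadBalancerIngress{{Hostname: "203.0.113.10.nip.io"}},
	}
	fmt.Printf("%+v\n", status)
}
```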
@jichenjc yes, ideally we go for a design that will gracefully evolve to accommodate KEP-1860 once it is implemented.
@zetaab thanks for testing, @bgagnon thanks for the PoC code. They are very helpful. Instead of using an annotation, could we just always use nip.io?
@lingxiankong there are other options besides nip.io, though.
Note that kubernetes/kubernetes#92312 contains the definitive fix for this: a way for the cloud provider to communicate how the IP should be handled. It is included in k8s 1.20 behind a feature gate: https://kubernetes.io/docs/setup/release/notes/#api-change-1 But it looks like it might have been reverted for 1.20 in the end... to be continued.
I know there are other options. I'm wondering whether it makes much sense for the end user to choose a different one, given that all they need is a workaround to access a PROXY-protocol load balancer from within the cluster. If providing an annotation doesn't bring much value to the user, I'd rather we not do that. Another possible reason: considering the k8s community will eventually solve the issue itself, we'd better not introduce an annotation that we all know will be deprecated and removed in the future.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
@bgagnon Hi, are you still working on solving this issue? If so, could you please work on the master branch first?
Since opening #1345 I have changed jobs and no longer have the ability to test on OpenStack. Maybe @davidovich or someone from his team wants to pick it up. Sorry about that!
No worries @bgagnon, I will see how I can help.
/kind bug
This is a manifestation of kubernetes/kubernetes#66607. When a service of type `LoadBalancer` is exposed via an IP address (which is the case for OCCM), iptables rules "optimize" the traffic and send the packets directly to the underlying pods. The problem is that the backend pods expect the `PROXY` header, while the in-cluster client does not send it (it may not even be capable of doing that), so connections from inside the cluster break.

The workaround used by other cloud providers is to expose the LB's external endpoint via the `.Hostname` field instead of the `.IP` field.