expose apiserver_port #271
Conversation
Typhoon v1.10.5 switched the kube-apiserver port from 443 to 6443. Once there is a solution on Google Cloud, the remaining port variable will be removed as well. Also, this move wasn't taken lightly or just to cause trouble. It helped enable the load balancer consolidations in v1.10.5 that reduced the costs of running clusters. Alignment with upstream and consistency across DO and bare-metal were nice-to-haves.
It makes sense when you have a native LB service available to you, but that's semi-nonexistent on bare metal at the moment unless you use something like MetalLB, which has its own issues with Calico.
Hm, switching bare-metal clusters was without difficulty for me. In the simple case, Typhoon only asks that there be some record resolving to the controllers, so no changes are needed. When using your own software or hardware load balancer, the configuration change depends on what you're using. But ordinary load balancing software is perfectly fine for balancing across some backends (nginx, haproxy, etc.) - there's nothing special or Kubernetes-centric about the problem: just N backend servers with a TCP service on 6443. If you still require 443, you can keep this fork around. There's some time before https://github.com/poseidon/terraform-render-bootkube/blob/master/variables.tf#L108 is removed.
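For illustration, the simple case can be as little as a round-robin DNS record resolving the cluster name to the controller IPs. Here is a minimal sketch, assuming the AWS Route53 provider and made-up zone, name, and addresses (any DNS provider works the same way):

```hcl
provider "aws" {
  region = "us-east-1"
}

# Round-robin A record resolving the cluster apiserver name to the
# controller node IPs; clients then reach kube-apiserver at
# k8s.example.com:6443 (all names, IDs, and IPs here are assumptions).
resource "aws_route53_record" "apiserver" {
  zone_id = "Z123EXAMPLE"
  name    = "k8s.example.com"
  type    = "A"
  ttl     = 300
  records = ["10.0.10.10", "10.0.10.11", "10.0.10.12"]
}
```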
Right, it's not the functionality of the change, it's the practicality in this use case. Cloud providers have tight integration with LB services, which Typhoon can configure directly. With metal, there's now a dependency on an external LB service managed outside of Typhoon, which needs to be maintained alongside it. This may well be worth the extra work for the added benefit of load balancing if you have LB infrastructure available to you, but not everyone does. Additionally, changing from 443 to 6443 directly without an LB in between can be cumbersome in physical networks with various layers of firewalls and access restrictions.
Changing the port did not introduce any new requirement to load balance apiservers.
Maybe "workaround" is a better phrase than "dependency". The option to control which port is used is being removed, so going forward it must be 6443. 6443 isn't a standard port (which is understandable, as it's not meant to be public). Along these same lines, 6443 won't be open in most physical/corporate firewalls, and justifying opening it could be challenging. I do use the bare-bones DNS approach, but can't access the new port for the above reasons. So the last option is to use an LB or NAT to translate the ports. On the other hand, I do totally agree with the drive to drop privileges where applicable; in an ideal world it would be nice to have the flexibility for either configuration.
Ah ok, bare-bones DNS and your company's network prevents 6443 traffic. In your shoes, I'd try to avoid introducing new load balancing infrastructure to work around what is ultimately a policy / people problem. Perhaps try to impress upon the security/networking folks that kubernetes#34719 made the decision a while ago (~Oct 2016). It's by no means required, but it certainly seems to be the expectation today. You might also mention that your network doesn't block flannel / Calico traffic (depending on which you use) and a number of other Kubernetes-centric, but not necessarily standardized, ports. I totally sympathize with slow-moving corporations' plights (I get plenty of that sort of thing at work too), but if those internal blockers remain, I think it's best you use the patch you've proposed here and carry it until the port changes are permitted. I want to draw the line somewhere, and I think merging the option would discourage switching and drag this out further. I do apologize for the change, as I try to avoid them. In the v1.10.5 release notes, I should have also mentioned that 6443 had to be permitted.
Expose apiserver_port so users can set it back to 443 if they choose not to use 6443.
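A minimal sketch of the idea, assuming the variable keeps the apiserver_port name referenced from terraform-render-bootkube (the description and wiring here are illustrative, not the actual module code):

```hcl
# Expose the port as a module variable defaulting to the new 6443.
variable "apiserver_port" {
  description = "Port that kube-apiserver listens on"
  default     = "6443"
}

# A user who cannot open 6443 through their firewalls would then set:
#   apiserver_port = "443"
```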
I've booted this to validate that it works.