Kubernetes Egress #198
Comments
Another minor advantage that comes out of having an HTTP proxy in place would be the possibility to enable caching for certain resources. This would be a huge advantage for our future build systems that typically download massive amounts of static resources from various repositories (maven, apt, npm, ...). Having a caching proxy here would decrease build times immensely.
Another implementation hint:
Hi, how is it going with this? We currently have a use case where we need to proxy egress traffic so that it originates from a defined set of IPs. The destination has an IP-based filter that only allows requests from those IPs. To make it pretty, we could envision announcing the source IP of the node running the egress proxy via an L3 networking model. ?!?
@hwinkel we haven't started working on egress yet, but for your use case it could be a simple routing rule in AWS to go via NAT gateways with a fixed IP.
@hjacobs I was looking for something similar. Could you point me to relevant meetings or docs? I can help you guys build this :)
FYI: we are currently experimenting with dante as a SOCKS server on AWS. It really looks promising and Java has automatic SOCKS support (https://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html).
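If dante is used this way, a JVM workload only needs the standard `socksProxyHost`/`socksProxyPort` system properties set. A minimal sketch of how that could be wired into a container spec follows; the proxy address, image, and names are assumptions, not part of the original setup:

```yaml
# Hedged sketch: pointing a JVM workload at a SOCKS server such as dante.
# The proxy address (socks.example.internal:1080), image, and names are
# illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: java-app-via-socks
spec:
  containers:
    - name: app
      image: registry.example.org/java-app:latest  # placeholder image
      env:
        # The JVM reads JAVA_TOOL_OPTIONS at startup; socksProxyHost/Port are
        # the standard Java networking properties for automatic SOCKS support.
        - name: JAVA_TOOL_OPTIONS
          value: "-DsocksProxyHost=socks.example.internal -DsocksProxyPort=1080"
```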
Is selective egress a supported feature now?
@shruti-palshikar No, we have not done any work on this, and it's also not something we are planning right now since we don't need it ourselves at the moment :)
@shruti-palshikar depending on what you are looking for, you might use Calico or Cilium and network policies. I know there is some effort in sig-network to specify egress network policies.
@mikkeloscar Thanks for the response. @szuecs: My use case is to allow egress from pods to certain selected external domains. With the default policy of a namespace being deny-all for egress traffic, I am looking for ways to whitelist a few domains that the pods are allowed to reach out to.
@shruti-palshikar domains are not an internet-routable entity for egress or ingress. Routing is based on Layer 3 of the OSI model, and DNS is only used to resolve a name to an IP. The IP is the routable entity, which you might be able to filter on in Kubernetes with the mentioned CNI plugins plus an egress network policy object, but this repository is not the right audience.
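For reference, a minimal sketch of what such an egress rule looks like with the standard `networking.k8s.io/v1` NetworkPolicy API, which is only enforced if the CNI plugin (e.g. Calico or Cilium) supports egress policies; the namespace, labels, and CIDR below are illustrative:

```yaml
# Minimal egress NetworkPolicy sketch; note it matches IP ranges, not domains.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-service
  namespace: my-namespace              # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                      # assumed pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24       # example range for the external service
      ports:
        - protocol: TCP
          port: 443
```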
I was looking for something similar; the folks at Cilium did something which could solve that problem, see: https://cilium.io/blog/2018/09/19/kubernetes-network-policies/
The goal is to know where network connections going outside of the cluster are headed and to authorize them in advance. All connections that leave the cluster network need to be whitelisted. For that, I propose an Egress resource to specify the whitelist.
Requirements
Not a requirement:
Example specification
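A sketch of what such a rule set could look like; since the resource does not exist yet, the apiVersion, kind, and field names below are hypothetical:

```yaml
# Hypothetical EgressRuleSet whitelisting a few external destinations; all
# field names and values are illustrative, not an existing API.
apiVersion: zalando.org/v1             # assumed API group
kind: EgressRuleSet
metadata:
  name: build-dependencies
spec:
  rules:
    - targets:
        - host: repo1.maven.org        # example whitelisted domain
          ports:
            - 443
        - host: registry.npmjs.org
          ports:
            - 443
```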
In some cases, pinning it down to predetermined domains doesn't work. Examples would be crawlers or applications that need to react to user input, like webhooks. In this case, one needs a switch to allow everything:
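A wildcard variant of the same hypothetical resource might then look like this (again, purely illustrative):

```yaml
# Hypothetical "allow everything" rule set for crawlers or webhook consumers;
# the wildcard stays an explicit, reviewable choice, while ports remain pinned.
apiVersion: zalando.org/v1
kind: EgressRuleSet
metadata:
  name: crawler-egress
spec:
  rules:
    - targets:
        - host: "*"                    # explicit choice to allow any destination
          ports:
            - 80
            - 443
```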
Multiple rule sets can exist at the same time, and the union of their targets would determine the whitelist for the whole cluster.
Wildcarding everything is okay, as it's still an explicit choice that could be checked during deployment. Wildcarding ports should be discouraged (or even not implemented) unless we find a valid use case where it would not purely be a security risk.
Example integration
Since this would probably be implemented as an HTTP proxy, the integration pattern should be that the standard environment variable `http_proxy` is set by default in every container that starts, without the user having to specify it.
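A sketch of what the injected setting could look like on a container; the proxy address is an assumption, and in practice the variable would be added automatically (e.g. by an admission mechanism) rather than by the user:

```yaml
# Illustrative container spec with the proxy variable injected by default; the
# proxy address (egress-proxy.example.internal:3128) is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: some-app
spec:
  containers:
    - name: app
      image: registry.example.org/some-app:latest  # placeholder image
      env:
        - name: http_proxy
          value: "http://egress-proxy.example.internal:3128"
```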
Example implementation
One should set up an HA/scalable HTTP proxy like squid. In addition, an `egress-controller` should observe the `EgressRuleSet` resources and reconfigure squid accordingly.
The AWS Security Group of all Kubernetes nodes would not have the default "allow outbound" rule, so all traffic going out would be dropped. The HTTP proxy would need to run outside of that Security Group in some kind of "DMZ" setup (like the ALBs and ELBs) that the Kubernetes nodes can reach. The HTTP proxy server itself then has full outbound rules in its own Security Group.
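A rough sketch of the squid configuration such an egress-controller might render from the EgressRuleSet resources, wrapped here in a ConfigMap; the domains and names are illustrative only:

```yaml
# Illustrative ConfigMap an egress-controller could render; the squid ACL
# implements a simple destination whitelist built from all EgressRuleSet
# resources (example domains only).
apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-squid-config
data:
  squid.conf: |
    http_port 3128
    acl whitelisted_domains dstdomain .maven.org .npmjs.org
    http_access allow whitelisted_domains
    http_access deny all
```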