Scalable direct pod addressability #36596
For pod connections, one solution is a smarter PassthroughCluster. Today it is just an original_dst cluster, so we blindly pass everything through. Ideally, what we would have is:

```
if destinationIsAPod() {
    forwardWithMTLS()
} else {
    passthrough()
}
```

As far as I know, there is no scalable way to do this today. My assumption is that to scale, we need to provide Envoy with the set of pod IPs in EDS (or equivalent), duplicating each IP no more than once. One thing that comes close is Envoy endpoint subsets. For example:

```yaml
clusters:
- name: Passthrough
  type: STATIC
  lb_subset_config:
    subset_selectors:
    - keys:
      - address
  load_assignment:
    cluster_name: Passthrough
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 80
        metadata:
          filter_metadata:
            envoy.lb:
              address: "1.2.3.4:80"
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 2.3.4.5
              port_value: 80
        metadata:
          filter_metadata:
            envoy.lb:
address: "2.3.4.5:80" Then add a network filter that sets What this does is allow us to passthrough requests, like orig_dst, but still attach metadata to them. Importantly, this metadata can be our tlsMode metadata and used for a transport_socket_match, allowing upgrading passthrough requests to mTLS. For multinetwork with IP conflicts, we will favor the local network, as the requests to direct pod IPs will not traverse the network gateway so must be the local IP. What is missing here is the fallback if there is no match. Envoy does have a Just doing this alone could get us mTLS but we will still have degraded telemetry and likely other features. To take it a step further, we could consider adding explicit clusters for some groups of pods. What the groups are is open to discussion - it could be Services (and pick one if there are multiple...? we do this for headless), canonical service, or something else. From there, we could add a FCM extension that could match based on EDS data. For example, to implement headless services we could do:
What this would do is look at all pod IPs/ports in the service's EDS endpoints and match connections against them.

Related: envoyproxy/envoy#15750
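Going back to the transport_socket_match idea above, a rough sketch of how the attached metadata could drive the mTLS upgrade on the passthrough cluster. This assumes the matching endpoints carry `tlsMode: istio` metadata under `envoy.transport_socket_match` (as Istio already sets for regular clusters); the TLS details are elided:

```yaml
# On the Passthrough cluster: choose a transport socket per endpoint based on its metadata.
transport_socket_matches:
- name: tlsMode-istio
  match:
    tlsMode: istio          # matched against endpoint metadata in envoy.transport_socket_match
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      # ... Istio mTLS certificates / SNI would go here ...
- name: tlsMode-disabled
  match: {}                  # default: no tlsMode metadata, stay plaintext passthrough
  transport_socket:
    name: envoy.transport_sockets.raw_buffer
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.raw_buffer.v3.RawBuffer
```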
Not stale
Currently, Istio identifies traffic as follows:

- TCP/TLS: must have a VIP
- HTTP: must have the Service name in the Host header
- Auto: must have a VIP (even if it is HTTP, the hostname is not used)
- Headless TCP/TLS: match any Pod IP as a dedicated listener
- Headless HTTP: match the Service name in the Host header, or `*.<service>`
- Headless auto: match any Pod IP as a dedicated listener (even if it is HTTP, the hostname is not used)

There are two main problems here:

- The `auto` protocol: we typically prefer users to explicitly name the protocol.
- Headless has weak auto-mTLS support: it requires a homogeneous cluster (all mTLS or none).

This issue tracks what we can do to improve this.
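For context on the protocol naming point, a sketch (with a hypothetical Service) of how a user declares the protocol explicitly today, via the port name prefix or appProtocol, so that `auto` sniffing is not needed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reviews             # hypothetical example Service
spec:
  selector:
    app: reviews
  ports:
  - name: http-web          # "http-" prefix declares the protocol explicitly
    port: 8080
    targetPort: 8080
  - name: grpc-api
    port: 9090
    targetPort: 9090
    appProtocol: grpc       # appProtocol is an alternative way to declare it
```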