Restrict access to EC2 metadata #22826
Conversation
pkg/network/admission/restrictedendpoints/endpoint_admission.go
Force-pushed from f541d1b to e2d6cef
The ingress operator fails to come up, thus blocking all other operators that depend on routes.
Force-pushed from e2d6cef to 45550c8
Yes, I had tried to hack the SDN to special-case the …
Looks like this is passing CI now. How do you foresee the "production" version of this being different? One offhand comment: if the OVS action …
Huh. OK, so: …
Filed openshift/cluster-ingress-operator#235 about making the ingress operator hostNetwork.
Force-pushed from 45550c8 to 9a9f30f
Updated, repushed, and updated the initial comment to reflect the new commit. This will fail e2e-aws until cluster-ingress-operator is updated.
Access to ec2 metadata will soon be restricted (openshift/origin#22826). Eliminate the ec2 metadata dependency by discovering AWS region information from cluster config. This commit uses the deprecated install config for metadata; once openshift/installer#1725 merges, supported cluster config will provide the region information and the code can be refactored.
openshift/cluster-ingress-operator#238 eliminates the ingress operator's dependency on ec2 metadata; hope this helps!
/retest
(Not a network person but) seems sane to me!
(Also, CI is passing now) |
/lgtm |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: danwinship, squeed. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cherrypick release-4.1
@squeed: once the present PR merges, I will cherry-pick it on top of release-4.1 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From a quick glance at the CI failure, it seems that Prometheus was OOM-killed...
/retest
@squeed: new pull request created: #22849 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hey guys, is it possible to configure this behavior (enable it for given pods) or disable it completely? I understand your security concerns, but I need to assign AWS IAM roles to pods via EC2 metadata (and I accept all the possible risks). Currently, on 3.11, I use kiam/kube2iam, but with 4.1 it is not possible anymore. What is your recommendation for cases where a pod needs access to metadata? I can't use direct assume-role from pods, as it involves application code changes.
Probably simplest is to skip those iptables rules for privileged pods.
OK, so it looks like kube2iam and kiam work by having a hostNetwork pod that acts as a proxy, and then adding iptables rules so that when pods try to connect to the metadata IP, they reach the hostNetwork pod instead. The first half would still work under OpenShift (we don't block hostNetwork pods from accessing the metadata IP) but the second half does not. In OpenShift, the "client" pods would have to explicitly connect to the proxy pod rather than trying to connect to the metadata IP. I don't know if there are any kube2iam/kiam alternatives that work that way?
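For context, the interception described above is typically implemented with a nat-table rule on each node, along these lines (a hedged sketch based on kube2iam's documented approach; the interface name, proxy port, and `NODE_IP` variable are illustrative, not taken from this thread):

```shell
# kube2iam-style interception (illustrative): pod traffic addressed to the
# EC2 metadata IP is DNATed to the hostNetwork proxy pod on the node.
# Under OpenShift's restriction, ordinary pods' connections to the
# metadata IP are killed before a rule like this can take effect.
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 --dport 80 \
  --in-interface eth0 \
  --jump DNAT --to-destination "${NODE_IP}:8181"
```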
Making a pod …
Connecting to 169.254.169.254 is the default behavior of the AWS SDK for automatically obtaining credentials. Connecting explicitly to a proxy pod requires application code changes (or patching the AWS SDK library).
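To illustrate the point: with no other credentials configured, the SDK's default chain falls back to a plain HTTP request against the metadata endpoint. Recent AWS SDKs and the CLI also expose environment variables to disable or redirect that lookup (a sketch; the proxy URL is a placeholder, and support for these variables depends on the SDK version):

```shell
# Roughly what the SDK credential chain does as a last resort:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Opt out of the metadata fallback entirely (recent SDKs/CLI):
export AWS_EC2_METADATA_DISABLED=true

# Or redirect the lookup to an explicit proxy endpoint (newer SDKs;
# hostname here is a hypothetical example):
export AWS_EC2_METADATA_SERVICE_ENDPOINT=http://metadata-proxy.example:8181/
```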
Totally agree here.
Ideally I'd prefer to be able to turn off this behavior with some configuration option (turned on by default), and then I could rely on kiam to restrict access to metadata (i.e. which pods can ask for which metadata paths).
Perhaps we could rewrite requests from ordinary pods to another IP address that is normally blackholed, but configure kube2iam to listen on that IP? |
The only solution left on the table for kiam/kube2iam on OpenShift 4 is the one recommended by kube2iam (https://github.com/jtblin/kube2iam): hack the deployment configurations of all pods that need access to the AWS API to use the … Furthermore, the … Furthermore, this firewall rule is blocking installation of linkerd... As a result of all of this I tried migrating to OVNKubernetes, but this firewall rule still breaks things. I'm quite confused why the Amazon API IP address would get special attention here. What does the SDN do with other addresses in the link-local space? Whatever it does, it should probably just do the same thing for the Amazon API rather than raise the firewall. Frankly this PR looks like malice and unfortunately renders OpenShift useless for applications that require access to AWS cloud services outside of the dreaded service catalog, AFAICT.
Agree.
@paleg @cgwalters did you get through this?
For the same reason kube2iam exists: because a pod that has unrestricted access to the metadata service can leverage that to get node-level privileges. We are aware that the current situation is broken. We eventually want it to be possible to use kube2iam. Unfortunately, fixing this is not currently high on the priority list.
Sure, I could imagine that we add a pod security policy for this; I think that'd be something good to take to a Kubernetes upstream enhancement. In the meantime though, I think the workaround is making kube2iam privileged (at least when deployed in OpenShift).
Unfortunately, the problem isn't with kube2iam, it's with everything else; when any other pod attempts to reach the metadata service, we kill that connection before kube2iam has a chance to intercept it.
Block access to the EC2 metadata service from (non-hostNetwork) pods.
The EC2 metadata IP is actually in the IPv4 link-local address range, so technically it's incorrect for us to be forwarding packets addressed to that IP range out of the SDN anyway. So in addition to the iptables rules, this also adjusts the OVS flows to drop all packets destined for that range. (The iptables rules are still needed to block output via egress-router and multus interfaces.)
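A rough sketch of the two layers described above (the bridge name `br0`, the flow table, and the priority are illustrative, not the exact rules from this PR):

```shell
# Layer 1 (iptables): reject forwarded traffic to the metadata IP. This
# covers paths that bypass OVS, such as egress-router and multus interfaces.
iptables --append FORWARD --destination 169.254.169.254 --jump REJECT

# Layer 2 (OVS): drop anything destined for the IPv4 link-local range,
# which should never be forwarded off the local network in the first place.
ovs-ofctl add-flow br0 "table=0, priority=300, ip, nw_dst=169.254.0.0/16, actions=drop"
```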
So, there are three parts: