
Require network API objects to have IPv4 addresses #16807

Merged: 1 commit, Oct 15, 2017

Conversation

danwinship
Contributor

https://bugzilla.redhat.com/show_bug.cgi?id=1500664 points out problems that occur if you specify an IPv6 EgressIP. In fact, we shouldn't be allowing IPv6 addresses for any network API objects:

  • ClusterNetwork.Network / ClusterNetwork.ClusterNetworks[].CIDR must be IPv4 because we only support an IPv4 SDN right now, for multiple reasons throughout kube and the SDN code. (Among other things, the HostSubnet allocator implicitly assumes IPv4, so while it's currently possible to start up a master with an IPv6 clusterNetworkCIDR value, it won't ever succeed in allocating any HostSubnets, so no nodes will ever successfully start up.)
  • ClusterNetwork.ServiceNetwork must be IPv4 because kube-proxy only supports all-IPv4 or all-IPv6, and service rules must be able to refer to pod IPs; since those are IPv4, service IPs must be IPv4 too.
  • HostSubnet.HostIP must be IPv4 because the node's IP address is used in kube-proxy rules, which per the above must be IPv4 only.
  • HostSubnet.Subnet must be IPv4 because it's a subset of the cluster network, which is IPv4
  • HostSubnet.EgressIPs must be IPv4 because they are used for NATting in iptables rules.
  • NetNamespace.EgressIPs must be IPv4 because they have to match some hostsubnet's EgressIPs.
  • EgressNetworkPolicyPeer.CIDRSelector ... well, ... actually I guess these don't need to be IPv4; it's pointless to specify an IPv6 value here since pods don't have IPv6 connectivity, but nothing will break if you do.

So this PR requires all of the above except EgressNetworkPolicyPeer.CIDRSelector to be IPv4.
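
For reference, the new checks all reduce to net.ParseIP / net.ParseCIDR followed by a To4() test. A minimal sketch of that pattern (helper and package names here are illustrative, not the PR's actual function names):

package validation

import (
	"fmt"
	"net"
)

// validateIPv4 rejects anything that is not a plain IPv4 address.
// To4() returns nil for IPv6 addresses, which is the core of the new checks.
func validateIPv4(ipString string) (net.IP, error) {
	ip := net.ParseIP(ipString)
	if ip == nil {
		return nil, fmt.Errorf("invalid IP address")
	}
	if ip.To4() == nil {
		return nil, fmt.Errorf("must be an IPv4 address")
	}
	return ip, nil
}

// validateIPv4CIDR applies the same rule to CIDR-valued fields such as
// ClusterNetwork.Network, ClusterNetwork.ServiceNetwork, or HostSubnet.Subnet.
func validateIPv4CIDR(cidrString string) (*net.IPNet, error) {
	_, cidr, err := net.ParseCIDR(cidrString)
	if err != nil {
		return nil, err
	}
	if cidr.IP.To4() == nil {
		return nil, fmt.Errorf("must be an IPv4 network")
	}
	return cidr, nil
}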

@danwinship added the component/networking, kind/bug, and sig/networking labels on Oct 11, 2017
@openshift-merge-robot added the approved label on Oct 11, 2017
@openshift-ci-robot added the size/L label on Oct 11, 2017
@smarterclayton
Contributor

Will tightening validations lead to breakage of a cluster that would otherwise limp along? If a user has invalid values in these fields, what impact will it have when they upgrade?

@danwinship
Contributor Author

If ClusterNetwork is IPv6, you can start the master, but you won't be able to successfully start any nodes. (Even if you created the IPv6 HostSubnets by hand, the nodes would just fail a few steps later, trying to add IPv6 addresses to an IPv4 iptables rule.)

It might currently be possible to create a cluster with an IPv4 ClusterNetwork but an IPv6 ServiceNetwork, but only if you didn't actually use services, since pods wouldn't be able to connect to IPv6 service addresses, and services wouldn't be able to point to IPv4 pod addresses. And anyway, the alleged IPv6 support in kube-proxy is a lie: looking more closely, there are bunches of IPv4-isms, like using "%s/32" to refer to IP addresses (meaning any such rule trying to refer to only a single IPv6 service address would end up referring to the entire service network, plus a large unrelated swath of the IPv6 internet; see the illustration after this comment).

It might be possible to limp along with an IPv6 HostSubnet.HostIP. Some things wouldn't work but it wouldn't necessarily cause the node to exit... So I can revert that check.

HostSubnet.Subnet is by definition a subnet of ClusterNetwork, so that has to be IPv4. And EgressIPs is new so we can do whatever we want with that.
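
To make the "%s/32" point above concrete, here is a small standalone illustration (not part of the PR, addresses are arbitrary examples): a /32 prefix pins down a single IPv4 address, but applied to an IPv6 address it leaves 96 bits unspecified, so a rule built that way matches a /32 chunk of the IPv6 space rather than one service IP.

package main

import (
	"fmt"
	"net"
)

func main() {
	// IPv4: /32 fixes all 32 bits, i.e. exactly one address.
	_, v4, _ := net.ParseCIDR("172.30.0.1/32")
	ones, bits := v4.Mask.Size()
	fmt.Printf("%v -> %d of %d bits fixed\n", v4, ones, bits) // 172.30.0.1/32 -> 32 of 32 bits fixed

	// IPv6: the same "/32" suffix fixes only 32 of 128 bits, i.e. 2^96 addresses.
	_, v6, _ := net.ParseCIDR("fd02::1/32")
	ones, bits = v6.Mask.Size()
	fmt.Printf("%v -> %d of %d bits fixed\n", v6, ones, bits) // fd02::/32 -> 32 of 128 bits fixed
}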

return nil, fmt.Errorf("invalid IP address: %s", ip)
}
if bytes.To4() == nil {
return nil, fmt.Errorf("must be an IPv4 address: %s", ip)

Don't include the value in the error message; it's already included in the validation output.
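
A sketch of what that would look like (illustrative; the surrounding field-validation machinery is assumed to echo the offending value on its own):

// The field validation output already reports the offending value,
// so the message itself can stay generic.
if bytes.To4() == nil {
	return nil, fmt.Errorf("must be an IPv4 address")
}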

	return nil, err
}
if ipnet.IP.To4() == nil {
	return nil, fmt.Errorf("must be an IPv4 network: %s", cidr)
}

Same

@smarterclayton
Contributor

/lgtm

@openshift-ci-robot added the lgtm label on Oct 13, 2017
@openshift-merge-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: danwinship, smarterclayton

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@pravisankar

Fixes #16660

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

2 similar comments

@openshift-merge-robot
Contributor

Automatic merge from submit-queue.

Labels
approved, component/networking, kind/bug, lgtm, sig/networking, size/L