[DNM] OKD: Combined test of PR #7484 and PR #7634 #7641
Conversation
OCP requires the DNS records api.<cluster_domain> and *.apps.<cluster_domain> to be externally resolvable (<cluster_domain> is <cluster_name>.<base_domain>). For SNO this list also includes the DNS record api-int.<cluster_domain>. However, OCP does not enforce ownership of all subdomains of <cluster_domain>. For example, it is allowed to host a disconnected image registry at <registry_hostname>.<cluster_domain>, and OCP must be able to resolve it using the user-supplied external DNS resolver.

PR openshift#7516 changed the systemd-resolved config of the bootstrap node / rendezvous host to associate the complete <cluster_domain> with the DNS server at 127.0.0.1, where CoreDNS is supposed to be listening. When a disconnected image registry is used for cluster installation, the registry is hosted at <registry_hostname>.<cluster_domain>, and the bootstrap node / rendezvous host does not retrieve its domain from the DHCP server, then the registry's DNS name cannot be resolved: the CoreDNS image has to be pulled from the disconnected registry, so the registry must be reachable before CoreDNS is running, yet the split-DNS mechanism of systemd-resolved sends DNS requests for <registry_hostname>.<cluster_domain> to 127.0.0.1, where CoreDNS is expected to be running but is not yet.

When the bootstrap node / rendezvous host retrieves its domain <cluster_domain> from a DHCP server (e.g. via dnsmasq's '--domain' option), systemd-resolved associates <cluster_domain> not only with 127.0.0.1 but also with the physical network interface, causing DNS requests for <registry_hostname>.<cluster_domain> to be sent to 127.0.0.1 as well as to the external DNS resolver.

This patch mitigates the DNS issue for the other network setups. It changes the systemd-resolved config to forward DNS requests to CoreDNS only for the domains that CoreDNS can resolve:

* api.<cluster_domain>
* api-int.<cluster_domain>
* apps.<cluster_domain>

DNS requests for <registry_hostname>.<cluster_domain> and other subdomains of <cluster_domain> are sent to the external DNS resolver.

Fixes openshift#7516
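A quick way to see the resulting split-DNS routing on a running rendezvous host is resolvectl. The commands below are a hypothetical verification sketch, not part of the patch: the <...> placeholders come from the description above, test.apps.<cluster_domain> stands for an arbitrary ingress name, and the expected answers assume CoreDNS is already serving the cluster records on 127.0.0.1.

```sh
# Hypothetical verification sketch (not part of the patch): inspect the
# configured DNS routing and check which resolver answers which name.
resolvectl status                        # global and per-link DNS servers plus routing domains

# Expected to be answered by CoreDNS on 127.0.0.1:
resolvectl query api.<cluster_domain>
resolvectl query test.apps.<cluster_domain>

# Expected to be answered by the external, user-supplied resolver,
# e.g. for a disconnected image registry:
resolvectl query <registry_hostname>.<cluster_domain>
```

If the routing is set up as described above, only the api, api-int and apps names should ever reach CoreDNS.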
…d Installer

OKD/FCOS uses FCOS as its bootimage, i.e. when booting cluster nodes for the first time during installation. FCOS does not provide tools such as the OpenShift Client (oc) or crio.service, which the Agent-based Installer uses on the rendezvous host, e.g. to launch the bootstrap control plane. RHCOS and SCOS include these tools, but FCOS has to pivot the root fs [1] to okd-machine-os [2] first in order to make them available.

Pivoting uses 'rpm-ostree rebase', but on its first boot the rendezvous host runs from an FCOS Live ISO, where the root fs and /sysroot are mounted read-only. Thus 'rpm-ostree rebase' fails, the necessary tools never become available, and the setup stalls. Until rpm-ostree implements support for rebasing Live ISOs [3], this patch adapts the workaround for SNO installations [4] to also support the Agent-based Installer. In particular, the Go conditional {{- if .BootstrapInPlace }}, which is used to mark a SNO install, has been replaced with a shell if-else that checks at runtime whether the system is running from a Live ISO (see the sketch below).

Most code in the OpenShift ecosystem is written with RHCOS in mind and often assumes that tools like oc or crio.service are available. These assumptions can be satisfied by applying this workaround to all Live ISO boots. It does not remove functionality or overwrite configuration files in /etc, so side effects should be minimal.

The Go conditional {{- if .BootstrapInPlace }} in release-image-pivot.service has been dropped completely. This service is used in OKD only, so OCP will not be impacted at all. The 'Before=' option does not cause systemd to fail if a service does not exist, so if bootkube.service or kubelet.service are absent, the option simply has no effect. When they do exist, it must always be ensured that release-image-pivot.service is started first, because it might reboot the system or, in the Live ISO case, change /usr. Hence it is safe to drop the Go conditional and ask systemd to always launch release-image-pivot.service before bootkube.service and kubelet.service.

[0] https://github.com/openshift/installer/blob/master/data/data/bootstrap/files/usr/local/bin/bootkube.sh.template
[1] https://github.com/openshift/installer/blob/master/data/data/bootstrap/files/usr/local/bin/bootstrap-pivot.sh.template
[2] https://github.com/openshift/okd-machine-os
[3] coreos/rpm-ostree#4547
[4] openshift#7445
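The shell snippet below is a minimal sketch of such a runtime check, not the actual bootstrap-pivot.sh.template: the /run/ostree-live marker file (created by CoreOS live boots) and the ${OKD_MACHINE_OS_PULLSPEC} variable are assumptions used for illustration, and the real template may detect the Live ISO differently.

```sh
#!/bin/bash
# Minimal sketch of a runtime Live ISO check replacing the Go template
# conditional {{- if .BootstrapInPlace }}. Assumes /run/ostree-live as the
# live-boot marker; the rebase target is a placeholder.
if [ -f /run/ostree-live ]; then
    # Live ISO boot: /sysroot is read-only, so 'rpm-ostree rebase' cannot be
    # used; apply the SNO-style workaround to make oc, crio.service etc.
    # available without rebooting.
    echo "Live ISO detected, applying in-place workaround instead of rebasing"
else
    # Installed system: pivot the root fs to okd-machine-os as usual.
    rpm-ostree rebase "${OKD_MACHINE_OS_PULLSPEC}"
fi
```

Because the decision is made at runtime rather than at template-rendering time, the same generated script can serve SNO, Agent-based Installer and regular installs.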
/test okd-e2e-aws-ovn
/test okd-e2e-aws-ovn
/test okd-e2e-agent-compact-ipv4 (past job didn't get a resource from ofcir)
(need release image to test) /test okd-e2e-agent-compact-ipv4
(need release image to test again) /test okd-e2e-agent-compact-ipv4
/test okd-e2e-agent-compact-ipv4
/test okd-e2e-aws-ovn
/test okd-e2e-agent-compact-ipv4
/test okd-e2e-agent-compact-ipv4
/test okd-e2e-agent-compact-ipv4
Let's also try the HA flavor: /test okd-e2e-agent-ha-dualstack
/test okd-e2e-agent-ha-dualstack
/test okd-e2e-agent-compact-ipv4
/test okd-e2e-agent-compact-ipv4
/test okd-e2e-agent-compact-ipv4
The latest agent compact failures were due to the etcd operator being degraded; not sure it's related to the current patch. /test okd-e2e-agent-compact-ipv4
All the okd agent jobs are green. /approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: andfasano. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@JM1: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Code works as expected. Closing PR and continuing in original PRs.
TEST. DO NOT MERGE
Combined test of PR #7484 and PR #7634 for OKD/FCOS.