Use zstd for sdk container images #950

Merged
merged 3 commits on Jul 3, 2023

Conversation

jepio
Member

@jepio jepio commented Jun 28, 2023

Use zstd for sdk container images

This PR switches to using zstd for compressing and decompressing the SDK container images. Zstd tarballs are a bit smaller, and compression/decompression is significantly faster than gzip (even with pigz). Since zstd can also decompress gzip payloads, a fallback to fetching gzip payloads is introduced so that nightly SDKs produced before this commit can still be consumed.
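
The fallback works roughly as sketched below (a minimal illustration only, not the actual scripts change; the helper name, URL layout, and file names are made up):

    # Minimal sketch, assuming the SDK image is published as a compressed
    # "docker save" tarball at a known URL. Prefer the zstd artifact and
    # fall back to gzip for nightlies built before this change.
    fetch_and_load_sdk() {
        local base_url="$1"   # hypothetical download location
        local name="$2"       # hypothetical artifact name

        if curl -fsSL -o "${name}.tar.zst" "${base_url}/${name}.tar.zst"; then
            zstd -d -c "${name}.tar.zst" | docker load
        else
            # Older nightlies only publish gzip tarballs; zstd can also
            # decompress gzip (when built with zlib support), so the same
            # pipeline handles the fallback.
            curl -fsSL -o "${name}.tar.gz" "${base_url}/${name}.tar.gz"
            zstd -d -c "${name}.tar.gz" | docker load
        fi
    }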

How to use

Run ./run_sdk_container -t locally, or use it in CI.

Testing done

running: http://192.168.42.7:8080/job/container/job/packages_all_arches/2066/cldsv/

  • Changelog entries added in the respective changelog/ directory (user-facing change, bug fix, security fix, update)
  • Inspected CI output for image differences: /boot and /usr size, packages, listed files for any missing binaries, kernel modules, config files, etc.

/update-sdk

We currently use gzip together with pigz (parallel gzip) for importing
container images, and this is a lengthy operation (it takes multiple minutes). By
moving to zstd we gain on all fronts: zstd produces smaller files, and is
faster to compress/decompress than pigz while using fewer resources.

Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
This replaces pigz, so remove the related variables (PIGZ).

Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
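
In practice the swap amounts to replacing the pigz pipelines with zstd ones, along the lines of the sketch below (illustrative only; the image and file names are made up, and -T0 simply tells zstd to use all available cores):

    # Export side, before: parallel gzip via pigz
    docker save example/flatcar-sdk:tag | pigz -c > sdk-image.tar.gz
    # Export side, after: zstd, smaller output and faster compression
    docker save example/flatcar-sdk:tag | zstd -T0 -c > sdk-image.tar.zst

    # Import side, before and after: decompress to stdout and pipe into docker
    pigz -d -c sdk-image.tar.gz | docker load
    zstd -d -c sdk-image.tar.zst | docker load
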
@jepio jepio temporarily deployed to development June 28, 2023 14:49 — with GitHub Actions Inactive
@pothos
Member

pothos commented Jun 28, 2023

Maybe one more CI job with an SDK build, otherwise it looks good

@github-actions

github-actions bot commented Jun 28, 2023

Test report for 3648.0.0+nightly-20230627-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _harness.go:612: Cluster failed starting machines: machine __d7387667-a7ea-46c7-ac63-0955e396adb6__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.56:22: connect: ?no route to host_"
    L2: " "
    L3: "  "

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.torcx-manifest-pkgs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (2); qemu_update-arm64 (1) ❌ Failed: qemu_update-amd64 (1)

                Diagnostic output for qemu_update-amd64, run 1
    L1: " Error: _cluster.go:117: Created symlink /etc/systemd/system/locksmithd.service ??? /dev/null."
    L2: "update.go:212: Triggering update_engine"
    L3: "update.go:231: Rebooting test machine"
    L4: "update.go:234: reboot failed: machine __527ae060-a405-472a-a777-0642d5852379__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.3:22: connect: no route to host_"
    L5: " "
    L6: "  "

ok kubeadm.v1.24.14.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (4) ❌ Failed: qemu_uefi-arm64 (1, 2, 3, 5)

                Diagnostic output for qemu_uefi-arm64, run 5
    L1: " Error: _cluster.go:117: I0629 12:39:31.988938    1593 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0629 12:39:44.263251    1758 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.3?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 7.003750 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: 019pc8.4n95yj2cfx877djk"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.3:6443 --token 019pc8.4n95yj2cfx877djk _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:99929632a76cba29f04827b1352f6223a4de8fa867b68771b3856522603da655 "
    L78: "cluster.go:117: namespace/tigera-operator created"
    L79: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:117: serviceaccount/tigera-operator created"
    L101: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: deployment.apps/tigera-operator created"
    L104: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:117: installation.operator.tigera.io/default created"
    L107: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.calico.cgroupv1.base/nginx_deployment (93.40s)"
    L110: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _cluster.go:117: I0630 11:58:05.940629    1592 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0630 11:58:19.439676    1759 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.3?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 7.503601 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: rbvz3z.px2tuiv2r6jlhu7g"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.3:6443 --token rbvz3z.px2tuiv2r6jlhu7g _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:ad8ac05cc6b19bcbd1603dd60bc79d85cb65c84404895a51963cc139f69976f3 "
    L78: "cluster.go:117: namespace/tigera-operator created"
    L79: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:117: serviceaccount/tigera-operator created"
    L101: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: deployment.apps/tigera-operator created"
    L104: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:117: installation.operator.tigera.io/default created"
    L107: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.calico.cgroupv1.base/nginx_deployment (93.45s)"
    L110: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _cluster.go:117: I0630 11:54:09.543330    1606 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0630 11:54:22.916006    1769 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.3?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 7.004211 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: k5sk1e.x5jh6r0md6ypu764"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.3:6443 --token k5sk1e.x5jh6r0md6ypu764 _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:76df32d8faed96191420df8d9d469d169a410758799d4181f25e64cf77b13e77 "
    L78: "cluster.go:117: namespace/tigera-operator created"
    L79: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:117: serviceaccount/tigera-operator created"
    L101: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: deployment.apps/tigera-operator created"
    L104: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:117: installation.operator.tigera.io/default created"
    L107: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.calico.cgroupv1.base/nginx_deployment (93.40s)"
    L110: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:117: I0630 11:40:52.686494    1613 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L10: "cluster.go:117: I0630 11:41:03.805753    1781 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L11: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L12: "cluster.go:117: [preflight] Running pre-flight checks"
    L13: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.8?6]"
    L20: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:117: [apiclient] All control plane components are healthy after 7.504381 seconds"
    L43: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:117: [bootstrap-token] Using token: jhabjt.ncp46anb4sespvmu"
    L49: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:117: "
    L59: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:117: "
    L61: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:117: "
    L63: "cluster.go:117:   mkdir -p $HOME/.kube"
    L64: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:117: "
    L67: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:117: "
    L69: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:117: "
    L71: "cluster.go:117: You should now deploy a pod network to the cluster."
    L72: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:117: "
    L75: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:117: "
    L77: "cluster.go:117: kubeadm join 10.0.0.86:6443 --token jhabjt.ncp46anb4sespvmu _"
    L78: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:79f7db07a2458dcdd17f55d8583b175c8a87b69defb139c380fad43ef66e9ac5 "
    L79: "cluster.go:117: namespace/tigera-operator created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L100: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L101: "cluster.go:117: serviceaccount/tigera-operator created"
    L102: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:117: deployment.apps/tigera-operator created"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L106: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L107: "cluster.go:117: installation.operator.tigera.io/default created"
    L108: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L109: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L110: "--- FAIL: kubeadm.v1.24.14.calico.cgroupv1.base/nginx_deployment (93.57s)"
    L111: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L112: " "

ok kubeadm.v1.24.14.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (4); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1, 2, 3)

                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _cluster.go:117: I0629 12:31:18.259031    1605 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0629 12:31:29.511384    1771 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.3?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 5.001691 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: d5pvhr.b6k6vy0903hw7qbp"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.3:6443 --token d5pvhr.b6k6vy0903hw7qbp _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:c15a520e1dca356084f8f454a58ea7948376be4d93f695029eb72f470bd1af63 "
    L78: "cluster.go:117: i  Using Cilium version 1.12.1"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.1..."
    L89: "cluster.go:117: i Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: Deployment       cilium-operator    "
    L104: "cluster.go:117: DaemonSet        cilium             "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.cilium.cgroupv1.base/node_readiness (91.73s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1_"
    L111: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _cluster.go:117: I0629 12:27:22.046763    1593 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0629 12:27:33.196257    1762 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.8?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 5.001986 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: bg5utj.yqtw41povuo3555x"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.8:6443 --token bg5utj.yqtw41povuo3555x _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:66c7da683a112822302b886e4fbde8c462ee85ecdee69fc0d9f190b12c6aefe7 "
    L78: "cluster.go:117: i  Using Cilium version 1.12.1"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.1..."
    L89: "cluster.go:117: i Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: Deployment       cilium-operator    "
    L104: "cluster.go:117: DaemonSet        cilium             "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.cilium.cgroupv1.base/node_readiness (91.72s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1_"
    L111: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:117: I0629 12:20:44.338770    1600 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0629 12:20:56.226253    1769 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?29]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 5.002277 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: okd0nx.dyr56ln1fz6kymh1"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.129:6443 --token okd0nx.dyr56ln1fz6kymh1 _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:e0dca71ffca9db0d3211bbb8e47ac0d45448083e8bbe6c8621fd193c92d5f21b "
    L78: "cluster.go:117: i  Using Cilium version 1.12.1"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.1..."
    L89: "cluster.go:117: i Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: DaemonSet        cilium             "
    L104: "cluster.go:117: Deployment       cilium-operator    "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.cilium.cgroupv1.base/node_readiness (91.76s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1"
    L111: "--- FAIL: kubeadm.v1.24.14.cilium.cgroupv1.base/IPSec_encryption (65.36s)"
    L112: "cluster.go:117: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
    L113: "cluster.go:130: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output ?[33m    /????_"
    L114: "?[36m /?????[33m___/?[32m????_?[0m    Cilium:         ?[31m1 errors?[0m, ?[33m1 warnings?[0m"
    L115: "?[36m ___?[31m/????_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L116: "?[32m /?????[31m___/?[35m????_?[0m    Hubble:         ?[36mdisabled?[0m"
    L117: "?[32m ___?[34m/????_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L118: "?[34m    ___/"
    L119: "?[0m"
    L120: "Deployment        cilium-operator    Desired: 1, Ready: ?[32m1/1?[0m, Available: ?[32m1/1?[0m"
    L121: "DaemonSet         cilium             Desired: 2, Ready: ?[33m1/2?[0m, Available: ?[33m1/2?[0m, Unavailable: ?[31m1/2?[0m"
    L122: "Containers:       cilium             Running: ?[32m1?[0m, Pending: ?[32m1?[0m"
    L123: "cilium-operator    Running: ?[32m1?[0m"
    L124: "Cluster Pods:     3/3 managed by Cilium"
    L125: "Image versions    cilium             quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b: 2"
    L126: "cilium-operator    quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1: 1"
    L127: "Errors:           cilium             cilium          1 pods of DaemonSet cilium are not ready"
    L128: "Warnings:         cilium             cilium-npdgp    pod is pending, status Process exited with status 1_"
    L129: " "
    L130: "  "

ok kubeadm.v1.24.14.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:117: I0629 12:17:13.031152    1490 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.26"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.6"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.6"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.6"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.6"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:117: I0629 12:17:24.410836    1654 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.26"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.26.6"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.9?9]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 4.501279 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: llhz6z.hoyosoeeye919x9y"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.99:6443 --token llhz6z.hoyosoeeye919x9y _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:3a84122c84965bd9dafd5b4f3d7efcec4b52281080c2802497a7b05587d0c911 "
    L78: "cluster.go:117: i  Using Cilium version 1.12.5"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.5..."
    L89: "cluster.go:117: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: Deployment       cilium-operator    "
    L104: "cluster.go:117: DaemonSet        cilium             "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.26.5.cilium.base/node_readiness (91.88s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1"
    L111: "--- FAIL: kubeadm.v1.26.5.cilium.base/IPSec_encryption (64.85s)"
    L112: "cluster.go:117: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
    L113: "cluster.go:130: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output ?[33m    /????_"
    L114: "?[36m /?????[33m___/?[32m????_?[0m    Cilium:         ?[31m1 errors?[0m, ?[33m1 warnings?[0m"
    L115: "?[36m ___?[31m/????_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L116: "?[32m /?????[31m___/?[35m????_?[0m    Hubble:         ?[36mdisabled?[0m"
    L117: "?[32m ___?[34m/????_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L118: "?[34m    ___/"
    L119: "?[0m"
    L120: "Deployment        cilium-operator    Desired: 1, Ready: ?[32m1/1?[0m, Available: ?[32m1/1?[0m"
    L121: "DaemonSet         cilium             Desired: 2, Ready: ?[33m1/2?[0m, Available: ?[33m1/2?[0m, Unavailable: ?[31m1/2?[0m"
    L122: "Containers:       cilium             Running: ?[32m1?[0m, Pending: ?[32m1?[0m"
    L123: "cilium-operator    Running: ?[32m1?[0m"
    L124: "Cluster Pods:     3/3 managed by Cilium"
    L125: "Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 2"
    L126: "cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
    L127: "Errors:           cilium             cilium          1 pods of DaemonSet cilium are not ready"
    L128: "Warnings:         cilium             cilium-mbtdc    pod is pending, status Process exited with status 1_"
    L129: " "
    L130: "  "

ok kubeadm.v1.26.5.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:117: W0629 12:13:30.002075    1498 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.3, falling back to the nearest etcd version (3.5.7-0)"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.3"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.3"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.3"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.3"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L9: "cluster.go:117: [init] Using Kubernetes version: v1.27.3"
    L10: "cluster.go:117: [preflight] Running pre-flight checks"
    L11: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L12: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L13: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L14: "cluster.go:117: W0629 12:13:40.538939    1663 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.6__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?4]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 4.503254 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: ay3op2.ziqsto1lmrllah3t"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.64:6443 --token ay3op2.ziqsto1lmrllah3t _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:de65dc25e0dd8aaed08fbfda0229886f284e8173ff72c10d4b729bbbec530fa0 "
    L78: "cluster.go:117: i  Using Cilium version 1.12.5"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.5..."
    L89: "cluster.go:117: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: Deployment       cilium-operator    "
    L104: "cluster.go:117: DaemonSet        cilium             "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.27.2.cilium.base/node_readiness (92.01s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1_"
    L111: " "
    L112: "  "

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.custom-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _harness.go:612: Cluster failed starting machines: machine __e98168e8-f5c2-427c-b62d-06a15bb3bda2__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.66:22: connect: ?no route to host_"
    L2: " "
    L3: "  "

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok torcx.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

Zstd is now used for container image compression, so make sure it is part of
our runners.

Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
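
For illustration, here is a minimal sketch of what compressing and re-importing a container image tarball with multithreaded zstd can look like. The image and file names are placeholders and this is not the project's actual tooling.

```bash
# Sketch only: image/file names are placeholders, not the real SDK artifacts.
# Compress a saved container image using all available cores (-T0).
docker save flatcar-sdk-all:latest | zstd -T0 -o flatcar-sdk-all.tar.zst

# Decompress and load the image again on a runner that has zstd installed.
zstd -dc flatcar-sdk-all.tar.zst | docker load
```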
@jepio jepio temporarily deployed to development June 29, 2023 08:07 — with GitHub Actions Inactive
@jepio
Copy link
Member Author

jepio commented Jun 29, 2023

Pushed a fix to install zstd on our GitHub Actions runners.

The sdk job is running here: http://192.168.42.7:8080/job/container/job/sdk/886/cldsv/.
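
As a hypothetical example, a runner-preparation step of this shape would be enough on an Ubuntu-based GitHub Actions runner; the exact workflow change here is an assumption, not the actual fix:

```bash
# Hypothetical runner setup; assumes an Ubuntu runner with apt available.
sudo apt-get update
sudo apt-get install -y zstd
zstd --version   # sanity check that the binary is on PATH
```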

@jepio
Copy link
Member Author

jepio commented Jun 29, 2023

/update-sdk

@jepio jepio temporarily deployed to development June 29, 2023 08:26 — with GitHub Actions Inactive
@jepio jepio closed this Jun 29, 2023
@jepio jepio reopened this Jun 29, 2023
@jepio jepio temporarily deployed to development June 29, 2023 08:28 — with GitHub Actions Inactive
@jepio jepio temporarily deployed to development June 30, 2023 11:29 — with GitHub Actions Inactive
@jepio jepio requested a review from a team July 3, 2023 08:59
@jepio jepio merged commit ff09287 into main Jul 3, 2023
@jepio jepio deleted the zstd-container-images branch July 3, 2023 10:21
dongsupark added a commit that referenced this pull request Aug 10, 2023
Since #950 was merged,
the tarball files `flatcar-{packages,sdk}-*.tar.zst` have been created
with mode 0600 instead of 0644. As a result, the files were uploaded to
bincache with mode 0600, but `copy-to-origin.sh`, which in turn runs
rsync from bincache to the origin server, could then not read the
tarballs.

To fix that, chmod the tarballs from 0600 to 0644 so that rsync can
read them during the release process.

This happens because zstd sets the mode of its output file to 0600
when it writes through a temporary file, to avoid a race condition.

See also facebook/zstd#1644.
dongsupark added a commit that referenced this pull request Aug 11, 2023
Since #950 was merged,
the tarball files `flatcar-{packages,sdk}-*.tar.zst` have been created
with mode 0600 instead of 0644. As a result, the files were uploaded to
bincache with mode 0600, but `copy-to-origin.sh`, which in turn runs
rsync from bincache to the origin server, could then not read the
tarballs.

To fix that, chmod the tarballs from 0600 to 0644 so that rsync can
read them during the release process.

This happens because zstd sets the mode of its output file to 0600
when it writes through a temporary file, to avoid a race condition.

See also facebook/zstd#1644,
facebook/zstd#3432.
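
A minimal sketch of the kind of fix described above; the directory and tarball names are placeholders rather than the actual release scripts:

```bash
# Sketch only: paths and names are placeholders.
# zstd writes its output with mode 0600 when going through a temporary file,
# so restore world-readable permissions before rsync copies the tarball on.
tar -C "${BUILD_DIR:?}" -cf - . | zstd -T0 -o flatcar-packages-amd64.tar.zst
chmod 0644 flatcar-packages-amd64.tar.zst
```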