Kind in Google Cloud Build #451

Closed
costinm opened this issue Apr 23, 2019 · 28 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. kind/support Categorizes issue or PR as a support question.

Comments

@costinm

costinm commented Apr 23, 2019

What happened:

Trying to run a build/test using kind in GCB (Google Cloud Build).

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

steps:
- name: 'istionightly/kind:latest'
  args: ["-c", 'make test']
  env:
  - 'GOPATH=/workspace'
  timeout: 600s
  entrypoint: /bin/bash

(the image has kind installed and attempts to run kind create cluster in the GCB environment)

Note that it works fine in the local environment, using cloud-build-local.

Error:

kind create cluster --name test --wait 60s  --image istionightly/kind:latest
Creating cluster "test" ...
 • Ensuring node image (istionightly/kind:latest) 🖼  ...
 ✓ Ensuring node image (istionightly/kind:latest) 🖼
 • Preparing nodes 📦  ...
 ✓ Preparing nodes 📦
 • Creating kubeadm config 📜  ...
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
 ✗ Starting control-plane 🕹️
Error: failed to create cluster: failed to init node with kubeadm: exit status 1

(I'm using a modified base image, with some extra tools added)

Anything else we need to know?:
With debug enabled:

kind create cluster --loglevel debug --name test --wait 60s  --image istionightly/kind:latest
time="22:59:16" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}}]"
Creating cluster "test" ...
 • Ensuring node image (istionightly/kind:latest) 🖼  ...
time="22:59:16" level=debug msg="Running: /usr/bin/docker [docker inspect --type=image istionightly/kind:latest]"
time="22:59:16" level=info msg="Image: istionightly/kind:latest present locally"
 ✓ Ensuring node image (istionightly/kind:latest) 🖼
 • Preparing nodes 📦  ...
time="22:59:16" level=debug msg="Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}']"
time="22:59:16" level=debug msg="Running: /usr/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-control-plane --name test-control-plane --label io.k8s.sigs.kind.cluster=test --label io.k8s.sigs.kind.role=control-plane --entrypoint=/usr/local/bin/entrypoint --expose 39939 -p 127.0.0.1:39939:6443 istionightly/kind:latest /sbin/init]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane rm -f /etc/machine-id]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane systemd-machine-id-setup]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}']"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane mount -o remount,ro /sys]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane mount --make-shared /]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane mount --make-shared /run]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane mount --make-shared /var/lib/docker]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker kill -s SIGUSR1 test-control-plane]"
time="22:59:17" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane systemctl is-active docker]"
time="22:59:18" level=debug msg="Running: /usr/bin/docker [docker exec --privileged test-control-plane /bin/bash -c find /kind/images -name *.tar -print0 | xargs -0 -n 1 -P $(nproc) docker load -i]"
time="22:59:32" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane cat /kind/version]"
 ✓ Preparing nodes 📦
time="22:59:32" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}} --filter label=io.k8s.sigs.kind.cluster=test]"
time="22:59:32" level=debug msg="Running: /usr/bin/docker [docker inspect -f {{index .Config.Labels \"io.k8s.sigs.kind.role\"}} test-control-plane]"
 • Creating kubeadm config 📜  ...
time="22:59:32" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane cat /kind/version]"
time="22:59:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane mkdir -p /kind]"
time="22:59:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -i test-control-plane cp /dev/stdin /kind/kubeadm.conf]"
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
time="22:59:33" level=debug msg="Running: /usr/bin/docker [docker exec --privileged -t test-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"

time="23:01:32" level=debug msg="I0423 22:59:33.372393     826 initconfiguration.go:186] loading configuration from \"/kind/kubeadm.conf\"\nW0423 22:59:33.373244     826 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubelet.config.k8s.io\", Version:\"v1beta1\", Kind:\"KubeletConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0423 22:59:33.374054     826 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0423 22:59:33.374613     826 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"ClusterConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0423 22:59:33.375273     826 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"InitConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0423 22:59:33.375741     826 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"JoinConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration\nI0423 22:59:33.375793     826 initconfiguration.go:105] detected and using CRI socket: /var/run/dockershim.sock\nI0423 22:59:33.375910     826 interface.go:384] Looking for default routes with IPv4 addresses\nI0423 22:59:33.375924     826 interface.go:389] Default route transits interface \"eth0\"\nI0423 22:59:33.376120     826 interface.go:196] Interface eth0 is up\nI0423 22:59:33.376166     826 interface.go:244] Interface \"eth0\" has 1 addresses :[172.17.0.4/16].\nI0423 22:59:33.376185     826 interface.go:211] Checking addr  172.17.0.4/16.\nI0423 22:59:33.376195     826 interface.go:218] IP found 172.17.0.4\nI0423 22:59:33.376205     826 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface \"eth0\".\nI0423 22:59:33.376216     826 interface.go:395] Found active IP 172.17.0.4 \nI0423 22:59:33.376413     826 feature_gate.go:226] feature gates: &{map[]}\n[init] Using Kubernetes version: v1.14.0\n[preflight] Running pre-flight checks\nI0423 22:59:33.376566     826 checks.go:581] validating Kubernetes and kubeadm version\nI0423 22:59:33.376589     826 checks.go:172] validating if the firewall is enabled and active\nI0423 22:59:33.385377     826 checks.go:209] validating availability of port 6443\nI0423 22:59:33.385532     826 checks.go:209] validating availability of port 10251\nI0423 22:59:33.385568     826 checks.go:209] validating availability of port 10252\nI0423 22:59:33.385602     826 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml\nI0423 22:59:33.385621     826 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml\nI0423 22:59:33.385629     826 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml\nI0423 22:59:33.385642     826 checks.go:292] validating the existence of file /etc/kubernetes/manifests/etcd.yaml\nI0423 22:59:33.385653     826 checks.go:439] validating if the connectivity type is via proxy 
or direct\nI0423 22:59:33.385705     826 checks.go:475] validating http connectivity to first IP address in the CIDR\nI0423 22:59:33.385724     826 checks.go:475] validating http connectivity to first IP address in the CIDR\nI0423 22:59:33.385734     826 checks.go:105] validating the container runtime\nI0423 22:59:33.452672     826 checks.go:131] validating if the service is enabled and active\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/\nI0423 22:59:33.533756     826 checks.go:341] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables\n\t[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist\nI0423 22:59:33.533797     826 checks.go:341] validating the contents of file /proc/sys/net/ipv4/ip_forward\nI0423 22:59:33.533839     826 checks.go:653] validating whether swap is enabled or not\nI0423 22:59:33.533883     826 checks.go:382] validating the presence of executable ip\nI0423 22:59:33.533936     826 checks.go:382] validating the presence of executable iptables\nI0423 22:59:33.534151     826 checks.go:382] validating the presence of executable mount\nI0423 22:59:33.534174     826 checks.go:382] validating the presence of executable nsenter\nI0423 22:59:33.534232     826 checks.go:382] validating the presence of executable ebtables\nI0423 22:59:33.534345     826 checks.go:382] validating the presence of executable ethtool\nI0423 22:59:33.534422     826 checks.go:382] validating the presence of executable socat\nI0423 22:59:33.534464     826 checks.go:382] validating the presence of executable tc\nI0423 22:59:33.534536     826 checks.go:382] validating the presence of executable touch\nI0423 22:59:33.534564     826 checks.go:524] running all checks\n[preflight] The system verification failed. 
Printing the output from the verification:\n\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m4.15.0-1029-gcp\x1b[0m\n\x1b[0;37mDOCKER_VERSION\x1b[0m: \x1b[0;32m18.06.3-ce\x1b[0m\n\x1b[0;37mDOCKER_GRAPH_DRIVER\x1b[0m: \x1b[0;32moverlay2\x1b[0m\n\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m\n\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m\n\x1b[0;37mCGROUPS_CPUACCT\x1b[0m: \x1b[0;32menabled\x1b[0m\n\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m\n\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m\n\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m\n\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1029-gcp\\n\", err: exit status 1\nI0423 22:59:33.548406     826 checks.go:412] checking whether the given node name is reachable using net.LookupHost\nI0423 22:59:33.548608     826 checks.go:622] validating kubelet version\nI0423 22:59:33.598265     826 checks.go:131] validating if the service is enabled and active\nI0423 22:59:33.608417     826 checks.go:209] validating availability of port 10250\nI0423 22:59:33.608480     826 checks.go:209] validating availability of port 2379\nI0423 22:59:33.608505     826 checks.go:209] validating availability of port 2380\nI0423 22:59:33.608539     826 checks.go:254] validating the existence and emptiness of directory /var/lib/etcd\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\nI0423 22:59:33.666366     826 checks.go:842] image exists: k8s.gcr.io/kube-apiserver:v1.14.0\nI0423 22:59:33.724435     826 checks.go:842] image exists: k8s.gcr.io/kube-controller-manager:v1.14.0\nI0423 22:59:33.780319     826 checks.go:842] image exists: k8s.gcr.io/kube-scheduler:v1.14.0\nI0423 22:59:33.836406     826 checks.go:842] image exists: k8s.gcr.io/kube-proxy:v1.14.0\nI0423 22:59:33.893782     826 checks.go:842] image exists: k8s.gcr.io/pause:3.1\nI0423 22:59:33.952835     826 checks.go:842] image exists: k8s.gcr.io/etcd:3.3.10\nI0423 22:59:34.009642     826 checks.go:842] image exists: k8s.gcr.io/coredns:1.3.1\nI0423 22:59:34.009702     826 kubelet.go:61] Stopping the kubelet\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\nI0423 22:59:34.087512     826 kubelet.go:79] Starting the kubelet\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\nI0423 22:59:34.146267     826 certs.go:110] creating a new certificate authority for ca\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [test-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\nI0423 22:59:34.554167     826 certs.go:110] creating a new certificate authority for etcd-ca\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed 
for DNS names [test-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [test-control-plane localhost] and IPs [172.17.0.4 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\nI0423 22:59:35.861107     826 certs.go:110] creating a new certificate authority for front-proxy-ca\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\nI0423 22:59:36.959017     826 certs.go:69] creating a new public/private key files for signing service account users\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\nI0423 22:59:37.225866     826 kubeconfig.go:94] creating kubeconfig file for admin.conf\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\nI0423 22:59:37.320474     826 kubeconfig.go:94] creating kubeconfig file for kubelet.conf\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\nI0423 22:59:37.419582     826 kubeconfig.go:94] creating kubeconfig file for controller-manager.conf\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\nI0423 22:59:37.755830     826 kubeconfig.go:94] creating kubeconfig file for scheduler.conf\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\nI0423 22:59:37.952656     826 manifests.go:114] [control-plane] getting StaticPodSpecs\nI0423 22:59:37.959010     826 manifests.go:130] [control-plane] wrote static Pod manifest for component \"kube-apiserver\" to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\nI0423 22:59:37.959055     826 manifests.go:114] [control-plane] getting StaticPodSpecs\nI0423 22:59:37.960536     826 manifests.go:130] [control-plane] wrote static Pod manifest for component \"kube-controller-manager\" to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\nI0423 22:59:37.960574     826 manifests.go:114] [control-plane] getting StaticPodSpecs\nI0423 22:59:37.961215     826 manifests.go:130] [control-plane] wrote static Pod manifest for component \"kube-scheduler\" to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\nI0423 22:59:37.961909     826 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to \"/etc/kubernetes/manifests/etcd.yaml\"\nI0423 22:59:37.961932     826 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy\nI0423 22:59:37.962823     826 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\nI0423 22:59:37.963902     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:38.464591     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:38.964545     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:39.464760     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:39.964629     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:40.464810     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:40.964648     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:41.464720     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:41.964582     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:42.464644     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:42.964602     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:43.464650     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:43.964719     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:44.464620     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:44.964440     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:45.464569     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:45.964534     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:46.464579     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:46.964558     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:47.464574     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:47.964544     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:48.464583     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:48.964560     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:49.464668     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:49.964568     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:50.464675     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:50.964608     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:51.464690     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:51.964568     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 
22:59:52.464746     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:52.964593     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:53.464701     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:53.964610     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:54.464683     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:54.964582     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:55.464700     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:55.964580     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:56.464719     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:56.964629     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:57.464640     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:57.964549     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:58.464695     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:58.964576     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:59.464763     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 22:59:59.964522     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:00.464704     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:00.964568     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:01.464604     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:01.964583     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:02.464641     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:02.964603     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:03.464586     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:03.964569     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:04.464562     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:04.964598     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:05.464587     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:05.964527     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:06.464660     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:06.964630     826 
round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:07.464663     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:07.964610     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:08.464681     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:08.964617     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  in 0 milliseconds\nI0423 23:00:09.464596     826 round_trippers.go:438] GET https://172.17.0.4:6443/healthz?timeout=32s  
....

Environment:

  • kind version: (use kind version): 0.3.0-alpha
  • Kubernetes version: (use kubectl version): v1.14.1
  • Docker version: (use docker info): ???
  • OS (e.g. from /etc/os-release): COS ?
@costinm costinm added the kind/bug Categorizes issue or PR as related to a bug. label Apr 23, 2019
@costinm
Author

costinm commented Apr 23, 2019

Last part of the log:

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
 ✗ Starting control-plane 🕹️
time="23:01:32" level=debug msg="Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}} --filter label=io.k8s.sigs.kind.cluster=test]"
time="23:01:33" level=debug msg="Running: /usr/bin/docker [docker rm -f -v test-control-plane]"
Error: failed to create cluster: failed to init node with kubeadm: exit status 1

@BenTheElder
Member

Er, are you running istionightly/kind:latest inside istionightly/kind:latest?

@BenTheElder
Member

Please do not use the kind node image to execute kind itself; we make no guarantees about the image contents other than that kind can boot a cluster with them. For example, it may not have docker in the near future (containerd performs better).

I cannot reproduce this as posted currently, since cloudbuild needs to be pointed at a directory with the makefile.

@BenTheElder
Member

/assign

@BenTheElder BenTheElder added the kind/support Categorizes issue or PR as a support question. label Apr 26, 2019
@costinm
Author

costinm commented Apr 29, 2019

On the first question: the problem we have is that kind only exposes the apiserver port from the cluster container.
We run all kinds of tests, and the current infra with minikube --root had access to the pods and nodes from the container running the tests.
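
(For reference, newer kind releases can expose additional node ports to the host via extraPortMappings in the cluster config; a rough sketch, with placeholder port numbers:)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # placeholder NodePort inside the node container
    hostPort: 30080        # exposed on the machine running kind
    protocol: TCP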

We will eventually move the tests to run in a pod - but that'll take some time.

Short term, we're replicating what we had with minikube --root by adding the tools and golang to the kind node image. AFAIK creating a custom kind image is supported (there is a CLI, etc.). I didn't use the CLI; instead I extended the image directly. I can change the build process to use the kind build image CLI as well.

Re using the kind image to execute kind: we're not doing this (in most cases), but since we added all the tools needed to the kind node image, it was easier to test cloudbuilder with that container image. If we make it work, we'll break it into separate containers, as it should be (i.e. one container that just includes the kind binary and triggers the creation of the kind cluster).

It would be great if kind actually provided such a container we could reuse :-)

@costinm
Author

costinm commented Apr 29, 2019

On a possibly related note: I am also exploring BuildKite, and kind worked great on machine executors, but appears to fail with the K8S BuildKite agent.

The errors I see:

[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration
W0426 20:32:54.208727    1117 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0426 20:32:54.209827    1117 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0426 20:32:54.247860    1117 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0426 20:32:54.250364    1117 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
I0426 20:32:54.250896    1117 initconfiguration.go:105] detected and using CRI socket: /var/run/dockershim.sock
I0426 20:32:54.251148    1117 interface.go:384] Looking for default routes with IPv4 addresses
I0426 20:32:54.251182    1117 interface.go:389] Default route transits interface "eth0"
I0426 20:32:54.251427    1117 interface.go:196] Interface eth0 is up
I0426 20:32:54.251522    1117 interface.go:244] Interface "eth0" has 2 addresses :[169.254.123.2/24 fe80::42:a9ff:fefe:7b02/64].
I0426 20:32:54.251557    1117 interface.go:211] Checking addr  169.254.123.2/24.
I0426 20:32:54.251570    1117 interface.go:221] Non-global unicast address found 169.254.123.2
I0426 20:32:54.251594    1117 interface.go:211] Checking addr  fe80::42:a9ff:fefe:7b02/64.
I0426 20:32:54.251604    1117 interface.go:224] fe80::42:a9ff:fefe:7b02 is not an IPv4 address
I0426 20:32:54.251617    1117 interface.go:384] Looking for default routes with IPv6 addresses
I0426 20:32:54.251628    1117 interface.go:400] No active IP found by looking at default routes
unable to select an IP from default routes.
 ✗ Starting control-plane 🕹️
DEBU[20:32:54] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=test]
DEBU[20:32:54] Running: /usr/bin/docker [docker rm -f -v test-control-plane]
Error: failed to create cluster: failed to init node with kubeadm: exit status 1

@costinm
Author

costinm commented Apr 29, 2019

This is on GKE, I believe with the COS base image. Might be related if Cloud Build has a similar config.

@BenTheElder
Member

for the k8s buildkite agent see #303

@BenTheElder
Member

also possibly #426 which has a WIP PR out

@BenTheElder
Member

#426 is fixed now; will need to follow up more on GCB specifically. If the buildkite agent pod is customizable, then #303 is likely the fix for that one.

@costinm
Author

costinm commented May 1, 2019

#303 helped - but now we're hitting a different problem:

I0501 21:20:24.609614      94 interface.go:196] Interface eth0 is up
I0501 21:20:24.609706      94 interface.go:244] Interface "eth0" has 2 addresses :[169.254.123.3/24 fe80::42:a9ff:fefe:7b03/64].
I0501 21:20:24.609745      94 interface.go:211] Checking addr  169.254.123.3/24.
I0501 21:20:24.609768      94 interface.go:221] Non-global unicast address found 169.254.123.3
I0501 21:20:24.609785      94 interface.go:211] Checking addr  fe80::42:a9ff:fefe:7b03/64.
I0501 21:20:24.609800      94 interface.go:224] fe80::42:a9ff:fefe:7b03 is not an IPv4 address
I0501 21:20:24.609814      94 interface.go:384] Looking for default routes with IPv6 addresses
I0501 21:20:24.609826      94 interface.go:400] No active IP found by looking at default routes
unable to select an IP from default routes. 
 ✗ Starting control-plane 🕹️

@costinm
Author

costinm commented May 1, 2019

This is running in istio-testing GCP project, on weekly10 GKE cluster - let me know if you need access.

@aojea
Contributor

aojea commented May 2, 2019

@costinm the problem is the network address: it turns out that kubernetes only considers global unicast addresses (https://golang.org/pkg/net/#IP.IsGlobalUnicast), which means it will not consider 169.254.123.3/24 valid. Are you able to use a different network range?
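
The check is easy to reproduce with a few lines of Go (a standalone sketch of the same stdlib call, not kind's actual code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Link-local unicast (169.254.0.0/16) fails the global-unicast test,
	// while private RFC1918 addresses like 10.x and 172.17.x pass it.
	for _, s := range []string{"169.254.123.3", "10.0.0.2", "172.17.0.4"} {
		fmt.Printf("%-15s IsGlobalUnicast=%v\n", s, net.ParseIP(s).IsGlobalUnicast())
	}
}

Running it prints false for 169.254.123.3 and true for the other two, which matches why kubeadm rejects the buildkite pod's address.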

@costinm
Author

costinm commented May 2, 2019

I can try creating a new cluster - but I suspect a lot of people use GKE with the default values. And possibly other k8s providers that don't allow customization.

@costinm
Author

costinm commented May 2, 2019

@aojea Is there any reason for not supporting link-local addresses in k8s? It seems a very legitimate use of link-local addresses, and having an apiserver visible only on link-local seems quite useful as well. Is it a limitation of the apiserver, kubeadm, or kind?

@costinm
Author

costinm commented May 2, 2019

Never mind, sorry - the pod has a valid 10.x IP; it's not the cluster. It seems to be related to the buildkite agent and how the docker container's address is allocated. I'll debug further.

@costinm
Author

costinm commented May 2, 2019

Confirmed that when running buildkite in a k8s environment and mounting docker from the node (at least on GKE), I don't get any global IP allocated, only the link-local ones.
It may be possible to 'docker network create' and assign IPs in the docker commands - but I suspect this would need to be implemented in kind when it calls docker? I don't think I have access to the docker config on the node (and I wouldn't risk touching it).
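
For reference, the manual version of that idea would look roughly like this (network name and subnet are made up; kind itself doesn't do this here):

# create a user-defined bridge with an explicit, globally-routable subnet
docker network create --driver bridge --subnet 172.28.0.0/16 kind-net

# attach a container to it so it gets a global unicast address
docker run --rm --net kind-net busybox ip addr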

@aojea
Contributor

aojea commented May 2, 2019

@costinm sorry but I can't follow you 😅 - who configured the docker network?
Can you share docker network inspect bridge and cat /etc/docker/daemon.json?

@BenTheElder
Member

We may very well start creating a specific docker network; we've started hammering out the details.

NOTE: I would not advise running this by bind-mounting the host docker socket from a kubernetes node - you're likely to leak resources. IIRC GKE leaves this around mainly for users that are running builds.

plan to investigate GCB soon.

@costinm
Author

costinm commented May 9, 2019

@aojea - BuildKite is running in K8S - in my case GKE - mounting the node docker socket.

I suspect docker is not configured to allocate IPs - since the pods are actually getting IPs via CNI.

@costinm
Author

costinm commented May 9, 2019

@BenTheElder agreed - I was just using the out-of-the-box buildkite agent.
Happy to modify it - but I'm not sure what the proper way to run kind from a k8s pod is.

@BenTheElder
Member

prow.k8s.io runs kind in a k8s pod configured along these lines:
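
(A minimal sketch of such a pod, reconstructed rather than copied from prow; the image name and wrapper script are hypothetical:)

apiVersion: v1
kind: Pod
metadata:
  name: kind-ci-runner
spec:
  containers:
  - name: runner
    image: example.com/dind-with-kind:latest  # hypothetical docker-in-docker image with kind installed
    command: ["/usr/local/bin/runner.sh"]     # hypothetical wrapper: starts dockerd, then runs kind create cluster + tests
    securityContext:
      privileged: true                        # nested docker / kind node containers need this
    volumeMounts:
    - name: modules
      mountPath: /lib/modules
      readOnly: true
    - name: docker-graph                      # keep the nested docker's storage on an emptyDir
      mountPath: /var/lib/docker
  volumes:
  - name: modules
    hostPath:
      path: /lib/modules
  - name: docker-graph
    emptyDir: {}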

Depending on the cluster, we may also need to configure the pod DNS so as to only include upstream DNS, and not the in-cluster DNS, as the nested pods may not be able to reach it.
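
On a plain pod spec, that DNS tweak would look something like this (the nameserver value is a placeholder):

spec:
  dnsPolicy: "None"        # skip the in-cluster DNS config entirely
  dnsConfig:
    nameservers:
    - 8.8.8.8              # placeholder upstream resolver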

Which environment is preferable? I can try to tackle one of these; longer term @munnerz and I were discussing setting up a repo, possibly outside the kubernetes org, just to enable all of the CI options and ensure kind works, but that probably won't happen until after KubeCon.

@costinm
Author

costinm commented May 15, 2019

Sorry for the delay - from my perspective GCB is the most important one, since it's the hardest and possibly the most secure.

Circle seems fine, multiple projects got it working.
For BuildKite - it works on VMs, so not super urgent.

@BenTheElder
Member

BenTheElder commented May 15, 2019 via email

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 13, 2019
@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2019
@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 12, 2019
@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


@apstndb

apstndb commented Oct 16, 2019

FYI, I have found a way to run kind in GCB.
You need to use docker run --net=host so the port-mapped ports are accessible.
https://github.com/apstndb/kind-in-gcb

Will this be improved when kind supports docker networks?
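
(A minimal sketch of that setup as a cloudbuild.yaml step; the runner image is a placeholder for any image with kind installed, see the linked repo for the real thing:)

steps:
- name: 'gcr.io/cloud-builders/docker'
  args:
  - 'run'
  - '--net=host'                                    # host networking so the 127.0.0.1 port mapping is reachable
  - '-v'
  - '/var/run/docker.sock:/var/run/docker.sock'     # reuse the build's docker daemon
  - 'example/kind-runner:latest'                    # hypothetical image containing kind + kubectl
  - 'kind'
  - 'create'
  - 'cluster'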

@BenTheElder
Member

kind-ci/examples#17

@BenTheElder
Member

To elaborate on the above comment: this is supported and documented.
https://kind.sigs.k8s.io/docs/user/resources/#using-kind-in-ci
We have a contrib repo (linked above) which contains documented CI setups like this (where possible, also actively running against the stable kind release). We've also improved on running in environments like this more generically.

@BenTheElder BenTheElder removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 24, 2021