
Cannot create RBAC clusterrolebinding: kubeadm init fails in the addon/kube-proxy phase #2883

Closed
Hacksign opened this issue May 29, 2023 · 4 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

Hacksign commented May 29, 2023

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"archive", BuildDate:"2023-04-15T11:34:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version): not involved
  • Cloud provider or hardware configuration: Archlinux on self hosted ESXi virtual machine
  • OS (e.g. from /etc/os-release): Archlinux
  • Kernel (e.g. uname -a): Linux ArchLinux-00-X64 6.3.4-arch1-1 #1 SMP PREEMPT_DYNAMIC Wed, 24 May 2023 17:44:00 +0000 x86_64 GNU/Linux
  • Container runtime (CRI) (e.g. containerd, cri-o): containerd
  • Container networking plugin (CNI) (e.g. Calico, Cilium):
  • Others: I set up the kubeconfig with the command: echo export KUBECONFIG=/etc/kubernetes/admin.conf > ~/.bashrc
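As an aside, note that the single `>` in the command above truncates `~/.bashrc`, replacing whatever was in it. A minimal sketch of the append form, assuming the same admin.conf path:

```shell
# ">>" appends to ~/.bashrc instead of overwriting it, so existing
# shell configuration survives the change.
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
```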

What happened?

kubeadm init failed while setting up the addon/kube-proxy phase.

What did you expect to happen?

kubeadm init to complete successfully.

How to reproduce it (as minimally and precisely as possible)?

Just execute the kubeadm init command shown in the logs below.

Anything else we need to know?

logs of setting up the cluster:

[root@ArchLinux-00-X64 manifests]# kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --v=5
I0530 00:57:52.161300   22567 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0530 00:57:52.161607   22567 interface.go:432] Looking for default routes with IPv4 addresses
I0530 00:57:52.161627   22567 interface.go:437] Default route transits interface "ens34"
I0530 00:57:52.161819   22567 interface.go:209] Interface ens34 is up
I0530 00:57:52.161923   22567 interface.go:257] Interface "ens34" has 2 addresses :[172.16.1.110/24 fe80::20c:29ff:fe24:bcc0/64].
I0530 00:57:52.161953   22567 interface.go:224] Checking addr  172.16.1.110/24.
I0530 00:57:52.161989   22567 interface.go:231] IP found 172.16.1.110
I0530 00:57:52.162017   22567 interface.go:263] Found valid IPv4 address 172.16.1.110 for interface "ens34".
I0530 00:57:52.162031   22567 interface.go:443] Found active IP 172.16.1.110 
I0530 00:57:52.162059   22567 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0530 00:57:52.171784   22567 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
I0530 00:57:53.234816   22567 checks.go:563] validating Kubernetes and kubeadm version
I0530 00:57:53.234900   22567 checks.go:168] validating if the firewall is enabled and active
I0530 00:57:53.254640   22567 checks.go:203] validating availability of port 6443
I0530 00:57:53.254901   22567 checks.go:203] validating availability of port 10259
I0530 00:57:53.254985   22567 checks.go:203] validating availability of port 10257
I0530 00:57:53.255070   22567 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0530 00:57:53.255120   22567 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0530 00:57:53.255163   22567 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0530 00:57:53.255212   22567 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0530 00:57:53.255229   22567 checks.go:430] validating if the connectivity type is via proxy or direct
I0530 00:57:53.255274   22567 checks.go:469] validating http connectivity to first IP address in the CIDR
I0530 00:57:53.255311   22567 checks.go:469] validating http connectivity to first IP address in the CIDR
I0530 00:57:53.255335   22567 checks.go:104] validating the container runtime
I0530 00:57:53.299339   22567 checks.go:639] validating whether swap is enabled or not
I0530 00:57:53.299426   22567 checks.go:370] validating the presence of executable crictl
I0530 00:57:53.299463   22567 checks.go:370] validating the presence of executable conntrack
I0530 00:57:53.299506   22567 checks.go:370] validating the presence of executable ip
I0530 00:57:53.299548   22567 checks.go:370] validating the presence of executable iptables
I0530 00:57:53.299597   22567 checks.go:370] validating the presence of executable mount
I0530 00:57:53.299642   22567 checks.go:370] validating the presence of executable nsenter
I0530 00:57:53.299686   22567 checks.go:370] validating the presence of executable ebtables
I0530 00:57:53.299740   22567 checks.go:370] validating the presence of executable ethtool
I0530 00:57:53.299781   22567 checks.go:370] validating the presence of executable socat
I0530 00:57:53.299828   22567 checks.go:370] validating the presence of executable tc
I0530 00:57:53.299871   22567 checks.go:370] validating the presence of executable touch
I0530 00:57:53.299914   22567 checks.go:516] running all checks
I0530 00:57:53.328742   22567 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0530 00:57:53.329565   22567 checks.go:605] validating kubelet version
I0530 00:57:53.457595   22567 checks.go:130] validating if the "kubelet" service is enabled and active
I0530 00:57:53.485110   22567 checks.go:203] validating availability of port 10250
I0530 00:57:53.485201   22567 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0530 00:57:53.485272   22567 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0530 00:57:53.485323   22567 checks.go:203] validating availability of port 2379
I0530 00:57:53.485382   22567 checks.go:203] validating availability of port 2380
I0530 00:57:53.485441   22567 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0530 00:57:53.485673   22567 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0530 00:57:53.485728   22567 checks.go:828] using image pull policy: IfNotPresent
I0530 00:57:53.521711   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
I0530 00:57:53.555800   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
I0530 00:57:53.590395   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
I0530 00:57:53.632197   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.27.2
W0530 00:57:53.668269   22567 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
I0530 00:57:53.704126   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
I0530 00:57:53.739285   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.7-0
I0530 00:57:53.778313   22567 checks.go:846] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0530 00:57:53.778405   22567 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0530 00:57:54.226689   22567 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [archlinux-00-x64 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.1.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0530 00:57:55.049144   22567 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0530 00:57:55.250119   22567 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0530 00:57:55.873437   22567 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0530 00:57:56.179816   22567 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [archlinux-00-x64 localhost] and IPs [172.16.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [archlinux-00-x64 localhost] and IPs [172.16.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0530 00:57:57.544405   22567 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0530 00:57:57.863466   22567 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0530 00:57:58.804130   22567 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0530 00:57:58.937176   22567 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0530 00:57:59.977224   22567 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0530 00:58:00.138221   22567 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0530 00:58:00.733507   22567 manifests.go:99] [control-plane] getting StaticPodSpecs
I0530 00:58:00.734032   22567 certs.go:519] validating certificate period for CA certificate
I0530 00:58:00.734190   22567 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0530 00:58:00.734205   22567 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0530 00:58:00.734216   22567 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0530 00:58:00.734226   22567 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0530 00:58:00.739521   22567 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0530 00:58:00.739634   22567 manifests.go:99] [control-plane] getting StaticPodSpecs
I0530 00:58:00.740133   22567 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0530 00:58:00.740159   22567 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0530 00:58:00.740246   22567 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0530 00:58:00.740321   22567 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0530 00:58:00.740394   22567 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0530 00:58:00.740517   22567 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0530 00:58:00.742323   22567 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0530 00:58:00.742497   22567 manifests.go:99] [control-plane] getting StaticPodSpecs
I0530 00:58:00.742952   22567 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0530 00:58:00.744049   22567 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0530 00:58:00.744825   22567 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0530 00:58:00.746247   22567 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0530 00:58:00.746266   22567 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.501827 seconds
I0530 00:58:10.251943   22567 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0530 00:58:10.366596   22567 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0530 00:58:10.441799   22567 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I0530 00:58:10.441831   22567 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "archlinux-00-x64" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node archlinux-00-x64 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node archlinux-00-x64 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: jgbrqu.xn5sdl1g8tcao3jb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0530 00:58:11.727099   22567 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0530 00:58:11.728304   22567 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0530 00:58:11.728983   22567 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0530 00:58:11.738782   22567 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0530 00:58:11.822181   22567 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0530 00:58:11.824250   22567 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
rpc error: code = Unknown desc = malformed header: missing HTTP content-type
unable to create RBAC clusterrolebinding
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrUpdateClusterRoleBinding
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient/idempotency.go:254
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy.printOrCreateKubeProxyObjects
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy/proxy.go:139
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy.EnsureProxyAddon
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy/proxy.go:63
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runKubeProxyAddon
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/addons.go:121
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/lib/go/src/runtime/proc.go:250
runtime.goexit
	/usr/lib/go/src/runtime/asm_amd64.s:1598
error execution phase addon/kube-proxy
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	/build/kubernetes/src/kubernetes-1.27.1/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/lib/go/src/runtime/proc.go:250
runtime.goexit
	/usr/lib/go/src/runtime/asm_amd64.s:1598

neolit123 commented May 29, 2023

rpc error: code = Unknown desc = malformed header: missing HTTP content-type

this is not a kubeadm bug. if you search this repository you will find similar issues, e.g.:
#2701 (comment)
#2767 (comment)

/support
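In the linked reports, a common machine-level cause of this error is an HTTP(S) proxy intercepting traffic to the API server and mangling its responses. A diagnostic sketch, assuming this report's service/pod CIDRs and node address (adjust for your machine):

```shell
# Proxy variables intercepting API-server traffic are a known cause of
# "malformed header: missing HTTP content-type" in similar reports.
env | grep -i proxy || echo "no proxy variables set"

# If a proxy is configured, exclude cluster and node addresses from it.
# The addresses below mirror --service-cidr, --pod-network-cidr, and the
# node IP from this report:
export NO_PROXY="127.0.0.1,localhost,10.96.0.0/16,10.244.0.0/16,172.16.1.110"

# Then confirm the API server answers with a well-formed HTTPS response:
#   curl -k https://127.0.0.1:6443/healthz
```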

@github-actions

Hello, @Hacksign 🤖 👋

You seem to have troubles using Kubernetes and kubeadm.
Note that our issue trackers should not be used for providing support to users.
There are special channels for that purpose.

Please see:

@github-actions github-actions bot added the kind/support Categorizes issue or PR as a support question. label May 29, 2023
@neolit123 (Member)

the problem is on the machine.
if you find the exact reason, you can help us document it here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
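Once the machine-level cause is fixed, the failed phase can be retried on its own rather than redoing the whole init, since kubeadm supports re-running individual phases. A sketch mirroring the flags from the original run in this report:

```shell
# Re-run only the kube-proxy addon phase against the existing control plane.
# The flags below mirror the reporter's original kubeadm init invocation.
# The "command -v" guard just skips the call where kubeadm is not installed.
command -v kubeadm >/dev/null && kubeadm init phase addon kube-proxy \
  --kubeconfig=/etc/kubernetes/admin.conf \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  || echo "kubeadm not installed on this machine"
```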
