
kubeadm init with KUBE_REPO_PREFIX set still uses "gcr.io/google_containers/pause-amd64" #257

Closed
4admin2root opened this issue Apr 28, 2017 · 8 comments

Comments

@4admin2root

What keywords did you search in kubeadm issues before filing this one?

KUBE_REPO_PREFIX

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT

Versions

kubeadm version (use kubeadm version):
1.6.1
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:

  • Kubernetes version (use kubectl version): 1.6.1
    Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: openstack

  • OS (e.g. from /etc/os-release): centos7.2

  • Kernel (e.g. uname -a): Linux cloud4ourself-kubetest.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

  • Others:

What happened?

I ran kubeadm init with the KUBE_REPO_PREFIX parameter set, but "gcr.io/google_containers/pause-amd64" is still being pulled.

What you expected to happen?

No gcr.io Docker images should be used; everything should come from the private registry.

How to reproduce it (as minimally and precisely as possible)?

KUBE_ETCD_IMAGE=reg-i.testpay.com/google_containers/etcd-amd64:3.0.17
KUBE_REPO_PREFIX=reg-i.testpay.com/google_containers
kubeadm init

Anything else we need to know?

Some kubelet logs follow:
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835906 8646 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835934 8646 kuberuntime_manager.go:619] createPodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}
Apr 28 16:57:10 cloud4ourself-kubetest kubelet: E0428 16:57:10.835974 8646 pod_workers.go:182] Error syncing pod d0de60f648c76b86f28f555b8c14e25d ("etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)"), skipping: failed to "CreatePodSandbox" for "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" with CreatePodSandboxError: "CreatePodSandbox for pod "etcd-cloud4ourself-kubetest.novalocal_kube-system(d0de60f648c76b86f28f555b8c14e25d)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}"
Apr 28 16:57:11 cloud4ourself-kubetest kubelet: E0428 16:57:11.342406 8646 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node 'cloud4ourself-kubetest.novalocal' not found
Apr 28 16:57:14 cloud4ourself-kubetest kubelet: E0428 16:57:14.836318 8646 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: {"message":"Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout"}

@luxas
Copy link
Member

luxas commented Apr 28, 2017

This is because the kubelet operates independently of kubeadm. You have to configure the kubelet separately on all nodes to set --pod-infra-container-image, which is the flag that controls the pause image.

kubeadm never touches this part of the system.
You could probably fix it this way, though:

cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=<your-image>"
EOF
systemctl daemon-reload
systemctl restart kubelet
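
To sanity-check that the drop-in is well-formed before restarting the kubelet, something like the following works (a sketch; the registry host is a placeholder, and DROPIN_DIR would normally be /etc/systemd/system/kubelet.service.d):

```shell
# Write the drop-in and confirm the Environment= line contains the flag.
# On a real node set DROPIN_DIR=/etc/systemd/system/kubelet.service.d and
# follow up with: systemctl daemon-reload && systemctl restart kubelet
DROPIN_DIR="${DROPIN_DIR:-./kubelet.service.d}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/20-pod-infra-image.conf" <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=reg.example.com/google_containers/pause-amd64:3.0"
EOF
grep -o 'pod-infra-container-image=[^"]*' "$DROPIN_DIR/20-pod-infra-image.conf"
```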

@pmatety

pmatety commented May 1, 2017

How can this be fixed on minikube? There is no standalone kubelet service on minikube.

Thanks

@4admin2root
Author

@luxas Thank you very much, it works:
[root@cloud4ourself-kubetest manifests]# cat /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=reg-i.testpay.com/google_containers/pause-amd64:3.0"
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

@maojiawei

I set KUBE_REPO_PREFIX and changed my kubelet service, but it still doesn't work.

@anjia0532

ping @maojiawei, from the k8s source (v1.5.x kubeadm/env.go, v1.6.x kubeadm/env.go, v1.7.x kubeadm/env.go) you can use the KUBE_REPO_PREFIX env var, but since 1.8+ KUBE_REPO_PREFIX has been removed.

@RainingNight

@anjia0532 How do you set the registry in 1.8+?

@avnish30jn

@RainingNight use imageRepository: <private-registry> in the kubeadm config file.
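
For example (a sketch; the registry host and file name are placeholders, and the API version shown is the one used around kubeadm 1.8 — later releases use newer config API versions):

```shell
# Write a minimal kubeadm config pointing at a hypothetical private registry.
cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
imageRepository: reg.example.com/google_containers
EOF
# then: kubeadm init --config kubeadm.yaml
```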

@anjia0532

@avnish30jn thanks : )


7 participants