
[Feature Request] Option to set registry mirror separately #20006

Open
ygxxii opened this issue Nov 26, 2024 · 0 comments
Labels
l/zh-CN Issues in or relating to Chinese

Comments


ygxxii commented Nov 26, 2024

Feature request: add an option to set registry mirrors separately, instead of only registry.cn-hangzhou.aliyuncs.com/google_containers, which is not stable.


Commands needed to reproduce the issue

My server is hosted in CN, so I tried:

minikube start --driver=docker --image-mirror-country=cn

Full output of the failed command

minikube hangs at:

Pulling base image v0.0.45 ...

Output of the minikube logs command

==> Audit <==
|---------|--------------------------------|----------|---------|---------|---------------------|----------|
| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
|---------|--------------------------------|----------|---------|---------|---------------------|----------|
| start   | --driver=docker                | minikube | vagrant | v1.34.0 | 26 Nov 24 20:33 CST |          |
|         | --image-mirror-country=cn      |          |         |         |                     |          |
| start   | --driver=docker                | minikube | vagrant | v1.34.0 | 26 Nov 24 20:37 CST |          |
|         | --image-mirror-country=cn      |          |         |         |                     |          |
|---------|--------------------------------|----------|---------|---------|---------------------|----------|


==> Last Start <==
Log file created at: 2024/11/26 20:37:54
Running on machine: playground-docker
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1126 20:37:54.217236   20840 out.go:345] Setting OutFile to fd 1 ...
I1126 20:37:54.217348   20840 out.go:397] isatty.IsTerminal(1) = true
I1126 20:37:54.217351   20840 out.go:358] Setting ErrFile to fd 2...
I1126 20:37:54.217355   20840 out.go:397] isatty.IsTerminal(2) = true
I1126 20:37:54.217525   20840 root.go:338] Updating PATH: /home/vagrant/.minikube/bin
W1126 20:37:54.217618   20840 root.go:314] Error reading config file at /home/vagrant/.minikube/config/config.json: open /home/vagrant/.minikube/config/config.json: no such file or directory
I1126 20:37:54.217785   20840 out.go:352] Setting JSON to false
I1126 20:37:54.218546   20840 start.go:129] hostinfo: {"hostname":"playground-docker","uptime":340968,"bootTime":1732283706,"procs":197,"os":"linux","platform":"rocky","platformFamily":"rhel","platformVersion":"8.10","kernelVersion":"4.18.0-553.el8_10.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9639e832-2ffd-4c89-b3ca-436ef90b96fd"}
I1126 20:37:54.218592   20840 start.go:139] virtualization:
I1126 20:37:54.219681   20840 out.go:177] 😄  minikube v1.34.0 on Rocky 8.10
W1126 20:37:54.220467   20840 preload.go:293] Failed to list preload files: open /home/vagrant/.minikube/cache/preloaded-tarball: no such file or directory
I1126 20:37:54.220490   20840 notify.go:220] Checking for updates...
I1126 20:37:54.220836   20840 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
E1126 20:37:54.220879   20840 start.go:812] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I1126 20:37:54.221001   20840 driver.go:394] Setting default libvirt URI to qemu:///system
E1126 20:37:54.221078   20840 start.go:812] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I1126 20:37:54.249908   20840 docker.go:123] docker version: linux-26.1.3:Docker Engine - Community
I1126 20:37:54.250087   20840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1126 20:37:54.300413   20840 info.go:266] docker info: {ID:12d52db9-1b03-40af-88a2-f3c01b35e438 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2024-11-26 20:37:54.29247975 +0800 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.18.0-553.el8_10.x86_64 OperatingSystem:Rocky Linux 8.10 (Green Obsidian) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://docker.1panel.live/ https://dockerpull.com/ https://docker.m.daocloud.io/] Secure:true Official:true}} Mirrors:[https://docker.1panel.live/ https://dockerpull.com/ https://docker.m.daocloud.io/]} NCPU:2 MemTotal:3843768320 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http://172.16.13.1:6152 HTTPSProxy:http://172.16.13.1:6152 NoProxy:127.0.0.0/8 Name:playground-docker Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:true Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
I1126 20:37:54.300506   20840 docker.go:318] overlay module found
I1126 20:37:54.301283   20840 out.go:177] ✨  Using the docker driver based on existing profile
I1126 20:37:54.301927   20840 start.go:297] selected driver: docker
I1126 20:37:54.301931   20840 start.go:901] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1126 20:37:54.302040   20840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1126 20:37:54.302113   20840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1126 20:37:54.338798   20840 info.go:266] docker info: {ID:12d52db9-1b03-40af-88a2-f3c01b35e438 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2024-11-26 20:37:54.330830837 +0800 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.18.0-553.el8_10.x86_64 OperatingSystem:Rocky Linux 8.10 (Green Obsidian) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://docker.1panel.live/ https://dockerpull.com/ https://docker.m.daocloud.io/] Secure:true Official:true}} Mirrors:[https://docker.1panel.live/ https://dockerpull.com/ https://docker.m.daocloud.io/]} NCPU:2 MemTotal:3843768320 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http://172.16.13.1:6152 HTTPSProxy:http://172.16.13.1:6152 NoProxy:127.0.0.0/8 Name:playground-docker Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:true Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
I1126 20:37:54.339101   20840 cni.go:84] Creating CNI manager for ""
I1126 20:37:54.339110   20840 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1126 20:37:54.339142   20840 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vagrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1126 20:37:54.339757   20840 out.go:177] 👍  Starting "minikube" primary control-plane node in "minikube" cluster
I1126 20:37:54.340459   20840 cache.go:121] Beginning downloading kic base image for docker with docker
I1126 20:37:54.341071   20840 out.go:177] 🚜  Pulling base image v0.0.45 ...
I1126 20:37:54.341745   20840 profile.go:143] Saving config to /home/vagrant/.minikube/profiles/minikube/config.json ...
I1126 20:37:54.342114   20840 cache.go:107] acquiring lock: {Name:mk470cc156d9db1dbf1364ad91d3bfd6a8e7aec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342281   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 exists
I1126 20:37:54.342287   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5" took 177.124µs
I1126 20:37:54.342310   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v5 succeeded
I1126 20:37:54.342318   20840 image.go:79] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
I1126 20:37:54.342426   20840 cache.go:107] acquiring lock: {Name:mk4a8cdd2a3fa3d42b8b72b3461aebc9a1c94c4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342459   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.31.0 exists
I1126 20:37:54.342463   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.31.0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.31.0" took 39.911µs
I1126 20:37:54.342467   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.31.0 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.31.0 succeeded
I1126 20:37:54.342475   20840 cache.go:107] acquiring lock: {Name:mkd9fc4b3a73cf37752d14c6c1f3c1dc5f76886b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342494   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.31.0 exists
I1126 20:37:54.342498   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.31.0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.31.0" took 23.915µs
I1126 20:37:54.342501   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.31.0 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.31.0 succeeded
I1126 20:37:54.342507   20840 cache.go:107] acquiring lock: {Name:mk5c9979fde9de5593da0192be278abff3659d36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342524   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.31.0 exists
I1126 20:37:54.342527   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.31.0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.31.0" took 21.694µs
I1126 20:37:54.342531   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.31.0 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.31.0 succeeded
I1126 20:37:54.342536   20840 cache.go:107] acquiring lock: {Name:mkd5d4f9f6d156420127c702d4d94f7b603a1b47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342590   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.31.0 exists
I1126 20:37:54.342596   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.31.0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.31.0" took 60.564µs
I1126 20:37:54.342599   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.31.0 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.31.0 succeeded
I1126 20:37:54.342608   20840 cache.go:107] acquiring lock: {Name:mkd58808cbbb4340f8f94d3c92835f75239cafe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342627   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.15-0 exists
I1126 20:37:54.342631   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.15-0" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.15-0" took 23.599µs
I1126 20:37:54.342634   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.15-0 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.15-0 succeeded
I1126 20:37:54.342641   20840 cache.go:107] acquiring lock: {Name:mk28c0e5de21326ea1fdecbc9eb9afa28ec2a313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342659   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.10 exists
I1126 20:37:54.342662   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.10" took 22.723µs
I1126 20:37:54.342665   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.10 succeeded
I1126 20:37:54.342670   20840 cache.go:107] acquiring lock: {Name:mk1df024090d72f28b8f96ba5496cc5d1ffe82e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1126 20:37:54.342687   20840 cache.go:115] /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.11.1 exists
I1126 20:37:54.342690   20840 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1" -> "/home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.11.1" took 20.604µs
I1126 20:37:54.342693   20840 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1 -> /home/vagrant/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_v1.11.1 succeeded
I1126 20:37:54.342697   20840 cache.go:87] Successfully saved all images to host disk.
I1126 20:37:54.358693   20840 cache.go:149] Downloading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
I1126 20:37:54.358865   20840 image.go:63] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
I1126 20:37:54.359217   20840 image.go:148] Writing registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
I1126 20:37:54.737618   20840 cache.go:168] failed to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85, will try fallback image if available: getting remote image: GET https://registry.cn-hangzhou.aliyuncs.com/v2/google_containers/kicbase/manifests/sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85: MANIFEST_UNKNOWN: manifest unknown; map[Name:google_containers/kicbase Revision:sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85]
I1126 20:37:54.737644   20840 image.go:79] Checking for docker.io/kicbase/stable:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
I1126 20:37:54.748923   20840 cache.go:149] Downloading docker.io/kicbase/stable:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
I1126 20:37:54.749074   20840 image.go:63] Checking for docker.io/kicbase/stable:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
I1126 20:37:54.749089   20840 image.go:148] Writing docker.io/kicbase/stable:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache


🤷  The control-plane node minikube host does not exist
👉  To start a cluster, run: "minikube start"

Operating system version used

Linux xxx 4.18.0-553.el8_10.x86_64 #1 SMP Fri May 24 13:05:10 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

I found that this happens because the download of the image registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 fails.

So I tried to pull this image manually:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85

output:

Error response from daemon: manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 not found: manifest unknown: manifest unknown

Clearly, registry.cn-hangzhou.aliyuncs.com/google_containers is no longer keeping this image up to date.
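
A quick sanity check is to inspect the tag without the digest pin and compare it to the upstream image; something along these lines (assuming docker manifest inspect is available on the host):

# Does the mirror serve the tag at all? The digest minikube pins may simply not match.
docker manifest inspect registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.45

# Compare with the upstream image the pinned digest comes from:
docker manifest inspect kicbase/stable:v0.0.45

If the first command succeeds but reports a different digest, the mirror carries an older kicbase build, so any pull pinned to the newer digest is expected to fail with "manifest unknown".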


There are many registry mirrors in CN (such as DaoCloud/public-image-mirror), but the pain point is that the mirror endpoint is different for each upstream registry.

It would be nice to let users define their own registry mirror per upstream registry (see the sketch after the list below for the kind of mapping meant here). The current options do not cover this situation:

  • --image-repository: can define only one registry mirror. If I point it at a mirror of gcr.io, I can no longer pull images via a registry.k8s.io mirror; if I point it at a mirror of registry.k8s.io, I can no longer pull images via a gcr.io mirror. (Both gcr.io and registry.k8s.io are blocked in CN.)
  • --image-mirror-country=cn: the mirror is hard-coded in minikube/deploy/addons/aliyun_mirror.json (kubernetes/minikube).
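
To make the request concrete, here is a rough sketch of the kind of per-registry mapping meant above, expressed in containerd's hosts.toml format. The DaoCloud hostnames are only example mirrors and may differ, and this assumes containerd's registry config_path points at /etc/containerd/certs.d:

# Illustration only: a per-registry mirror mapping, one directory per upstream registry.
sudo mkdir -p /etc/containerd/certs.d/registry.k8s.io /etc/containerd/certs.d/gcr.io

# Pull registry.k8s.io images through an example mirror:
sudo tee /etc/containerd/certs.d/registry.k8s.io/hosts.toml >/dev/null <<'EOF'
server = "https://registry.k8s.io"

[host."https://k8s.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

# Pull gcr.io images through a different example mirror:
sudo tee /etc/containerd/certs.d/gcr.io/hosts.toml >/dev/null <<'EOF'
server = "https://gcr.io"

[host."https://gcr.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

Exposing an equivalent per-registry option in minikube, instead of a single --image-repository, is what this request is about.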
ygxxii added the l/zh-CN (Issues in or relating to Chinese) label Nov 26, 2024