
Kind local registry: Error response from daemon: endpoint with name kind-registry already exists in network kind #2600

Closed
iamtodor opened this issue Jan 21, 2022 · 28 comments · Fixed by #2601
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@iamtodor
Contributor

Hello,

I ran the script from the page https://kind.sigs.k8s.io/docs/user/local-registry/; here is the console output:

usr@mcbook  ~  sh kind-local-registry.sh
Creating cluster "kind-registry" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind-registry"
You can now use your cluster with:

kubectl cluster-info --context kind-kind-registry

Have a nice day! 👋
Error response from daemon: endpoint with name kind-registry already exists in network kind
configmap/local-registry-hosting created

And it gives me the error: Error response from daemon: endpoint with name kind-registry already exists in network kind. What should I do to handle it properly? I've searched Google for this kind of issue, but no luck.

I've walked through #1213, but there is no mention of my topic.

If I've missed something, or any other details need to be provided from my side, please let me know.

@iamtodor iamtodor added the kind/support Categorizes issue or PR as a support question. label Jan 21, 2022
@aojea
Contributor

aojea commented Jan 21, 2022

Most probably you already have a registry container named kind-registry running.

@iamtodor
Contributor Author

@aojea seems like you are right. However, it's quite interesting: I deleted the cluster by running kind delete cluster --name kind-registry, but it turns out the container is still running 🤔

Here is the full console output:

itodorenko@mcbook  ~  docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED              STATUS              PORTS                                                                                                                                  NAMES
c4f49cd54e35   registry:2                            "/entrypoint.sh /etc…"   About a minute ago   Up About a minute   127.0.0.1:5000->5000/tcp                                                                                                               kind-registry
14b08c33aeb3   gcr.io/k8s-minikube/kicbase:v0.0.28   "/usr/local/bin/entr…"   7 days ago           Up 2 hours          127.0.0.1:55853->22/tcp, 127.0.0.1:55854->2376/tcp, 127.0.0.1:55856->5000/tcp, 127.0.0.1:55857->8443/tcp, 127.0.0.1:55855->32443/tcp   minikube
 itodorenko@mcbook  ~  kind get clusters

No kind clusters found.
 itodorenko@mcbook  ~ 
 itodorenko@mcbook  ~  kind get clusters
No kind clusters found.
 itodorenko@mcbook  ~  docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS              PORTS                                                                                                                                  NAMES
c4f49cd54e35   registry:2                            "/entrypoint.sh /etc…"   2 minutes ago   Up About a minute   127.0.0.1:5000->5000/tcp                                                                                                               kind-registry
14b08c33aeb3   gcr.io/k8s-minikube/kicbase:v0.0.28   "/usr/local/bin/entr…"   7 days ago      Up 2 hours          127.0.0.1:55853->22/tcp, 127.0.0.1:55854->2376/tcp, 127.0.0.1:55856->5000/tcp, 127.0.0.1:55857->8443/tcp, 127.0.0.1:55855->32443/tcp   minikube
 itodorenko@mcbook  ~  sh kind-local-registry.sh
Creating cluster "kind-registry" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind-registry"
You can now use your cluster with:

kubectl cluster-info --context kind-kind-registry

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Error response from daemon: endpoint with name kind-registry already exists in network kind
configmap/local-registry-hosting created

What I expected is that by deleting the cluster, the container would be stopped and deleted as well.
Am I missing something, or is it a bug? Please let me know if something else needs to be provided from my side :)

@stmcginnis
Contributor

The registry isn't part of the cluster, so when you delete the cluster the registry is not affected. This is the desired behavior in a lot of cases because you don't want to have to rebuild that local cache every time.

This is also the reason why there is the script to create the cluster with a registry versus just being able to call kind create.

If your desired behavior is to have the registry removed along with the cluster, one option would be to create a remove-kind-with-registry.sh script that calls kind delete cluster, then docker rm -f kind-registry, so you can do it all in one operation.
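A minimal sketch of that teardown script might look like the following. The cluster and registry names are assumptions matching the sample script's defaults, and docker rm -f is the subcommand that force-removes a container (stopping it first if it is still running):

```shell
# Hypothetical teardown helper, per the suggestion above. The default
# cluster name ("kind") and registry name ("kind-registry") are
# assumptions matching the sample script; adjust to your setup.
delete_kind_with_registry() {
  cluster_name="${1:-kind}"
  reg_name="${2:-kind-registry}"
  # Delete the kind cluster first...
  kind delete cluster --name "${cluster_name}"
  # ...then force-remove the registry container (-f stops it if running).
  docker rm -f "${reg_name}" >/dev/null 2>&1 || true
}
```

The trailing `|| true` keeps the script from failing if the registry container was already removed by hand.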

@aojea
Contributor

aojea commented Jan 21, 2022

Thanks Sean for the great answer, I think that we can close it now
/close

@k8s-ci-robot
Contributor

@aojea: Closing this issue.

In response to this:

Thanks Sean for the great answer, I think that we can close it now
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@iamtodor
Contributor Author

@stmcginnis @aojea thank you for your answer!
Sorry if I seem foolish: so in case I want to have a local registry, I need to run the script, and then I can delete the cluster safely? May I ask why we need the cluster in the first place?

@iamtodor
Contributor Author

@steffengy @aojea
one suggestion: perhaps it's worth mentioning the error Error response from daemon: endpoint with name kind-registry already exists in network kind on the page https://kind.sigs.k8s.io/docs/user/local-registry/ as a troubleshooting guide of sorts, along with the way to solve the above-mentioned issue?

@aojea
Contributor

aojea commented Jan 21, 2022

Sorry if I seem foolish: so in case I want to have a local registry, I need to run the script, and then I can delete the cluster safely? May I ask why we need the cluster in the first place?

Feel free to ask, no problem. This is a helper script for users who want a local registry to test their images in the cluster. If you don't create the cluster, it means you only want a local registry, and then this doc is not the best place for that information.
This is not meant to be a supported feature; you can see in the disclaimer that

In the future this will be replaced by a built-in feature, and this guide will cover usage instead.

Adding more information to the page may end up turning this into a "supported feature", and supporting this "hacky script" is not something the maintainers have time for :)


@iamtodor
Contributor Author

iamtodor commented Jan 21, 2022

@aojea got you :)
I just don't understand the whole picture: I want to deploy the app via helm on a local k8s cluster, and the Docker image should be pulled from my private local registry. I'm particularly strict about using k8s v1.19.7, so I create the cluster with kind create cluster --image kindest/node:v1.19.7.
I've read that I cannot just run a standalone local private registry as described here: https://blog.sleeplessbeastie.eu/2018/04/16/how-to-setup-private-docker-registry/

docker run -d \
  -p 5000:5000 \
  --name registry \
  -v /srv/registry/data:/var/lib/registry \
  --restart always \
  registry:2

If I want my cluster to communicate with a private local registry, I need to run it via the script I found, right?

Perhaps I was confused because I end up running 2 clusters eventually: one for the private local registry and one for the application itself. Taking into consideration that I require a particular version of k8s, shall I add the flag --image kindest/node:v1.19.7 to this line https://github.com/kubernetes-sigs/kind/blob/main/site/static/examples/kind-with-registry.sh#L15 so my cluster would be up and running along with the local private registry?

@stmcginnis
Contributor

Perhaps I was confused cause I run 2 clusters eventually: one for the private local registry and one for the application itself.

You should not be running a cluster just for the local registry. Unless your intention is to have the registry pod managed by kubernetes. Otherwise, the registry is just a container run on your local docker engine, and you would have one cluster that would be configured to pull from it. That is what the script provides you.

@aojea
Contributor

aojea commented Jan 21, 2022

Echoing the previous comment from Sean: I feel that you are thinking the registry runs inside the kind cluster, but the script automates the steps you find in the article you linked: it creates a cluster and a Docker container with a local registry, and makes them work together.

@iamtodor
Contributor Author

@stmcginnis @aojea got you guys, thanks!
Taking all of the above into consideration, what is the best way to handle the following situation:
let's say I develop an app, push the image to the local registry, and test it in the cluster. All seems fine, so I delete the cluster with kind delete cluster --name kind-registry, but I don't stop/kill my registry container (it is still running, as reflected by docker ps). A few days later I find a bug and need to fix it. What is the best way to start a new cluster and make it work with the running registry?
As @stmcginnis mentioned, The registry isn't part of the cluster, so when you delete the cluster the registry is not affected. This is the desired behavior in a lot of cases because you don't want to have to rebuild that local cache every time. — how do I make it work with the next cluster?

@BenTheElder
Member

A few days later I find a bug and need to fix it. What is the best way to start a new cluster and make it work with the running registry?
As @stmcginnis mentioned, The registry isn't part of the cluster, so when you delete the cluster the registry is not affected. This is the desired behavior in a lot of cases because you don't want to have to rebuild that local cache every time. — how do I make it work with the next cluster?

You can do what is in the sample script at https://kind.sigs.k8s.io/docs/user/local-registry/#create-a-cluster-and-registry again, it is re-entrant with respect to the registry and can be safely run again without affecting a running registry.

@iamtodor
Contributor Author

iamtodor commented Jan 23, 2022

@BenTheElder thank you for your answer!

To be honest, I'm totally lost now...

If I got you right: I need to delete the cluster with kind delete cluster --name kind-registry and then start a new cluster with sh kind-local-registry.sh? Please confirm if I understand everything correctly.
However, that leads us exactly to the point where I started the discussion: if I delete the cluster and run a new one, I face the error Error response from daemon: endpoint with name kind-registry already exists in network kind configmap/local-registry-hosting created

@stmcginnis
Contributor

I am able to reproduce that error message:

KindEndpointError

I think things are working fine though. Have you confirmed it is not using the local registry after recreating your cluster? It may just be an error that can be ignored.

Reopening though since we should probably understand why it is giving the error.

/reopen

@k8s-ci-robot
Contributor

@stmcginnis: Reopened this issue.

In response to this:

I am able to reproduce that error message:

KindEndpointError

I think things are working fine though. Have you confirmed it is not using the local registry after recreating your cluster? It may just be an error that can be ignored.

Reopening though since we should probably understand why it is giving the error.

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Jan 23, 2022
@iamtodor
Contributor Author

@stmcginnis Have you confirmed it is not using the local registry after recreating your cluster?, seems like true, once I delete the cluster and spin up a new one - it is able to pull the image from the local registry
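For what it's worth, that kind of check can be scripted. The sketch below is not an official kind utility; it assumes the sample script's default port 5000, and the image and pod names are purely illustrative:

```shell
# Illustrative smoke test for the local registry: push a throwaway image
# to localhost:5000 (the sample script's default port, an assumption
# here) and run it in the cluster. Image/pod names are made up.
smoke_test_local_registry() {
  reg_port="${1:-5000}"
  docker pull busybox:latest
  docker tag busybox:latest "localhost:${reg_port}/busybox:smoke"
  docker push "localhost:${reg_port}/busybox:smoke"
  # If the node's containerd is wired to the registry, this pod starts.
  kubectl run registry-smoke --image="localhost:${reg_port}/busybox:smoke" \
    --restart=Never -- sleep 30
}
```

If the pod reaches Running, the recreated cluster is indeed pulling from the local registry.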

@stmcginnis
Contributor

Great, thanks for verifying that. So this may just be a case where there is an error that doesn't really mean anything, and we should maybe just swallow that message or something.

@BenTheElder
Member

BenTheElder commented Jan 24, 2022

Error response from daemon: endpoint with name kind-registry already exists in network kind configmap/local-registry-hosting created

That is probably just a log message from this script line:

# connect the registry to the cluster network
# (the network may already be connected)
docker network connect "kind" "${reg_name}" || true

The || true is us ignoring errors from this command, for the reason mentioned in the code comment. When running the script again, if the registry and network were not removed by the user or some other process, they still exist and the registry container is already on the kind network.

We could instead attempt to detect if it is currently attached to the network or not before attempting to connect (and make the sample script more complex).
Or we could silence output, but that might hide useful information.
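A sketch of that detection approach (not the shipped script): it leans on docker inspect with a Go template rather than a jq dependency, and assumes the sample script's default kind-registry container name and kind network name:

```shell
# Only connect the registry to the "kind" network if it is not already
# attached. The template prints "null" when the container is not on the
# network. Names are assumptions matching the sample script's defaults.
connect_registry_if_needed() {
  reg_name="${1:-kind-registry}"
  attached="$(docker inspect -f '{{json .NetworkSettings.Networks.kind}}' "${reg_name}")"
  if [ "${attached}" = "null" ]; then
    docker network connect "kind" "${reg_name}"
  fi
}
```

This avoids producing the "endpoint ... already exists" message entirely, at the cost of a slightly longer script.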

@BenTheElder
Member

Basically, this is a harmless error message that just confirms that the system is already in the desired state. Everything should continue and work fine despite the logged message.

@stmcginnis
Contributor

Or we could silence output, but that might hide useful information

Yeah, my first thought was to just pipe the output to /dev/null, but that would miss if there was actually a legitimate error.

Though... if we are ignoring failures there anyway with || true, it probably wouldn't hurt to just ignore that output too.

@BenTheElder
Member

Though... if we are ignoring failures there anyway with || true, it probably wouldn't hurt to just ignore that output too.

If we do this, then if a real error happens, no one will be able to see it and debugging will be "fun", which is pretty different from the script opting to continue on.

A patch to detect the situation and avoid producing the error might be nice, except I'm not sure you can do so in a reliable way without introducing a dependency like jq or making the bash script significantly more complex than it needs to be.

Also, to attach more clusters you just need the containerd config patch, not the whole script:

# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
EOF

And that would be better done by including the configuration snippet in whatever you choose to manage your kind cluster configs with.
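Tying this back to the pinned-version question earlier in the thread, a sketch combining a specific node image with the registry mirror might look like this; the --image tag and registry defaults are assumptions to adjust for your setup:

```shell
# Create a cluster on a pinned node image (the version asked about in
# this thread) with the local registry mirror enabled in containerd.
# reg_name/reg_port default to the sample script's values; both the
# node image tag and these defaults are assumptions.
create_pinned_cluster() {
  reg_name="${1:-kind-registry}"
  reg_port="${2:-5000}"
  cat <<EOF | kind create cluster --image kindest/node:v1.19.7 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
EOF
}
```

Keeping this snippet in whatever manages your kind configs means additional clusters get registry access without re-running the whole sample script.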

@iamtodor
Contributor Author

Perhaps we can just echo something in the script like: if you face the error Error response from daemon: endpoint with name kind-registry already exists in network kind, it just means the network is already in use. No need to worry about it; the error message is basically harmless.
Just to make it transparent to users, and to avoid losing logs you might need when debugging the output.

@BenTheElder
Member

The comment # (the network may already be connected) could be elaborated on a bit, but I don't think we should echo a message on every run. The script is, and should remain, quite short, so we can improve the comment and there won't be much to look through to find it 🤔

@stmcginnis
Contributor

One possible solution proposed in #2601. Would love to hear feedback.

@iamtodor
Contributor Author

@BenTheElder remain quite short so we can improve the comment and there won't be much to look through to find it 🤔 — there is always a trade-off between keeping things short and keeping them transparent/understandable. If you ask me, I doubt adding a simple echo would increase the complexity or purity of the script. Instead, we gain clear, unambiguous context for the actions taken inside the script.

@BenTheElder
Member

If you ask me, I doubt adding a simple echo would increase the complexity or purity of the script.

I wasn't saying that adding an echo increased the complexity notably, I was saying that because the complexity is low, this note could just be inserted as comment, because if something goes wrong there is not much to inspect when looking at the script, but the current comment is terse. As in, the low length and complexity of the script means that having the note as a comment would not bury it.

I think echoing a comment about a possibly benign log on every run is log noise, and I would prohibit this in most scripts; that effort is better spent detecting/handling the issue or documenting it.

The approach in #2601 resolves this, anyhow, thankfully 🙏
