This repository has been archived by the owner on Jun 28, 2023. It is now read-only.

Question: Do Docker (CAPD) clusters work if the Docker containers are stopped and started again later? #770

Closed
karuppiah7890 opened this issue Jun 14, 2021 · 2 comments

Comments

@karuppiah7890
Contributor

I've created multiple TCE Docker clusters that I don't delete, so I can resume using them later, and I shut down the Docker Engine, which also stops the containers. Later, when I start the TCE Docker cluster again - by starting the worker node, control plane node, and load balancer containers - it doesn't work.

I tried debugging this once, a few weeks ago. My suspicion was the networking: the IP addresses of the containers, and the configuration of the load balancer, which points to the control plane node (API server) using static IP addresses. I didn't dig deeper, as it took a lot of time.
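For context on that suspicion: in kind-based (CAPD) clusters, the external load balancer container typically runs haproxy with a config generated at cluster-creation time. A sketch of what such a config might look like is below - the path, section names, and addresses are assumptions for illustration, not taken from this cluster:

```
# /usr/local/etc/haproxy/haproxy.cfg inside the load balancer container
# (path and layout assumed from kind's haproxy image; illustrative only)
frontend control-plane
    bind *:6443
    default_backend kube-apiservers

backend kube-apiservers
    # IP written statically when the cluster was created. After a Docker
    # Engine restart, the control plane container can come back with a
    # different IP on the Docker network, leaving this entry stale, so
    # the load balancer can no longer reach the API server.
    server my-cluster-control-plane 172.18.0.3:6443 check
```

If the control plane container's IP changes on restart while this backend entry does not, API traffic through the load balancer would fail, which matches the observed behavior.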

I wanted to ask whether others create new Docker clusters every time or reuse them, and if reuse works for them, especially after restarting the containers or the Docker Engine.

@jpmcb
Contributor

jpmcb commented Jun 15, 2021

This is a long-standing issue with kind: kubernetes-sigs/kind#148

And although it looks like that specific issue has been resolved, there have been follow-ups that required changes to IPv6 configurations and allocations. You might be running into something similar.

But for now, I don't think we expect Docker-based clusters to survive Docker Engine reboot cycles, especially since creating a CAPD cluster takes just a few minutes and we don't recommend running any real production workloads on CAPD. Closing for now; we can re-address this if it becomes more of a problem for users in the future.

@jpmcb jpmcb closed this as completed Jun 15, 2021
@karuppiah7890
Contributor Author

Cool! Thanks @jpmcb !
