
VirtualCluster fails to create with "cannot find sts/apiserver in ns" on minikube running k8s 1.20.2 #198

Closed
sriram-kannan-infoblox opened this issue Jul 28, 2021 · 17 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sriram-kannan-infoblox

The VirtualCluster fails to create the apiserver; apiserver-0 is stuck in the ContainerCreating state:
```
kubectl-vc create -f virtualcluster_1_nodeport.yaml -o vc.kubeconfig
2021/07/28 08:20:03 etcd is ready
cannot find sts/apiserver in ns default-e4d075-vc-sample-1: default-e4d075-vc-sample-1/apiserver is not ready in 120 seconds
```

```
kubectl get po -n default-e4d075-vc-sample-1
NAME          READY   STATUS              RESTARTS   AGE
apiserver-0   0/1     ContainerCreating   0          46m
etcd-0        1/1     Running             1          47m
```

What steps did you take and what happened:
Followed the steps in the VirtualCluster walkthrough demo; everything was successful up to "Create VirtualCluster".
https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md

During "Create VirtualCluster", etcd came up fine but apiserver-0 stayed in the ContainerCreating state.

What did you expect to happen:
Expected the apiserver and controller-manager pods to be in the Running state.


Environment:

  • cluster-api-provider-nested version:
  • Minikube/KIND version: minikube v1.21.0
  • Kubernetes version (use kubectl version): 1.20.2
  • OS (e.g. from /etc/os-release): darwin
@jichenjc
Contributor

apiserver-0 0/1 ContainerCreating 0 46m

What's the reason for this ContainerCreating status? I encountered this before due to the Docker pull rate limit, but that doesn't seem to be your case. How about checking why the container has been stuck creating for 46 minutes, e.g. by describing the pod for additional info?
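For example (pod name and namespace taken from the output above), the pod's event list usually explains why the container cannot start:

```
# Show the events for the stuck apiserver pod; the Events section at the
# bottom usually names the missing resource or mount.
kubectl describe pod apiserver-0 -n default-e4d075-vc-sample-1

# Recent namespace events, sorted by time, can help as well.
kubectl get events -n default-e4d075-vc-sample-1 --sort-by=.lastTimestamp
```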

@sriram-kannan-infoblox
Author

Good point, I checked the pod and the failure is due to:

```
Warning  FailedMount  5m43s (x50 over 91m)  kubelet, minikube  MountVolume.SetUp failed for volume "front-proxy-ca" : secret "front-proxy-ca" not found
Warning  FailedMount  33s (x12 over 86m)    kubelet, minikube  Unable to attach or mount volumes: unmounted volumes=[front-proxy-ca], unattached volumes=[apiserver-ca front-proxy-ca root-ca serviceaccount-rsa default-token-8xxsj]: timed out waiting for the condition
```
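A quick way to confirm which control-plane secrets actually exist in the tenant namespace (namespace name taken from the output above); front-proxy-ca is the one the mount is waiting for:

```
# List the secrets the apiserver StatefulSet tries to mount; front-proxy-ca
# should appear alongside apiserver-ca, root-ca and serviceaccount-rsa
# if the setup succeeded.
kubectl get secrets -n default-e4d075-vc-sample-1
```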

@jichenjc
Contributor

OK, looks like the CA has an issue, and to my limited knowledge those CAs are created by CAPN directly.
@christopherhein any insight for further troubleshooting on this?

@sriram-kannan-infoblox
Author

Looks to me like minikube doesn't create the certificates in /etc/kubernetes/pki, unlike kubeadm. Do we need the front-proxy for the VirtualCluster to work?
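One way to check this on the host side is to look at the node itself; a minimal sketch, assuming a default single-node minikube (/var/lib/minikube/certs is where minikube usually keeps its certificates, but verify on your setup):

```
# Compare the kubeadm-style PKI directory with minikube's own certificate dir.
minikube ssh -- ls -l /etc/kubernetes/pki /var/lib/minikube/certs
```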

@gyliu513
Contributor

@sriram-kannan-infoblox this was introduced in #167. Can you make sure you are using the latest code and build all of the images/binaries from it?
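In case it helps, a minimal sketch of verifying that a local checkout actually contains the #167 change before rebuilding (the image-build targets themselves live in the repo's Makefile, so check there for the exact commands):

```
# Fresh clone of the provider and a quick look at the recent virtualcluster history.
git clone https://github.com/kubernetes-sigs/cluster-api-provider-nested.git
cd cluster-api-provider-nested
git log --oneline -10 -- virtualcluster
```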

@sriram-kannan-infoblox
Author

sriram-kannan-infoblox commented Aug 4, 2021

Hi @gyliu513, I am only following the steps in the walkthrough demo below and haven't tried to build the images at all.
https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md

My question is: do we need the aggregated API for the VirtualCluster to work? I can go back a few commits and try out VirtualCluster without the aggregated API on minikube, provided the aggregated API change is not a breaking change.

My plan is to try out VirtualCluster on minikube first and then take it to an actual cluster.

Thanks

@gyliu513
Contributor

gyliu513 commented Aug 4, 2021

@sriram-kannan-infoblox as a workaround, please remove the aggregated API support; check https://github.com/kubernetes-sigs/cluster-api-provider-nested/pull/167/files for what to remove. You only need to update the StatefulSet for the apiserver, roughly as sketched below.
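A rough sketch of what that edit could look like; the flag names below are the usual kube-apiserver aggregation settings and are only illustrative, so treat the PR #167 diff as the authoritative list of what to remove:

```
# Edit the tenant apiserver StatefulSet and remove the aggregated-API pieces
# added by PR #167, i.e. the "front-proxy-ca" volume and its volumeMount, plus
# any --requestheader-* / --proxy-client-* kube-apiserver flags that reference
# those certificates.
kubectl edit statefulset apiserver -n default-e4d075-vc-sample-1

# Delete the stuck pod so the StatefulSet recreates it with the updated spec.
kubectl delete pod apiserver-0 -n default-e4d075-vc-sample-1
```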

The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs some updates, as it is not using the latest image.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 2, 2021
@jasonliu747

jasonliu747 commented Nov 24, 2021

The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs some updates, as it is not using the latest image.

Hi @gyliu513, do you have any update on this demo.md? Thanks.

@jasonliu747

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Nov 24, 2021
@gyliu513
Contributor

The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs some updates, as it is not using the latest image.

@vincent-pli I recall you opened another issue to track this? What is the status?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jun 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the triage robot's /close comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
