
CrashLoopBackOff nginx controller #2968

Closed

zwhitchcox opened this issue Aug 22, 2018 · 7 comments

Comments

@zwhitchcox

zwhitchcox commented Aug 22, 2018

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):

What keywords did you search in NGINX Ingress controller issues before filing this one? many

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report

NGINX Ingress controller version: latest

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

What happened:
CrashLoopBackOff with the nginx controller

do $ kubectl get pods -n ingress-nginx
NAME                                        READY     STATUS             RESTARTS   AGE
default-http-backend-846b65fb5f-j8jvn       1/1       Running            0          37m
nginx-ingress-controller-5798d8fb84-5qhk9   0/1       CrashLoopBackOff   15         37m

description:

do $ kubectl describe -n ingress-nginx pod nginx-ingress-controller-5798d8fb84-5qhk9
Name:               nginx-ingress-controller-5798d8fb84-5qhk9
Namespace:          ingress-nginx
Priority:           0
PriorityClassName:  <none>
Node:               kube-host1/142.93.78.202
Start Time:         Tue, 21 Aug 2018 21:35:16 -0400
Labels:             app=ingress-nginx
                    pod-template-hash=1354849640
Annotations:        prometheus.io/port=10254
                    prometheus.io/scrape=true
Status:             Running
IP:                 192.168.2.2
Controlled By:      ReplicaSet/nginx-ingress-controller-5798d8fb84
Containers:
  nginx-ingress-controller:
    Container ID:  docker://f245d184002168d8591284fdf3b1ddf163b6cfe92870d13bc941eb976cbeb467
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:967d6115725f00dccc05f02782d7ed0ae19a9ca8b2db549608f565484259b197
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      --publish-service=$(POD_NAMESPACE)/ingress-nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Tue, 21 Aug 2018 22:10:16 -0400
      Finished:     Tue, 21 Aug 2018 22:10:55 -0400
    Ready:          False
    Restart Count:  15
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-5798d8fb84-5qhk9 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-hwkmt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nginx-ingress-serviceaccount-token-hwkmt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-hwkmt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                 From                 Message
  ----     ------                  ----                ----                 -------
  Normal   Scheduled               39m                 default-scheduler    Successfully assigned ingress-nginx/nginx-ingress-controller-5798d8fb84-5qhk9 to kube-host1
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab7e2f44bed19cbaeddf736211c2d419de18e6602eaa448b496a9942df379738" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "88ebddbe55fba80328ce05059f12c1b87b81b13fd51e943f98af7d8f1415bab5" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "21229a06b1a3d01d3ca373bd50eb5853186c2c281c1842cd620bf25c6740c5fe" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "00b7b8181f51e4baffdcd7868ae89e1bb732f931e2a9557c1d16e60414441181" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3bd83d71930d71ee70da85764ac10012062de328672694b0692dfd8ea4a89227" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f47a192a2e9a1c616a5b395bcf024f465a74ea0a82b42873bd517461db57b6e5" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  39m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "30d9e4ea72b2c158e2720ba6c3815bf25e6d8f8c887e1e19e494b72f27629ddd" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  38m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1a4ef10ca1f0b16481cbd9529cb7e51f0bdb22bb27c75b331953ee2e1a5db4b5" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  38m                 kubelet, kube-host1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "29097c4e5821753f6d91236536cfd569d0c5789cedf7ad906bfca5f5752c96bc" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          38m (x12 over 39m)  kubelet, kube-host1  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  38m (x4 over 38m)   kubelet, kube-host1  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2175adba83388f2524349369cffd425c53f806a64e95cb4a958fc1664c6aff58" network for pod "nginx-ingress-controller-5798d8fb84-5qhk9": NetworkPlugin cni failed to set up pod "nginx-ingress-controller-5798d8fb84-5qhk9_ingress-nginx" network: open /run/flannel/subnet.env: no such file or directory
  Warning  BackOff                 9m (x97 over 34m)   kubelet, kube-host1  Back-off restarting failed container
  Warning  Unhealthy               4m (x60 over 38m)   kubelet, kube-host1  Readiness probe failed: Get http://192.168.2.2:10254/healthz: dial tcp 192.168.2.2:10254: connect: connection refused

log:

do $ kubectl log -n ingress-nginx  nginx-ingress-controller-5798d8fb84-5qhk9
log is DEPRECATED and will be removed in a future version. Use logs instead.
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.18.0
  Build:      git-7b20058
  Repository: https://github.com/kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------

nginx version: nginx/1.15.2
W0822 02:10:16.295538       7 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0822 02:10:16.295999       7 main.go:191] Creating API client for https://10.96.0.1:443

What you expected to happen: the controller to start normally

How to reproduce it (as minimally and precisely as possible):
Go here: https://gitlab.com/zwhitchcox/do, follow the instructions, then, when the cluster is up, run:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

then

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

Anything else we need to know:

@aledbf
Member

aledbf commented Aug 22, 2018

@zwhitchcox if that's the only output it means the pod cannot reach the apiserver.
You can add the flag --v=10 to the container args to see the timeout.
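For reference, a minimal sketch of where that flag goes, assuming the stock deployment from mandatory.yaml (names and existing args taken from the describe output above):

kubectl edit deployment nginx-ingress-controller -n ingress-nginx

    # in the container spec, append the verbosity flag to the existing args
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --configmap=$(POD_NAMESPACE)/nginx-configuration
      - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
      - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
      - --publish-service=$(POD_NAMESPACE)/ingress-nginx
      - --annotations-prefix=nginx.ingress.kubernetes.io
      - --v=10   # raise log verbosity so the apiserver connection timeout shows up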

@aledbf
Member

aledbf commented Aug 22, 2018

open /run/flannel/subnet.env: no such file or directory

Forget that, you have some issues with your flannel setup.
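A few checks that usually narrow this down (a sketch; the flannel pod name is a placeholder and the namespace may differ depending on which manifest was applied):

# is the flannel DaemonSet actually running on the node?
kubectl get pods -n kube-system -o wide | grep flannel

# if a flannel pod is there, its logs usually explain why /run/flannel/subnet.env was never written
kubectl logs -n kube-system kube-flannel-ds-xxxxx

# on kube-host1 itself, the file the CNI plugin is complaining about
ls -l /run/flannel/subnet.env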

@zwhitchcox
Author

That's what someone else just told me. I just followed a tutorial that said to run kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml. Maybe I should try a different networking implementation?
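One thing worth checking before switching CNIs: the flannel manifest assumes the cluster was initialized with a matching pod network CIDR, and without it flannel never writes /run/flannel/subnet.env. A sketch, assuming flannel's default 10.244.0.0/16:

# at cluster creation time, before applying the flannel manifest
kubeadm init --pod-network-cidr=10.244.0.0/16

# then install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml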

@zwhitchcox
Author

Will do that and report back

@zwhitchcox
Author

That fixed it!!!!!

@zwhitchcox
Author

If anyone else runs into the same problem as me, this is the solution:

Remove flannel, then install Calico with the following commands:

kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml
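A quick way to confirm the new CNI is healthy before retrying the ingress controller (a sketch; pod names will differ):

# the calico components should all reach Running in kube-system
kubectl get pods -n kube-system | grep calico

# recreate the controller pod so it gets a fresh sandbox on the new network
kubectl delete pod -n ingress-nginx -l app=ingress-nginx
kubectl get pods -n ingress-nginx -w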

@lihao1994

@zwhitchcox I'm running into the same problem!
Name:               default-http-backend-779898f74-x85t7
Namespace:          ingress-nginx
Node:               k8s02/10.90.136.106
Start Time:         Thu, 23 Aug 2018 04:32:13 -0400
Labels:             app.kubernetes.io/name=default-http-backend
                    pod-template-hash=335454930
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/default-http-backend-779898f74
Containers:
  default-http-backend:
    Container ID:
    Image:          gcr.io/google_containers/defaultbackend:1.4
    Image ID:
    Port:           8080/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     10m
      memory:  20Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:     http-get http://:8080/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fv7kj (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-fv7kj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fv7kj
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   SuccessfulMountVolume   1m                 kubelet, k8s02     MountVolume.SetUp succeeded for volume "default-token-fv7kj"
  Normal   Scheduled               1m                 default-scheduler  Successfully assigned default-http-backend-779898f74-x85t7 to k8s02
  Warning  FailedCreatePodSandBox  19s (x2 over 54s)  kubelet, k8s02     Failed create pod sandbox.

I also use flannel.
I changed flannel to calico. My question is: in etcd.yaml, what is the meaning of this IP?
[screenshot of etcd.yaml]

Thanks!
