
SIGSEGV exception in glbc #1112

Closed
gmile opened this issue Aug 11, 2017 · 4 comments

@gmile
Contributor

gmile commented Aug 11, 2017

I'm using Kubernetes on GKE. The master is 1.6.7, the nodes are 1.6.4.

After following the instructions from the readme here, the ingress is working fine, as indicated by the following output:

Name:                   echomap
Namespace:              default
Address:                35.xxx.xxx.38
Default backend:        echoheadersx:80 (10.24.0.14:8080)
Rules:
  Host          Path    Backends
  ----          ----    --------
  foo.bar.com
                /foo    echoheadersx:80 (10.24.0.14:8080)
  bar.baz.com
                /bar    echoheadersy:80 (10.24.0.14:8080)
                /foo    echoheadersx:80 (10.24.0.14:8080)
Annotations:
  backends:             {"k8s-be-30284--2043b92cf2a8ad1f":"HEALTHY","k8s-be-30301--2043b92cf2a8ad1f":"HEALTHY"}
  forwarding-rule:      k8s-fw-default-echomap--2043b92cf2a8ad1f
  target-proxy:         k8s-tp-default-echomap--2043b92cf2a8ad1f
  url-map:              k8s-um-default-echomap--2043b92cf2a8ad1f
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                    -------------   --------        ------  -------
  47m           47m             1       loadbalancer-controller                 Normal          ADD     default/echomap
  46m           46m             1       loadbalancer-controller                 Normal          CREATE  ip: 35.190.77.38
  46m           5m              9       loadbalancer-controller                 Normal          Service default backend set to echoheadersx:30301

However, this is what I see in the logs for l7-lb-controller:

$ kubectl logs l7-lb-controller-wp1k4 l7-lb-controller
I0811 16:09:55.474755       1 main.go:192] Starting GLBC image: glbc:0.9.6, cluster name
I0811 16:09:56.679867       1 main.go:352] Using uid = "2043b92cf2a8ad1f" saved in ConfigMap
I0811 16:09:56.681814       1 main.go:352] Using provider-uid = "2043b92cf2a8ad1f" saved in ConfigMap
I0811 16:09:56.683747       1 utils.go:123] Changing cluster name from  to 2043b92cf2a8ad1f
I0811 16:09:56.683809       1 utils.go:132] Changing firewall name from  to 2043b92cf2a8ad1f
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x4eebbf]

goroutine 1 [running]:
io/ioutil.readAll.func1(0xc4205eb940)
        /usr/local/go/src/io/ioutil/ioutil.go:30 +0x119
panic(0x1a96820, 0x28bf780)
        /usr/local/go/src/runtime/panic.go:489 +0x2cf
bytes.(*Buffer).ReadFrom(0xc4205eb898, 0x0, 0x0, 0xc4200bee00, 0x0, 0x200)
        /usr/local/go/src/bytes/buffer.go:179 +0x13f
io/ioutil.readAll(0x0, 0x0, 0x200, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/io/ioutil/ioutil.go:33 +0x150
io/ioutil.ReadAll(0x0, 0x0, 0x0, 0xc4205eb9e0, 0x49107c, 0x2, 0xc420078a50)
        /usr/local/go/src/io/ioutil/ioutil.go:42 +0x3e
k8s.io/ingress/controllers/gce/controller.getGCEClient(0x0, 0x0, 0x0)
        /var/build/go/src/k8s.io/ingress/controllers/gce/controller/cluster_manager.go:215 +0x4d
k8s.io/ingress/controllers/gce/controller.NewClusterManager(0x0, 0x0, 0xc420430ed0, 0x76aa, 0x1d44234, 0x4, 0x7ffdf931070c, 0x7, 0x7ffdf9310714, 0x14, ...)
        /var/build/go/src/k8s.io/ingress/controllers/gce/controller/cluster_manager.go:274 +0x1175
main.main()
        /var/build/go/src/k8s.io/ingress/controllers/gce/main.go:250 +0x85a

Running:

kubectl describe pod l7-lb-controller-wp1k4

This reveals the following, with the last three lines repeating over and over (the repeats are omitted for brevity):

Events:
  FirstSeen     LastSeen        Count   From                                                    SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                                                    -------------                           --------        ------          -------
  50m           50m             1       default-scheduler                                                                               Normal          Scheduled       Successfully assigned l7-lb-controller-wp1k4 to gke-cluster-1-default-pool-465d4f9f-jtvb
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{default-http-backend}   Normal          Pulling         pulling image "gcr.io/google_containers/defaultbackend:1.0"
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{default-http-backend}   Normal          Pulled          Successfully pulled image "gcr.io/google_containers/defaultbackend:1.0"
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{default-http-backend}   Normal          Created         Created container with id 5726e3860f92596c17559409062dde53478d5a3c897f9a29d9b557ae91f42f38
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{default-http-backend}   Normal          Started         Started container with id 5726e3860f92596c17559409062dde53478d5a3c897f9a29d9b557ae91f42f38
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Pulling         pulling image "gcr.io/google_containers/glbc:0.9.6"
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Pulled          Successfully pulled image "gcr.io/google_containers/glbc:0.9.6"
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Created         Created container with id 44f88cbd7a3f6d649ee6ac3de3f6bb515ea0836bdfd024418f06065d582b472d
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Started         Started container with id 44f88cbd7a3f6d649ee6ac3de3f6bb515ea0836bdfd024418f06065d582b472d
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Started         Started container with id 75046f0b98d0454b87a07dcfc58f716d0ed3ec53ab93014d45168cc1926fe83e
  50m           50m             1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal          Created         Created container with id 75046f0b98d0454b87a07dcfc58f716d0ed3ec53ab93014d45168cc1926fe83e
  50m           49m             2       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb                                               Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "l7-lb-controller" with CrashLoopBackOff: "Back-off 10s restarting failed container=l7-lb-controller pod=l7-lb-controller-wp1k4_default(0cf358ad-7eab-11e7-af58-42010a9a0fd2)"

  49m   49m     1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal  Started         Started container with id bd9ee0de9d01c964031c54dd0d3ef30b755acabac2c11d109a01c81d75b07014
  49m   49m     1       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb       spec.containers{l7-lb-controller}       Normal  Created         Created container with id bd9ee0de9d01c964031c54dd0d3ef30b755acabac2c11d109a01c81d75b07014
  49m   49m     3       kubelet, gke-cluster-1-default-pool-465d4f9f-jtvb                                               Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "l7-lb-controller" with CrashLoopBackOff: "Back-off 20s restarting failed container=l7-lb-controller pod=l7-lb-controller-wp1k4_default(0cf358ad-7eab-11e7-af58-42010a9a0fd2)"

I'm filing this bug in case the exception is not supposed to happen and needs to be fixed.

@gmile
Contributor Author

gmile commented Aug 11, 2017

Also, a question: do I even have to run kubectl create -f rc.yml?

Per the tutorial in that readme, it seems that running kubectl create -f ingress-app.yml should be sufficient to demonstrate the concept of the ingress, and that is indeed working perfectly.

@gmile changed the title from "GCE: Exception in l7-lb-controller" to "SIGSEGV exception in glbc" on Aug 11, 2017
@nicksardo
Contributor

@gmile You don't need to run that command. GKE already runs the ingress controller on the master, though you can't see it via kubectl get pods -n kube-system.

However, this does expose a bug for those running this controller on an unmanaged cluster without a GCE configuration file. The controller should gracefully handle the case where no configuration is provided.

@nicksardo
Contributor

These docs are more helpful for GKE:
https://cloud.google.com/container-engine/docs/tutorials/http-balancer

@gmile
Contributor Author

gmile commented Aug 13, 2017

> You don't need to run that command

@nicksardo do you think the documentation could be updated with a note pointing out that, in the case of GKE, there's no need to create the controller?
