
Ingress can not work in 0.5.0 #3265

Closed
chinajuanbob opened this issue Feb 7, 2018 · 9 comments

@chinajuanbob

istio: 0.5.0
minikube: 0.25.0 (Kubernetes 1.9)

installed with mTLS disabled

istio-ingress stdout:

2018-02-07T08:14:52.491200Z	info	Epoch 0: set retry delay to 1m42.4s, budget to 0
2018-02-07T08:15:06.102212Z	info	Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-07T08:15:36.103530Z	info	Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-07T08:16:06.105148Z	info	Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-07T08:16:34.891616Z	info	Reconciling configuration (budget 0)
2018-02-07T08:16:34.891711Z	info	Epoch 0 starting
2018-02-07T08:16:34.891756Z	info	writing configuration to /etc/istio/proxy/envoy-rev0.json
2018-02-07T08:16:34.892216Z	info	Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-ingress --service-node ingress~~istio-ingress-d8d5fdc86-th4vt.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 -l off]
{
  "listeners": [],
  "lds": {
    "cluster": "lds",
    "refresh_delay_ms": 1000
  },
  "admin": {
    "access_log_path": "/dev/stdout",
    "address": "tcp://127.0.0.1:15000"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "rds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://istio-pilot:15003"
          }
        ]
      },
      {
        "name": "lds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://istio-pilot:15003"
          }
        ]
      }
    ],
    "sds": {
      "cluster": {
        "name": "sds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://istio-pilot:15003"
          }
        ]
      },
      "refresh_delay_ms": 1000
    },
    "cds": {
      "cluster": {
        "name": "cds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://istio-pilot:15003"
          }
        ]
      },
      "refresh_delay_ms": 1000
    }
  },
  "statsd_udp_ip_address": "10.96.174.10:9125",
  "tracing": {
    "http": {
      "driver": {
        "type": "zipkin",
        "config": {
          "collector_cluster": "zipkin",
          "collector_endpoint": "/api/v1/spans"
        }
      }
    }
  }
}
2018-02-07T08:16:35.428718Z warn  Epoch 0 terminated with an error: signal: segmentation fault (core dumped)
2018-02-07T08:16:35.428761Z warn  Aborted all epochs
2018-02-07T08:16:35.428820Z error Permanent error: budget exhausted trying to fulfill the desired configuration
2018-02-07T08:16:35.428850Z error cannot start the proxy with the desired configuration

Any clue? Thanks!

@chinajuanbob
Author

Also happens with minikube 0.24.1 (Kubernetes 1.8.x).

@brandon-bethke-timu

This is affecting us as well.

@christian-posta
Contributor

Can you post your configuration for ingress?

Yep, there have been some critical issues with ingress (and with 0.5.0 in general) which should be sorted out in the 0.5.1 release.

#3218

Not sure if this is one of those. You can see what is proposed for 0.5.1 here:

#3179

@christian-posta
Contributor

christian-posta commented Feb 9, 2018 via email

@vivek-jain-mt

@christian-posta Hi Christian, could you please advise which stable version of Istio I could try? I am currently only looking at the distributed tracing feature.

@chinajuanbob
Author

@christian-posta Thanks for the reply.

I use the default configuration from the Istio 0.5.0 package, and no user ingress spec is defined in my cluster.

@jjm3

jjm3 commented Feb 16, 2018

I had a similar problem with both 0.5.0 and 0.5.1 on minikube v0.25.0 this week.

In my case, it turned out that kube-dns was unhappy before I ever got to Istio.

I applied the RBAC workaround described in kubernetes/minikube#1734 and jetstack/navigator#107, and was then able to install Istio 0.5.1 and run the bookinfo demo without a problem.
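For anyone hitting the same thing: as far as I remember, the workaround in those threads boils down to granting the kube-system default service account cluster-admin and restarting kube-dns. This is only a sketch of that workaround (the binding name below is arbitrary, and such a broad grant is only acceptable on a throwaway local minikube, not a shared cluster):

```shell
# Grant cluster-admin to the default service account in kube-system so that
# kube-dns has the RBAC permissions it needs (broad grant; local minikube only).
kubectl create clusterrolebinding kube-system-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default

# Restart kube-dns so it comes back up with working permissions.
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```

After kube-dns is healthy again (`kubectl -n kube-system get pods` shows it Running), installing Istio should proceed normally.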

@secat

secat commented Feb 27, 2018

I am using:

  • minikube v0.25.0
  • kubernetes v1.9.0
  • istio v0.5.1 (installed with mTLS disabled)

Istio ingress stdout:

2018-02-26T13:31:26.437002Z     info    Version root@2e4a18076b04-docker.io/istio-0.5.1-30acfe6528107ea333543309095659b93364b30d-Clean
2018-02-26T13:31:26.437041Z     info    Proxy role: model.Node{Type:"ingress", IPAddress:"", ID:"servicemesh-ingress-6d4ff8c47b-tvrtz.istio-system", Domain:"istio-system.svc.cluster.local"}
2018-02-26T13:31:26.437045Z     info    Attempting to lookup address: servicemesh-mixer.istio-system
2018-02-26T13:31:26.438975Z     info    Addr resolved to: 10.105.210.13:9125
2018-02-26T13:31:26.438995Z     info    Finished lookup of address: servicemesh-mixer.istio-system
2018-02-26T13:31:26.439260Z     info    Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10.000s
discoveryAddress: servicemesh-pilot:15003
discoveryRefreshDelay: 1.000s
drainDuration: 45.000s
parentShutdownDuration: 60.000s
proxyAdminPort: 15000
serviceCluster: servicemesh-ingress
statsdUdpAddress: 10.105.210.13:9125
zipkinAddress: zipkin.fleet:9411

2018-02-26T13:31:26.439286Z     info    Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/istio/ingress-certs/", Files:[]string{"tls.crt", "tls.key"}}}
2018-02-26T13:31:26.439693Z     info    Starting proxy agent
2018-02-26T13:31:26.440057Z     info    Received new config, resetting budget
2018-02-26T13:31:26.440077Z     info    Reconciling configuration (budget 10)
2018-02-26T13:31:26.440085Z     info    Epoch 0 starting
2018-02-26T13:31:26.440174Z     info    writing configuration to /etc/istio/proxy/envoy-rev0.json
2018-02-26T13:31:26.440183Z     info    Availability zone not set, proxy will default to not using zone aware routing. To manually override use the --availabilityZone flag.
2018-02-26T13:31:26.440274Z     warn    watching /etc/istio/ingress-certs/ encounters an error no such file or directory
{
  "listeners": [],
  "lds": {
    "cluster": "lds",
    "refresh_delay_ms": 1000
  },
  "admin": {
    "access_log_path": "/dev/stdout",
    "address": "tcp://127.0.0.1:15000"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "rds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://servicemesh-pilot:15003"
          }
        ]
      },
      {
        "name": "lds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://servicemesh-pilot:15003"
          }
        ]
      },
      {
        "name": "zipkin",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://zipkin.fleet:9411"
          }
        ]
      }
    ],
    "sds": {
      "cluster": {
        "name": "sds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://servicemesh-pilot:15003"
          }
        ]
      },
      "refresh_delay_ms": 1000
    },
    "cds": {
      "cluster": {
        "name": "cds",
        "connect_timeout_ms": 10000,
        "type": "strict_dns",
        "lb_type": "round_robin",
        "hosts": [
          {
            "url": "tcp://servicemesh-pilot:15003"
          }
        ]
      },
      "refresh_delay_ms": 1000
    }
  },
  "statsd_udp_ip_address": "10.105.210.13:9125",
  "tracing": {
    "http": {
      "driver": {
        "type": "zipkin",
        "config": {
          "collector_cluster": "zipkin",
          "collector_endpoint": "/api/v1/spans"
        }
      }
    }
  }
}
2018-02-26T13:31:26.441015Z     info    Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster servicemesh-ingress --service-node ingress~~servicemesh-ingress-6d4ff8c47b-tvrtz.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 -l off]
2018-02-26T13:31:56.492244Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:32:26.500984Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:32:56.503425Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:33:26.507448Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:33:56.509650Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:34:26.511146Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:34:56.512696Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:35:26.513824Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:35:56.515680Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:36:26.517300Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node
2018-02-26T13:36:56.518833Z     info    Received 404 status from pilot when retrieving availability zone: AvailabilityZone couldn't find the given cluster node

@andraxylia
Contributor

Should work with 0.8.
