DNS for xds-address #362

Closed
stevesloka opened this issue May 2, 2018 · 3 comments

@stevesloka
Member

I'm testing splitting Envoy from Contour, and it doesn't seem to work with a DNS name, though using an IP address works fine. This was mentioned and resolved in #228. Following are the logs from my Envoy pod; my deployment manifests are here.

Curious if anyone else has had luck with this or if something has changed since #238 landed.

[2018-05-02 14:42:58.083][1][warning][upstream] source/common/config/grpc_mux_impl.cc:36] Unable to establish new stream
[2018-05-02 14:42:58.083][1][warning][config] bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:66] gRPC update for type.googleapis.com/envoy.api.v2.Cluster failed
@stevesloka
Member Author

Also, I can confirm my DNS works fine:

$ kubectl exec -ti busybox -- nslookup contour.gimbal-contour.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      contour.gimbal-contour.svc.cluster.local
Address 1: 10.108.227.232 contour.gimbal-contour.svc.cluster.local

My config also contains the STRICT_DNS change:

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
static_resources:
  clusters:
  - name: contour
    connect_timeout: { seconds: 5 }
    type: STRICT_DNS
    hosts:
    - socket_address:
        address: contour.gimbal-contour.svc.cluster.local
        port_value: 8001
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    circuit_breakers:
      thresholds:
        - priority: high
          max_connections: 100000
          max_pending_requests: 100000
          max_requests: 60000000
          max_retries: 50
        - priority: default
          max_connections: 100000
          max_pending_requests: 100000
          max_requests: 60000000
          max_retries: 50
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9001
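
For reference, a minimal sketch of the kind of Service that DNS name would resolve against, assuming the standard split deployment; the selector and port name are assumptions, not copied from the actual manifests:

apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: gimbal-contour
spec:
  selector:
    app: contour        # assumed pod label
  ports:
  - name: xds
    protocol: TCP
    port: 8001
    targetPort: 8001

Note that against a regular ClusterIP Service like this, STRICT_DNS only ever resolves to the single cluster IP; a headless Service (clusterIP: None) would be needed for Envoy to see the individual pod addresses.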

@stevesloka
Member Author

stevesloka commented May 2, 2018

Also, I tried using the service env variable by setting xds-address to $(CONTOUR_PORT_8001_TCP_ADDR).
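
For context, a hypothetical sketch of how that flag might be wired in the Envoy pod spec, assuming the usual contour bootstrap init container pattern; the container name, image tag, and config path are illustrative, not copied from the actual manifests:

initContainers:
- name: envoy-initconfig
  image: gcr.io/heptio-images/contour:master   # assumed image and tag
  command: ["contour"]
  args:
  - bootstrap
  # Kubernetes expands $(VAR) references against the container's
  # environment, so the service-link variable substitutes at pod start:
  - --xds-address=$(CONTOUR_PORT_8001_TCP_ADDR)
  - /config/contour.json   # illustrative output path
  volumeMounts:
  - name: contour-config
    mountPath: /config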

It seems to load, but the listeners never get added:

[2018-05-02 18:06:27.932][1][info][main] source/server/server.cc:178] initializing epoch 0 (hot restart version=9.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363)
[2018-05-02 18:06:27.939][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:127] cm init: initializing cds
[2018-05-02 18:06:27.939][1][info][config] source/server/configuration_impl.cc:52] loading 0 listener(s)
[2018-05-02 18:06:27.940][1][info][config] source/server/configuration_impl.cc:92] loading tracing configuration
[2018-05-02 18:06:27.940][1][info][config] source/server/configuration_impl.cc:119] loading stats sink configuration
[2018-05-02 18:06:27.940][1][info][main] source/server/server.cc:353] starting main dispatch loop
[2018-05-02 18:06:27.942][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster default/kuard/80 during init
[2018-05-02 18:06:27.943][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster default/kubernetes/443 during init
[2018-05-02 18:06:27.943][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster default/kubernetes/https during init
[2018-05-02 18:06:27.943][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster default/nginx/80 during init
[2018-05-02 18:06:27.944][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster docker/compose-api/443 during init
[2018-05-02 18:06:27.945][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster docker/compose-api/api during init
[2018-05-02 18:06:27.945][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/contour/8001 during init
[2018-05-02 18:06:27.946][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/contour/xds during init
[2018-05-02 18:06:27.947][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/envoy/443 during init
[2018-05-02 18:06:27.948][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/envoy/80 during init
[2018-05-02 18:06:27.948][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/envoy/http during init
[2018-05-02 18:06:27.949][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-contour/envoy/https during init
[2018-05-02 18:06:27.950][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/grafana/80 during init
[2018-05-02 18:06:27.951][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/grafana/http during init
[2018-05-02 18:06:27.951][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus-alertmanager/80 during init
[2018-05-02 18:06:27.952][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus-alertmanager/http during init
[2018-05-02 18:06:27.952][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus-node-exporter/9100 during init
[2018-05-02 18:06:27.953][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus-node-exporter/metrics during init
[2018-05-02 18:06:27.953][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus/9090 during init
[2018-05-02 18:06:27.953][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus/9093 during init
[2018-05-02 18:06:27.954][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus/alertmanager during init
[2018-05-02 18:06:27.954][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster gimbal-monitoring/prometheus/prometheus during init
[2018-05-02 18:06:27.955][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-dns/53 during init
[2018-05-02 18:06:27.955][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-dns/dns-tcp during init
[2018-05-02 18:06:27.955][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-state-metrics/8080 during init
[2018-05-02 18:06:27.956][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-state-metrics/8081 during init
[2018-05-02 18:06:27.956][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-state-metrics/http-metrics during init
[2018-05-02 18:06:27.956][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:382] add/update cluster kube-system/kube-state-metrics/telemetry during init
[2018-05-02 18:06:27.956][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:108] cm init: initializing secondary clusters
[2018-05-02 18:06:27.959][1][info][upstream] source/common/upstream/cluster_manager_impl.cc:131] cm init: all clusters initialized
[2018-05-02 18:06:27.959][1][info][main] source/server/server.cc:337] all clusters initialized. initializing init manager

@stevesloka
Member Author

I think the issue is with the current build on the master tag (#363); using v0.5.0, the env var works.
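
A minimal sketch of pinning that release, assuming the image lives under the gcr.io/heptio-images path used at the time; adjust to the actual registry and manifest:

containers:
- name: contour
  image: gcr.io/heptio-images/contour:v0.5.0   # pinned release tag instead of :master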
