Example of a http/2 gRPC application connected through envoy/contour #379

Closed · mattalberts opened this issue May 11, 2018 · 8 comments

@mattalberts

I've been following along with issue #152, the result of which was to expose the upstream protocol configuration as an annotation on the service. I'm attempting to verify HTTP/2 gRPC support using contour v0.5.0 and yages.

I have been able to successfully host kuard, httpbin, and multiple variants of an echo server, both with and without TLS. However, every attempt to call a gRPC service results in an error:

[debug][router] source/common/router/router.cc:204] [C10][S12375228013957728489] no cluster match for URL '/yages.Echo/Ping'

I assume the issue is with me 😄. What am I missing?

Kubernetes Host Environment:

  • Docker for Mac Edge Version 18.05.0-ce-rc1-mac63 (24246)

gRPC Service/Source/Proto

syntax = "proto3";

package yages;

// Empty is the null value for parameters.
message Empty {

}

// Content is the payload used in YAGES services.
message Content {
  string text = 1;
}

// The echo YAGES service replies with the message it received.
service Echo {
  rpc Ping(Empty) returns (Content) {}
  rpc Reverse(Content) returns (Content) {}
}
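
For reference, the protoset file passed to grpcurl further down can be generated from this proto. A sketch, assuming protoc is on the PATH and yages.proto is in the current directory:

# Build a descriptor set ("protoset") that grpcurl can use instead of server reflection.
protoc --descriptor_set_out=yages.protoset --include_imports yages.proto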

Contour Resource Definition

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contour
  labels:
    app: contour
---
apiVersion: v1
kind: Service
metadata:
  name: contour-ingress
  labels:
    app: contour
    component: ingress
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: 8080
    - port: 443
      name: https
      protocol: TCP
      targetPort: 8443
  selector:
    app: contour
    component: ingress
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: contour
    component: ingress
  name: contour-ingress
spec:
  selector:
    matchLabels:
      app: contour
      component: ingress
  replicas: 1
  template:
    metadata:
      labels:
        app: contour
        component: ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9001"
        prometheus.io/path: "/stats"
        prometheus.io/format: "prometheus"
    spec:
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
      initContainers:
        - name: envoy-init
          image: gcr.io/heptio-images/contour:v0.5.0
          imagePullPolicy: IfNotPresent
          command: ["contour"]
          args:
            - bootstrap
            - /config/contour.yaml
            - --admin-address=0.0.0.0
          volumeMounts:
            - name: contour-config
              mountPath: /config

      containers:
        - name: envoy
          image: docker.io/envoyproxy/envoy-alpine:v1.6.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 8443
              name: https
          command: ["envoy"]
          args:
            - -c /config/contour.yaml
            - --service-cluster cluster0
            - --service-node node0
            - --v2-config-only
            - --log-level debug
          volumeMounts:
            - name: contour-config
              mountPath: /config

        - name: contour
          image: gcr.io/heptio-images/contour:v0.5.0
          imagePullPolicy: IfNotPresent
          command: ["contour"]
          args:
            - serve
            - --incluster
            - --ingress-class-name=contour

      volumes:
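        # Shared bootstrap config: written by the envoy-init initContainer, read by envoy via -c at startup.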
        - name: contour-config
          emptyDir: {}
      # The affinity stanza below tells Kubernetes to try hard not to place 2 of
      # these pods on the same node.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: contour
                  component: ingress
              topologyKey: kubernetes.io/hostname
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: contour
    component: ingress
  name: contour-ingress
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: contour
  minReplicas: 1
  maxReplicas: 15
  targetCPUUtilizationPercentage: 70
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: contour
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contour
subjects:
- kind: ServiceAccount
  name: contour
  namespace: ingress-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: contour
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---

Yages Resource Definition

---
apiVersion: v1
kind: Service
metadata:
  name: yages
  labels:
    app: yages
  annotations:
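    # Comma-separated port numbers and/or names that Contour should configure for HTTP/2 (h2) upstream.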
    contour.heptio.com/upstream-protocol.h2: "9000,grpc"
spec:
  type: ClusterIP
  ports:
  - name: grpc
    port: 9000
    protocol: TCP
    targetPort: grpc
  selector:
    app: yages
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: yages
  labels:
    app: yages
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: yages
    spec:
      containers:
      - name: grpc
        image: quay.io/mhausenblas/yages:0.1.0
        ports:
        - containerPort: 9000
          protocol: TCP
          name: grpc
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: contour-yages
  labels:
    ingress: contour
  annotations:
    kubernetes.io/ingress.class: contour
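    # Becomes the Envoy route timeout; it shows up later in the route dump as "timeout": "5s".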
    contour.heptio.com/request-timeout: 5s
spec:
  tls:
  - hosts:
    - yages.mydomain.com
    secretName: yages
  rules:
  - host: yages.mydomain.com
    http:
      paths:
      - backend:
          serviceName: yages
          servicePort: grpc
---

Deploy Resources

# Create a secret for yages
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=yages.mydomain.com"
kubectl create secret tls yages --key tls.key --cert tls.crt

kubectl create namespace ingress-system
kubectl -n ingress-system apply -f contour.yaml
kubectl apply -f deploy/yages.yaml
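
A quick sanity check before sending traffic, using the resource names defined above:

kubectl get ingress contour-yages
kubectl get svc,endpoints yages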

Almost there 😄 !

Monitor Envoy (separate terminal)

POD_NAME=$(kubectl get pods -n ingress-system -l "app=contour,component=ingress" -o jsonpath="{.items[0].metadata.name}"); echo $POD_NAME
kubectl -n ingress-system logs $POD_NAME -c envoy -f

Inspect Contour Config

POD_NAME=$(kubectl get pods -n ingress-system -l "app=contour,component=ingress" -o jsonpath="{.items[0].metadata.name}"); echo $POD_NAME
kubectl -n ingress-system exec $POD_NAME -c envoy cat /config/contour.yaml

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [contour]
      grpc_services:
      - envoy_grpc:
          cluster_name: contour
static_resources:
  clusters:
  - name: contour
    connect_timeout: { seconds: 5 }
    type: STRICT_DNS
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 8001
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    circuit_breakers:
      thresholds:
        - priority: high
          max_connections: 100000
          max_pending_requests: 100000
          max_requests: 60000000
          max_retries: 50
        - priority: default
          max_connections: 100000
          max_pending_requests: 100000
          max_requests: 60000000
          max_retries: 50
admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9001

Forward the Proxy Debug Interface

POD_NAME=$(kubectl get pods -n ingress-system -l "app=contour,component=ingress" -o jsonpath="{.items[0].metadata.name}"); echo $POD_NAME
kubectl -n ingress-system port-forward $POD_NAME 9001:9001

Inspect Routes/Clusters
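
The dumps below come from Envoy's admin interface via the port-forward above. A sketch, assuming Envoy v1.6's admin endpoints (later releases replaced /routes with /config_dump):

# Per-cluster circuit-breaker limits and per-host stats.
curl -s http://127.0.0.1:9001/clusters
# Route table dump.
curl -s http://127.0.0.1:9001/routes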

default/yages/9000::default_priority::max_connections::1024
default/yages/9000::default_priority::max_pending_requests::1024
default/yages/9000::default_priority::max_requests::1024
default/yages/9000::default_priority::max_retries::3
default/yages/9000::high_priority::max_connections::1024
default/yages/9000::high_priority::max_pending_requests::1024
default/yages/9000::high_priority::max_requests::1024
default/yages/9000::high_priority::max_retries::3
default/yages/9000::added_via_api::true
default/yages/9000::10.1.1.44:9000::cx_active::0
default/yages/9000::10.1.1.44:9000::cx_connect_fail::0
default/yages/9000::10.1.1.44:9000::cx_total::0
default/yages/9000::10.1.1.44:9000::rq_active::0
default/yages/9000::10.1.1.44:9000::rq_error::0
default/yages/9000::10.1.1.44:9000::rq_success::0
default/yages/9000::10.1.1.44:9000::rq_timeout::0
default/yages/9000::10.1.1.44:9000::rq_total::0
default/yages/9000::10.1.1.44:9000::health_flags::healthy
default/yages/9000::10.1.1.44:9000::weight::1
default/yages/9000::10.1.1.44:9000::region::
default/yages/9000::10.1.1.44:9000::zone::
default/yages/9000::10.1.1.44:9000::sub_zone::
default/yages/9000::10.1.1.44:9000::canary::false
default/yages/9000::10.1.1.44:9000::success_rate::-1
default/yages/grpc::default_priority::max_connections::1024
default/yages/grpc::default_priority::max_pending_requests::1024
default/yages/grpc::default_priority::max_requests::1024
default/yages/grpc::default_priority::max_retries::3
default/yages/grpc::high_priority::max_connections::1024
default/yages/grpc::high_priority::max_pending_requests::1024
default/yages/grpc::high_priority::max_requests::1024
default/yages/grpc::high_priority::max_retries::3
default/yages/grpc::added_via_api::true
default/yages/grpc::10.1.1.44:9000::cx_active::0
default/yages/grpc::10.1.1.44:9000::cx_connect_fail::0
default/yages/grpc::10.1.1.44:9000::cx_total::0
default/yages/grpc::10.1.1.44:9000::rq_active::0
default/yages/grpc::10.1.1.44:9000::rq_error::0
default/yages/grpc::10.1.1.44:9000::rq_success::0
default/yages/grpc::10.1.1.44:9000::rq_timeout::0
default/yages/grpc::10.1.1.44:9000::rq_total::0
default/yages/grpc::10.1.1.44:9000::health_flags::healthy
default/yages/grpc::10.1.1.44:9000::weight::1
default/yages/grpc::10.1.1.44:9000::region::
default/yages/grpc::10.1.1.44:9000::zone::
default/yages/grpc::10.1.1.44:9000::sub_zone::
default/yages/grpc::10.1.1.44:9000::canary::false
default/yages/grpc::10.1.1.44:9000::success_rate::-1
"route_table_dump": {
 "name": "ingress_https",
 "virtual_hosts": [
  {
   "name": "yages.mydomain.com",
   "domains": [
    "yages.mydomain.com"
   ],
   "routes": [
    {
     "match": {
      "prefix": "/"
     },
     "route": {
      "cluster": "default/yages/grpc",
      "timeout": "5s"
     }
    }
   ]
  }
 ]
}
"route_table_dump": {
 "name": "ingress_http",
 "virtual_hosts": [
  {
   "name": "yages.mydomain.com",
   "domains": [
    "yages.mydomain.com"
   ],
   "routes": [
    {
     "match": {
      "prefix": "/"
     },
     "route": {
      "cluster": "default/yages/grpc",
      "timeout": "5s"
     }
    }
   ]
  }
 ]
}
  • There appear to be both a cluster and a route matching my service

Send Requests

grpcurl -v -insecure yages.mydomain.com:443 yages.Echo.Ping

[debug][http] source/common/http/conn_manager_impl.cc:455] [C14][S13110681565778736525] request headers complete (end_stream=false):
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   ':method':'POST'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   ':scheme':'http'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   ':path':'/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   ':authority':'yages.mydomain.com:443'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   'content-type':'application/grpc'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   'user-agent':'grpc-go/1.12.0-dev'
[debug][http] source/common/http/conn_manager_impl.cc:460] [C14][S13110681565778736525]   'te':'trailers'
[debug][router] source/common/router/router.cc:204] [C14][S13110681565778736525] no cluster match for URL '/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo'
[debug][http] source/common/http/conn_manager_impl.cc:963] [C14][S13110681565778736525] encoding headers via codec (end_stream=true):
[debug][http] source/common/http/conn_manager_impl.cc:968] [C14][S13110681565778736525]   ':status':'404'
[debug][http] source/common/http/conn_manager_impl.cc:968] [C14][S13110681565778736525]   'date':'Fri, 11 May 2018 19:31:48 GMT'
[debug][http] source/common/http/conn_manager_impl.cc:968] [C14][S13110681565778736525]   'server':'envoy'

With the protoset, to avoid reflection (the reflection RPC above fails with the same no-cluster-match error):

grpcurl -v -insecure -protoset internal/servers/yages/yages.protoset yages.mydomain.com:443 yages.Echo.Ping

[http] source/common/http/conn_manager_impl.cc:455] [C17][S10293940626947482852] request headers complete (end_stream=false):
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   ':method':'POST'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   ':scheme':'http'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   ':path':'/yages.Echo/Ping'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   ':authority':'yages.mydomain.com:443'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   'content-type':'application/grpc'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   'user-agent':'grpc-go/1.12.0-dev'
[http] source/common/http/conn_manager_impl.cc:460] [C17][S10293940626947482852]   'te':'trailers'
[router] source/common/router/router.cc:204] [C17][S10293940626947482852] no cluster match for URL '/yages.Echo/Ping'
[http] source/common/http/conn_manager_impl.cc:963] [C17][S10293940626947482852] encoding headers via codec (end_stream=true):
[http] source/common/http/conn_manager_impl.cc:968] [C17][S10293940626947482852]   ':status':'404'
[http] source/common/http/conn_manager_impl.cc:968] [C17][S10293940626947482852]   'date':'Fri, 11 May 2018 19:39:47 GMT'
[http] source/common/http/conn_manager_impl.cc:968] [C17][S10293940626947482852]   'server':'envoy'

I'm stymied 😃. Why is there no cluster match for the URL? I have a route that matches the URL and maps to a cluster in my route definitions...

The only other interesting information is that if I curl, or point my browser at, the same URL, I see slightly different behavior.

[http] source/common/http/conn_manager_impl.cc:790] [C20][S11093362062933055901] request end stream
[http] source/common/http/conn_manager_impl.cc:455] [C20][S11093362062933055901] request headers complete (end_stream=true):
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   ':method':'GET'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   ':authority':'yages.mydomain.com'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   ':scheme':'https'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   ':path':'/yages.Echo/Ping'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   'user-agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   'upgrade-insecure-requests':'1'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   'accept-encoding':'gzip, deflate, br'
[http] source/common/http/conn_manager_impl.cc:460] [C20][S11093362062933055901]   'accept-language':'en-US,en;q=0.9'
[router] source/common/router/router.cc:250] [C20][S11093362062933055901] cluster 'default/yages/grpc' match for URL '/yages.Echo/Ping'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   ':method':'GET'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   ':authority':'yages.mydomain.com'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   ':scheme':'http'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   ':path':'/yages.Echo/Ping'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'user-agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'upgrade-insecure-requests':'1'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'accept-encoding':'gzip, deflate, br'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'accept-language':'en-US,en;q=0.9'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'x-forwarded-for':'192.168.65.3'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'x-forwarded-proto':'https'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'x-envoy-internal':'true'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'x-request-id':'dd11edf5-8998-4950-86e2-af578d61571a'
[router] source/common/router/router.cc:298] [C20][S11093362062933055901]   'x-envoy-expected-rq-timeout-ms':'5000'

In terms of interesting differences: the method, authority, and scheme. Notably, the browser sends :authority without the port ('yages.mydomain.com'), while grpcurl includes it ('yages.mydomain.com:443').

For fun, I built my own gRPC client to call ping:

./bin/ping -insecure -yages-addr=yages.mydomain.com:443

[connection] source/common/ssl/ssl_socket.cc:110] [C32] handshake error: 1
[connection] source/common/ssl/ssl_socket.cc:138] [C32] SSL error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[connection] source/common/network/connection_impl.cc:134] [C32] closing socket: 0
  • This feels like something I can figure out on my own 😄
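
A note on the handshake error: WRONG_VERSION_NUMBER on Envoy's side usually means the client sent cleartext to the TLS listener; a -insecure flag that maps to grpc-go's WithInsecure() dials with no TLS at all, unlike grpcurl's -insecure, which uses TLS but skips verification. One way to confirm what the listener expects, assuming an openssl build with ALPN support:

# A TLS listener should complete the handshake and negotiate h2 via ALPN.
openssl s_client -connect yages.mydomain.com:443 -alpn h2 </dev/null | grep -i alpn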

Who has ideas, or can point out my foolish mistake? Can someone provide an example of an HTTP/2 gRPC service connected through contour/envoy?

@mattalberts (Author)

The issue reported here appears to be the same as #364.

@davecheney (Contributor)

This is likely to be #378

TL;DR I think the :443 is leaking into the Host header and that is why envoy can't match on it properly

@mattalberts (Author)

@davecheney Thanks for the reply!

TL;DR

  • Adding both host and host:port to a virtualhost's domains resolves the issue

EnvoyProxy Configuration

That difference between :authority on match vs no match caught my eye too. Though, my understanding is that both the host and :authority headers are defined with an optional port, in the form host[:port].

Looking at the envoy proxy configuration, I believe the case described is usually (or can be) handled by registering both host and host:port as domains of the configured virtualhost.
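
A client-side cross-check is possible without patching contour: forcing grpcurl to send a portless :authority should also restore the match. A sketch, assuming a grpcurl build that supports the -authority flag:

grpcurl -v -insecure -authority yages.mydomain.com -protoset internal/servers/yages/yages.protoset yages.mydomain.com:443 yages.Echo.Ping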

I put together a foolish patch that stubs in both entries. Defining an ingress now produces a set of routes that looks like this:

"route_table_dump": {
 "name": "ingress_https",
 "virtual_hosts": [
  {
   "name": "yages.mydomain.com",
   "domains": [
    "yages.mydomain.com",
    "yages.mydomain.com:443"
   ],
   "routes": [
    {
     "match": {
      "prefix": "/"
     },
     "route": {
      "cluster": "default/yages/grpc",
      "timeout": "5s"
     }
    }
   ]
  }
 ]
}
"route_table_dump": {
 "name": "ingress_http",
 "virtual_hosts": [
  {
   "name": "yages.mydomain.com",
   "domains": [
    "yages.mydomain.com",
    "yages.mydomain.com:80"
   ],
   "routes": [
    {
     "match": {
      "prefix": "/"
     },
     "route": {
      "cluster": "default/yages/grpc",
      "timeout": "5s"
     }
    }
   ]
  }
 ]
}
grpcurl -v -insecure -protoset internal/servers/yages/yages.protoset yages.mydomain.com:443 yages.Echo.Ping

Resolved method descriptor:
{
  "name": "Ping",
  "inputType": ".yages.Empty",
  "outputType": ".yages.Content",
  "options": {
    
  }
}

Request metadata to send:
(empty)

Response headers received:
date: Mon, 14 May 2018 19:45:18 GMT
server: envoy
content-type: application/grpc
x-envoy-upstream-service-time: 1


Response contents:
{
  "text": "pong"
}

Response trailers received:
(empty)
Sent 0 requests and received 1 response
  • I got a pong! 😄

I cross-checked the envoy configuration above against istio's route definition for ingress (I'm evaluating several options as part of a planned k8s build-out; I'd prefer contour over istio). Istio's envoy route configuration also contains two entries:

  • host
  • host:port

Are you interested in a patch?!

@davecheney (Contributor) commented May 14, 2018

Thanks for your diagnosis, I appreciate the work you've put into this.

I'm wary of accepting a patch for this without more time to consider the implications; see #210.

@davecheney (Contributor)

If you want to work on this my suggestion would be to talk to the Envoy authors and see if adding additional domains for :80, and/or :443 is a reasonable workaround for envoyproxy/envoy#1269.

My outstanding concerns are:

  • that creating additional domains might muck up the statistics, i.e. some of the traffic goes to example.com, some goes to example.com:80
  • what happens with the default domain ""; we can't write ":80"

@davecheney (Contributor)

Please see #390

@davecheney (Contributor)

I'm going to close this as a duplicate of #390. Please comment if you think there is something I have not captured.

@mattalberts (Author)

@davecheney closing sounds like a good plan. You've captured it all! 👍
