(113: Host is unreachable) while connecting to upstream #8081

Closed
tholvoleak opened this issue Dec 28, 2021 · 11 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@tholvoleak

tholvoleak commented Dec 28, 2021

Hi, I am a new Kubernetes user. I have set up an RKE Kubernetes cluster, deployed an application, and created an Ingress to expose it for external access, but I am getting a "502 Bad Gateway" error.

cat nginx-app.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      run: nginx-app
  replicas: 3 
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

cat nginx-service.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app

cat nginx-ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service: 
             name: nginx-service
             port:
               number: 8080

kubectl get pod -o wide

NAME                                   READY   STATUS      RESTARTS   AGE    IP           NODE             NOMINATED NODE   READINESS GATES
nginx-app-744fc45d8f-drnml             1/1     Running     0          14m    10.42.0.16   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-lc9zn             1/1     Running     0          14m    10.42.0.15   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-njjkr             1/1     Running     0          14m    10.42.0.14   10.*.*.207   <none>           <none>

kubectl get svc -o wide

NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
nginx-service                        ClusterIP   10.43.89.106   <none>        8080/TCP   8m2s   run=nginx-app

kubectl get ingress -o wide

NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
nginx-ingress   <none>   *       10.*.*.207   80      22m

curl http://10.*.*.207/demo

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>

Error logs of pod nginx-ingress controller

2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20

I tested curl from the nginx-ingress-controller pod to the service at nginx-service:8080 and it works, but it does not work when I curl the node IP (I set up a single node only, 10.*.*.207):

$ kubectl get node
NAME             STATUS   ROLES                      AGE     VERSION
10.*.*.207   Ready    controlplane,etcd,worker   4h35m   v1.21.7
NAME                                   READY   STATUS      RESTARTS   AGE
nginx-ingress-controller-njx22         1/1     Running     0          4h31m
$ kubectl exec -it nginx-ingress-controller-njx22 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1$ curl http://nginx-service:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
bash-5.1$ 

Can anyone help me solve this problem?

@tholvoleak tholvoleak added the kind/bug Categorizes issue or PR as related to a bug. label Dec 28, 2021
@k8s-ci-robot
Contributor

@tholvoleak: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Dec 28, 2021
@longwuyuan
Contributor

(1) Duplicate of #8079

(2) The 502 response is expected because nothing is served at that path. Try / as the path.

(3) It may help to consider the suggestion made in that other issue.

This is basic functionality of the ingress-nginx controller, so it's not a bug; it looks like you are asking for support. Please discuss in the ingress-nginx-users channel at kubernetes.slack.com; you can register at slack.k8s.io if required. If you later find a bug or a problem, you can reopen this issue, so I will close it for now. Thanks.

(4) You can read the docs at https://kubernetes.io/docs/concepts/services-networking/ingress/ and https://kubernetes.github.io/ingress-nginx/examples/

/remove-kind bug
/kind support

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Dec 28, 2021
@tholvoleak
Author

tholvoleak commented Dec 28, 2021

(2) The 502 response is expected because nothing is served at that path. Try / as the path.

When I tried the path /, it responds (404 Not Found) but does not route to the pods.
In the Ingress file I set path: /demo and pathType: Prefix:

      - path: /demo
        pathType: Prefix
        backend:
          service: 
             name: nginx-service
             port:
               number: 8080
Name:             nginx-ingress
Labels:           <none>
Namespace:        ingress-nginx
Address:          10.x.x.207
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /demo   nginx-service:8080 (10.42.0.14:80,10.42.0.15:80,10.42.0.16:80)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                   From                      Message
  ----    ------  ----                  ----                      -------
  Normal  Sync    7m1s (x2 over 7m31s)  nginx-ingress-controller  Scheduled for sync

So it should work with the path /demo, right?

@tholvoleak
Author

I added an allow-all network policy in that namespace, but it still does not work:

Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From: <any> (traffic not restricted by source)
  Not affecting egress traffic
  Policy Types: Ingress
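
For reference, an allow-all ingress policy matching the description above could be written like this (a sketch; the policy name and namespace are assumptions, adjust them to your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress   # hypothetical name
  namespace: default        # adjust to the namespace of the nginx-app pods
spec:
  podSelector: {}           # empty selector: applies to all pods in the namespace
  ingress:
  - {}                      # empty rule: allows traffic from any source to any port
  policyTypes:
  - Ingress
```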

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 28, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


@uniuuu

uniuuu commented Apr 10, 2023

Hi @tholvoleak
Replied here #8079

@cleanet

cleanet commented May 2, 2024

The logs:

2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20

mean that nginx is trying to reach the application at the endpoint 10.42.0.15:80.

That socket is one of the endpoints of your Service. You can list them with:

kubectl get endpoints nginx-service

In this case they are the endpoints of the Service nginx-service.
Given the 502 Bad Gateway and these logs, the ingress controller is trying to reach the Service via its endpoints (it tries every one of them), and the ingress controller's pod cannot reach any of them.

To verify this, exec into the ingress controller pod and check the connection:

$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.42.0.16 80
nc: 10.42.0.16 (10.42.0.16:80): Host is unreachable
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ 

As we can see, the pod indeed cannot connect to the endpoint.

Now look up the ClusterIP of the Service nginx-service and try reaching that instead:

$ kubectl describe service nginx-service
$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.43.89.106 8080
10.43.89.106 (10.43.89.106:8080) open

As we can see, the pod does have access via the Service's ClusterIP and port.

So one solution would be the following: tell the Ingress to use the ClusterIP:port instead of the ingress controller's endpoint list.

To do this, edit the Ingress resource and add the following annotation:

nginx.ingress.kubernetes.io/service-upstream: "true"

FYI

Service Upstream

By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments. See issue #257.

Known Issues

If the service-upstream annotation is specified the following things should be taken into consideration:

  • Sticky Sessions will not work as only round-robin load balancing is supported.
  • The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
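
Applied to the manifest earlier in this thread, the annotated Ingress would look like this (a sketch; everything except the added annotation comes from the original nginx-ingress.yml):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Route via the Service's ClusterIP:port instead of the pod endpoint list
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
```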

rascasoft added a commit to mmul-it/kubelab that referenced this issue May 31, 2024
This commit adds the annotation parameter inside the ingress example in the
"Ingress NGINX" README paragraph.
This is not needed while using EL distributions, but without it Ubuntu
ingresses won't work, as explained in this issue [1].

Fixes: #1

[1] kubernetes/ingress-nginx#8081
@asifaftab87

annotations:
  nginx.ingress.kubernetes.io/service-upstream: "true"

This did work, thank you so much!
