
502 Errors when deployment scales up #2490

Closed
tobrien-endpoint opened this issue Feb 2, 2022 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tobrien-endpoint

Describe the bug

We are getting a few 502 errors from the ALB when our deployments scale up with HPA.

Steps to reproduce

We are using alb.ingress.kubernetes.io/target-type: ip, auto-injecting readinessGates into our pods (we labeled the namespace), have readinessProbes configured, and have alb.ingress.kubernetes.io/healthcheck-path set. Our nodes are only at about 30% load during this time, pods pass their readiness probes, and the readinessGate conditions added by the webhook pass.
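
For reference, the namespace labeling we used for the automatic readinessGate injection is sketched below (the namespace name "test" is a placeholder; the label key is the one the controller's webhook watches):

    # Label the namespace so the controller injects the ALB readiness gate into new pods
    # ("test" is a placeholder namespace name)
    kubectl label namespace test elbv2.k8s.aws/pod-readiness-gate-inject=enabled

    # Spot-check that a newly created pod actually got the gate
    kubectl get pod <pod-name> -n test -o yaml | grep -A 2 readinessGates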

Expected outcome

It's my understanding that with all of these enabled, we should not experience any 502 errors when pods are added to the deployment and registered into the target groups.

Environment

  • AWS Load Balancer controller version 2.1.8
  • Kubernetes version 1.18
  • Using EKS (yes/no), if so version?

Additional Context:

Ingress annotations:

    alb.ingress.kubernetes.io/actions.ssl-redirect: >-
      {"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port":
      "443", "StatusCode": "HTTP_301"}}
    alb.ingress.kubernetes.io/group.name: internal
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true,idle_timeout.timeout_seconds=121'
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
    alb.ingress.kubernetes.io/target-type: ip

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: test
  name: test
spec:
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: test
    spec:
      containers:
        - name: test  # container name assumed; image is redacted
          image: -----------
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /service-information
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
M00nF1sh (Collaborator) commented Feb 2, 2022

@tobrien-endpoint
You'll also need a preStop hook that sleeps; see #2366 for more details.
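
A minimal sketch of what that could look like (the sleep duration and grace period below are assumptions, not recommendations; tune them to your target group's deregistration delay):

    spec:
      terminationGracePeriodSeconds: 60  # must cover the preStop sleep plus in-flight requests
      containers:
        - name: test
          lifecycle:
            preStop:
              exec:
                # keep the pod serving while the ALB finishes deregistering it
                command: ["/bin/sh", "-c", "sleep 30"]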

tobrien-endpoint (Author) commented:

@M00nF1sh I saw that we'll need to keep returning 200s during scale-down events / rolling deploys, but we don't have any pods being terminated during this time. This is happening when we go from, say, 2 -> 5 pods of the same deployment. Pods are only being added, not stopped/removed, during the events where we're seeing the 502s. Would the preStop hook have any effect during scale-up?

M00nF1sh (Collaborator) commented Feb 9, 2022

@tobrien-endpoint
It shouldn't cause 502s if only new pods are being added (and the preStop hook doesn't matter here, since no old pods are deleted in your case).

Is your pod ready to receive traffic as soon as it passes its readiness probe? If not, you should update the readiness probe to test for that.
If yes, we'll need to troubleshoot further where the 502s are coming from (since every registered pod is ready to take traffic), and maybe involve the ALB team.
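
For example, if the app can return 200 on /health before it can actually serve requests, pointing the readiness probe at a deeper endpoint (the /ready path below is hypothetical) would look like:

          readinessProbe:
            httpGet:
              # hypothetical endpoint that only returns 200 once dependencies are
              # reachable and the app can really serve traffic
              path: /ready
              port: 3000
            periodSeconds: 5
            failureThreshold: 3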

kishorj (Collaborator) commented Feb 16, 2022

@tobrien-endpoint, any updates on this issue?

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2022
k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 17, 2022
k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
