Support Upstream:Backup directive for service failure use cases #4452

Closed
3 of 4 tasks

danielnginx opened this issue Oct 2, 2023 · Discussed in #4091 · 1 comment · Fixed by #4653
danielnginx (Collaborator) commented Oct 2, 2023

Discussed in #4091

Originally posted by brianehlert July 11, 2023
NGINX Ingress Controller customers have expressed a desire to keep the benefits of NGINX Ingress Controller balancing traffic directly to all of the pods of a Service, while also having the ability to say "if all of the pods of the backend service become unhealthy, direct the traffic to this alternate location".

In nginx.conf this can be achieved with the backup directive inside the upstream block: an upstream server can be marked as the backup, and it receives traffic only when all the other servers fail (fail to respond, are deemed unhealthy via active or passive health checks, etc.).
The key here is that a local service has fully failed and traffic needs to be forwarded to some other target.
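
As a sketch, this is what the backup directive looks like in a hand-written nginx.conf (the addresses and hostnames here are hypothetical):

upstream tea {
    # primary servers: the pod endpoints of the Service
    server 10.244.0.5:80;
    server 10.244.0.6:80;
    # receives traffic only after all primary servers become unavailable
    server clustertwo.corp.local:80 backup;
}

Note that nginx does not allow the backup parameter together with the hash, ip_hash, and random load-balancing methods.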

This is different from using weights. When using weights, as you might for a blue/green deployment, there is always a risk that a small number of requests is routed to the alternative destination, and that does not work for all applications or situations. For example, NIC today supports weights of 1 to 99, which means that if two upstream services are defined, 99 requests could be sent to one service and the 100th request would be sent to the other. If this is acceptable for the application, it can be achieved today.
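
For comparison, here is a minimal fragment of what such a weight-based blue/green split looks like with VirtualServer route splits (the tea and tea-v2 upstream names are hypothetical and would be defined under spec.upstreams):

routes:
- path: /tea
  splits:
  - weight: 99
    action:
      pass: tea      # current version receives ~99% of requests
  - weight: 1
    action:
      pass: tea-v2   # candidate version still receives ~1% of requests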

Since NGINX Ingress Controller is continuously updating the Upstream server list, it is necessary to represent this concept in YAML.
What I am proposing is this:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
    backup: backup-cluster
    backupPort: 80
  - name: milk
    service: milk-svc
    port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: backup-cluster
spec:
  type: ExternalName
  externalName: clustertwo.corp.local

apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: cafe
spec:
  listener:
    name: tls-passthrough
    protocol: TLS_PASSTHROUGH
  host: cafe.example.com 
  upstreams:
  - name: cafe-app
    service: cafe-svc
    port: 8443
    backup: cafe-svc-bak
    backupPort: 8443
  action:
    pass: cafe-app

In this example the backup is another service endpoint that is represented by an ExternalName service; it only has to adhere to the limitations of an ExternalName service.
Because it is a backup, I am also limiting the target to one service target. Therefore, if the Service were some secondary service in the cluster, its Service name would be resolved and the Service cluster IP would be the target for the backup.
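
To make the proposal concrete, the rendered upstream for the tea example above might look roughly like this (the upstream name and pod addresses are hypothetical, and the exact output depends on the implementation):

upstream vs_default_cafe_tea {
    server 10.244.0.5:80;                     # tea-svc pod endpoints
    server 10.244.0.6:80;
    server clustertwo.corp.local:80 backup;   # ExternalName host added as backup
}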

@danielnginx danielnginx added this to the v3.4.0 milestone Oct 2, 2023
@danielnginx danielnginx added the backlog label Oct 3, 2023
@jjngx jjngx assigned jjngx and unassigned haywoodsh Oct 12, 2023
@jjngx jjngx linked a pull request Nov 15, 2023 that will close this issue
@jjngx jjngx removed a link to a pull request Nov 15, 2023
@jjngx jjngx linked a pull request Nov 15, 2023 that will close this issue
ADubhlaoich (Contributor) commented:

I think it makes the most sense to put the documentation for this under /configuration.

It's not something everyone will want by default, and it's not something reactive that might make sense for troubleshooting.
