
Allow multiple services to share same dns record #1446

Merged · 3 commits into kubernetes-sigs:master · Oct 16, 2020

Conversation

sagor999 (Contributor)

This allows multiple services to share the same DNS record via a new annotation:
external-dns.alpha.kubernetes.io/parent

Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: external-dns-test-service
      namespace: ptumik-test
      annotations:
        external-dns.alpha.kubernetes.io/hostname: shared.example.com
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      selector:
        app: external-dns-test
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          name: http
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: external-dns-test-service2
      namespace: ptumik-test
      annotations:
        external-dns.alpha.kubernetes.io/hostname: shared.example.com
        external-dns.alpha.kubernetes.io/parent: ptumik-test/external-dns-test-service
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      selector:
        app: external-dns-test2
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          name: http

This allows two separate services to share the same DNS record, with multiple A records created for the same hostname. This way we still maintain the relationship of one DNS record = one service.

This is specifically needed for our internal use case, where we have to use DNS-based load balancing and cannot push all traffic through one node.

I understand if this gets rejected, since this is a very specific need for our scenario that might not apply to clusters running in the cloud (ours is bare metal).

Open to suggestions and improvements.

@k8s-ci-robot added the cncf-cla: yes and size/M labels Feb 25, 2020
@sagor999 (Contributor, Author)

/assign @hjacobs

@hwoarang

This is also something that we need for our use case, where we need a single DNS record on Route53 to send traffic to multiple k8s services which all have the same name but are located in different namespaces. For example:

  • serviceA on namespaceA
  • serviceA on namespaceB

We would like external-dns to be able to create a single serviceA.example.com A record with the IPs of the two serviceA services.

So far we have been using the multivalue routing option, but it does not scale above 8 entries, so it is only a workaround. A sketch of the desired setup follows.
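
For illustration, a minimal sketch of that setup; the namespaces, selectors, and ports are hypothetical, and both Services carry the same hostname annotation:

    apiVersion: v1
    kind: Service
    metadata:
      name: serviceA
      namespace: namespaceA
      annotations:
        # Both Services request the same record; the goal is a single
        # serviceA.example.com entry carrying the IPs of both load balancers.
        external-dns.alpha.kubernetes.io/hostname: serviceA.example.com
    spec:
      type: LoadBalancer
      selector:
        app: serviceA
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: serviceA
      namespace: namespaceB
      annotations:
        external-dns.alpha.kubernetes.io/hostname: serviceA.example.com
    spec:
      type: LoadBalancer
      selector:
        app: serviceA
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80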

@seanmalloy (Member)

/kind feature

@k8s-ci-robot added the kind/feature label Aug 19, 2020
@seanmalloy (Member)

@sagor999 can you please rebase this PR?

@sagor999 (Contributor, Author)

@seanmalloy rebased.

@seanmalloy (Member)

/assign

@seanmalloy (Member)

@sagor999 I think the code changes you have made look pretty good. The project maintainers will need to decide whether they want to add this feature or not; I am not a maintainer for the k8s external-dns project.

Two things you will need to add if the maintainers/community want to move forward with accepting this feature enhancement:

  1. Create end user documentation for the new annotation
  2. Create unit tests for the new code being added to cover this feature enhancement

@Raffo and @njuettner what do you think? Is this a feature you would be willing to accept into the k8s external-dns project?

/cc @tariq1890

@seanmalloy (Member)

I reached out in Slack to get some feedback from the community to see if people think this feature would be useful or not.

https://kubernetes.slack.com/archives/C771MKDKQ/p1599591609100400

@seanmalloy (Member)

I sent a message to @Raffo and @njuettner in Slack asking for their feedback: https://kubernetes.slack.com/archives/C771MKDKQ/p1600281812134100

@Raffo (Contributor)

Raffo commented Sep 19, 2020

Thank you @sagor999 for this PR and @seanmalloy for the great help reviewing PRs.

I took some time to try to understand the use case and I have a question: wouldn't a service that points to the other services of the apps be enough to satisfy the use case? I'm thinking of something along the lines of what is described in this stackoverflow thread. I think that if this satisfies your use case, we should avoid adding the feature to ExternalDNS.

In general, I find the API of this proposal a bit hard to understand: I struggled to understand what parent means in this context and I don't think it will be useful as is. I would rather have an annotation on all the services to be merged, something like external-dns.alpha.kubernetes.io/merge: foo, merging all the records with name foo. But again, this use of annotations can easily get confusing, and the whole thing smells like a missing abstraction. For that reason, I'm thinking that if we can cover this with the additional service, that would be better.

@sagor999 (Contributor, Author)

@Raffo
The use case for this is the following:
Say we have a rados gateway (S3) that serves a lot of traffic.
We would like to be able to spread the traffic over several nodes (one node cannot physically handle that much traffic).
To do that, we would like to use DNS-level load balancing via multiple A records, and we are working with a bare-metal cluster.
For that, we need to create several external load balancer services that will route traffic to the respective rados pods.
Currently there is no way with external-dns to have all of those load balancer services share the same DNS name via multiple A records.
With this PR you can now share a DNS name across multiple services to implement DNS-level load balancing.

As for the actual implementation:
I agree, it is not ideal and a bit confusing.
I do like your idea of a merge annotation, though I think there was something implementation-wise that required me to specify one master service and then let other services share the DNS name with that master via the parent annotation.

Let me know what you think.
If you don't want this PR in any shape or form due to the concerns listed above, let me know and I can close it.

@Raffo (Contributor)

Raffo commented Sep 19, 2020

@sagor999 Before deciding if we want to continue with this PR (in its modified form), I would love to understand if the additional Service of type: ExternalName that I proposed would solve the problem. Did you consider it, and if so, what are the limits?

@sagor999 (Contributor, Author)

@Raffo No, that solution is not going to fix anything; it will not allow you to share the same DNS record across multiple services. A type: ExternalName service is for in-cluster use (for workloads running inside the cluster).
What we need is to be able to expose several external IPs via the same DNS record. A sketch of why ExternalName falls short is below.
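
For context, a minimal sketch of an ExternalName Service (names hypothetical), illustrating the limitation: it only maps an in-cluster name to an external DNS name and publishes no external IPs:

    # An ExternalName Service answers in-cluster DNS lookups with a CNAME
    # pointing at spec.externalName; no load balancer is provisioned and no
    # external A records are created, so it cannot merge IPs under one record.
    apiVersion: v1
    kind: Service
    metadata:
      name: shared-s3
      namespace: default
    spec:
      type: ExternalName
      externalName: shared.example.com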

@Raffo (Contributor)

Raffo commented Sep 20, 2020

@sagor999 gotcha. I spent a significant amount of time trying to hack a solution using an Ingress resource without an ingress controller, but Ingress is HTTP-only and most of the spec is built around that case. Also, Kubernetes enforces validation on fields that makes the hack hard to implement... too bad, because this could have been a good solution in terms of having an abstraction to represent what you want. I think I've done enough research and attempts to say that what you want to do isn't implementable with Kubernetes as it is today, and given that, I think we should proceed with this PR. What I'm asking you to do is change it to use the new merge annotation, and I'll review it once the changes are done.

@sagor999 (Contributor, Author)

@Raffo sounds good.
Just to make sure we are on the same page, is this how you envision this feature working:

    # ServiceA
    annotations:
      external-dns.alpha.kubernetes.io/hostname: shared.example.com
      external-dns.alpha.kubernetes.io/merge: "true"

    # ServiceB
    annotations:
      external-dns.alpha.kubernetes.io/hostname: shared.example.com
      external-dns.alpha.kubernetes.io/merge: "true"

For merging to work, the hostname must be the same; the merge annotation just tells the controller that merging is allowed and intentional.

@Raffo (Contributor)

Raffo commented Sep 20, 2020

@sagor999 yes, sounds good 👍

@savitha-ping

Will this annotation work with multiple instances of external-dns running in different clusters to merge/consolidate DNS records? Basically, does this cover the use case described in Issue #1441?

@sagor999 (Contributor, Author)

@savitha-ping no, my solution will not cover that use case.

@savitha-ping

Ahh, that's too bad, I was hopeful :)

Can someone provide some input/feedback on #1441? Is it being considered? It is very useful to have in multi-region deployments (i.e. multiple Kubernetes clusters) when using service discovery protocols such as JGroups DNS_PING.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 29, 2020
@sagor999 (Contributor, Author)

@Raffo I updated this PR.
I realized that the external-dns.alpha.kubernetes.io/merge: true annotation is unnecessary and not intuitive for the end user.
What I did instead: if multiple services have the same hostname, they get merged into one DNS entry with multiple A records, which I think is what the end user would expect to happen.
At least when I attempted to use it that way, I assumed external-dns would merge those entries and create a DNS record pointing to all services that share the same hostname. A sketch of the resulting usage is below.
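
For reference, a minimal sketch of the merged behavior, adapted from the example in the PR description (the parent annotation is gone; the two Services simply share the hostname annotation):

    apiVersion: v1
    kind: Service
    metadata:
      name: external-dns-test-service
      annotations:
        # The same hostname on both Services now merges them into a single
        # DNS entry with one A record per load balancer IP.
        external-dns.alpha.kubernetes.io/hostname: shared.example.com
    spec:
      type: LoadBalancer
      selector:
        app: external-dns-test
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: external-dns-test-service2
      annotations:
        external-dns.alpha.kubernetes.io/hostname: shared.example.com
    spec:
      type: LoadBalancer
      selector:
        app: external-dns-test2
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80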

I added tests to test those cases as well.

Let me know what you think, whether I missed anything, or if you have suggestions for improvements.

Thank you!

@sfc-gh-mgarda

@sagor999 @Raffo: any idea when this PR will be accepted?

@Raffo (Contributor)

Raffo commented Oct 16, 2020

/approve
/lgtm

@k8s-ci-robot added the lgtm label Oct 16, 2020
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: Raffo, sagor999

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label Oct 16, 2020
@k8s-ci-robot merged commit 7639d72 into kubernetes-sigs:master Oct 16, 2020
@sagor999 deleted the share-dns-record branch October 16, 2020 14:33
@sfc-gh-mgarda

@Raffo: thanks for approving the PR, any idea when the next release that includes it will be available?

@Raffo (Contributor)

Raffo commented Oct 27, 2020

@sfc-gh-mgarda v0.7.5 is expected to land at the beginning of November.
