Inconsistent behavior between Azure DNS and Azure Private DNS #4372
I understand the frustration with the different behavior between the two Azure providers.
A parameter named
I totally agree with that; if we decide to do so, I'm happy to submit a new PR.
@khuedoan For migration purposes, I would say that if we want to change the flag behavior, we need multiple steps:
I think the fastest way to enable you would be to review and merge your PR.
Sounds good to me 👍
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
What happened:
Thank you for the project! We use ExternalDNS to manage DNS for our Kubernetes clusters with Azure Private DNS. Here's our setup and example use case (with sensitive values replaced by placeholders):

- `internal.example.com` zone
- `cluster-1`, `cluster-2` clusters
- `service-a` (deployed to `cluster-1`) and `service-b` (deployed to `cluster-2`) applications

The applications deployed to each cluster will have the following hosts in their Ingress objects (following the `$APP.$CLUSTER.internal.example.com` convention):

- `service-a.cluster-1.internal.example.com`
- `service-b.cluster-2.internal.example.com`

Each cluster has a separate ExternalDNS controller.
Because each cluster is managed by a different team, we want to avoid accidental misconfiguration by specifying `--domain-filter` to limit the scope of ExternalDNS on each cluster to only Ingress hostnames with the `$CLUSTER.internal.example.com` suffix.

But when we add `--domain-filter=$CLUSTER.internal.example.com`, we get the following error:

(It does work without the `--domain-filter` flag.)

After reading the code, we noticed that `--domain-filter` actually filters the zone name, not the domain name in the Ingress object. The Azure DNS provider (`--provider=azure`) has an optional `--zone-name-filter` flag that changes the behaviour of `--domain-filter` to filter Ingress domains instead (implemented in #1060), but that flag is not implemented in the Azure Private DNS provider (`--provider=azure-private-dns`).
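To illustrate the mismatch, here is a small Go sketch (not external-dns's actual code; `matchesSuffix` is a hypothetical helper) showing why a suffix filter of `cluster-1.internal.example.com` matches the Ingress hostnames but never the zone name `internal.example.com`:

```go
package main

import (
	"fmt"
	"strings"
)

// matchesSuffix reports whether name equals filter or ends with "."+filter,
// which is roughly how a domain suffix filter behaves.
func matchesSuffix(name, filter string) bool {
	return name == filter || strings.HasSuffix(name, "."+filter)
}

func main() {
	zone := "internal.example.com"
	domainFilter := "cluster-1.internal.example.com"
	hosts := []string{
		"service-a.cluster-1.internal.example.com",
		"service-b.cluster-2.internal.example.com",
	}

	// Default behaviour: the filter is matched against the zone name,
	// so no zone matches and nothing is managed.
	fmt.Println("zone matches filter:", matchesSuffix(zone, domainFilter)) // false

	// With --zone-name-filter (Azure public DNS only), the filter is
	// instead matched against each Ingress hostname.
	for _, h := range hosts {
		fmt.Printf("%s matches filter: %v\n", h, matchesSuffix(h, domainFilter))
	}
}
```

The sketch shows the cluster-scoped filter can only work when it is applied to record hostnames, not to the zone name.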
)What you expected to happen:
Initially, I expected the `--domain-filter` flag to filter the hostnames in Ingress `spec.rules.*.host`, but it seems I misunderstood and this is a design decision.

If I understand correctly, the `--zone-name-filter` flag was added to Azure DNS to alter the behavior of `--domain-filter` in a way that is backward compatible and avoids breaking changes.

If that's the case, I expect Azure Private DNS to have the same consistent behaviour as Azure (public) DNS. I created a PR (#4346) to port the same feature to Azure Private DNS.
How to reproduce it (as minimally and precisely as possible):
Here's the relevant ExternalDNS configuration:
Cluster 1:
Cluster 2:
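As a sketch of the setup described above (only the flags mentioned in this issue are shown; the real deployments pass these as container args along with the usual Azure credentials and resource-group flags):

```shell
# Cluster 1: scope ExternalDNS to cluster-1's subdomain
external-dns \
  --provider=azure-private-dns \
  --domain-filter=cluster-1.internal.example.com

# Cluster 2: scope ExternalDNS to cluster-2's subdomain
external-dns \
  --provider=azure-private-dns \
  --domain-filter=cluster-2.internal.example.com
```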
Anything else we need to know?:
Environment:
- External-DNS version (use `external-dns --version`): v0.14.1