Cloudflare: external-dns takes over existing DNS record #3706
I have the same problem too... `failed to create record: DNS Validation Error (1004)`

This is my manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydomaincomar-external-dns
  namespace: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydomaincomar-external-dns
  template:
    metadata:
      labels:
        app: mydomaincomar-external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - '--source=ingress'
            - '--domain-filter=mydomain.com.ar'
            - '--provider=cloudflare'
            - '--cloudflare-proxied'
            - '--cloudflare-dns-records-per-page=5000'
            - '--log-level=debug'
            - '--txt-owner-id=aks-itools-iprd-ue'
          env:
            - name: CF_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflare
                  key: mydomaincomar-token
                  optional: false
          resources:
            limits:
              cpu: 10m
              memory: 32Mi
            requests:
              cpu: 5m
              memory: 16Mi
```
the same problem with
Can you please show the ingress resources that have the same hostname in spec, including status?
It looks like the Cloudflare provider handles updates where the targets change by doing a delete followed by an insert. So I suspect the source Ingress changed its target, the resulting update was rendered as a delete followed by an insert, and the provider then unexpectedly matched the delete request to the existing DNS record and deleted it.
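To make that failure mode concrete, here is a minimal Go sketch of how a changed target can be rendered as a delete-plus-create pair, and how naive matching against the zone by record name can remove a record that was created outside external-dns. The `Endpoint`, `planUpdate`, and `applyDeletes` names are hypothetical and only mimic the idea; this is not the actual external-dns provider code.

```go
package main

import "fmt"

// Endpoint is a simplified DNS record: name plus target.
// (Hypothetical type; the real external-dns types carry much more.)
type Endpoint struct {
	Name   string
	Target string
}

// planUpdate renders a changed target as a delete of the old record
// followed by a create of the new one, roughly the sequence described above.
func planUpdate(old, updated Endpoint) (deletes, creates []Endpoint) {
	if old.Target != updated.Target {
		deletes = append(deletes, old)
		creates = append(creates, updated)
	}
	return deletes, creates
}

// applyDeletes matches delete requests against existing zone records by
// name only. A pre-existing, manually created record with the same name
// is therefore matched and removed, even though external-dns never owned it.
func applyDeletes(existing, deletes []Endpoint) []Endpoint {
	var kept []Endpoint
	for _, rec := range existing {
		matched := false
		for _, d := range deletes {
			if d.Name == rec.Name {
				matched = true
				break
			}
		}
		if !matched {
			kept = append(kept, rec)
		}
	}
	return kept
}

func main() {
	// A record created by hand, outside external-dns.
	existing := []Endpoint{{Name: "loki-gateway.mydomain.com.ar", Target: "203.0.113.10"}}

	// The ingress target changed, so the update becomes delete + create.
	deletes, creates := planUpdate(
		Endpoint{Name: "loki-gateway.mydomain.com.ar", Target: "203.0.113.10"},
		Endpoint{Name: "loki-gateway.mydomain.com.ar", Target: "198.51.100.4"},
	)

	fmt.Println("after deletes:", applyDeletes(existing, deletes)) // the manual record is gone
	fmt.Println("to create:", creates)
}
```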
@szuecs yes, this is my ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hello-world-ingress.mydomain.com.ar
      secretName: hello-world-ingress.mydomain.com.ar--tls
  rules:
    - host: hello-world-ingress.mydomain.com.ar
      http:
        paths:
          - path: /static(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: aks-helloworld-one
                port:
                  number: 80
```

and this other one has the same problem:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: loki-loki-distributed-gateway
  namespace: monitoring
  labels:
    app.kubernetes.io/component: gateway
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki-distributed
    app.kubernetes.io/version: 2.6.1
    argo-tracking/instance: iprd-loki
    helm.sh/chart: loki-distributed-0.67.1
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - loki-gateway.mydomain.com.ar
      secretName: loki-gateway.mydomain.com.ar-tls
  rules:
    - host: loki-gateway.mydomain.com.ar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: loki-loki-distributed-gateway
                port:
                  number: 80
```

The result: no A DNS record.
I think I found the problem. If I try to use the Cloudflare API directly:

```bash
curl --request POST \
  --url https://api.cloudflare.com/client/v4/zones/c8tdtrstrsatarstarstb5/dns_records \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer B0Ftrstarstsrtrastyma6' \
  --data '{
    "content": "198.51.100.4",
    "name": "loki-gateway.mydomain.com.ar",
    "proxied": true,
    "type": "A",
    "comment": "Domain verification record",
    "tags": [
      "owner:dns-team"
    ],
    "ttl": 3600
  }'
```

this is the result:

```json
{"result":null,
 "success":false,
 "errors":[
   {"code":1004,"message":"DNS Validation Error","error_chain":[{"code":9300,"message":"DNS record has 1 tags, exceeding the quota of 0."}]}],
 "messages":[]}
```

DNS record tags have a quota limit: https://developers.cloudflare.com/dns/manage-dns-records/reference/record-attributes/#record-tags. Is there any way to see the error_chain in the container logs?
I don't see any references to the
So why the "DNS Validation Error"? Is there a way to see more details of the error?
The "DNS Valdiation Error" was not reported in the initial description. It is probably a separate, unrelated issue. |
Still an issue. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
external-dns ignored an existing DNS record (created outside of external-dns) for a few days, from the 13th until the 15th of June, and then on the 15th of June it suddenly deleted it and created a new one according to the Ingress definition deployed on the 13th of June.
What you expected to happen:
external-dns keeps ignoring the existing DNS record. We were ahead of a planned migration from one system to the other, and this happened unexpectedly in production, based on an existing Ingress definition.
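Since the deployment above runs with `--txt-owner-id`, the expectation was that records without a matching ownership marker would be left alone. Below is a minimal Go sketch of that ownership idea: a record is only touched if a companion TXT record names the same owner. The `record` type and `ownedBy` helper are hypothetical and loosely mirror the heritage/owner convention of a TXT-based registry; this is not the actual external-dns registry code.

```go
package main

import (
	"fmt"
	"strings"
)

// record is a simplified zone entry (hypothetical type for this sketch).
type record struct {
	Name    string
	Type    string
	Content string
}

// ownedBy reports whether a companion TXT record marks the given name as
// owned by ownerID, loosely following the "external-dns/owner=<id>" convention.
func ownedBy(zone []record, name, ownerID string) bool {
	for _, r := range zone {
		if r.Type == "TXT" && r.Name == name &&
			strings.Contains(r.Content, "external-dns/owner="+ownerID) {
			return true
		}
	}
	return false
}

func main() {
	ownerID := "aks-itools-iprd-ue"

	// A pre-existing A record created by hand: no companion TXT record exists.
	zone := []record{
		{Name: "loki-gateway.mydomain.com.ar", Type: "A", Content: "203.0.113.10"},
	}

	name := "loki-gateway.mydomain.com.ar"
	if !ownedBy(zone, name, ownerID) {
		// The expected behaviour: leave records we do not own untouched.
		fmt.Printf("skipping %s: not owned by %q\n", name, ownerID)
		return
	}
	fmt.Printf("would update %s\n", name)
}
```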
How to reproduce it (as minimally and precisely as possible):
I can't reproduce it by creating a new record, but as you see from the Pod logs, it happened.
The related Ingress object and DNS record were created on the 13th of June (2023-06-13T08:30:21Z), and external-dns didn't apply any changes to the DNS record until the 15th of June (2023-06-15T10:11:38Z).
Anything else we need to know?:
Environment:
External-DNS version (use `external-dns --version`): v20230529-v0.13.5