
Add additional server without overwriting the default one #75

Open · maxnitze opened this issue Aug 10, 2022 · 3 comments

maxnitze commented Aug 10, 2022

Is it somehow possible to add an additional server to the Corefile without overwriting the default server?

The default values for servers are:

servers:
- zones:
  - zone: .
  port: 53
  # If serviceType is nodePort you can specify nodePort here
  # nodePort: 30053
  plugins:
  - name: errors
  # Serves a /health endpoint on :8080, required for livenessProbe
  - name: health
    configBlock: |-
      lameduck 5s
  # Serves a /ready endpoint on :8181, required for readinessProbe
  - name: ready
  # Required to query kubernetes API for data
  - name: kubernetes
    parameters: cluster.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
      ttl 30
  # Serves a /metrics endpoint on :9153, required for serviceMonitor
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    parameters: . /etc/resolv.conf
  - name: cache
    parameters: 30
  - name: loop
  - name: reload
  - name: loadbalance

With this config the ConfigMap with the Corefile looks like this:

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |-
    .:53 {
        errors
        health  {
            lameduck 5s
        }
        ready
        kubernetes   cluster.local  cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus   0.0.0.0:9153
        forward   . /etc/resolv.conf
        cache   30
        loop
        reload
        loadbalance
    }

I want to add one additional host, so that it looks something like this:

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |-
    .:53 {
        errors
        health  {
            lameduck 5s
        }
        ready
        kubernetes   cluster.local  cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus   0.0.0.0:9153
        forward   . /etc/resolv.conf
        cache   30
        loop
        reload
        loadbalance
    }
    my-internal-domain.com:53 {
        hosts {
          10.20.30.40 my-service.my-internal-domain.com
          fallthrough
        }
    }
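
As far as I understand, Helm replaces list values like servers wholesale instead of merging them, so the only way to get there via the servers value seems to be restating the entire default list and appending a second entry, roughly like this (untested sketch, the hosts entries are just my example):

servers:
- zones:
  - zone: .
  port: 53
  plugins:
  - name: errors
  # ... all remaining default plugins from above, restated verbatim ...
- zones:
  - zone: my-internal-domain.com
  port: 53
  plugins:
  - name: hosts
    configBlock: |-
      10.20.30.40 my-service.my-internal-domain.com
      fallthrough

That is a lot of duplication just to add one server block, which is why I am asking whether there is a better way.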

Unfortunately, the nameserver provided by /etc/resolv.conf cannot resolve this host. And running my own nameserver on the host (with dnsmasq) does not work well together with CoreDNS.

I at least got the deployment working by adding another zoneFile. It was successfully mounted into the container, but the name did not resolve afterwards. Maybe I need to register it somewhere?

Thanks in advance!

maxnitze (Author) commented
OK, I got it working now with the following:

zoneFiles:
  - filename: my-internal-domain.com.conf
    domain: my-internal-domain.com
    contents: |
      my-internal-domain.com:53 {
          hosts {
            10.20.30.40 my-service.my-internal-domain.com
            fallthrough
          }
      }
extraConfig:
  import:
    parameters: /etc/coredns/my-internal-domain.com.conf

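If I read the chart templates right, this renders the zone file into the same ConfigMap next to the Corefile, with the import line pulled in at the top level of the Corefile. The result should look roughly like this (sketch, assuming extraConfig entries are rendered before the server blocks):

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |-
    import /etc/coredns/my-internal-domain.com.conf
    .:53 {
        # ... default plugins as above ...
    }
  my-internal-domain.com.conf: |
    my-internal-domain.com:53 {
        hosts {
          10.20.30.40 my-service.my-internal-domain.com
          fallthrough
        }
    }
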
Is this the intended way? It does not feel like it.

hagaibarel (Collaborator) commented
Yes, this is the intended way to support additional zone files. I'd love to hear if you have comments or suggestions on how to make it clearer.
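
If you want to verify it end to end, a throwaway pod like this should show whether the new zone resolves from inside the cluster (sketch; the pod name and image are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
spec:
  restartPolicy: Never
  containers:
  - name: dnstest
    image: busybox:1.36
    # nslookup should return 10.20.30.40 if the imported server block is active
    command: ["nslookup", "my-service.my-internal-domain.com"]

kubectl logs dns-debug should then show the 10.20.30.40 answer.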

hungran commented May 6, 2023

#78 (comment)
