From 76d63e2c5bdbc478112e50aa4c206a1c86543467 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Fri, 24 Nov 2017 12:33:19 +0800 Subject: [PATCH] Improve DNS documentation --- .../services-networking/dns-pod-service.md | 432 ++++++------------ .../dns-custom-nameservers.md | 282 +++++++++++- 2 files changed, 404 insertions(+), 310 deletions(-) diff --git a/docs/concepts/services-networking/dns-pod-service.md b/docs/concepts/services-networking/dns-pod-service.md index 6c11c0e42cff8..f3fafe028d8b7 100644 --- a/docs/concepts/services-networking/dns-pod-service.md +++ b/docs/concepts/services-networking/dns-pod-service.md @@ -2,21 +2,21 @@ approvers: - davidopp - thockin -title: DNS Pods and Services +title: DNS for Services and Pods --- * TOC {:toc} -## Introduction +{% capture body %} -As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager [cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md). +## Introduction Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names. -## What things get DNS names? +### What things get DNS names? Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod's DNS search list will @@ -28,17 +28,15 @@ in namespace `bar` can look up this service by simply doing a DNS query for `foo`. A Pod running in namespace `quux` can look up this service by doing a DNS query for `foo.bar`. -## Supported DNS schema - The following sections detail the supported record types and layout that is supported. Any other layout or names or queries that happen to work are considered implementation details and are subject to change without warning. For more up-to-date specification, see [Kubernetes DNS-Based Service Discovery](https://github.com/kubernetes/dns/blob/master/docs/specification.md). -### Services +## Services -#### A records +### A records "Normal" (not headless) Services are assigned a DNS A record for a name of the form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP @@ -50,7 +48,7 @@ Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set. -#### SRV records +### SRV records SRV Records are created for named ports that are part of normal or [Headless Services](/docs/concepts/services-networking/service/#headless-services). @@ -62,37 +60,29 @@ For a headless service, this resolves to multiple answers, one for each pod that is backing the service, and contains the port number and a CNAME of the pod of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`. -#### Backwards compatibility - -Previous versions of kube-dns made names of the form -`my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This -is no longer supported. - -### Pods +## Pods -#### A Records +### A Records -When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`. +When enabled, pods are assigned a DNS A record in the form of +"`pod-ip-address.my-namespace.pod.cluster.local`". -For example, a pod with IP `1.2.3.4` in the namespace `default` with a DNS name of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`. 
+For example, a pod with IP `1.2.3.4` in the namespace `default` with a DNS name +of `cluster.local` would have an entry: `1-2-3-4.default.pod.cluster.local`. -#### A Records and hostname based on Pod's hostname and subdomain fields +### Pod's hostname and subdomain fields Currently when a pod is created, its hostname is the Pod's `metadata.name` value. -With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be. -The Pod annotation, if specified, takes precedence over the Pod's name, to be the hostname of the pod. -For example, given a Pod with annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name". +The Pod spec has an optional `hostname` field, which can be used to specify the +Pod's hostname. When specified, it takes precedence over the Pod's name, to be +the hostname of the pod. For example, given a Pod with `hostname` set to +"`my-host`", the Pod will have its hostname set to "`my-host`". -With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the -`pod.beta.kubernetes.io/hostname` annotation value. - -v1.2 introduces a beta feature where the user can specify a Pod annotation, `pod.beta.kubernetes.io/subdomain`, to specify the Pod's subdomain. -The final domain will be "...svc.". -For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local" - -With v1.3, the PodSpec has a `subdomain` field, which can be used to specify the Pod's subdomain. This field value takes precedence over the -`pod.beta.kubernetes.io/subdomain` annotation value. +The Pod spec also has an optional `subdomain` field which can be used to specify +its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain` +set to "`bar`", in namespace "`my-namespace`", will have the FQDN +"foo.bar.my-namespace.svc.cluster.local" Example: @@ -143,105 +133,49 @@ spec: name: busybox ``` -If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS Server also returns an A record for the Pod's fully qualified hostname. -Given a Pod with the hostname set to "busybox-1" and the subdomain set to "default-subdomain", and a headless Service named "default-subdomain" in the same namespace, the pod will see its own FQDN as "busybox-1.default-subdomain.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "busybox1" and "busybox2" can have their distinct A records. - -As of Kubernetes v1.2, the Endpoints object also has the annotation `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the json representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{HostName: "my-webserver"}}'. -If the Endpoints are for a headless service, an A record is created with the format ...svc. -For the example json, if endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", an A record is created with the name "my-webserver.bar.my-namespace.svc.cluster.local" and the A record lookup would return "10.245.1.6". -This endpoints annotation generally does not need to be specified by end-users, but can used by the internal service controller to deliver the aforementioned feature. 
- -With v1.3, The Endpoints object can specify the `hostname` for any endpoint, along with its IP. The hostname field takes precedence over the hostname value -that might have been specified via the `endpoints.beta.kubernetes.io/hostnames-map` annotation. - -With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hostname`, `pod.beta.kubernetes.io/subdomain`, `endpoints.beta.kubernetes.io/hostnames-map`. - -## How do I test if it is working? - -### Create a simple Pod to use as a test environment - -Create a file named busybox.yaml with the -following contents: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: busybox - namespace: default -spec: - containers: - - image: busybox - command: - - sleep - - "3600" - imagePullPolicy: IfNotPresent - name: busybox - restartPolicy: Always -``` - -Then create a pod using this file: - -``` -kubectl create -f busybox.yaml -``` - -### Wait for this pod to go into the running state - -You can get its status with: -``` -kubectl get pods busybox -``` - -You should see: - -``` -NAME READY STATUS RESTARTS AGE -busybox 1/1 Running 0 -``` - -### Validate that DNS is working - -Once that pod is running, you can exec nslookup in that environment: - -``` -kubectl exec -ti busybox -- nslookup kubernetes.default -``` - -You should see something like: - -``` -Server: 10.0.0.10 -Address 1: 10.0.0.10 - -Name: kubernetes.default -Address 1: 10.0.0.1 -``` - -If you see that, DNS is working correctly. - -### Troubleshooting Tips - -If the nslookup command fails, check the following: - -#### Check the local DNS configuration first -Take a look inside the resolv.conf file. (See [Inheriting DNS from the node](#inheriting-dns-from-the-node) and [Known issues](#known-issues) below for more information) - -``` -kubectl exec busybox cat /etc/resolv.conf -``` - -Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers): - -``` -search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal -nameserver 10.0.0.10 -options ndots:5 -``` - -### DNS Policy - -By default, DNS policy for a pod is 'ClusterFirst'. So pods running with hostNetwork cannot resolve DNS names. To have DNS options set along with hostNetwork, you should specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Update the busybox.yaml as following: +If there exists a headless service in the same namespace as the pod and with +the same name as the subdomain, the cluster's KubeDNS Server also returns an A +record for the Pod's fully qualified hostname. +Given a Pod with the hostname set to "`busybox-1`" and the subdomain set to +"`default-subdomain`", and a headless Service named "`default-subdomain`" in +the same namespace, the pod will see its own FQDN as +"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an +A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and +"`busybox2`" can have their distinct A records. + +The Endpoints object can specify the `hostname` for any endpoint addresses, +along with its IP. + +### Pod's DNS Policy + +DNS policies can be set on a per-pod basis. Currently Kubernetes supports the +following pod-specific DNS policies. These policies are specified in the +`dnsPolicy` field of a Pod Spec. + +- "`Default`": The Pod inherits the name resolution configuration from the node + that the pods run on. 
+ See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node) + for more details. +- "`ClusterFirst`": Any DNS query that does not match the configured cluster + domain suffix, such as "`www.kubernetes.io`", is forwarded to the upstream + nameserver inherited from the node. Cluster aministrator may have extra + stub-domain and upstream DNS servers configured. + See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods) + for details on how DNS queries are handled in those cases. +- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should + explicitly set its DNS policy "`ClusterFirstWithHostNet`". +- "`None`": A new option value introduced in Kubernetes v1.9. This allows a Pod + to ignore DNS settings from the Kubernetes environment. All DNS settings are + supposed to be provided using the `dnsConfig` field in the Pod Spec. + See [DNS config](#dns-config) subsection below. + +**NOTE:** "Default" is not the default DNS policy. If `dnsPolicy` is not +explicitly specified, then “ClusterFirst” is used. +{: .note} + + +The example below shows a Pod with its DNS policy set to +"`ClusterFirstWithHostNet`" because it has `hostNetwork` set to `true`. ```yaml apiVersion: v1 @@ -262,157 +196,85 @@ spec: dnsPolicy: ClusterFirstWithHostNet ``` -#### Quick diagnosis - -Errors such as the following indicate a problem with the kube-dns add-on or associated Services: - -``` -$ kubectl exec -ti busybox -- nslookup kubernetes.default -Server: 10.0.0.10 -Address 1: 10.0.0.10 - -nslookup: can't resolve 'kubernetes.default' -``` - -or - -``` -$ kubectl exec -ti busybox -- nslookup kubernetes.default -Server: 10.0.0.10 -Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local - -nslookup: can't resolve 'kubernetes.default' -``` - -#### Check if the DNS pod is running - -Use the kubectl get pods command to verify that the DNS pod is running. - -``` -kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -``` - -You should see something like: - -``` -NAME READY STATUS RESTARTS AGE -... -kube-dns-v19-ezo1y 3/3 Running 0 1h -... -``` - -If you see that no pod is running or that the pod has failed/completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually. - -#### Check for Errors in the DNS pod - -Use `kubectl logs` command to see logs for the DNS daemons. - -``` -kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns -kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq -kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar -``` - -See if there is any suspicious log. W, E, F letter at the beginning represent Warning, Error and Failure. Please search for entries that have these as the logging level and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors. - -#### Is DNS service up? - -Verify that the DNS service is up by using the `kubectl get service` command. - -``` -kubectl get svc --namespace=kube-system -``` - -You should see: - -``` -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -... -kube-dns 10.0.0.10 53/UDP,53/TCP 1h -... 
-``` - -If you have created the service or in the case it should be created by default but it does not appear, see this [debugging services page](/docs/tasks/debug-application-cluster/debug-service/) for more information. - -#### Are DNS endpoints exposed? - -You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` command. - -``` -kubectl get ep kube-dns --namespace=kube-system -``` - -You should see something like: - -``` -NAME ENDPOINTS AGE -kube-dns 10.180.3.17:53,10.180.3.17:53 1h -``` - -If you do not see the endpoints, see endpoints section in the [debugging services documentation](/docs/tasks/debug-application-cluster/debug-service/). - -For additional Kubernetes DNS examples, see the [cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) in the Kubernetes GitHub repository. - -## Kubernetes Federation (Multiple Zone support) - -Release 1.3 introduced Cluster Federation support for multi-site -Kubernetes installations. This required some minor -(backward-compatible) changes to the way -the Kubernetes cluster DNS server processes DNS queries, to facilitate -the lookup of federated services (which span multiple Kubernetes clusters). -See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/) for more -details on Cluster Federation and multi-site support. - -## How it Works - -The running Kubernetes DNS pod holds 3 containers - kubedns, dnsmasq and a health check called healthz. -The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains -in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve -performance. The healthz container provides a single health check endpoint while performing dual healthchecks -(for dnsmasq and kubedns). - -The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned the -kubelet passes DNS configured using the `--cluster-dns=10.0.0.10` flag to each -container. - -DNS names also need domains. The local domain is configurable, in the kubelet using -the flag `--cluster-domain=`. - -The Kubernetes cluster DNS server (based off the [SkyDNS](https://github.com/skynetservices/skydns) library) -supports forward lookups (A records), service lookups (SRV records) and reverse IP address lookups (PTR records). - -## Inheriting DNS from the node -When running a pod, kubelet will prepend the cluster DNS server and search -paths to the node's own DNS settings. If the node is able to resolve DNS names -specific to the larger environment, pods should be able to, also. See "Known -issues" below for a caveat. - -If you don't want this, or if you want a different DNS config for pods, you can -use the kubelet's `--resolv-conf` flag. Setting it to "" means that pods will -not inherit DNS. Setting it to a valid file path means that kubelet will use -this file instead of `/etc/resolv.conf` for DNS inheritance. - -## Known issues -Kubernetes installs do not configure the nodes' resolv.conf files to use the -cluster DNS by default, because that process is inherently distro-specific. -This should probably be implemented eventually. - -Linux's libc is impossibly stuck ([see this bug from -2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just -3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to -consume 1 `nameserver` record and 3 `search` records. 
This means that if a
-local installation already uses 3 `nameserver`s or uses more than 3 `search`es,
-some of those settings will be lost. As a partial workaround, the node can run
-`dnsmasq` which will provide more `nameserver` entries, but not more `search`
-entries. You can also use kubelet's `--resolv-conf` flag.
-
-If you are using Alpine version 3.3 or earlier as your base image, DNS may not
-work properly owing to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
-for more information.
-
-## References
-
-- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md)
-
-## What's next
-- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
+### Pod's DNS Config
+
+Kubernetes v1.9 introduces an Alpha feature that allows users more control over
+the DNS settings for a Pod. To enable this feature, the cluster administrator
+needs to enable the `CustomPodDNS` feature gate on the apiserver and the kubelet,
+for example, "`--feature-gates=CustomPodDNS=true,...`".
+When the feature gate is enabled, users can set the `dnsPolicy` field of a Pod
+to "`None`" and they can add a new field `dnsConfig` to a Pod Spec.
+
+The `dnsConfig` field is optional and it can work with any `dnsPolicy` setting.
+However, when a Pod's `dnsPolicy` is set to "`None`", the `dnsConfig` field has
+to be specified.
+
+Below are the properties a user can specify in the `dnsConfig` field:
+
+- `nameservers`: a list of IP addresses that will be used as DNS servers for the
+  Pod. There may be at most 3 IP addresses specified. When the Pod's `dnsPolicy`
+  is set to "`None`", the list must contain at least one IP address, otherwise
+  this property is optional.
+  The servers listed will be combined with the base nameservers generated from
+  the specified DNS policy, with duplicate addresses removed.
+- `searches`: a list of DNS search domains for hostname lookup in the Pod.
+  This property is optional. When specified, the provided list will be merged
+  into the base search domain names generated from the chosen DNS policy.
+  Duplicate domain names are removed.
+  Kubernetes allows for at most 6 search domains.
+- `options`: an optional list of objects where each object may have a `name`
+  property (required) and a `value` property (optional). The contents in this
+  property will be merged into the options generated from the specified DNS
+  policy. Duplicate entries are removed.
+
+
+## Compatibility
+
+Before v1.3, a service name generated by kube-dns was of the form
+"`my-svc.my-namespace.cluster.local`". Note that the '`svc`' level was added
+later. This form is no longer supported.
+
+With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`,
+to specify what the Pod's hostname should be.
+The Pod annotation, if specified, takes precedence over the Pod's name as the
+hostname of the pod. For example, given a Pod with annotation
+`pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname
+set to "my-pod-name".
+The annotation has been deprecated since v1.3 in favor of the `hostname` field
+in the Pod spec.
+
+Kubernetes v1.2 provides a beta feature for users to add a Pod annotation,
+`pod.beta.kubernetes.io/subdomain`, to specify the Pod's subdomain.
+The final domain will be "`<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>`".
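+
+To make the legacy mechanism concrete, here is a minimal sketch of a Pod that
+uses these annotations. The annotation keys, the "foo"/"bar" values and the
+"my-namespace" namespace come from the surrounding text; the Pod name and the
+container are only hypothetical placeholders:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: annotated-pod            # hypothetical name, for illustration only
+  namespace: my-namespace
+  annotations:
+    pod.beta.kubernetes.io/hostname: foo
+    pod.beta.kubernetes.io/subdomain: bar
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    command:
+    - sleep
+    - "3600"
+```
+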
+For example, a Pod with the hostname annotation set to "`foo`", and the subdomain
+annotation set to "`bar`", in namespace "`my-namespace`", will have the FQDN
+"`foo.bar.my-namespace.svc.cluster.local`".
+
+Back in Kubernetes v1.2, the Endpoints object could have an annotation
+`endpoints.beta.kubernetes.io/hostnames-map`. Its value is the JSON
+representation of `map[string(IP)][endpoints.HostRecord]`, for example:
+`'{"10.245.1.6":{HostName: "my-webserver"}}'`.
+
+If the Endpoints are for a headless service, an A record is created with the
+format `<hostname>.<service-name>.<namespace>.svc.<cluster-domain>`.
+For the example JSON, if endpoints are for a headless service named "`bar`",
+and one of the endpoints has IP "`10.245.1.6`", an A record is created with the
+name "`my-webserver.bar.my-namespace.svc.cluster.local`" and the A record
+lookup would return "`10.245.1.6`".
+This endpoints annotation generally does not need to be specified by end-users,
+but can be used by the internal service controller to deliver the aforementioned
+feature.
+Since v1.3, the `hostname` field of an endpoint address in the Endpoints object
+takes precedence over the hostname value that might have been specified via the
+`endpoints.beta.kubernetes.io/hostnames-map` annotation. The annotation has
+since been deprecated.
+
+{% endcapture %}
+
+{% capture whatsnext %}
+
+For guidance on administering DNS configurations, check
+[Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/).
+
+{% endcapture %}
+
+{% include templates/concept.md %}
diff --git a/docs/tasks/administer-cluster/dns-custom-nameservers.md b/docs/tasks/administer-cluster/dns-custom-nameservers.md
index 792978c1abcfd..2be0a8d028224 100644
--- a/docs/tasks/administer-cluster/dns-custom-nameservers.md
+++ b/docs/tasks/administer-cluster/dns-custom-nameservers.md
@@ -2,12 +2,12 @@
 approvers:
 - bowei
 - zihongz
-title: Configure private DNS zones and upstream nameservers in Kubernetes
+title: Configure DNS Service
 ---
 
 {% capture overview %}
-This page shows how to add custom private DNS zones (stub domains) and upstream
-nameservers.
+This page provides hints on configuring the DNS Pod and guidance on customizing
+the DNS resolution process and diagnosing DNS problems.
 {% endcapture %}
 
 {% capture prerequisites %}
@@ -18,6 +18,45 @@
 
 {% capture steps %}
 
+## Introduction
+
+Starting from Kubernetes v1.3, DNS is a built-in service launched automatically
+by the addon manager
+[cluster add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md).
+
+The running Kubernetes DNS pod holds 3 containers:
+
+- "`kubedns`": The `kubedns` process watches the Kubernetes master for changes
+  in Services and Endpoints, and maintains in-memory lookup structures to serve
+  DNS requests.
+- "`dnsmasq`": The `dnsmasq` container adds DNS caching to improve performance.
+- "`healthz`": The `healthz` container provides a single health check endpoint
+  while performing dual healthchecks (for `dnsmasq` and `kubedns`).
+
+The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned,
+the kubelet passes DNS configured using the `--cluster-dns=<dns-service-ip>`
+flag to each container.
+
+DNS names also need domains. The local domain is configurable in the kubelet
+using the flag `--cluster-domain=<default-local-domain>`.
+
+The Kubernetes cluster DNS server is based on the
+[SkyDNS](https://github.com/skynetservices/skydns) library. It supports forward
+lookups (A records), service lookups (SRV records) and reverse IP address
+lookups (PTR records).
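+
+As an illustration of the Service mentioned above, the following is a
+trimmed-down sketch of what a `kube-dns` Service object typically looks like.
+The `10.0.0.10` cluster IP is only the example DNS Service IP used elsewhere
+on this page, not a required value:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: kube-dns
+  namespace: kube-system
+  labels:
+    k8s-app: kube-dns
+spec:
+  clusterIP: 10.0.0.10    # static Service IP; the same value is passed to kubelet as --cluster-dns
+  selector:
+    k8s-app: kube-dns
+  ports:
+  - name: dns
+    port: 53
+    protocol: UDP
+  - name: dns-tcp
+    port: 53
+    protocol: TCP
+```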
+ +## Inheriting DNS from the node + +When running a pod, kubelet will prepend the cluster DNS server and search +paths to the node's own DNS settings. If the node is able to resolve DNS names +specific to the larger environment, pods should be able to, also. +See [Known issues](#known-issues) below for a caveat. + +If you don't want this, or if you want a different DNS config for pods, you can +use the kubelet's `--resolv-conf` flag. Setting it to "" means that pods will +not inherit DNS. Setting it to a valid file path means that kubelet will use +this file instead of `/etc/resolv.conf` for DNS inheritance. + ## Configure stub-domain and upstream DNS servers Cluster administrators can specify custom stub domains and upstream nameservers @@ -43,7 +82,8 @@ As specified, DNS requests with the “.acme.local” suffix are forwarded to a DNS listening at 1.2.3.4. Google Public DNS serves the upstream queries. -The table below describes how queries with certain domain names would map to their destination DNS servers: +The table below describes how queries with certain domain names would map to +their destination DNS servers: | Domain name | Server answering the query | | ----------- | -------------------------- | @@ -58,36 +98,37 @@ details about the configuration option format. {% capture discussion %} -## Understanding name resolution in Kubernetes - -DNS policies can be set on a per-pod basis. Currently Kubernetes supports two pod-specific DNS policies: “Default” and “ClusterFirst”. These policies are specified with the `dnsPolicy` flag. +## Impacts on Pods -*NOTE: "Default" is not the default DNS policy. If `dnsPolicy` is not -explicitly specified, then “ClusterFirst” is used.* +Custom upstream nameservers and stub domains won't impact Pods that have their +`dnsPolicy` set to "`Default`" or "`None`". -### "Default" DNS Policy +If a Pod's `dnsPolicy` is set to "`ClusterFirst`", its name resolution is +handled differently, depending on whether stub-domain and upstream DNS servers +are configured. -If `dnsPolicy` is set to “Default”, then the name resolution configuration is -inherited from the node that the pods run on. Custom upstream nameservers and stub domains cannot be used in conjunction with this policy. +**Without custom configurations**: Any query that does not match the configured +cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream +nameserver inherited from the node. -### "ClusterFirst" DNS Policy - -If the `dnsPolicy` is set to "ClusterFirst", name resolution is handled differently, *depending on whether stub-domain and upstream DNS servers are configured*. - -**Without custom configurations**: Any query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. - -**With custom configurations**: If stub domains and upstream DNS servers are configured (as in the [previous example](#configuring-stub-domain-and-upstream-dns-servers)), DNS queries will be -routed according to the following flow: +**With custom configurations**: If stub domains and upstream DNS servers are +configured (as in the [previous example](#configuring-stub-domain-and-upstream-dns-servers)), +DNS queries will be routed according to the following flow: 1. The query is first sent to the DNS caching layer in kube-dns. -1. From the caching layer, the suffix of the request is examined and then forwarded to the appropriate DNS, based on the following cases: +1. 
From the caching layer, the suffix of the request is examined and then + forwarded to the appropriate DNS, based on the following cases: - * *Names with the cluster suffix* (e.g.".cluster.local"): The request is sent to kube-dns. + * *Names with the cluster suffix* (e.g.".cluster.local"): + The request is sent to kube-dns. - * *Names with the stub domain suffix* (e.g. ".acme.local"): The request is sent to the configured custom DNS resolver (e.g. listening at 1.2.3.4). + * *Names with the stub domain suffix* (e.g. ".acme.local"): + The request is sent to the configured custom DNS resolver (e.g. listening at 1.2.3.4). - * *Names without a matching suffix* (e.g."widget.com"): The request is forwarded to the upstream DNS (e.g. Google public DNS servers at 8.8.8.8 and 8.8.4.4). + * *Names without a matching suffix* (e.g."widget.com"): + The request is forwarded to the upstream DNS + (e.g. Google public DNS servers at 8.8.8.8 and 8.8.4.4). ![DNS lookup flow](/docs/tasks/administer-cluster/dns-custom-nameservers/dns.png) @@ -100,7 +141,7 @@ Options for the kube-dns `kube-system:kube-dns` ConfigMap: | `stubDomains` (optional) | A JSON map using a DNS suffix key (e.g. “acme.local”) and a value consisting of a JSON array of DNS IPs. | The target nameserver may itself be a Kubernetes service. For instance, you can run your own copy of dnsmasq to export custom DNS names into the ClusterDNS namespace. | | `upstreamNameservers` (optional) | A JSON array of DNS IPs. | Note: If specified, then the values specified replace the nameservers taken by default from the node’s `/etc/resolv.conf`. Limits: a maximum of three upstream nameservers can be specified. | -## Additional examples +## Examples ### Example: Stub domain @@ -142,6 +183,197 @@ data: [“172.16.0.1”] ``` +## Debugging DNS resolution + +### Create a simple Pod to use as a test environment + +Create a file named busybox.yaml with the following contents: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: busybox + namespace: default +spec: + containers: + - image: busybox + command: + - sleep + - "3600" + imagePullPolicy: IfNotPresent + name: busybox + restartPolicy: Always +``` + +Then create a pod using this file and verify its status: + +``` +kubectl create -f busybox.yaml +pod "busybox" created + +kubectl get pods busybox +NAME READY STATUS RESTARTS AGE +busybox 1/1 Running 0 +``` + +Once that pod is running, you can exec `nslookup` in that environment. +If you see something like the following, DNS is working correctly. + +``` +kubectl exec -ti busybox -- nslookup kubernetes.default +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +Name: kubernetes.default +Address 1: 10.0.0.1 +``` + +If the `nslookup` command fails, check the following: + +### Check the local DNS configuration first + +Take a look inside the resolv.conf file. 
+(See [Inheriting DNS from the node](#inheriting-dns-from-the-node) and
+[Known issues](#known-issues) below for more information.)
+
+```
+kubectl exec busybox cat /etc/resolv.conf
+```
+
+Verify that the search path and name server are set up like the following
+(note that search path may vary for different cloud providers):
+
+```
+search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
+nameserver 10.0.0.10
+options ndots:5
+```
+
+Errors such as the following indicate a problem with the kube-dns add-on or
+associated Services:
+
+```
+$ kubectl exec -ti busybox -- nslookup kubernetes.default
+Server:    10.0.0.10
+Address 1: 10.0.0.10
+
+nslookup: can't resolve 'kubernetes.default'
+```
+
+or
+
+```
+$ kubectl exec -ti busybox -- nslookup kubernetes.default
+Server:    10.0.0.10
+Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
+
+nslookup: can't resolve 'kubernetes.default'
+```
+
+### Check if the DNS pod is running
+
+Use the `kubectl get pods` command to verify that the DNS pod is running.
+
+```
+kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
+NAME                    READY     STATUS    RESTARTS   AGE
+...
+kube-dns-v19-ezo1y      3/3       Running   0          1h
+...
+```
+
+If you see that no pod is running or that the pod has failed/completed, the DNS
+add-on may not be deployed by default in your current environment and you will
+have to deploy it manually.
+
+### Check for Errors in the DNS pod
+
+Use the `kubectl logs` command to see logs for the DNS daemons.
+
+```
+kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
+kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
+kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
+```
+
+See if there is any suspicious log. The letters '`W`', '`E`' and '`F`' at the
+beginning of a log line represent Warning, Error and Failure. Please search for
+entries that have these as the logging level and use
+[kubernetes issues](https://github.com/kubernetes/kubernetes/issues)
+to report unexpected errors.
+
+### Is DNS service up?
+
+Verify that the DNS service is up by using the `kubectl get service` command.
+
+```
+kubectl get svc --namespace=kube-system
+NAME                    CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
+...
+kube-dns                10.0.0.10      <none>        53/UDP,53/TCP   1h
+...
+```
+
+If you have created the service, or if it should have been created by default
+but does not appear, see
+[debugging services](/docs/tasks/debug-application-cluster/debug-service/) for
+more information.
+
+### Are DNS endpoints exposed?
+
+You can verify that DNS endpoints are exposed by using the `kubectl get endpoints`
+command.
+
+```
+kubectl get ep kube-dns --namespace=kube-system
+NAME       ENDPOINTS                       AGE
+kube-dns   10.180.3.17:53,10.180.3.17:53   1h
+```
+
+If you do not see the endpoints, see the endpoints section in the
+[debugging services](/docs/tasks/debug-application-cluster/debug-service/) documentation.
+
+For additional Kubernetes DNS examples, see the
+[cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns)
+in the Kubernetes GitHub repository.
+
+## Known issues
+
+Kubernetes installs do not configure the nodes' resolv.conf files to use the
+cluster DNS by default, because that process is inherently distro-specific.
+This should probably be implemented eventually.
+ +Linux's libc is impossibly stuck ([see this bug from +2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just +3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to +consume 1 `nameserver` record and 3 `search` records. This means that if a +local installation already uses 3 `nameserver`s or uses more than 3 `search`es, +some of those settings will be lost. As a partial workaround, the node can run +`dnsmasq` which will provide more `nameserver` entries, but not more `search` +entries. You can also use kubelet's `--resolv-conf` flag. + +If you are using Alpine version 3.3 or earlier as your base image, DNS may not +work properly owing to a known issue with Alpine. +Check [here](https://github.com/kubernetes/kubernetes/issues/30215) +for more information. + +## Kubernetes Federation (Multiple Zone support) + +Release 1.3 introduced Cluster Federation support for multi-site Kubernetes +installations. This required some minor (backward-compatible) changes to the +way the Kubernetes cluster DNS server processes DNS queries, to facilitate +the lookup of federated services (which span multiple Kubernetes clusters). +See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/) +for more details on Cluster Federation and multi-site support. + +## References + +- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md) + +## What's next +- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). + {% endcapture %} {% include templates/task.md %}