
Migrate reference details out Service concept #36675

Merged · 6 commits · Nov 29, 2022
400 changes: 27 additions & 373 deletions content/en/docs/concepts/services-networking/service.md

Large diffs are not rendered by default.

11 changes: 11 additions & 0 deletions content/en/docs/reference/networking/_index.md
@@ -0,0 +1,11 @@
---
title: Networking Reference
content_type: reference
weight: 85
---

<!-- overview -->
This section of the Kubernetes documentation provides reference details
of Kubernetes networking.

<!-- body -->
@@ -1,7 +1,7 @@
---
title: Ports and Protocols
content_type: reference
weight: 90
weight: 40
---

When running Kubernetes in an environment with strict network boundaries, such
127 changes: 127 additions & 0 deletions content/en/docs/reference/networking/service-protocols.md
@@ -0,0 +1,127 @@
---
title: Protocols for Services
content_type: reference
weight: 10
---

<!-- overview -->
If you configure a {{< glossary_tooltip text="Service" term_id="service" >}},
you can select from any network protocol that Kubernetes supports.

Kubernetes supports the following protocols with Services:

- [`SCTP`](#protocol-sctp)
- [`TCP`](#protocol-tcp) _(the default)_
- [`UDP`](#protocol-udp)

When you define a Service, you can also specify the
[application protocol](/docs/concepts/services-networking/service/#application-protocol)
that it uses.
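For illustration, here is a minimal Service manifest (the name and selector label are hypothetical) that sets both fields; `protocol` selects the transport protocol, while `appProtocol` is a hint that implementations such as load balancers can consume:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web  # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: web  # hypothetical label
  ports:
    - name: https
      protocol: TCP        # transport protocol: TCP, UDP, or SCTP
      appProtocol: https   # application protocol hint
      port: 443
      targetPort: 8443
```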

This document details some special cases, all of them typically using TCP
as a transport protocol:

- [HTTP](#protocol-http-special) and [HTTPS](#protocol-http-special)
- [PROXY protocol](#protocol-proxy-special)
- [TLS](#protocol-tls-special) termination at the load balancer

<!-- body -->
## Supported protocols {#protocol-support}

There are three valid values for the `protocol` of a port for a Service:

### `SCTP` {#protocol-sctp}

{{< feature-state for_k8s_version="v1.20" state="stable" >}}

When using a network plugin that supports SCTP traffic, you can use SCTP for
most Services. For `type: LoadBalancer` Services, SCTP support depends on the cloud
provider offering this facility (most do not).

SCTP is not supported on nodes that run Windows.

#### Support for multihomed SCTP associations {#caveat-sctp-multihomed}

The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.

NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.

{{< note >}}
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
{{< /note >}}

**Contributor:** You may want to make 'kube-proxy' a link or a glossary entry.

**Contributor Author:** For this PR, I was aiming to retain the original text. Subsequent PRs could tidy that up.

If some of my other PRs merge, I can shrink #30817 until it reaches a reviewable size. That PR includes more fixes than just the reorganisation that this PR covers.


### `TCP` {#protocol-tcp}

You can use TCP for any kind of Service, and it's the default network protocol.
**Contributor (suggested change):** Change "the default network protocol" to "the default service protocol."

**Contributor Author:** For this PR, I was aiming to retain the original text. Subsequent PRs could tidy that up.

If some of my other PRs merge, I can shrink #30817 until it reaches a reviewable size. That PR includes more fixes than just the reorganisation that this PR covers.

### `UDP` {#protocol-udp}

You can use UDP for most Services. For `type: LoadBalancer` Services,
UDP support depends on the cloud provider offering this facility.


## Special cases

### HTTP {#protocol-http-special}

If your cloud provider supports it, you can use a Service in LoadBalancer mode to
configure a load balancer outside of your Kubernetes cluster, in a special mode
where your cloud provider's load balancer implements HTTP / HTTPS reverse proxying,
with traffic forwarded to the backend endpoints for that Service.

Typically, you set the protocol for the Service to `TCP` and add an
{{< glossary_tooltip text="annotation" term_id="annotation" >}}
(usually specific to your cloud provider) that configures the load balancer
to handle traffic at the HTTP level.
This configuration might also include serving HTTPS (HTTP over TLS) and
reverse-proxying plain HTTP to your workload.

{{< note >}}
You can also use an {{< glossary_tooltip term_id="ingress" >}} to expose
HTTP/HTTPS Services.
{{< /note >}}
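Annotation keys are specific to each cloud provider; as a hedged sketch, a LoadBalancer Service using the AWS in-tree backend-protocol annotation might look like this (the Service name and selector label are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web  # hypothetical name
  annotations:
    # AWS in-tree example; other providers use different annotation keys
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: web  # hypothetical label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

Check your cloud provider's documentation for the annotation keys it actually supports.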

You might additionally want to specify that the
[application protocol](/docs/concepts/services-networking/service/#application-protocol)
of the connection is `http` or `https`. Use `http` if the session from the
load balancer to your workload is HTTP without TLS, and use `https` if the
session from the load balancer to your workload uses TLS encryption.

**Contributor:** I'm not sure if the application protocol has to be bound to a LoadBalancer service. Maybe we can use it for ClusterIP and/or NodePort services as well?

**Contributor Author:** If you are annotating a Service, where the type of the Service is set to LoadBalancer, we can assume there's a load balancer. If you're not, this section isn't relevant.

**Contributor:** This section is not about LoadBalancer service, right? We are confusing users with this revision.

### PROXY protocol {#protocol-proxy-special}

If your cloud provider supports it, you can use a Service set to `type: LoadBalancer`
to configure a load balancer outside of Kubernetes itself that forwards connections
wrapped with the
[PROXY protocol](https://www.haproxy.org/download/2.5/doc/proxy-protocol.txt).

The load balancer then sends an initial series of octets describing the
incoming connection, similar to this example (PROXY protocol v1):

```
PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
```

The data after the PROXY protocol preamble are the original
data from the client. When either side closes the connection,
the load balancer also triggers a connection close and sends
any remaining data where feasible.

Typically, you define a Service with the protocol set to `TCP`.
You also set an annotation, specific to your cloud provider, that configures
the load balancer to wrap each incoming connection in the PROXY protocol.
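As a hedged sketch, the AWS in-tree annotation that enables the PROXY protocol on a classic ELB looks like this (the Service name and selector label are hypothetical; other providers use different keys):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxied-app  # hypothetical name
  annotations:
    # AWS in-tree example; check your cloud provider's documentation
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: proxied-app  # hypothetical label
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
```

Your workload (or a proxy in front of it) must then parse the PROXY preamble on each accepted connection before reading the client's data.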
**Contributor:** What are the implications to users? In other words, why should they care about these details?

**Contributor Author:** Can we fix that as a follow up after the refactor part of the changes are in?

**Contributor:** IMHO, in a refactor PR, we can move the existing text without any modifications. Such a PR is a pure refactor one. For newly added content, we may want to make sure we are not introducing defects. If a PR is already too large, we may want to consider whether it is possible to split it into smaller PRs. When we identify that something is more appropriate for a follow-up PR, we may want to ensure that an issue is filed before the current PR is merged.

Does this make sense to you?

**Contributor Author:** I agree with each of those principles @tengqm but I'm not sure if we agree on the interpretation of them.

I don't see a way to make this PR smaller and still be useful.


### TLS {#protocol-tls-special}

If your cloud provider supports it, you can use a Service set to `type: LoadBalancer` as
a way to set up external reverse proxying, where the connection from client to load
balancer is TLS encrypted and the load balancer is the TLS server peer.
The connection from the load balancer to your workload can also be TLS,
or might be plain text. The exact options available to you depend on your
cloud provider or custom Service implementation.

Typically, you set the protocol to `TCP` and set an annotation
(usually specific to your cloud provider) that configures the load balancer
to act as a TLS server. You would configure the TLS identity (as server,
and possibly also as a client that connects to your workload) using
mechanisms that are specific to your cloud provider.
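Annotation keys and certificate references are provider-specific; as a hedged sketch using the AWS in-tree annotations (the certificate ARN, Service name, and selector label are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tls-app  # hypothetical name
  annotations:
    # AWS in-tree examples; check your cloud provider's documentation
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"  # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: tls-app  # hypothetical label
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8080  # here, plain-text HTTP from the load balancer to the workload
```

In this sketch, the load balancer terminates TLS on port 443 and forwards plain text to the workload; whether re-encryption to the backend is available depends on your provider.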