
Commit

Merge branch 'main' into alpeb/2.12
alpeb committed Jan 10, 2022
2 parents 7d4a1a2 + eaee8e3 commit f188049
Showing 44 changed files with 1,052 additions and 113 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/install.yml
@@ -31,7 +31,7 @@ jobs:
run: |
make build-run.linkerd.io
- uses: actions/upload-artifact@v2.2.4
- uses: actions/upload-artifact@v2.3.1
with:
name: run.linkerd.io
path: tmp/run.linkerd.io/public
@@ -45,7 +45,7 @@ jobs:
needs: [build]

steps:
- uses: actions/download-artifact@v2.0.10
- uses: actions/download-artifact@v2.1.0
with:
name: run.linkerd.io

2 changes: 1 addition & 1 deletion linkerd.io/content/2.10/tasks/distributed-tracing.md
@@ -209,7 +209,7 @@ If using helm to install ingress-nginx, you can configure tracing by using:
controller:
config:
enable-opentracing: "true"
zipkin-collector-host: linkerd-collector.linkerd
zipkin-collector-host: collector.linkerd-jaeger
```
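If you would rather pass these settings on the helm command line than in a values file, the equivalent flags look roughly like this. This is a sketch only: the release name, chart reference, and namespace are assumptions, not part of this guide.

```shell
# Sketch: release name, repo alias, and namespace are assumptions.
# --set-string keeps "true" as a string, which the nginx ConfigMap expects.
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set-string controller.config.enable-opentracing=true \
  --set-string controller.config.zipkin-collector-host=collector.linkerd-jaeger
```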
### Client Library
4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/tasks/using-ingress.md
@@ -80,7 +80,7 @@ mesh](https://buoyant.io/2021/05/24/emissary-and-linkerd-the-best-of-both-worlds
Nginx can be meshed normally, but the
[`nginx.ingress.kubernetes.io/service-upstream`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#service-upstream)
annotation should be set to `true`. No further configuration is required.
annotation should be set to `"true"`. No further configuration is required.

```yaml
# apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
@@ -90,7 +90,7 @@ metadata:
name: emojivoto-web-ingress
namespace: emojivoto
annotations:
nginx.ingress.kubernetes.io/service-upstream: true
nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
ingressClassName: nginx
defaultBackend:
31 changes: 16 additions & 15 deletions linkerd.io/content/2.11/features/protocol-detection.md
@@ -31,30 +31,31 @@ likely running into a protocol detection timeout. This section will help you
understand how to fix this.
{{< /note >}}

In some cases, Linkerd's protocol detection will time out because it doesn't
see any bytes from the client. This situation is commonly encountered when
using "server-speaks-first" protocols where the server sends data before the
client does, such as SMTP, or protocols that proactively establish connections
without sending data, such as Memcache. In this case, the connection will
proceed as a TCP connection after a 10-second protocol detection delay.
In some cases, Linkerd's protocol detection will time out because it doesn't see
any bytes from the client. This situation is commonly encountered when using
protocols where the server sends data before the client does (such as SMTP) or
protocols that proactively establish connections without sending data (such as
Memcache). In this case, the connection will proceed as a TCP connection after a
10-second protocol detection delay.

To avoid this delay, you will need to provide some configuration for Linkerd.
There are two basic mechanisms for configuring protocol detection: _opaque
ports_ and _skip ports_. Marking a port as _opaque_ instructs Linkerd to skip
protocol detection and immediately proxy the connection as a TCP stream;
marking a port as a _skip port_ bypasses the proxy entirely. Opaque ports are
generally preferred (as Linkerd can provide mTLS, TCP-level metrics, etc), but
can only be used for services inside the cluster.
protocol detection and immediately proxy the connection as a TCP stream; marking
a port as a _skip port_ bypasses the proxy entirely. Opaque ports are generally
preferred (as Linkerd can still provide mTLS, TCP-level metrics, etc), but can
only be used for destinations inside the cluster.
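As a concrete sketch of the two mechanisms (the workload names and the port number below are purely illustrative; only the `config.linkerd.io/*` annotation keys come from Linkerd):

```yaml
# Sketch only: names and the port number are made up for illustration.
# Opaque port: set on the destination Service (or its namespace); Linkerd
# proxies the connection as raw TCP but still provides mTLS and TCP metrics.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-server
  annotations:
    config.linkerd.io/opaque-ports: "4222"
spec:
  selector:
    app: my-tcp-server
  ports:
  - port: 4222
---
# Skip port: set on the *client* pod template; outbound connections to this
# port bypass the proxy entirely (no mTLS, no metrics).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client
spec:
  selector:
    matchLabels:
      app: my-client
  template:
    metadata:
      labels:
        app: my-client
      annotations:
        config.linkerd.io/skip-outbound-ports: "4222"
    spec:
      containers:
      - name: client
        image: my-client:latest  # placeholder image
```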

By default, Linkerd automatically marks the ports for some server-speaks-first
protocols as opaque. Services that speak those protocols over the default ports
to destinations inside the cluster do not need further configuration.
Linkerd's default list of opaque ports in the 2.11 release is 25 (SMTP), 587
(SMTP), 3306 (MySQL), 4444 (Galera), 5432 (Postgres), 6379 (Redis), 9300
(ElasticSearch), and 11211 (Memcache). Note that this may change in future
releases.

The following table contains common protocols that may require configuration.
Linkerd's default list of opaque ports in the 2.11 release is **25** (SMTP),
**587** (SMTP), **3306** (MySQL), **4444** (Galera), **5432** (Postgres),
**6379** (Redis), **9300** (ElasticSearch), and **11211** (Memcache).

The following table contains common protocols that may require additional
configuration.

| Protocol | Default port(s) | Notes |
|-----------------|-----------------|-------|
4 changes: 2 additions & 2 deletions linkerd.io/content/2.11/getting-started/_index.md
@@ -255,14 +255,14 @@ the debugging tutorial below for much more on this.)

## That's it! 👏

Congratulations, you have joined the lofty, exalted ranks of Linkerd users!
Congratulations, you have joined the lofty ranks of Linkerd users!
Give yourself a pat on the back.

What's next? Here are some steps we recommend:

* Learn how to use Linkerd to [debug the errors in
Emojivoto](../debugging-an-app/).
* Learn more about [meshing your own services](../adding-your-service/) to
* Learn how to [add your own services](../adding-your-service/) to
Linkerd without downtime.
* Learn more about [Linkerd's architecture](../reference/architecture/)
* Learn how to set up [automatic control plane mTLS credential
6 changes: 3 additions & 3 deletions linkerd.io/content/2.11/reference/cluster-configuration.md
@@ -58,16 +58,16 @@ echo $MASTER_IPV4_CIDR $NETWORK $NETWORK_TARGET_TAG
10.0.0.0/28 foo-network gke-foo-cluster-c1ecba83-node
```

Create the firewall rules for `proxy-injector` and `tap`:
Create the firewall rules for `proxy-injector`, `policy-validator` and `tap`:

```bash
gcloud compute firewall-rules create gke-to-linkerd-control-plane \
--network "$NETWORK" \
--allow "tcp:8443,tcp:8089" \
--allow "tcp:8443,tcp:8089,tcp:9443" \
--source-ranges "$MASTER_IPV4_CIDR" \
--target-tags "$NETWORK_TARGET_TAG" \
--priority 1000 \
--description "Allow traffic on ports 8443, 8089 for linkerd control-plane components"
--description "Allow traffic on ports 8443, 8089, 9443 for linkerd control-plane components"
```

Finally, verify that the firewall is created:
72 changes: 31 additions & 41 deletions linkerd.io/content/2.11/tasks/adding-your-service.md
@@ -1,5 +1,5 @@
+++
title = "Adding Your Services to Linkerd"
title = "Adding your services to Linkerd"
description = "In order for your services to take advantage of Linkerd, they also need to be *meshed* by injecting Linkerd's data plane proxy into their pods."
aliases = [
"../adding-your-service/",
@@ -12,18 +12,16 @@ your application. In order for your services to take advantage of Linkerd, they
need to be *meshed*, by injecting Linkerd's data plane proxy into their pods.

For most applications, meshing a service is as simple as adding a Kubernetes
annotation. However, services that make network calls immediately on startup
may need to [handle startup race
conditions](#a-note-on-startup-race-conditions), and services that use MySQL,
SMTP, Memcache, and similar protocols may need to [handle server-speaks-first
protocols](#a-note-on-server-speaks-first-protocols).
annotation and restarting the service. However, services that communicate using
certain non-HTTP protocols (including MySQL, SMTP, Memcache, and others) may
need a little configuration.

Read on for more!

## Meshing a service with annotations

Meshing a Kubernetes resource is typically done by annotating the resource, or
its namespace, with the `linkerd.io/inject: enabled` Kubernetes annotation.
Meshing a Kubernetes resource is typically done by annotating the resource (or
its namespace) with the `linkerd.io/inject: enabled` Kubernetes annotation.
This annotation triggers automatic proxy injection when the resources are
created or updated. (See the [proxy injection
page](../../features/proxy-injection/) for more on how this works.)
@@ -34,26 +32,33 @@ annotation to a given Kubernetes manifest. Of course, these annotations can be
set by any other mechanism.

{{< note >}}
Simply adding the annotation will not automatically mesh existing pods. After
setting the annotation, you will need to recreate or update any resources (e.g.
with `kubectl rollout restart`) to trigger proxy injection. (Often, a
Adding the annotation to existing pods does not automatically mesh them. For
existing pods, after adding the annotation you will also need to recreate or
update the resource (e.g. by using `kubectl rollout restart` to perform a
[rolling
update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/)
can be performed to inject the proxy into a live service without interruption.)
update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/))
to trigger proxy injection.
{{< /note >}}
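The annotate-then-restart flow condenses to two commands. A sketch, assuming an existing deployment named `myapp` in a namespace `myns` (both names are made up):

```shell
# Sketch: names are illustrative. Annotate the namespace so the injector
# applies to workloads in it, then roll the workload so new pods get
# linkerd-proxy injected.
kubectl annotate namespace myns linkerd.io/inject=enabled
kubectl -n myns rollout restart deploy/myapp
```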

## Example
## Examples

To add Linkerd's data plane proxies to a service defined in a Kubernetes
manifest, you can use `linkerd inject` to add the annotations before applying
the manifest to Kubernetes:
the manifest to Kubernetes.

You can transform an existing `deployment.yml` file to add annotations
in the correct places and apply it to the cluster:

```bash
cat deployment.yml | linkerd inject - | kubectl apply -f -
```

This example transforms the `deployment.yml` file to add injection annotations
in the correct places, then applies it to the cluster.
You can mesh every deployment in a namespace by combining this
with `kubectl get`:

```bash
kubectl get -n NAMESPACE deploy -o yaml | linkerd inject - | kubectl apply -f -
```

## Verifying the data plane pods have been injected

@@ -62,39 +67,24 @@ Kubernetes for the list of containers in the pods and ensure that the proxy is
listed:

```bash
kubectl -n MYNAMESPACE get po -o jsonpath='{.items[0].spec.containers[*].name}'
kubectl -n NAMESPACE get po -o jsonpath='{.items[0].spec.containers[*].name}'
```

If everything was successful, you'll see `linkerd-proxy` in the output, e.g.:

```bash
MYCONTAINER linkerd-proxy
linkerd-proxy CONTAINER
```
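The CLI can also validate the data plane directly. A sketch (NAMESPACE is a placeholder; the `--proxy` flag extends `linkerd check` to cover the injected proxies):

```shell
# Placeholder namespace; checks the control plane plus the data-plane
# proxies running in that namespace.
linkerd check --proxy -n NAMESPACE
```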

## A note on startup race conditions
## Handling MySQL, SMTP, and other non-HTTP protocols

While the proxy starts very quickly, Kubernetes doesn't provide any guarantees
about container startup ordering, so the application container may start before
the proxy is ready. This means that any connections made immediately at app
startup time may fail until the proxy is active.

In many cases, this can be ignored: the application will ideally retry the
connection, or Kubernetes will restart the container after it fails, and
eventually the proxy will be ready. Alternatively, you can use
[linkerd-await](https://github.com/linkerd/linkerd-await) to delay the
application container until the proxy is ready, or set a
[`skip-outbound-ports`
annotation](../../features/protocol-detection/#skipping-the-proxy)
to bypass the proxy for these connections.

## A note on server-speaks-first protocols

Linkerd's [protocol
detection](../../features/protocol-detection/) works by
Linkerd's [protocol detection](../../features/protocol-detection/) works by
looking at the first few bytes of client data to determine the protocol of the
connection. Some protocols such as MySQL, SMTP, and other server-speaks-first
protocols don't send these bytes. In some cases, this may require additional
configuration to avoid a 10-second delay in establishing the first connection.
connection. Some protocols, such as MySQL and SMTP, don't send these bytes. If
your application uses these protocols without TLSing them, you may require
additional configuration to avoid a 10-second delay when establishing
connections.

See [Configuring protocol
detection](../../features/protocol-detection/#configuring-protocol-detection)
for details.
2 changes: 1 addition & 1 deletion linkerd.io/content/2.11/tasks/distributed-tracing.md
@@ -258,7 +258,7 @@ If using helm to install ingress-nginx, you can configure tracing by using:
controller:
config:
enable-opentracing: "true"
zipkin-collector-host: linkerd-collector.linkerd
zipkin-collector-host: collector.linkerd-jaeger
```
### Client Library
47 changes: 27 additions & 20 deletions linkerd.io/content/2.11/tasks/multicluster.md
@@ -12,8 +12,8 @@ between services that live on different clusters.

At a high level, you will:

1. [Install Linkerd](#install-linkerd) on two clusters with a shared trust
anchor.
1. [Install Linkerd and Linkerd Viz](#install-linkerd) on two clusters with a
shared trust anchor.
1. [Prepare](#preparing-your-cluster) the clusters.
1. [Link](#linking-the-clusters) the clusters.
1. [Install](#installing-the-test-services) the demo.
@@ -42,16 +42,13 @@ At a high level, you will:
- Elevated privileges on both clusters. We'll be creating service accounts and
granting extended privileges, so you'll need to be able to do that on your
test clusters.
- Linkerd's `viz` extension should be installed in order to run `stat` commands,
view the Grafana or Linkerd dashboard and run the `linkerd multicluster gateways`
command.
- Support for services of type `LoadBalancer` in the `east` cluster. Check out
the documentation for your cluster provider or take a look at
[inlets](https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/).
This is what the `west` cluster will use to communicate with `east` via the
gateway.

## Install Linkerd
## Install Linkerd and Linkerd Viz

{{< fig
alt="install"
@@ -116,14 +113,23 @@ linkerd install \
>(kubectl --context=east apply -f -)
```

And then Linkerd Viz:

```bash
for ctx in west east; do
linkerd --context=${ctx} viz install | \
kubectl --context=${ctx} apply -f - || break
done
```

Applying the `install` output brings Linkerd up on each cluster. You can
verify that everything has come up successfully with `check`.

```bash
for ctx in west east; do
echo "Checking cluster: ${ctx} .........\n"
echo "Checking cluster: ${ctx} ........."
linkerd --context=${ctx} check || break
echo "-------------\n"
echo "-------------"
done
```

@@ -151,7 +157,7 @@ for ctx in west east; do
echo "Installing on cluster: ${ctx} ........."
linkerd --context=${ctx} multicluster install | \
kubectl --context=${ctx} apply -f - || break
echo "-------------\n"
echo "-------------"
done
```

@@ -175,7 +181,7 @@ for ctx in west east; do
echo "Checking gateway on cluster: ${ctx} ........."
kubectl --context=${ctx} -n linkerd-multicluster \
rollout status deploy/linkerd-gateway || break
echo "-------------\n"
echo "-------------"
done
```

@@ -185,9 +191,7 @@
running:
```bash
for ctx in west east; do
printf "Checking cluster: ${ctx} ........."
while [ "$(kubectl --context=${ctx} -n linkerd-multicluster get service \
-o 'custom-columns=:.status.loadBalancer.ingress[0].ip' \
--no-headers)" = "<none>" ]; do
while [ "$(kubectl --context=${ctx} -n linkerd-multicluster get service -o 'custom-columns=:.status.loadBalancer.ingress[0].ip' --no-headers)" = "<none>" ]; do
printf '.'
sleep 1
done
@@ -271,10 +275,10 @@ can mirror. To add these to both clusters, you can run:
for ctx in west east; do
echo "Adding test services on cluster: ${ctx} ........."
kubectl --context=${ctx} apply \
-k "github.com/linkerd/website/multicluster/${ctx}/"
-n test -k "github.com/linkerd/website/multicluster/${ctx}/"
kubectl --context=${ctx} -n test \
rollout status deploy/podinfo || break
echo "-------------\n"
echo "-------------"
done
```

@@ -370,9 +374,8 @@ kubectl --context=west -n test exec -c nginx -it \
You'll see the `greeting from east` message! Requests from the `frontend` pod
running in `west` are being transparently forwarded to `east`. Assuming that
you're still port forwarding from the previous step, you can also reach this
from your browser at [http://localhost:8080/east](http://localhost:8080/east).
Refresh a couple times and you'll be able to get metrics from `linkerd viz stat`
as well.
with `curl http://localhost:8080/east`. Make that call a couple times and
you'll be able to get metrics from `linkerd viz stat` as well.

```bash
linkerd --context=west -n test viz stat --from deploy/frontend svc
@@ -404,8 +407,8 @@ linkerd --context=west -n test viz tap deploy/frontend | \

`tls=true` tells you that the requests are being encrypted!

{{< note >}} As `linkerd edges` works on concrete resources and cannot see two
clusters at once, it is not currently able to show the edges between pods in
{{< note >}} As `linkerd viz edges` works on concrete resources and cannot see
two clusters at once, it is not currently able to show the edges between pods in
`east` and `west`. This is the reason we're using `tap` to validate mTLS here.
{{< /note >}}

@@ -506,7 +509,10 @@ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to
To cleanup the multicluster control plane, you can run:

```bash
linkerd --context=west multicluster unlink --cluster-name east |
kubectl --context=west delete -f -
for ctx in west east; do
kubectl --context=${ctx} delete ns test
linkerd --context=${ctx} multicluster uninstall | kubectl --context=${ctx} delete -f -
done
```
@@ -515,6 +521,7 @@ If you'd also like to remove your Linkerd installation, run:

```bash
for ctx in west east; do
linkerd --context=${ctx} viz uninstall | kubectl --context=${ctx} delete -f -
linkerd --context=${ctx} uninstall | kubectl --context=${ctx} delete -f -
done
```

