remove any/all code language hints for now #715

Closed · wants to merge 2 commits

2 changes: 1 addition & 1 deletion docusaurus/blog/wildcard-dns/cheatsheet.md
@@ -4,7 +4,7 @@ authors: dovholuknf

# Wildcard DNS Cheatsheet

```bash
```
# ------------- start docker
docker-compose up

16 changes: 8 additions & 8 deletions docusaurus/blog/zitification/kubernetes/index.md
@@ -57,7 +57,7 @@ as my service name. When deployed by a cloud provider, the Kubernetes API is gen

#### Example Ziti CLI commands

```bash
```
# the name of the service
service_name=k8s.oci
# the name of the identity you'd like to see on the kubectl client
@@ -100,7 +100,7 @@ Once we have established the pieces of the [Ziti Network][8], we'll want to get

Notice that these commands change the output file location and write two separate Kubernetes config files. If you prefer to merge them all into one big config file and switch contexts, feel free; a quick sketch of that follows. I left them as separate files here because it provides a very clear separation as to which config is being used or modified.
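If you do want a single merged file, something like this would work (a sketch; the merged filename is an assumption, and the two input paths match the public and private configs generated later in this post):

```
# merge both kubeconfigs into one file, then switch contexts as needed
KUBECONFIG=/tmp/oci/config.oci.public:/tmp/oci/config.oci.private \
  kubectl config view --flatten > /tmp/oci/config.oci.merged
export KUBECONFIG=/tmp/oci/config.oci.merged
kubectl config get-contexts   # pick one with: kubectl config use-context <name>
```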

```bash
```
# Get this value directly from Oracle
oci_cluster_id="put-your-cluster-id-here"

Expand Down Expand Up @@ -129,7 +129,7 @@ At this point we should have all the pieces in place so that we can start puttin

This step is very straightforward for anyone who's used Kubernetes before. Issue the following commands, making sure the path is correct for your public Kubernetes config file, and verify Kubernetes works as expected.

```bash
```
export KUBECONFIG=/tmp/oci/config.oci.public
kubectl get pods -v6 --request-timeout='5s'
I1019 13:57:31.910962 3211 loader.go:372] Config loaded from file: /tmp/oci/config.oci.public
@@ -148,7 +148,7 @@ Next we'll grab a few lines from the excellent guide NetFoundry put out for inte
3. locate the jwt file for the Kubernetes identity. If you followed the steps above, the file will be named: `"${the_kubernetes_identity}".jwt` (make sure you replace the variable with the correct value)
4. use the jwt to add Ziti: `helm install ziti-host netfoundry/ziti-host --set-file enrollmentToken="${the_kubernetes_identity}".jwt` (again, make sure you replace the variable name). If you need to, create a persistent volume; the ziti pod requires storage to store a secret.

```bash
```
apiVersion: v1
kind: PersistentVolume
metadata:
@@ -169,7 +169,7 @@ spec:

Now consume the one-time token (the jwt file) to enroll and create a client-side identity using Ziti Desktop Edge for Windows (or macOS, or via `ziti-edge-tunnel` if you prefer). Once you can see the identity in your tunneling app, you should be able to use the private Kubernetes config file to access the exact same cluster. Remember though, we have mapped the port on the client side to use 443. That means you'll need to update your config file and change 6443 --> 443.
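One way to make that edit is a quick `sed` one-liner (a sketch, assuming the private config sits at the path used in this post; GNU sed shown, on macOS use `sed -i ''`):

```
# rewrite the API server port from 6443 to 443 in the private kubeconfig
sed -i 's/:6443/:443/' /tmp/oci/config.oci.private
```

Now when you run `get pods` you'll see the ziti-host pod deployed: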

```bash
```
export KUBECONFIG=/tmp/oci/config.oci.private
kubectl get pods
NAME READY STATUS RESTARTS AGE
@@ -185,7 +185,7 @@ If you have made it this far, you've seen us access the Kubernetes API via the p
3. Build `kubeztl` from [the GitHub repo](https://github.com/openziti-test-kitchen/kubeztl) (a quick build sketch follows this list)
4. Use `kubeztl` to get pods!
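Building `kubeztl` is typically just a clone and a `go build` (a sketch, assuming a standard Go module layout; check the repository README for the authoritative steps):

```
git clone https://github.com/openziti-test-kitchen/kubeztl.git
cd kubeztl
go build -o kubeztl .   # produces the ./kubeztl binary used below
```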

```bash
```
./kubeztl -zConfig ./id.json -service k8s.oci get pods
NAME READY STATUS RESTARTS AGE
ziti-host-976b84c66-kr4bc 1/1 Running 0 101m
@@ -195,7 +195,7 @@ If you have made it this far, you've seen us access the Kubernetes API via the p

The `kubeztl` command has also been modified to allow you to add the service name and config file directly into the file itself. This is convenient since you will not need to supply the ziti identity file, nor will you need to specify which service to use. Modifying the file is straightforward. Open the config file, find the context listed under the contexts root, and add two rows as shown here.

```bash
```
contexts
- context:
cluster: cluster-cjw4arxuolq
@@ -206,7 +206,7 @@ contexts

Once done, you can simply use the context the same way you always have: `kubeztl get pods`!

```bash
```
./kubeztl get pods
NAME READY STATUS RESTARTS AGE
ziti-host-976b84c66-kr4bc 1/1 Running 0 114m
46 changes: 23 additions & 23 deletions docusaurus/blog/zitification/prometheus/part2.md
@@ -117,13 +117,13 @@ attribute to the identity of `kubeA.services`. This will be used later when sett

#### Create the Identity

```text
```
ziti edge create identity device kubeA.ziti.id -o /tmp/prometheus/kubeA.ziti.id.jwt -a "kubeA.services"
```

You should see confirmation output such as:

```text
```
New identity kubeA.ziti.id created with id: BeyyFUZFDR
Enrollment expires at 2022-04-22T01:18:53.402Z
```
@@ -133,15 +133,15 @@ Once created, we can use helm to install the `ziti-host` pod. The jwt is a one-u
`ziti-host`. As this is probably your first time running this helm chart, you will need to install it. The command is idempotent, so
running it over and over is of no concern. Run the following:

```text
```
helm repo add netfoundry https://netfoundry.github.io/charts/
helm repo update
helm install ziti-host netfoundry/ziti-host --set-file enrollmentToken="/tmp/prometheus/kubeA.ziti.id.jwt"
```

You will see the confirmation output from helm. Now when you look at your Kubernetes cluster with `kubectl`, you will see a pod deployed:

```text
```
kubectl get pods
NAME READY STATUS RESTARTS AGE
ziti-host-db55b5c4b-rpc7f 1/1 Running 0 2m40s
@@ -159,7 +159,7 @@ second service provided is a scrape target for Prometheus. There is one metric e

#### Create and Enroll the Identity

```text
```
ziti edge create identity user kubeA.reflect.id -o /tmp/prometheus/kubeA.reflect.id.jwt
ziti edge enroll /tmp/prometheus/kubeA.reflect.id.jwt -o /tmp/prometheus/kubeA.reflect.id.json
```
@@ -171,7 +171,7 @@ able to test the service to ensure they work. To enable testing the services, we
will allow identities using tunneling apps to access the services; this is how we'll verify the services work. Make the
configs and services now.

```text
```
# create intercept configs for the two services
ziti edge create config kubeA.reflect.svc-intercept.v1 intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeA.reflect.svc.ziti"],"portRanges":[{"low":80, "high":80}]}'
@@ -190,7 +190,7 @@ need to be authorized to bind these services. Tunneling apps will need to be aut
Prometheus servers will need to be able to dial these services too. We will now create `service-policies` to authorize the tunneling
clients, Prometheus scrapes, and the `reflectz` server to bind the service.

```text
```
# create the bind service policies and authorize the reflect id to bind these services
ziti edge create service-policy "kubeA.reflect.svc.bind" Bind \
--service-roles "@kubeA.reflect.svc" --identity-roles "@kubeA.reflect.id"
@@ -211,7 +211,7 @@ that to deploy `reflectz` we need to supply an identity to the workload using `-
to 'Bind' the services the workload exposes. We also need to define which service names we want to allow that identity to bind.
We do this using the `--set serviceName` and `--set prometheusServiceName` flags.

```text
```
helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install reflectz openziti-test-kitchen/reflect \
@@ -222,7 +222,7 @@

After running helm, pod 2 should be up and running. Let's take a look using `kubectl`:

```text
```
kubectl get pods
NAME READY STATUS RESTARTS AGE
reflectz-775bd45d86-4sjwh 1/1 Running 0 7s
@@ -243,15 +243,15 @@ but we can define it now.

#### Create and Enroll the Identity

```text
```
# create and enroll the identity.
ziti edge create identity user kubeA.prometheus.id -o /tmp/prometheus/kubeA.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
ziti edge enroll /tmp/prometheus/kubeA.prometheus.id.jwt -o /tmp/prometheus/kubeA.prometheus.id.json
```

#### Create Configs and Services (including Tunneling-based Access)

```text
```
# create the config and service for the kubeA prometheus server
ziti edge create config "kubeA.prometheus.svc-intercept.v1" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeA.prometheus.svc"],"portRanges":[{"low":80, "high":80}]}'
@@ -263,7 +263,7 @@ ziti edge create service "kubeA.prometheus.svc" \

#### Authorize the Workload and Clients

```text
```
# grant the prometheus clients the ability to dial the service and the kubeA.prometheus.id the ability to bind
ziti edge create service-policy "kubeA.prometheus.svc.dial" Dial \
--service-roles "@kubeA.prometheus.svc" \
@@ -290,7 +290,7 @@ network. We're also passing one `--set-file` parameter to tell Prometheus what i
This secret will be used when we configure Prometheus to scrape the workload. Go ahead and run this command now and run
`kubectl get pods` until all the containers are running.

```text
```
helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install prometheuz openziti-test-kitchen/prometheus \
@@ -327,13 +327,13 @@ we'll create that identity, authorize it to bind the services, and authorize cli
similar to what we did for ClusterA, there's not much to explain. Set up ClusterB's `reflectz` now.

#### Create the Identity
```text
```
ziti edge create identity user kubeB.reflect.id -o /tmp/prometheus/kubeB.reflect.id.jwt
ziti edge enroll /tmp/prometheus/kubeB.reflect.id.jwt -o /tmp/prometheus/kubeB.reflect.id.json
```

#### Create Configs and Services (including Tunneling-based Access)
```text
```
# create intercept configs for the two services
ziti edge create config kubeB.reflect.svc-intercept.v1 intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeB.reflect.svc.ziti"],"portRanges":[{"low":80, "high":80}]}'
@@ -346,7 +346,7 @@ ziti edge create service "kubeB.reflect.scrape.svc" --configs "kubeB.reflect.svc
```

#### Authorize the Workload to Bind the Services
```text
```
# create the bind service policies and authorize the reflect id to bind these services
ziti edge create service-policy "kubeB.reflect.svc.bind" Bind \
--service-roles "@kubeB.reflect.svc" --identity-roles "@kubeB.reflect.id"
@@ -355,7 +355,7 @@ ziti edge create service-policy "kubeB.reflect.scrape.svc.bind" Bind \
```

#### Authorize Clients to Access the Services
```text
```
# create the dial service policies and authorize the reflect id to bind these services
ziti edge create service-policy "kubeB.reflect.svc.dial" Dial \
--service-roles "@kubeB.reflect.svc" --identity-roles "#reflectz-clients"
@@ -364,7 +364,7 @@ ziti edge create service-policy "kubeB.reflect.svc.dial.scrape" Dial \
```

#### Deploy `reflectz` {#deploy-reflectz-1}
```text
```
helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
helm install reflectz openziti-test-kitchen/reflect \
@@ -381,7 +381,7 @@ the surface with very subtle differences. We'll explore these differences as we
config** (a difference from the ClusterA install), a service, and two service-policies. Let's get to it.

#### Create the Identity
```text
```
ziti edge create identity user kubeB.prometheus.id -o /tmp/prometheus/kubeB.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
ziti edge enroll /tmp/prometheus/kubeB.prometheus.id.jwt -o /tmp/prometheus/kubeB.prometheus.id.json
```
@@ -391,7 +391,7 @@ Here's a difference from ClusterA. Since we are going to listen on the OpenZiti
we don't need to create a `host.v1` config. A `host.v1` config is necessary for services which have a 'Bind' configuration and are being
bound by a tunneling application. We're not doing that here; Prometheus will 'Bind' this service, so we don't need that `host.v1`
config.
```text
```
# create the config and service for the kubeB prometheus server
ziti edge create config "kubeB.prometheus.svc-intercept.v1" intercept.v1 \
'{"protocols":["tcp"],"addresses":["kubeB.prometheus.svc"],"portRanges":[{"low":80, "high":80}], "dialOptions": {"identity":"kubeB.prometheus.id"}}'
@@ -411,7 +411,7 @@ to be bound by the `ziti-host` identity.
Here we are flipping that script. We are allowing Prometheus to bind this service! That means we'll need to authorize the
`kubeB.prometheus.id` to be able to bind the service.

```text
```
# grant the prometheus clients the ability to dial the service and the kubeB.prometheus.id the ability to bind
ziti edge create service-policy "kubeB.prometheus.svc.dial" Dial \
--service-roles "@kubeB.prometheus.svc" \
@@ -442,7 +442,7 @@ Finally, to allow the server to scrape targets we need to supply a final identit
You'll notice that, for simplicity's sake, we are using the same identity for all three needs, which is perfectly fine. If you wanted to use a
different identity, you could. That choice is up to you. To keep it simple we just authorized this identity for all these purposes.

```text
```
# install prometheus
helm repo add openziti-test-kitchen https://openziti-test-kitchen.github.io/helm-charts/
helm repo update
@@ -470,6 +470,6 @@ All the commands above are also available in github as `.sh` scripts. If you wou
[ziti-doc repository](https://github.com/openziti/ziti-doc) and access the scripts from the path mentioned below. "Cleanup" scripts are
provided if desired.

```text
```
${checkout_root}/docusaurus/blog/zitification/prometheus/scripts
```
24 changes: 12 additions & 12 deletions docusaurus/blog/zitification/prometheus/part3.md
@@ -28,13 +28,13 @@ everything get installed and it all "seems to work". But how do we **know** it w
let's enroll it in your local tunneling app and find out. Go out and get [a tunneling client](/docs/learn/core-concepts/clients/choose) running
locally. Once you have that installed, provision an identity and enroll it with your tunneling client.

```text
```
ziti edge create identity user dev.client -a "prometheus-clients","reflectz-clients"
```
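If your tunneling client expects a JWT file to enroll, you can instead create the identity with `-o` to write the token out (a sketch; the `-o` path is an assumption that mirrors the earlier identities):

```
ziti edge create identity user dev.client -a "prometheus-clients","reflectz-clients" \
  -o /tmp/prometheus/dev.client.jwt
# then enroll /tmp/prometheus/dev.client.jwt in Ziti Desktop Edge or ziti-edge-tunnel
```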

You should have access to six total services when this identity is enrolled:

```text
```
Service Name: kubeA.prometheus.svc
Intercept: kubeA.prometheus.svc:80
Service Name: kubeA.reflect.svc
@@ -70,14 +70,14 @@ Neat, but this isn't what we want to actually monitor.
What we really want to monitor is the workload we deployed: `reflectz`. We can do this by editing the Prometheus configmap using
`kubectl`. Let's go ahead and do this now:

```text
```
kubectl edit cm prometheuz-prometheus-server
```

This will open an editor in your terminal and allow you to update the config map for the pod. Once the editor is open, find the section
labeled "scrape_config" and add the following entry:

```text
```
- job_name: 'kubeA.reflectz'
scrape_interval: 5s
honor_labels: true
@@ -109,14 +109,14 @@ location of the identity.
If you would like to tail the `configmap-reloadz` container, you can issue this one-liner. This will instruct `kubectl` to tail the logs
from `configmap-reloadz`.

```text
```
pod=$(kubectl get pods | grep server | cut -d " " -f1); echo POD: $pod; kubectl logs -f "$pod" prometheus-server-configmap-reload
```

When the trigger happens for ClusterA you will see a message like the one below. Notice that `configmap-reloadz` is using the underlay
network: `http://127.0.0.1:9090/-/reload`

```text
```
2022/04/23 20:01:23 config map updated
2022/04/23 20:01:23 performing webhook request (1/1/http://127.0.0.1:9090/-/reload)
2022/04/23 20:01:23 successfully triggered reload
@@ -161,7 +161,7 @@ Now we can use netcat to open a connection through this intercept a few times. T
reflect service. Connect, send some text, then use ctrl-c to disconnect. Do that a few times, then click 'Execute' again on the graph page.
You can see I did this over a minute and moved my total count on kubeA to 8, shown below.

```text
```
/tmp/prometheus$ nc kubeA.reflect.svc.ziti 80
kubeA reflect test
you sent me: kubeA reflect test
@@ -184,7 +184,7 @@ Hopefully you agree with me that this is pretty neat. Well what if we take it to
workload we deployed to ClusterB? Could we get that to work? Recall from above how we enabled the job named 'kubeA.reflectz'. What if we
simply copied/pasted that into the configmap, changing kubeA --> kubeB? Would it work? Let's see.
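Concretely, the copied entry would start like this (a sketch; only the names change, and the remaining fields mirror the kubeA.reflectz job shown earlier):

```
- job_name: 'kubeB.reflectz'
  scrape_interval: 5s
  honor_labels: true
  # ...the rest is copied from the kubeA.reflectz job, with every kubeA reference changed to kubeB
```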

```text
```
# edit the configmap on ClusterA:
kubectl edit cm prometheuz-prometheus-server

@@ -214,7 +214,7 @@ ClusterA, we have just scraped a workload from Kubernetes ClusterB, entirely ove
Generate some data like you did before by running a few netcat connection/disconnects and click 'Execute' again. Don't forget to send
the connection request to kubeB though!

```text
```
nc kubeB.reflect.svc.ziti 80
this is kubeb
you sent me: this is kubeb
@@ -245,7 +245,7 @@ data from our two Prometheus instances using a locally deployed `Prometheuz` via
GitHub has a sample Prometheus [file you can download](https://raw.githubusercontent.com/openziti/ziti-doc/main/docusaurus/blog/zitification/prometheus/scripts/local.prometheus.yml).
Below, I used curl to download it and put it into the expected location.

```text
```
curl -s https://raw.githubusercontent.com/openziti/ziti-doc/main/docusaurus/blog/zitification/prometheus/scripts/local.prometheus.yml > /tmp/prometheus/prometheus.config.yml

ziti edge create identity user local.prometheus.id -o /tmp/prometheus/local.prometheus.id.jwt -a "reflectz-clients","prometheus-clients"
@@ -270,7 +270,7 @@ But wait, I'm not done. That docker instance is listening on an underlay network
I want to fix that too. Let's start this docker container up listening only on the OpenZiti overlay. Just like in [part 2](./part2.md)
we will make a config, a service and two policies to enable identities on the OpenZiti overlay.

```text
```
curl -s https://raw.githubusercontent.com/openziti/ziti-doc/main/docusaurus/blog/zitification/prometheus/scripts/local.prometheus.yml > /tmp/prometheus/prometheus.config.yml

# create the config and service for the local prometheus server
@@ -295,7 +295,7 @@ you're familiar with docker these will probably all make sense. The most importa
flag. The `-p` flag is used to expose a port from inside docker to the outside. Look at the previous docker sample and you'll find we
were mapping local underlay port 9090 to port 9090 in the docker container. In this example, **we will do no such thing**! :)

```text
```
docker run \
-e ZITI_LISTENER_SERVICE_NAME=local.prometheus.svc \
-e ZITI_LISTENER_IDENTITY_FILE=/etc/prometheus/ziti.server.json \