Merge pull request #270 from solo-io/automation
[bot] Merge automation
djannot authored Dec 9, 2024
2 parents a281620 + a0ed47d commit 70a51a5
Showing 79 changed files with 4,820 additions and 640 deletions.
43 changes: 14 additions & 29 deletions gloo-gateway/1-17/enterprise-istio-ambient/default/README.md
@@ -15,7 +15,7 @@ source ./scripts/assert.sh

## Table of Contents
* [Introduction](#introduction)
-* [Lab 1 - Deploy a KinD cluster](#lab-1---deploy-a-kind-cluster-)
+* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-)
* [Lab 2 - Deploy Istio in Ambient mode](#lab-2---deploy-istio-in-ambient-mode-)
* [Lab 3 - Deploy Keycloak](#lab-3---deploy-keycloak-)
* [Lab 4 - Deploy Gloo Gateway](#lab-4---deploy-gloo-gateway-)
@@ -89,23 +89,21 @@ You can find more information about Gloo Gateway in the official documentation:



-## Lab 1 - Deploy a KinD cluster <a name="lab-1---deploy-a-kind-cluster-"></a>
+## Lab 1 - Deploy KinD Cluster(s) <a name="lab-1---deploy-kind-cluster(s)-"></a>


Clone this repository and go to the directory where this `README.md` file is.

-Set the context environment variable:
+Set the context environment variables:

```bash
export CLUSTER1=cluster1
```

-Run the following commands to deploy a Kubernetes cluster using [Kind](https://kind.sigs.k8s.io/):
-
+Deploy the KinD clusters:
```bash
-./scripts/deploy-multi.sh 1 cluster1
+bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh
```
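
To confirm the cluster came up before continuing, you can run a quick sanity check; this is a sketch that assumes the `kind` and `kubectl` CLIs are installed and that the deploy script renamed the context to `cluster1`:

```bash
# List the kind clusters known to Docker, then check node readiness
kind get clusters
kubectl --context ${CLUSTER1} get nodes -o wide
```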

Then run the following commands to wait for all the Pods to be ready:

@@ -114,40 +112,26 @@

**Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
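
If you would rather not retry by hand, you can wrap the check in a small wait loop. This is only a sketch; it assumes `./scripts/check.sh` takes the cluster name as its argument and exits non-zero until the pods are ready:

```bash
# Retry the readiness check every 5 seconds until it succeeds
until ./scripts/check.sh cluster1; do
  echo "Pods not ready yet, retrying in 5 seconds..."
  sleep 5
done
```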

-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following:
-
-```,nocopy
-NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
-kube-system          calico-kube-controllers-59d85c5c84-sbk4k         1/1     Running   0          4h26m
-kube-system          calico-node-przxs                                1/1     Running   0          4h26m
-kube-system          coredns-6955765f44-ln8f5                         1/1     Running   0          4h26m
-kube-system          coredns-6955765f44-s7xxx                         1/1     Running   0          4h26m
-kube-system          etcd-cluster1-control-plane                      1/1     Running   0          4h27m
-kube-system          kube-apiserver-cluster1-control-plane            1/1     Running   0          4h27m
-kube-system          kube-controller-manager-cluster1-control-plane   1/1     Running   0          4h27m
-kube-system          kube-proxy-ksvzw                                 1/1     Running   0          4h26m
-kube-system          kube-scheduler-cluster1-control-plane            1/1     Running   0          4h27m
-local-path-storage   local-path-provisioner-58f6947c7-lfmdx           1/1     Running   0          4h26m
-metallb-system       controller-5c9894b5cd-cn9x2                      1/1     Running   0          4h26m
-metallb-system       speaker-d7jkp                                    1/1     Running   0          4h26m
-```
+Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state.
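
A quick way to spot any stragglers (a sketch, reusing the `CLUSTER1` variable set earlier) is to list only the pods that are not in the `Running` phase; an empty result, or only `Succeeded` pods, means everything is up:

```bash
# Show pods in any namespace whose phase is not Running
kubectl --context ${CLUSTER1} get pods -A --field-selector=status.phase!=Running
```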
<!--bash
cat <<'EOF' > ./test.js
const helpers = require('./tests/chai-exec');
describe("Clusters are healthy", () => {
  const clusters = ["cluster1"];
  clusters.forEach(cluster => {
    it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" }));
  });
});
EOF
echo "executing test dist/gloo-gateway-workshop/build/templates/steps/deploy-kind-cluster/tests/cluster-healthy.test.js.liquid"
echo "executing test dist/gloo-gateway-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid"
timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
-->




## Lab 2 - Deploy Istio in Ambient mode <a name="lab-2---deploy-istio-in-ambient-mode-"></a>


@@ -1203,6 +1187,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||




The team in charge of the gateway can create a `Gateway` resource and configure an HTTP listener.
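
For reference, a minimal `Gateway` with an HTTP listener looks roughly like the following; the name, namespace, gateway class, and port here are illustrative assumptions rather than the workshop's exact values:

```bash
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: http                # illustrative name
  namespace: gloo-system    # assumed namespace
spec:
  gatewayClassName: gloo-gateway   # assumed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 8080
    allowedRoutes:
      namespaces:
        from: All
EOF
```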


@@ -3726,7 +3711,7 @@ controller:
  trafficRouterPlugins:
    trafficRouterPlugins: |-
      - name: "argoproj-labs/gatewayAPI"
-        location: "https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.3.0/gateway-api-plugin-linux-amd64"
+        location: "https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-$(dpkg --print-architecture)"
EOF
```
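
Note that `$(dpkg --print-architecture)` is expanded by the shell when the values are applied, so the plugin binary matching the host architecture (for example `amd64` or `arm64`) is downloaded; this assumes a Debian-based host where `dpkg` is available.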

@@ -5928,7 +5913,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
Here is the expected output:

```json,nocopy
{"message":"portal config not found"}
[{"apiProductMetadata":{"imageURL":"https://raw.githubusercontent.com/solo-io/workshops/master/images/bookinfo.jpg"},"description":"# Bookinfo REST API v1 Documentation\nThis is some extra information about the API\n","id":"bookinfo","name":"BookInfo REST API","versionsCount":2}]
```

You can see that no portal configuration has been found.
@@ -5985,7 +5970,7 @@ spec:
    spec:
      serviceAccountName: portal-frontend
      containers:
-      - image: gcr.io/solo-public/docs/portal-frontend:v0.0.35
+      - image: gcr.io/product-excellence-424719/portal-frontend:v0.0.35
        args: ["--host", "0.0.0.0"]
        imagePullPolicy: Always
        name: portal-frontend
@@ -6374,7 +6359,7 @@ spec:
      serviceAccountName: backstage
      containers:
      - name: backstage
-        image: gcr.io/solo-public/docs/portal-backstage-backend:v0.0.33
+        image: gcr.io/product-excellence-424719/portal-backstage-backend:v0.0.35
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
@@ -0,0 +1,232 @@
#!/usr/bin/env bash
set -o errexit

number="1"
name="cluster1"
region=""
zone=""
twodigits=$(printf "%02d\n" $number)

kindest_node=${KINDEST_NODE}
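# If KINDEST_NODE is not set, resolve the node image digest for the pinned
# Kubernetes version from Docker Hub below.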

if [ -z "$kindest_node" ]; then
export k8s_version="1.28.0"

[[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version}
kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \
| jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest')

if [ -z "$kindest_node_ver" ]; then
echo "Incorrect Kubernetes version provided: ${k8s_version}."
exit 1
fi
kindest_node=kindest/node:${kindest_node_ver}
fi
echo "Using KinD image: ${kindest_node}"

if [ -z "$3" ]; then
case $name in
cluster1)
region=us-west-1
;;
cluster2)
region=us-west-2
;;
*)
region=us-east-1
;;
esac
fi

if [ -z "$4" ]; then
case $name in
cluster1)
zone=us-west-1a
;;
cluster2)
zone=us-west-2a
;;
*)
zone=us-east-1a
;;
esac
fi

# Detect the host IP: hostname -I exists on Linux; fall back to macOS ipconfig
if hostname -I >/dev/null 2>&1; then
  myip=$(hostname -I | awk '{ print $1 }')
else
  myip=$(ipconfig getifaddr en0)
fi

# Function to determine the next available cluster number
get_next_cluster_number() {
  if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then
    echo 1
  else
    highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-)
    echo $((highest_num + 1))
  fi
}

# When running inside a container, use the host IP provided via HOST_IP, attach
# this container to the "kind" Docker network, and pick the next free cluster number
if [ -f /.dockerenv ]; then
  myip=$HOST_IP
  container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2)
  docker network connect "kind" $container || true
  number=$(get_next_cluster_number)
  twodigits=$(printf "%02d\n" $number)
fi

# Start (or reuse) a local registry that the kind nodes can pull from
reg_name='kind-registry'
reg_port='5000'
docker start "${reg_name}" 2>/dev/null || \
  docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2

# One pull-through cache per upstream registry; each is configured in proxy mode below
cache_port='5000'
cat > registries <<EOF
docker https://registry-1.docker.io
us-docker https://us-docker.pkg.dev
us-central1-docker https://us-central1-docker.pkg.dev
quay https://quay.io
gcr https://gcr.io
EOF

cat registries | while read cache_name cache_url; do
  cat > ${HOME}/.${cache_name}-config.yml <<EOF
version: 0.1
proxy:
  remoteurl: ${cache_url}
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
EOF

  docker start "${cache_name}" 2>/dev/null || \
    docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2
done

echo Contents of kind${number}.yaml
cat << EOF | tee kind${number}.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: ${kindest_node}
  extraPortMappings:
  - containerPort: 6443
    hostPort: 70${twodigits}
  labels:
    ingress-ready: true
    topology.kubernetes.io/region: ${region}
    topology.kubernetes.io/zone: ${zone}
- role: worker
  image: ${kindest_node}
  labels:
    ingress-ready: true
    topology.kubernetes.io/region: ${region}
    topology.kubernetes.io/zone: ${zone}
networking:
  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
  podSubnet: "10.1${twodigits}.0.0/16"
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:${reg_port}"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["http://docker:${cache_port}"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"]
    endpoint = ["http://us-docker:${cache_port}"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"]
    endpoint = ["http://us-central1-docker:${cache_port}"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
    endpoint = ["http://quay:${cache_port}"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
    endpoint = ["http://gcr:${cache_port}"]
EOF
echo -----------------------------------------------------

kind create cluster --name kind${number} --config kind${number}.yaml
ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress')
networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }')
# Point the kubeconfig at the host-mapped API server port so the cluster is reachable from outside the kind Docker network
kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true

# Preload images
cat << EOF >> images.txt
quay.io/metallb/controller:v0.13.12
quay.io/metallb/speaker:v0.13.12
EOF
cat images.txt | while read image; do
  docker pull $image || true
  kind load docker-image $image --name kind${number} || true
done

docker network connect "kind" "${reg_name}" || true
docker network connect "kind" docker || true
docker network connect "kind" us-docker || true
docker network connect "kind" us-central1-docker || true
docker network connect "kind" quay || true
docker network connect "kind" gcr || true

# Install MetalLB (retry, as the apply can race with CRD registration) and wait for the controller
for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true

# Give MetalLB a slice of the kind Docker network so LoadBalancer services get addresses routable from the host
cat << EOF | tee metallb${number}.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF

printf "Create IPAddressPool in kind-kind${number}\n"
for i in {1..10}; do
  kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
  sleep 2
done

# Rename the context to the friendly cluster name
printf "Renaming context kind-kind${number} to ${name}\n"
for i in {1..100}; do
  (kubectl config get-contexts -oname | grep ${name}) && break
  kubectl config rename-context kind-kind${number} ${name} && break
  printf " $i"/100
  sleep 2
  [ $i -lt 100 ] || exit 1
done

# Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl --context=${name} apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
@@ -90,4 +90,4 @@ done

# If the loop exits, it means the check failed consistently for 1 minute
echo "DNS rewrite rule verification failed."
-exit 1
\ No newline at end of file
+exit 1
@@ -14,7 +14,9 @@ hosts_file="/etc/hosts"
# Function to check if the input is a valid IP address
is_ip() {
  if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
-    return 0 # 0 = true
+    return 0 # 0 = true - valid IPv4 address
+  elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then
+    return 0 # 0 = true - valid IPv6 address
  else
    return 1 # 1 = false
  fi
@@ -38,14 +40,15 @@ else
fi

# Check if the entry already exists
-if grep -q "$hostname" "$hosts_file"; then
+if grep -q "$hostname\$" "$hosts_file"; then
  # Update the existing entry with the new IP
  tempfile=$(mktemp)
-  sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile"
+  sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile"
  sudo cp "$tempfile" "$hosts_file"
  rm "$tempfile"
  echo "Updated $hostname in $hosts_file with new IP: $new_ip"
else
  # Add a new entry if it doesn't exist
  echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null
  echo "Added $hostname to $hosts_file with IP: $new_ip"
fi
fi
