-
Define the environment variables to be used by the resource definitions.
NOTE: In the following sections we'll be generating and setting some environment variables. If your terminal session restarts, you may need to reset these variables. You can do that with the following command:
```bash
source envLabVars.env
```
Start in the root of the repository folder:
```bash
cd cc-aks-ebpf
```
```bash
export RESOURCE_GROUP=rg-cc-aks-ebpf
export CLUSTERNAME=myname-cc-aks-ebpf
export LOCATION=canadacentral
export K8S_VERSION=1.30.6
export POD_CIDR='10.244.0.0/16'

# Persist for later sessions in case of disconnection.
echo export RESOURCE_GROUP=$RESOURCE_GROUP >> envLabVars.env
echo export CLUSTERNAME=$CLUSTERNAME >> envLabVars.env
echo export LOCATION=$LOCATION >> envLabVars.env
echo export K8S_VERSION=$K8S_VERSION >> envLabVars.env
echo export POD_CIDR=$POD_CIDR >> envLabVars.env
```
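(Optional) To double-check that the variables are set in this session and were persisted to the file, a quick sanity check like the sketch below works; the variable names simply match the exports above.
```bash
# Print the current value of each variable in this shell session.
for v in RESOURCE_GROUP CLUSTERNAME LOCATION K8S_VERSION POD_CIDR; do
  echo "$v=${!v}"
done

# Confirm they were also appended to the persistence file.
cat envLabVars.env
```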
-
If it doesn't already exist, create the resource group in the desired region.
```bash
az group create \
  --name $RESOURCE_GROUP \
  --location $LOCATION
```
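Optionally, you can confirm the resource group was provisioned before moving on; this is a standard `az group show` call with a JMESPath query.
```bash
# Should print "Succeeded" once the resource group exists.
az group show --name $RESOURCE_GROUP --query properties.provisioningState -o tsv
```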
-
Create the AKS cluster without a network plugin. We will install Calico OSS CNI afterwards.
```bash
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTERNAME \
  --kubernetes-version $K8S_VERSION \
  --nodepool-name 'nodepool1' \
  --location $LOCATION \
  --node-count 3 \
  --network-plugin none \
  --pod-cidr $POD_CIDR \
  --node-osdisk-size 50 \
  --node-vm-size Standard_B2ms \
  --max-pods 70 \
  --generate-ssh-keys \
  --enable-managed-identity \
  --output table
```
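If you would rather block until the cluster finishes provisioning instead of polling manually in the next step, `az aks wait` can do that; the timeout below is just an example value.
```bash
# Wait (up to 30 minutes) for the cluster create operation to complete.
az aks wait \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTERNAME \
  --created \
  --timeout 1800
```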
-
Verify your cluster status. The `ProvisioningState` should be `Succeeded`.
```bash
az aks list -o table | grep $CLUSTERNAME
```
Or
```bash
watch az aks list -o table
```
You may get an output like the following:
```
Name             Location       ResourceGroup    KubernetesVersion    CurrentKubernetesVersion    ProvisioningState    Fqdn
---------------  -------------  ---------------  -------------------  --------------------------  -------------------  ---------------------------------------------------------
aks-cc-aks-ebpf  canadacentral  rg-cc-aks-ebpf   1.30                 1.30.6                      Succeeded            aks-rg-cc-aks-03cfb8-ub5gqil0.hcp.canadacentral.azmk8s.io
```
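If you prefer to extract just the provisioning state rather than grepping the table, a JMESPath query against `az aks show` works too.
```bash
# Prints only the ProvisioningState, e.g. "Succeeded".
az aks show \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTERNAME \
  --query provisioningState -o tsv
```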
-
Get the credentials to connect to the cluster.
```bash
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTERNAME
```
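As a quick sanity check that kubectl now points at the right cluster, you can inspect the kubeconfig context; by default the context name matches the cluster name.
```bash
# The current context should match the AKS cluster you just created.
kubectl config current-context

# Optional: list all contexts known to this kubeconfig.
kubectl config get-contexts
```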
-
Verify you have API access to your new AKS cluster
```bash
kubectl get nodes
```
The output will be something similar to this:
```
NAME                                STATUS     ROLES   AGE    VERSION
aks-nodepool1-24664026-vmss000000   NotReady   agent   7m1s   v1.30.6
aks-nodepool1-24664026-vmss000001   NotReady   agent   7m4s   v1.30.6
```
The nodes are showing `NotReady` because we haven't actually installed Calico as a CNI yet, which is the next step.
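If you are curious, the node conditions spell this out. The sketch below prints each node's `Ready` condition message, which should mention that the container runtime network (the CNI) is not ready; the exact wording may vary slightly between Kubernetes versions.
```bash
# Print each node name and the message of its Ready condition.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'
```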
By default, when installing Calico as the CNI on an AKS cluster with the BYOCNI option, the dataplane will be iptables or nftables (depending on the OS) unless you configure it to use eBPF. For the sake of this workshop, we want to compare the two dataplanes, so we start with iptables and then convert the cluster to eBPF later.
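Once Calico is installed (below), you can confirm which Linux dataplane the operator has configured by reading the `Installation` resource; this assumes the operator-managed object keeps its default name, `default`. An empty or `Iptables` value means the standard dataplane, while `BPF` means eBPF.
```bash
# Shows the configured Linux dataplane (empty/Iptables = standard, BPF = eBPF).
kubectl get installation default -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'
```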
-
Configure Calico Helm repo
```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts
```
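It is usually worth refreshing the local chart index and confirming that the chart version used below is available; `helm repo update` and `helm search repo` are standard Helm commands.
```bash
# Refresh the local chart index.
helm repo update

# List available versions of the tigera-operator chart (newest first).
helm search repo projectcalico/tigera-operator --versions | head -n 5
```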
-
Configure Helm values
```bash
cat > values.yaml <<EOF
installation:
  kubernetesProvider: AKS
  cni:
    type: Calico
    ipam:
      type: Calico
  calicoNetwork:
    hostPorts: Disabled
    bgp: Disabled
    ipPools:
      - cidr: $POD_CIDR
        encapsulation: VXLAN
EOF
```
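Because the heredoc expands `$POD_CIDR` as the file is written, it is worth confirming that the generated file contains the real CIDR rather than an empty value.
```bash
# The cidr line should show 10.244.0.0/16 (or whatever POD_CIDR you set).
cat values.yaml
```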
-
Install Calico OSS using the Helm values file.
```bash
kubectl create namespace tigera-operator
helm install calico projectcalico/tigera-operator --version v3.29.1 -f values.yaml --namespace tigera-operator
```
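The Tigera operator then deploys Calico itself. A sketch for watching the rollout is below; it assumes the namespaces and object names the operator normally creates (`tigera-operator`, `calico-system`, and the `calico-node` DaemonSet).
```bash
# The operator pod itself.
kubectl get pods -n tigera-operator

# Wait for the Calico node agent DaemonSet to finish rolling out.
kubectl rollout status daemonset/calico-node -n calico-system --timeout=5m

# All Calico components managed by the operator.
kubectl get pods -n calico-system
```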
-
The nodes should now be networked properly with Calico as the CNI, and `kubectl get nodes` should show them as `Ready`:
```
NAME                                STATUS   ROLES   AGE    VERSION
aks-nodepool1-24664026-vmss000000   Ready    agent   9m3s   v1.30.6
aks-nodepool1-24664026-vmss000001   Ready    agent   9m3s   v1.30.6
```
-
At this point, the Calico components should also be functional and show as `Available`.
```bash
kubectl get tigerastatus
```
The output should look something like this:
```
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      11m
calico      True        False         False      11m
```
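If you would rather block until everything reports as available instead of re-running the command, `kubectl wait` can be pointed at the `tigerastatus` resources, since they expose an `Available` condition; the timeout is just an example.
```bash
# Wait until all Calico components report Available.
kubectl wait --for=condition=Available tigerastatus --all --timeout=10m
```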
Now we are ready to connect the cluster to Calico Cloud.