v0.8.0: Cloud-burster
Drain the nodes before applying the deploy core script
Changelog
- Updated dependencies:
  - ArgoCD: 5.13.8
  - Multus: 4.0.0-unstable (not yet tracked by version tracker)
  - K0s: 1.25.3+k0s.0
  - MetalLB: 4.1.11 (#96)
  - Kube Prometheus Stack: 41.7.4
  - csi-driver-nfs: v4.1.0 (downgrade)
  - Sealed Secrets: 2.7.0
  - KubeVirt: v0.59.0-alpha.0
  - CoreDNS: 1.10.0
  - cfctl: v0.13.2+9001
  - Helm: v3.10.2
  - kubeseal: 0.19.1
  - etcdctl: v3.5.5
  - Local Path Provisioner: v0.0.23
  - Traefik: 20.2.0
  - cert-manager: v1.11.0-alpha.0
- Slurm: Added Cloud-Burster to slurm (3d2f092)
- Slurm: Slurm now supports metrics (3dcc4ab)
- Core: "Deploy core" script deploys CoreDNS
- Core: "Deploy core" script only waits for specific deployments instead of all
- Core: Uncoupled CoreDNS from initial K0s deployment
- Core: CoreDNS as a DaemonSet
- Core: Enable HTTP/3 on Traefik by default
- Helm apps: Added 389ds to the Helm directory
- Helm apps: Various fixes on Squid Proxy
- Helm apps: Various fixes on CVMFS Service
- Helm apps: Various fixes on OpenLDAP
- Helm apps: Supports OpenOnDemand with and without the Dex image
- Packer: New DeepSquare Yum repository path
- Packer: Initial support for Rockylinux 9, support for Rockylinux 8.6
- Documentation updates on `cfctl` and `cfctl.yaml`
Breaking changes
The new major version of Multus CNI introduces heavy changes (k8snetworkplumbingwg/multus-cni#893)
The migration is seamless, but it needs attention.
The Multus CNI 4.0 thick DaemonSet introduces a new server/client architecture: a single server handles all the network attachments, so the process is quite slow.
As soon as you apply the Multus CNI, each pod will be killed to reattach the networks. Be aware that the process is REALLY slow. You might see errors such as `Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "<id>": plugin type="multus-shim" name="multus-cni-network" failed (add): CNI request failed with status 400: <...>`. This means that the `multus-shim` is working hard.
To accelerate the process and to be safe, drain the nodes.
Rebooting a node won't change anything and may break your setup.
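For example, a conservative drain/uncordon cycle per node might look like this (the node name is a placeholder):

```bash
# Evict regular workloads from the node; DaemonSet-managed pods (like Multus)
# stay in place and are ignored on purpose.
kubectl drain my-node --ignore-daemonsets --delete-emptydir-data

# Once the evicted pods have been rescheduled and their networks reattached,
# allow scheduling on the node again.
kubectl uncordon my-node
```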
The new major version of ArgoCD introduces breaking changes (https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd#520)
The ArgoCD Helm chart now handles CRDs.

- To adopt the new CRD management style, run this script:

  ```bash
  YOUR_ARGOCD_NAMESPACE="argocd"
  YOUR_ARGOCD_RELEASENAME="argocd"

  for crd in "applications.argoproj.io" "applicationsets.argoproj.io" "argocdextensions.argoproj.io" "appprojects.argoproj.io"; do
    kubectl label --overwrite crd $crd app.kubernetes.io/managed-by=Helm
    kubectl annotate --overwrite crd $crd meta.helm.sh/release-namespace="$YOUR_ARGOCD_NAMESPACE"
    kubectl annotate --overwrite crd $crd meta.helm.sh/release-name="$YOUR_ARGOCD_RELEASENAME"
  done
  ```

- To NOT adopt the new CRD management style, add this to the values file:

  ```yaml
  crds:
    install: false
  ```

  You will then have to update the CRDs manually using:

  ```bash
  kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=<appVersion>"
  ```
Deprecated `configs.repositoryCredentials`, `server.additionalApplications` and `server.additionalProjects`.
See: https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd#500
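As a hedged sketch of the replacement for `configs.repositoryCredentials`, repository credentials move under the chart's `configs.credentialTemplates` key (the template name, URL, and credential values below are placeholders; double-check the exact structure against your chart version):

```yaml
configs:
  # Replaces the deprecated configs.repositoryCredentials.
  credentialTemplates:
    my-git-creds:
      url: https://github.com/my-org
      username: my-user
      password: my-token
```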
The new major version of MetalLB introduces breaking changes (#96)
MetalLB 0.13.0 is now available, with its chart `bitnami/metallb` 4.0.0.
`configInline` is now deprecated and CRDs are now preferred.
This major release includes the changes and features available in MetalLB from version 0.13.0, which deprecate ConfigMap-based configuration of the service in favor of CRDs. If you are upgrading from a previous version, you can follow the official documentation on how to migrate the configuration from a ConfigMap to CRDs.
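For reference, a minimal CRD-based Layer 2 configuration equivalent to a simple `configInline` address pool might look like this (the pool name, namespace, and address range are placeholders):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system   # adjust to the namespace of your MetalLB release
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```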
Migration instructions:
- Fetch the MetalLB ConfigMap and store it inside a file `config.yaml` (see the example after this list).
- Run the conversion utility using Docker:

  ```bash
  docker run -it --rm -v $(pwd):/var/input quay.io/metallb/configmaptocrs -source config.yaml
  ```
- Remove `configInline` from the Helm values inside `cfctl.yaml` and redeploy with cfctl. If there are issues with updating a Helm extension, see the documentation.
- Apply the CRDs:

  ```bash
  kubectl apply -f .
  ```
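As referenced in step 1, a hedged example of exporting the existing ConfigMap (the ConfigMap name and namespace below are assumptions and depend on how MetalLB was deployed):

```bash
# "metallb-system" and "config" are assumptions; adjust them to match
# your MetalLB release before running this.
kubectl get configmap config --namespace metallb-system --output yaml > config.yaml
```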
Full Changelog: v0.7.0...v0.8.0