Releases: admiraltyio/admiralty

v0.16.0

04 Nov 06:25

This release adds support for Kubernetes 1.27 and 1.28, and drops support for 1.23 and older.
Among the various new features and bug fixes, we'd like to call out several improvements around cross-cluster garbage collection.
We also welcome 4 new contributors.

New Features

  • 3081591 add support for k8s 1.27 and 1.28
  • 0730b87 add support for setting the webhook reinvocation policy in the Helm chart (see the first sketch after this list), thanks @kirillmakhonin-brt
  • 408855e label virtual nodes with node.kubernetes.io/exclude-from-external-load-balancers in addition to the deprecated alpha.service-controller.kubernetes.io/exclude-balancer (see the second sketch after this list), thanks @bcarlock-emerge
  • fa81d34 support a different default priority in the target cluster
  • 01688ea delete the proxy pod when its pod chaperon is deleted
  • 7116f41 recreate the delegate pod if the pod chaperon is not deleted after a minute (e.g., if the cluster connection was lost)
  • 7116f41 add webhook readiness, for high availability
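
For reference, reinvocationPolicy is a standard field on Kubernetes mutating webhooks. A chart option exposing it would render something like the following sketch (the object, webhook, and service names are illustrative, not necessarily the chart's actual output):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: admiralty # illustrative name
webhooks:
  - name: mutate-pods.admiralty.io # illustrative name
    reinvocationPolicy: IfNeeded # rerun this webhook if later mutating webhooks modify the pod
    sideEffects: None
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: admiralty-controller-manager # illustrative name
        namespace: admiralty
        path: /mutate
```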
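
And a sketch of the labels now set on virtual nodes, so that the service controller excludes them from external load balancer backends (the node name and label values are assumptions; the release notes only name the label keys):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: admiralty-my-target # illustrative virtual node name
  labels:
    node.kubernetes.io/exclude-from-external-load-balancers: "true" # current label
    alpha.service-controller.kubernetes.io/exclude-balancer: "true" # deprecated label, kept for compatibility
```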

Bugfixes

  • a2e557e fix cross-cluster garbage collection after parent deletion
  • ccc3899 fix use-constraints-from-spec-for-proxy-pod-scheduling when the webhook is reinvoked

Breaking Changes

  • 3081591 drop support for k8s 1.23 and older

Internals

  • distribute container images and the Helm chart (as an OCI artifact) on the ECR public registry
  • fix flaky e2e tests
  • dump cluster state on e2e test failure, per Kubernetes version
  • bump dependencies with Dependabot, thanks @Rajpratik71
  • speed up GitHub Actions by not installing Docker, since it's already installed
  • migrate away from deprecated functions in the wait package, thanks @Parthiba-Hazra

v0.16.0-alpha.0

20 Oct 04:32

Release notes to be compiled for v0.16.0.

v0.15.1

17 Mar 08:20

Bugfixes

  • 2581da6 fix service propagation: handle the new clusterIPs dual-stack field (see the sketch below)
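
For context, dual-stack Services (Kubernetes 1.20+) carry a list-valued spec.clusterIPs field alongside the singular spec.clusterIP. A minimal sketch of a Service exercising the field (name and addresses illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ipFamilyPolicy: PreferDualStack
  clusterIP: 10.96.12.34 # legacy singular field
  clusterIPs: # new list-valued field that propagation now handles
    - 10.96.12.34
    - fd00:10:96::1234
  ports:
    - port: 80
```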

v0.15.0

02 Mar 14:10

This release mainly adds support for Kubernetes 1.22+ (and OpenShift 4.9+), while dropping support for 1.20 and older.

Bugfixes

  • e30ba9f fix recreating the delegate pod when it is deleted
  • e23bf9b fix retry without a candidate scheduler
  • e97a695 fix the "more than one candidate" error with self targets in multiple namespaces and, more generally, for targets using identities authorized (sometimes by mistake) in multiple namespaces
  • d7d5aca fix finalizer length limit overflow for long namespace/target names

Breaking Changes

  • bbbf347 drop support for Kubernetes 1.20 and older

v0.15.0-alpha.0

18 Feb 06:50
Pre-release

This release mainly adds support for newer Kubernetes versions, while dropping support for older versions.
It supports Kubernetes 1.21 through 1.23. Previous releases supported Kubernetes 1.17 through 1.21.

New Features

  • bbbf347 add support for Kubernetes 1.22 and 1.23 (and likely future versions, until something breaks)

Bugfixes

  • d7d5aca fix finalizer length limit overflow for long namespace/target names

Breaking Changes

  • bbbf347 drop support for Kubernetes 1.20 and older

v0.14.1

21 Sep 05:23

Bugfixes

  • 88f12af start the VK server asynchronously and time out if the CSR is not signed after 30s, instead of blocking before controllers could start; this fixes Admiralty on EKS 1.19+, though with remote logs/exec disabled until we upgrade dependencies to use certificates.k8s.io/v1, cf. #120
  • 9af2bab add a resource quota in the release namespace for the system-cluster-critical priority class: v0.14.0 added priorityClassName: system-cluster-critical to Admiralty pods to control evictions, but GKE and possibly other distributions limit its consumption outside the kube-system namespace by default; a ResourceQuota fixes that (#124, see the sketch after this list)
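
A minimal sketch of such a quota, assuming the chart is installed in an admiralty namespace (the object name and pod limit are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: admiralty-critical-pods # illustrative name
  namespace: admiralty # the release namespace
spec:
  hard:
    pods: "10" # illustrative limit; it only needs to cover Admiralty's own pods
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["system-cluster-critical"] # quota applies only to pods at this priority
```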

v0.14.0

15 Sep 10:18

New Features

  • 8221b3a and 3f84b8d add leader election, enabled by default with 2 replicas per component (see the sketch below)
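
A hypothetical Helm values sketch for tuning this (the key names are assumptions, not the chart's documented schema; check the chart's values.yaml for the actual options):

```yaml
# hypothetical values; actual key names may differ
controllerManager:
  replicas: 2 # two replicas per component by default
  leaderElection:
    enabled: true # only one replica is active at a time; the other is a hot standby
```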

Bugfixes

  • 28ba9d2 refactor cross-cluster controllers from fan-out to 1-on-1: if a target is unavailable at startup, it
    no longer breaks other targets while the corresponding controller waits for the target cache to sync (fixes #106)
  • 28c126f and e99ecee allow excluding labels from aggregation on virtual nodes (see the sketch after this list),
    especially useful on AKS to exclude ^kubernetes\.azure\.com/cluster=, so the kube-proxy and azure-ip-masq-agent
    DaemonSets don't create pods for Admiralty virtual nodes (the manifests of those DaemonSets are reconciled by
    the add-on manager, so adding a node anti-affinity wasn't an option) (fixes #114)
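
A hypothetical configuration sketch for the label exclusion (the key name is an assumption for illustration; consult the chart's values.yaml or the controller flags for the actual option):

```yaml
# hypothetical values; the actual key name may differ
virtualNodeLabels:
  excludeRegexes:
    - ^kubernetes\.azure\.com/cluster= # don't aggregate this AKS label onto virtual nodes
```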

v0.13.2

15 Dec 21:57

v0.13.1

19 Nov 04:15

Bugfixes

  • 0d39a1d Fix the amd64 image: UPX was the culprit, but we didn't notice. Disable UPX for non-amd64 images until we e2e-test all architectures in general, and make UPX work with arm64 in particular.

v0.13.0

18 Nov 00:46

New Features

  • a1c88bc Alternative scheduling algorithm, enabled with the multicluster.admiralty.io/no-reservation pod annotation, to work with third-party schedulers in target clusters, e.g., AWS Fargate, instead of the candidate scheduler.
  • 7a231d3 Support cluster-level, i.e., virtual-node-level, scheduling constraints, either in addition to target-cluster-node-level scheduling constraints (with the multicluster.admiralty.io/proxy-pod-scheduling-constraints pod annotation) or instead of them (with the multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling pod annotation); see the sketch after this list. To inform this new type of scheduling, target cluster node labels are aggregated on virtual nodes: labels with unique values across all nodes of a target cluster, though not necessarily present on all nodes of that cluster, are added to the corresponding virtual node.
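
A minimal pod sketch using these annotations (the annotation values are assumptions; the release notes only name the keys):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    # let a third-party scheduler (e.g., AWS Fargate) place the delegate pod,
    # skipping Admiralty's candidate reservation
    multicluster.admiralty.io/no-reservation: "true" # value assumed
    # reuse the pod spec's own constraints to schedule the proxy pod across virtual nodes
    multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling: "true" # value assumed
spec:
  containers:
    - name: app
      image: nginx
```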

Bugfixes

  • a04da55 Fix multi-cluster service deletion.