
Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.5 #733

Open
wants to merge 1 commit into base: main

Conversation

@renovate renovate bot commented Apr 17, 2024

This PR contains the following updates:

Package: kubernetes-sigs/cluster-api-provider-openstack
Update: minor
Change: 0.9.0 -> 0.10.5

Release Notes

kubernetes-sigs/cluster-api-provider-openstack

v0.10.5

Compare Source

Changes since v0.10.4

🐛 Bug Fixes

Thanks to all our contributors! 😊

v0.10.4

Compare Source

Changes since v0.10.3

🐛 Bug Fixes
  • Fix down-conversion of IdentityRef (#2138)
🌱 Others
  • Add EmilienM as a maintainer (#2117)

Thanks to all our contributors! 😊

v0.10.3

Compare Source

Changes since v0.10.2

🐛 Bug Fixes
  • Handle errors returned by GetInstanceStatusByName in machine controller (#2087)
  • allNodesSecurityGroupRules: relax remote fields (#2080)
  • Fix loadbalancer timeout panic (#2076)
  • Fix empty version output in release builds (#2059)
  • Fix panic executing manager without valid kube context (#2061)
  • Fix nil pointer issue while creating port (#2065)
🌱 Others
  • Drop dulek from reviewers (#2083)
  • Set FallbackToLogsOnError on CAPO manager (#2072)
  • Refactoring: never assign unacceptable TLS versions (#2062)

Thanks to all our contributors! 😊

v0.10.2

Compare Source

What's Changed

Full Changelog: kubernetes-sigs/cluster-api-provider-openstack@v0.10.1...v0.10.2

v0.10.1

Compare Source

What's Changed

Full Changelog: kubernetes-sigs/cluster-api-provider-openstack@v0.10.0...v0.10.1

v0.10.0

Compare Source

Breaking API Changes

v0.10.0 is a major update which brings breaking changes to the API.

v1alpha5 is no longer served

If you are still using v1alpha5, it will not work in v0.10.0. However, for this release only, the v1alpha5 objects are still defined in the CRDs and the code is still present, so as a temporary workaround it is possible to manually edit the CRDs and set versions.served to true for the v1alpha5 objects. This is untested, and we have low confidence that it will work without problems. Some manual effort may be required to check and fix automatically converted objects.
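The workaround above amounts to flipping one field per CRD. A sketch of what the edited CRD would look like (illustrative only; the field layout follows the standard CustomResourceDefinition schema, and the exact position of the v1alpha5 entry in your CRD's versions list may differ):

```yaml
# Excerpt of an edited CAPO CRD re-enabling serving of v1alpha5.
# Untested workaround, as noted above; edit each affected CRD the same way.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: openstackclusters.infrastructure.cluster.x-k8s.io
spec:
  versions:
    - name: v1alpha5
      served: true    # flipped from false so v1alpha5 objects remain accessible
      storage: false
```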

v1alpha6 and v1alpha7 are deprecated

v1alpha6 and v1alpha7 objects will be automatically converted to v1beta1 during use. This is well tested. We don't anticipate problems with these conversions.

We will stop serving and testing v1alpha6 in the next release.

v1alpha7 is not marked deprecated in v0.10.0 to allow a switch-over period without deprecation warnings, but it will be marked deprecated in the next release. We will stop serving and testing it in a release after that.

You should update to use v1beta1 natively as soon as possible.

v1beta1 is released

v1beta1 marks a major update to the CAPO API. The specific changes from v1alpha7 are documented here: https://cluster-api-openstack.sigs.k8s.io/topics/crd-changes/v1alpha7-to-v1beta1

More than this, though, it marks an intention by the maintainers to stop making breaking changes. The API will continue to evolve, but we will make every effort to do this without introducing more backwards-incompatible changes.

Removal of hardcoded Calico CNI security group rules

This is documented more completely in the API upgrade documentation.

Prior to v1beta1, when using managed security groups we would automatically add certain rules which were specific to Calico CNI. It was not possible to add rules for any other CNI. A common way to work around this was to set allowAllInClusterTraffic: true.

With v1beta1 there are no longer any implicit rules for any CNI. However, it is now possible to specify custom rules in the cluster spec which will be automatically added to managed security groups. Users of Calico CNI must now add these rules explicitly. Users of other CNIs now have the option of using managed security groups.
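A sketch of what explicit Calico rules in the v1beta1 cluster spec might look like. The allNodesSecurityGroupRules field name appears in the changelog above, but the individual rule fields shown here are assumptions; consult the linked v1alpha7-to-v1beta1 migration guide for the exact schema:

```yaml
# Illustrative only: custom rules added to managed security groups under v1beta1.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: example
spec:
  managedSecurityGroups:
    allNodesSecurityGroupRules:
      - name: calico-bgp
        description: Allow BGP between cluster nodes (Calico)
        direction: ingress
        etherType: IPv4
        protocol: tcp
        portRangeMin: 179
        portRangeMax: 179
        remoteManagedGroups: [controlplane, worker]
      - name: calico-ipip
        description: Allow IP-in-IP between cluster nodes (Calico)
        direction: ingress
        etherType: IPv4
        protocol: "4"
        remoteManagedGroups: [controlplane, worker]
```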

Calico CNI rules will be added automatically when upgrading to v1beta1 from a previous API version.

The Calico CNI rules have been added to the release templates, so for now creating a cluster with clusterctl will continue to have Calico rules when using the default templates.

Management cluster changes
Removal of MutatingWebhookConfiguration

CAPO no longer uses a mutating webhook, and its configuration is removed. If you upgrade your management cluster with clusterctl, this will be handled correctly. If you do it manually, you must ensure you remove the MutatingWebhookConfiguration capo-mutating-webhook-configuration. If you do not, you may see errors like the one in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1927.
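For manual upgrades, the cleanup is a single command, assuming kubectl access to the management cluster (the resource name is the one stated above):

```shell
# Remove the obsolete webhook configuration left behind by pre-v0.10.0 installs.
# Only needed for manual upgrades; clusterctl handles this automatically.
kubectl delete mutatingwebhookconfiguration capo-mutating-webhook-configuration
```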

Minimum management cluster version is now 1.25

v0.10.0 now uses https://kubernetes.io/docs/reference/using-api/cel/ for some API validations, which only became available without a feature gate in 1.25. Consequently we now require the management cluster to be at least k8s 1.25.
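As an illustration of the kind of CEL rule involved, here is a generic CustomResourceDefinition schema excerpt (not copied from the CAPO CRDs); x-kubernetes-validations became available without a feature gate in Kubernetes 1.25:

```yaml
# Generic example of a CEL validation rule in a CRD schema.
properties:
  spec:
    type: object
    properties:
      replicas:
        type: integer
    x-kubernetes-validations:
      - rule: "self.replicas >= 1"
        message: "replicas must be at least 1"
```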

Highlighted new features
API Reference documentation

We now automatically publish API reference documentation! The documentation for v1beta1 can be found here: https://cluster-api-openstack.sigs.k8s.io/api/v1beta1/api

Floating IP IPAM Provider

It is now possible to allocate floating IPs for individual machines using the new Floating IP IPAM Provider documented here: https://cluster-api-openstack.sigs.k8s.io/api/v1alpha1/api#infrastructure.cluster.x-k8s.io/v1alpha1.OpenStackFloatingIPPool

Attach them to a machine via the new floatingIPPoolRef in OpenStackMachineSpec: https://cluster-api-openstack.sigs.k8s.io/api/v1beta1/api#infrastructure.cluster.x-k8s.io/v1beta1.OpenStackMachineSpec
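Putting the two pieces together, a sketch of a pool plus a machine that references it. The floatingIPPoolRef field and the OpenStackFloatingIPPool kind are named above, but the pool's spec fields are assumptions; check the linked v1alpha1 API reference for the exact schema:

```yaml
# Illustrative only: a floating IP pool and a machine drawing from it.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: OpenStackFloatingIPPool
metadata:
  name: public-pool
spec:
  floatingIPNetwork:      # network to allocate floating IPs from (assumed field name)
    filter:
      name: public
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachine
metadata:
  name: example-machine
spec:
  floatingIPPoolRef:      # reference the pool by apiGroup/kind/name
    apiGroup: infrastructure.cluster.x-k8s.io
    kind: OpenStackFloatingIPPool
    name: public-pool
```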

What's Changed
New Features
Bug fixes
Documentation
Administrative
API changes
Changes to build, test, and CI, minor changes, and code tidy ups
New Contributors

Full Changelog: kubernetes-sigs/cluster-api-provider-openstack@v0.9.0...v0.10.0

v0.9.2

Compare Source

Changes since v0.9.1
🐛 Bug Fixes
🌱 Others
  • Add EmilienM as a maintainer (#2116)

Thanks to all our contributors! 😊

v0.9.1

Compare Source

What's Changed

Full Changelog: kubernetes-sigs/cluster-api-provider-openstack@v0.9.0...v0.9.1


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


scs-zuul bot commented Apr 17, 2024

Build succeeded (e2e-quick-test pipeline).
https://zuul.scs.community/t/SCS/buildset/43fb572f5eba49719b44c7f0e390728c

✔️ k8s-cluster-api-provider-e2e-quick SUCCESS in 37m 47s

Warning:

SCS Compliance results

Testing SCS Compatible KaaS version v2
*******************************************************
Testing standard Kubernetes version policy ...
Reference: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Standards/scs-0210-v2-k8s-version-policy.md ...
INFO: Checking cluster specified by default context in /home/ubuntu/src/github.com/SovereignCloudStack/k8s-cluster-api-provider/terraform/pr733-36de57.yaml.gx-scs-zuul.
INFO: The K8s cluster version 1.28.8 of cluster 'pr733-36de57-admin@pr733-36de57' is still in the recency time window.

... returned 0 errors, 0 aborts


Testing standard Kubernetes node distribution and availability ...
Reference: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Standards/scs-0214-v1-k8s-node-distribution.md ...
WARNING: There seems to be no distribution across multiple regions or labels aren't set correctly across nodes.
WARNING: There seems to be no distribution across multiple zones or labels aren't set correctly across nodes.
INFO: The nodes are distributed across 3 host-ids.
WARNING: There seems to be no distribution across multiple regions or labels aren't set correctly across nodes.
WARNING: There seems to be no distribution across multiple zones or labels aren't set correctly across nodes.
INFO: The nodes are distributed across 3 host-ids.
The config file under ./config.yaml couldn't be found, falling back to the default config.
... returned 0 errors, 0 aborts


Testing standard CNCF Kubernetes conformance ...
Reference: https://github.com/cncf/k8s-conformance/tree/master ...
WARNING: No check tool specified for CNCF Kubernetes conformance


Verdict for subject KaaS_V1, SCS Compatible KaaS, version v2: PASSED
Testing SCS Compatible KaaS version v1


Testing standard Kubernetes version policy ...
Reference: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Standards/scs-0210-v2-k8s-version-policy.md ...
... returned 0 errors, 0 aborts


Testing standard Kubernetes node distribution and availability ...
Reference: https://raw.githubusercontent.com/SovereignCloudStack/standards/main/Standards/scs-0214-v1-k8s-node-distribution.md ...
... returned 0 errors, 0 aborts


Verdict for subject KaaS_V1, SCS Compatible KaaS, version v1: PASSED

Sonobuoy results

=== Collecting results ===
time="2024-04-17T16:46:13Z" level=info msg="delete request issued" dry-run=false kind=namespace namespace=sonobuoy
time="2024-04-17T16:46:13Z" level=info msg="delete request issued" dry-run=false kind=clusterrolebindings names="[sonobuoy-serviceaccount-sonobuoy]"
time="2024-04-17T16:46:13Z" level=info msg="delete request issued" dry-run=false kind=clusterroles names="[sonobuoy-serviceaccount-sonobuoy]"

Plugin: e2e
Status: passed
Total: 7393
Passed: 5
Failed: 0
Skipped: 7388

Plugin: systemd-logs
Status: passed
Total: 5
Passed: 5
Failed: 0
Skipped: 0

Run Details:
API Server version: v1.28.8
Node health: 6/6 (100%)
Pods health: 51/51 (100%)
Errors detected in files:
Errors:
27946 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-rlb6l/logs/kube-apiserver.txt
235 podlogs/kube-system/etcd-pr733-36de57-dg5cb-rlb6l/logs/etcd.txt
58 podlogs/kube-system/etcd-pr733-36de57-dg5cb-7lbn6/logs/etcd.txt
40 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-7lbn6/logs/kube-apiserver.txt
32 podlogs/kube-system/snapshot-controller-7c5dccb849-6rrzk/logs/snapshot-controller.txt
28 podlogs/kube-system/etcd-pr733-36de57-dg5cb-pzzg7/logs/etcd.txt
23 podlogs/kube-system/kube-scheduler-pr733-36de57-dg5cb-pzzg7/logs/kube-scheduler.txt
19 podlogs/kube-system/snapshot-controller-7c5dccb849-w4rfh/logs/snapshot-controller.txt
19 podlogs/kube-system/kube-scheduler-pr733-36de57-dg5cb-7lbn6/logs/kube-scheduler.txt
16 podlogs/kube-system/kube-controller-manager-pr733-36de57-dg5cb-pzzg7/logs/kube-controller-manager.txt
13 podlogs/kube-system/kube-controller-manager-pr733-36de57-dg5cb-7lbn6/logs/kube-controller-manager.txt
12 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-pzzg7/logs/kube-apiserver.txt
6 podlogs/sonobuoy/sonobuoy-systemd-logs-daemon-set-fb60c49cb10d43fd-65ct9/logs/sonobuoy-worker.txt
6 podlogs/kube-system/cilium-lfc95/logs/cilium-agent.txt
6 podlogs/kube-system/cilium-hqfmj/logs/cilium-agent.txt
6 podlogs/kube-system/cilium-vb764/logs/cilium-agent.txt
5 podlogs/kube-system/cilium-plt4b/logs/cilium-agent.txt
5 podlogs/kube-system/cilium-prsl4/logs/cilium-agent.txt
5 podlogs/kube-system/cilium-qv6qm/logs/cilium-agent.txt
4 podlogs/kube-system/openstack-cloud-controller-manager-nrnb7/logs/openstack-cloud-controller-manager.txt
2 podlogs/sonobuoy/sonobuoy/logs/kube-sonobuoy.txt
1 podlogs/kube-system/metrics-server-56cfc8b678-zcs8d/logs/metrics-server.txt
1 podlogs/kube-system/kube-proxy-bl5vf/logs/kube-proxy.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-45zfz/logs/node-driver-registrar.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-nzmv4/logs/node-driver-registrar.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-j6z6z/logs/node-driver-registrar.txt
1 podlogs/kube-system/openstack-cloud-controller-manager-bfzz4/logs/openstack-cloud-controller-manager.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-g9g82/logs/liveness-probe.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-kknnb/logs/node-driver-registrar.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-rfbhk/logs/node-driver-registrar.txt
1 podlogs/sonobuoy/sonobuoy-e2e-job-77cb0075367a4194/logs/e2e.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-g9g82/logs/node-driver-registrar.txt
Warnings:
3153 podlogs/kube-system/etcd-pr733-36de57-dg5cb-rlb6l/logs/etcd.txt
1580 podlogs/kube-system/etcd-pr733-36de57-dg5cb-7lbn6/logs/etcd.txt
462 podlogs/kube-system/etcd-pr733-36de57-dg5cb-pzzg7/logs/etcd.txt
51 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-rlb6l/logs/kube-apiserver.txt
46 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-7lbn6/logs/kube-apiserver.txt
31 podlogs/kube-system/kube-apiserver-pr733-36de57-dg5cb-pzzg7/logs/kube-apiserver.txt
7 podlogs/kube-system/kube-scheduler-pr733-36de57-dg5cb-7lbn6/logs/kube-scheduler.txt
7 podlogs/kube-system/csi-cinder-nodeplugin-rfbhk/logs/node-driver-registrar.txt
7 podlogs/kube-system/csi-cinder-nodeplugin-kknnb/logs/node-driver-registrar.txt
6 podlogs/kube-system/csi-cinder-nodeplugin-nzmv4/logs/node-driver-registrar.txt
6 podlogs/kube-system/cilium-prsl4/logs/cilium-agent.txt
4 podlogs/kube-system/csi-cinder-controllerplugin-5955c86dbc-mw8tc/logs/csi-attacher.txt
4 podlogs/kube-system/csi-cinder-nodeplugin-g9g82/logs/node-driver-registrar.txt
4 podlogs/kube-system/openstack-cloud-controller-manager-nrnb7/logs/openstack-cloud-controller-manager.txt
3 podlogs/kube-system/kube-scheduler-pr733-36de57-dg5cb-pzzg7/logs/kube-scheduler.txt
3 podlogs/kube-system/csi-cinder-nodeplugin-rfbhk/logs/liveness-probe.txt
3 podlogs/kube-system/csi-cinder-nodeplugin-45zfz/logs/node-driver-registrar.txt
3 podlogs/kube-system/csi-cinder-nodeplugin-kknnb/logs/liveness-probe.txt
3 podlogs/kube-system/csi-cinder-nodeplugin-g9g82/logs/liveness-probe.txt
2 podlogs/kube-system/cilium-vb764/logs/cilium-agent.txt
2 podlogs/kube-system/cilium-lfc95/logs/cilium-agent.txt
2 podlogs/kube-system/csi-cinder-nodeplugin-j6z6z/logs/node-driver-registrar.txt
2 podlogs/kube-system/csi-cinder-nodeplugin-45zfz/logs/liveness-probe.txt
2 podlogs/kube-system/cilium-hqfmj/logs/cilium-agent.txt
1 podlogs/kube-system/openstack-cloud-controller-manager-jv7qz/logs/openstack-cloud-controller-manager.txt
1 podlogs/kube-system/cilium-plt4b/logs/cilium-agent.txt
1 podlogs/kube-system/openstack-cloud-controller-manager-bfzz4/logs/openstack-cloud-controller-manager.txt
1 podlogs/kube-system/cilium-qv6qm/logs/cilium-agent.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-j6z6z/logs/liveness-probe.txt
1 podlogs/sonobuoy/sonobuoy-systemd-logs-daemon-set-fb60c49cb10d43fd-65ct9/logs/sonobuoy-worker.txt
1 podlogs/kube-system/csi-cinder-nodeplugin-nzmv4/logs/liveness-probe.txt
1 podlogs/sonobuoy/sonobuoy-e2e-job-77cb0075367a4194/logs/e2e.txt
1 podlogs/kube-system/csi-cinder-controllerplugin-5955c86dbc-mw8tc/logs/csi-provisioner.txt
1 podlogs/sonobuoy/sonobuoy/logs/kube-sonobuoy.txt
time="2024-04-17T16:46:14Z" level=info msg="delete request issued" dry-run=false kind=namespace namespace=sonobuoy
time="2024-04-17T16:46:14Z" level=info msg="delete request issued" dry-run=false kind=clusterrolebindings names="[]"
time="2024-04-17T16:46:14Z" level=info msg="delete request issued" dry-run=false kind=clusterroles names="[]"

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[]}
...
...
...

Namespace "sonobuoy" has status {Phase:Terminating Conditions:[{Type:NamespaceDeletionDiscoveryFailure Status:False LastTransitionTime:2024-04-17 16:46:35 +0000 UTC Reason:ResourcesDiscovered Message:All resources successfully discovered} {Type:NamespaceDeletionGroupVersionParsingFailure Status:False LastTransitionTime:2024-04-17 16:46:35 +0000 UTC Reason:ParsedGroupVersions Message:All legacy kube types successfully parsed} {Type:NamespaceDeletionContentFailure Status:False LastTransitionTime:2024-04-17 16:46:35 +0000 UTC Reason:ContentDeleted Message:All content successfully deleted, may be waiting on finalization} {Type:NamespaceContentRemaining Status:True LastTransitionTime:2024-04-17 16:46:35 +0000 UTC Reason:SomeResourcesRemain Message:Some resources are remaining: pods. has 1 resource instances} {Type:NamespaceFinalizersRemaining Status:False LastTransitionTime:2024-04-17 16:46:35 +0000 UTC Reason:ContentHasNoFinalizers Message:All content-preserving finalizers finished}]}

Namespace "sonobuoy" has been deleted

Deleted all ClusterRoles and ClusterRoleBindings.
=== Sonobuoy conformance tests passed in 369s ===
make[1]: Leaving directory '/home/ubuntu/src/github.com/SovereignCloudStack/k8s-cluster-api-provider/terraform'

@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 97d3d4e to eafee88 Compare April 30, 2024 23:01
@renovate renovate bot changed the title Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.0 Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.1 Apr 30, 2024
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from eafee88 to 665ab51 Compare May 8, 2024 13:22
@renovate renovate bot changed the title Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.1 Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.2 May 8, 2024
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 665ab51 to 784dd41 Compare May 17, 2024 14:13
@renovate renovate bot changed the title Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.2 Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.3 May 17, 2024
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch 3 times, most recently from 95a73a7 to ece5d5d Compare May 24, 2024 06:04
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from ece5d5d to 6d265dc Compare May 30, 2024 12:15

scs-zuul bot commented Jun 10, 2024

Build failed (e2e-quick-test pipeline).
https://zuul.scs.community/t/SCS/buildset/d7372b34d4d44be3a328d6dd1e5d482c

k8s-cluster-api-provider-e2e-quick FAILURE in 42m 21s

@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 6d265dc to 35bf15c Compare June 26, 2024 13:13
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 35bf15c to 4d6b775 Compare July 7, 2024 23:46
@renovate renovate bot changed the title Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.3 Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.4 Jul 7, 2024
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 4d6b775 to 116a754 Compare July 8, 2024 07:32
…0.10.5

Signed-off-by: SCS Renovate Bot <renovatebot@scs.community>
@renovate renovate bot force-pushed the renovate/kubernetes-sigs-cluster-api-provider-openstack-0.x branch from 116a754 to 6726a0e Compare September 3, 2024 18:24
@renovate renovate bot changed the title Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.4 Update dependency kubernetes-sigs/cluster-api-provider-openstack to v0.10.5 Sep 3, 2024