
Commit eaf2e61: more comments
fabriziopandini committed Jun 10, 2022
1 parent c07bca1 commit eaf2e61
Showing 5 changed files with 22 additions and 25 deletions.
19 changes: 8 additions & 11 deletions cmd/clusterctl/client/cluster/mover.go
@@ -1174,30 +1174,27 @@ func (o *objectMover) checkTargetProviders(toInventory InventoryClient) error {
return kerrors.NewAggregate(errList)
}

// patchTopologyManagedFields patches the managed fields of obj if parts of it are owned by the topology controller.
// This is necessary to ensure the managed fields created by the topology controller are still present and thus to
// prevent unnecessary machine rollouts. Without patching the managed fields, clusterctl would be the owner of the fields
// which would lead to co-ownership and also additional machine rollouts.
func patchTopologyManagedFields(ctx context.Context, oldManagedFields []metav1.ManagedFieldsEntry, obj *unstructured.Unstructured, cTo client.Client) error {
var containsTopologyManagedFields bool
// managedFields will contain the target managed fields obj gets patched to.
wantManagedFields := []metav1.ManagedFieldsEntry{}

// Check if the object was owned by the topology controller.
for _, m := range oldManagedFields {
if m.Manager == "manager" && m.Operation == metav1.ManagedFieldsOperationUpdate && m.Subresource == "" {
continue
}
if m.Operation == metav1.ManagedFieldsOperationApply &&
m.Manager == structuredmerge.TopologyManagerName &&
m.Subresource == "" {
containsTopologyManagedFields = true
break
}
wantManagedFields = append(wantManagedFields, m)
}

// Return early if the managed fields had no entry from topology manager
// Return early if the object was not owned by the topology controller.
if !containsTopologyManagedFields {
return nil
}

base := obj.DeepCopy()
obj.SetManagedFields(wantManagedFields)
obj.SetManagedFields(oldManagedFields)

if err := cTo.Patch(ctx, obj, client.MergeFrom(base)); err != nil {
return errors.Wrapf(err, "error patching managed fields %q %s/%s",
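
For context on the managed-fields check in the hunk above, here is a minimal, self-contained sketch (not the clusterctl code itself) of how an object's `metadata.managedFields` can be scanned for a Server Side Apply entry from a given field manager; the `capi-topology` manager name is an assumption for illustration.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hasApplyEntryFrom reports whether managedFields contains an entry written
// via Server Side Apply (operation "Apply") by the given manager on the main
// resource (empty subresource), i.e. whether that manager co-owns the object.
func hasApplyEntryFrom(managedFields []metav1.ManagedFieldsEntry, manager string) bool {
	for _, m := range managedFields {
		if m.Operation == metav1.ManagedFieldsOperationApply &&
			m.Manager == manager &&
			m.Subresource == "" {
			return true
		}
	}
	return false
}

func main() {
	fields := []metav1.ManagedFieldsEntry{
		{Manager: "kubectl", Operation: metav1.ManagedFieldsOperationUpdate},
		{Manager: "capi-topology", Operation: metav1.ManagedFieldsOperationApply},
	}
	// Prints "true": the object carries an SSA entry from the topology manager.
	fmt.Println(hasApplyEntryFrom(fields, "capi-topology"))
}
```

Restoring the original entries after a move, as the patch in the hunk does, keeps field ownership with the topology controller instead of transferring it to clusterctl.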
12 changes: 6 additions & 6 deletions docs/book/src/clusterctl/commands/alpha-topology-plan.md
@@ -26,18 +26,18 @@ the input should have all the objects needed.

<h1>Limitations</h1>

While running in a production the topology controllers uses [Server Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
to proper manage other controllers co-authoring the same objects, but this kind of interactions can't be recreated
The topology controllers uses [Server Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
to support use cases where other controllers are co-authoring the same objects, but this kind of interactions can't be recreated
in a dry-run scenario.

As a consequence Dry-Run can give some false positive/false negative when trying to have a preview of
As a consequence Dry-Run can give some false positives/false negatives when trying to have a preview of
changes to a set of existing topology owned objects. In other worlds this limitation impacts all the use cases described
below except for "Designing a new ClusterClass".

More specifically:
- DryRun doesn't consider OpenAPI schema extension like +ListMap this can lead to false positive when topology
dry run is simulating a change to an existing slice (DryRun always revert external changes, like server side apply when +ListMap=atomic).
- DryRun doesn't consider existing metadata.managedFields, and this can lead to false negative when topology dry run
- DryRun doesn't consider OpenAPI schema extension like +ListMap this can lead to false positives when topology
dry run is simulating a change to an existing slice (DryRun always reverts external changes, like server side apply when +ListMap=atomic).
- DryRun doesn't consider existing metadata.managedFields, and this can lead to false negatives when topology dry run
is simulating a change where a field is dropped from a template (DryRun always preserve dropped fields, like
server side apply when the field has more than one manager).

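
As an illustration of the dry-run limitation documented above: the closest server-assisted preview is an SSA patch issued in dry-run mode, sketched below with controller-runtime (the `capi-topology` field owner is an assumption). Even then, the server evaluates only this single apply call; it cannot replay how other field managers would react afterwards.

```go
package plan

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// dryRunApply previews a Server Side Apply patch without persisting it.
// DryRunAll asks the API server to run its admission and merge logic only.
func dryRunApply(ctx context.Context, c client.Client, obj *unstructured.Unstructured) error {
	return c.Patch(ctx, obj, client.Apply,
		client.FieldOwner("capi-topology"), // assumed field manager name
		client.ForceOwnership,
		client.DryRunAll,
	)
}
```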
6 changes: 3 additions & 3 deletions docs/book/src/developer/providers/v1.1-to-v1.2.md
@@ -46,8 +46,8 @@ in ClusterAPI are kept in sync with the versions used by `sigs.k8s.io/controller-runtime`
### Required API Changes for providers

- ClusterClass and managed topologies are now using [Server Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
to proper manager other controllers like CAPA/CAPZ coauthoring slices, see [#6320](https://github.com/kubernetes-sigs/cluster-api/issues/6320).
In order to proper leverage this feature providers are required to add marker to their API types as described in
to properly manage other controllers like CAPA/CAPZ coauthoring slices, see [#6320](https://github.com/kubernetes-sigs/cluster-api/issues/6320).
In order to take advantage of this feature providers are required to add marker to their API types as described in
[merge-strategy](https://kubernetes.io/docs/reference/using-api/server-side-apply/#merge-strategy).
NOTE: the change will cause a rollout on existing clusters created with ClusterClass

@@ -61,7 +61,7 @@ Must be modified into:

```go
// +optional
// +listType=Map
// +listType=map
// +listMapKey=id
Subnets Subnets `json:"subnets,omitempty"`
```
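
To make the marker change above concrete, here is a hedged, compilable sketch of a provider API type carrying the merge-strategy markers; the `Subnet` and `NetworkSpec` names are invented for illustration and are not CAPA's actual types.

```go
package v1beta1

// Subnet is a hypothetical provider subnet. The id field is the listMapKey,
// so Server Side Apply merges entries by id instead of replacing the slice.
type Subnet struct {
	ID   string `json:"id"`
	CIDR string `json:"cidr,omitempty"`
}

// NetworkSpec shows the +listType=map / +listMapKey=id markers in place.
type NetworkSpec struct {
	// +optional
	// +listType=map
	// +listMapKey=id
	Subnets []Subnet `json:"subnets,omitempty"`
}
```

Without the map strategy, an apply replaces the whole list atomically, which is exactly the co-authoring problem the markers are meant to solve.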
@@ -176,17 +176,17 @@ owned objects in a Cluster.

More specifically, the topology controller uses [Server Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/)
to write/patch topology owned objects; using SSA allows other controllers to co-author the generated objects,
like e.g. integration info for the subnet list in CAPA.
like e.g. adding info for subnets in CAPA.

<aside class="note">
<h1>What about patches?</h1>

The considerations above apply also when using patches, the only difference being that the
authoritative fields should be determined by applying patches on top of the templates.
set of fields that are enforced should be determined by applying patches on top of the templates.

</aside>

A corollary of the behaviour described above is that it is technically possible to change non-authoritative
fields in the object derived from the template in a specific Cluster, but we advise against using the possibility
A corollary of the behaviour described above is that it is technically possible to change fields in the object
which are not derived from the templates and patches, but we advise against using the possibility
or making ad-hoc changes in generated objects unless otherwise needed for a workaround. It is always
preferable to improve ClusterClasses by supporting new Cluster variants in a reusable way.
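
As a hedged sketch of the co-authoring flow described in this file (the GVK, field path, and manager name are illustrative assumptions, not CAPA's real API), a second controller applies only the fields it owns under its own field manager, and SSA merges that ownership with the topology controller's:

```go
package coauthor

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applySubnets writes only spec.network.subnets under its own field manager;
// all other fields keep their existing owners (e.g. the topology controller).
func applySubnets(ctx context.Context, c client.Client) error {
	obj := &unstructured.Unstructured{}
	obj.SetAPIVersion("infrastructure.cluster.x-k8s.io/v1beta1") // assumed GVK
	obj.SetKind("AWSCluster")
	obj.SetNamespace("default")
	obj.SetName("my-cluster")

	if err := unstructured.SetNestedSlice(obj.Object, []interface{}{
		map[string]interface{}{"id": "subnet-1", "cidr": "10.0.0.0/24"},
	}, "spec", "network", "subnets"); err != nil {
		return err
	}

	return c.Patch(ctx, obj, client.Apply,
		client.FieldOwner("capa-controller")) // each co-author uses its own manager
}
```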
@@ -144,7 +144,7 @@ func (h *serverSidePatchHelper) Patch(ctx context.Context) error {
return h.client.Patch(ctx, h.modified, client.Apply, options...)
}

// cleanupLegacyManagedFields cleanups managed filed management in place before introducing SSA.
// cleanupLegacyManagedFields cleanups managed field management in place before introducing SSA.
// NOTE: this operation can trigger a machine rollout, but this is considered acceptable given that ClusterClass is still alpha
// and SSA adoption align the topology controller with K8s recommended solution for many controllers authoring the same object.
func cleanupLegacyManagedFields(obj *unstructured.Unstructured, c client.Client) error {
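
For readers unfamiliar with the cleanup named in the comment above, a minimal sketch of filtering legacy entries out of an object before SSA adoption — assuming, as the mover.go hunk earlier suggests, that pre-SSA entries were written via Update by a manager literally named "manager"; this is not the cluster-api implementation:

```go
package cleanup

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// dropLegacyManagedFields removes pre-SSA managed-field entries so that a
// subsequent Apply call can take clean ownership of the object's fields.
func dropLegacyManagedFields(obj *unstructured.Unstructured, legacyManager string) {
	entries := obj.GetManagedFields()
	kept := make([]metav1.ManagedFieldsEntry, 0, len(entries))
	for _, e := range entries {
		if e.Manager == legacyManager && e.Operation == metav1.ManagedFieldsOperationUpdate {
			continue // drop the legacy Update entry
		}
		kept = append(kept, e)
	}
	obj.SetManagedFields(kept)
}
```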
