
feat: Add support for managedFieldsManagers in ignoreResourceUpdates (solves #15094,#15151) #15137

Conversation


@uriariel uriariel commented Aug 21, 2023

Checklist:

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • The title of the PR states what changed and the related issues number (used for the release note).
  • The title of the PR conforms to the Toolchain Guide
  • I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
  • I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • Does this PR require documentation updates?
  • I've updated documentation as required by this PR.
  • Optional. My organization is added to USERS.md.
  • I have signed off all my commits as required by DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My build is green (troubleshooting builds).
  • My new feature complies with the feature status guidelines.
  • I have added a brief description of why this PR is necessary and/or what this PR solves.

Closes [ISSUE #15151]
Closes [ISSUE #15094]

🤖 Generated by Copilot at 0f8ed93

Summary

⚡✨🔨

Added a new function to compare application state with cached state using different ignore rules, and used it in the application reconciliation process. This improves the performance and accuracy of state comparisons and reduces unnecessary diffs.

CompareAppState
Now with more options - how?
isCompareWithRecent

Walkthrough

  • Add and implement CompareAppStateWithComparisonLevel function to improve application state comparisons

@uriariel uriariel changed the title from "Add support for managedFieldsManagers in ignoreResourceUpdates" to "feat: Add support for managedFieldsManagers in ignoreResourceUpdates" on Aug 22, 2023

codecov bot commented Aug 22, 2023

Codecov Report

Patch coverage: 82.14% and project coverage change: -0.02% ⚠️

Comparison is base (dc8d729) 49.91% compared to head (242b103) 49.89%.
Report is 24 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #15137      +/-   ##
==========================================
- Coverage   49.91%   49.89%   -0.02%     
==========================================
  Files         262      263       +1     
  Lines       45131    45205      +74     
==========================================
+ Hits        22525    22555      +30     
- Misses      20391    20433      +42     
- Partials     2215     2217       +2     
| Files Changed | Coverage | Δ |
| --- | --- | --- |
| controller/state.go | 71.86% <80.76%> | +0.43% ⬆️ |
| controller/appcontroller.go | 54.03% <100.00%> | ø |

... and 14 files with indirect coverage changes

☔ View full report in Codecov by Sentry.

Signed-off-by: uri ariel <uri.ariel@granulate.io>
Signed-off-by: uri ariel <uri.ariel@granulate.io>
@uriariel uriariel force-pushed the add-support-for-managedFieldsManagers-in-ignoreResourceUpdates branch from fd32420 to 0f8ed93 on August 22, 2023 22:49
@uriariel uriariel changed the title from "feat: Add support for managedFieldsManagers in ignoreResourceUpdates" to "feat: Add support for managedFieldsManagers in ignoreResourceUpdates (solves #15094,#15151)" on Aug 22, 2023
@uriariel
Author

Solves #15094, #15151.

@uriariel
Author

It's a "completion" of @agaudreault-jive's feature (#13912). Regarding documentation, it is strongly implied today that the current ignoreResourceUpdates supports managedFieldsManagers.
@leoluz please take a look at this PR.

@leoluz leoluz self-requested a review August 31, 2023 15:55
Signed-off-by: uri ariel <uri.ariel@granulate.io>
…ourceUpdates

Signed-off-by: uri ariel <uri.ariel@granulate.io>
@agaudreault
Member

I really don't think the behavior of ignoreResourceUpdates should be used to ignore differences from the cluster. ignoreResourceUpdates only filters some cluster events to reduce the number of reconciles Argo has to process. In my opinion, it does not and should not affect the sync behavior.

ignoreResourceUpdates is currently only configurable on the server and it is not possible to specify it on the Application.

The documentation does not mention the usage of managedFieldsManagers because it is not implemented.

I think ignoring differences from the cluster is a good idea. I am aware of #15116, which was created around this idea; however, I think it should be implemented separately from ignoreResourceUpdates.
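
For context, the server-side configuration mentioned above lives in the argocd-cm ConfigMap. A minimal sketch, assuming the documented resource.customizations key format (the Deployment kind and the ignored pointer are just illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Watch events that only touch these fields are dropped before
  # reconciliation; this setting lives on the server, not on the Application.
  resource.customizations.ignoreResourceUpdates.apps_Deployment: |
    jsonPointers:
      - /status/observedGeneration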

@agaudreault agaudreault left a comment
Member

I think this PR does not add support for managedFieldsManagers in ignoreResourceUpdates; instead, it changes the semantics of ignoreResourceUpdates into an ignoreClusterDifferences. I think the two features should not be mutually exclusive, and it would be a good idea to implement ignoreClusterDifferences separately.

@leoluz
Collaborator

leoluz commented Sep 5, 2023

I really don't think the behavior of ignoreResourceUpdates should be used to ignore differences from the cluster. ignoreResourceUpdates only filters some cluster events to reduce the number of reconciles Argo has to process. In my opinion, it does not and should not affect the sync behavior.

I also agree with this statement. Also, from the code perspective, the CompareAppState function is already begging for a good refactoring. I wouldn't introduce an additional parameter in that function.

I think ignoring differences from the cluster is a good idea. I am aware of #15116, which was created around this idea; however, I think it should be implemented separately from ignoreResourceUpdates.

I am not sure I understand the cluster ignore-difference idea. That sounds like what we tried to accomplish with ignoring diffs via managedFieldsManagers.

@uriariel
Author

@leoluz @agaudreault-jive First of all, sorry for the delay; I was on vacation.

I am not sure I understand the cluster ignore-difference idea. That sounds like what we tried to accomplish with ignoring diffs via managedFieldsManagers.

managedFieldsManagers doesn't allow ignoring changes that come from the cluster only.
E.g., if I want to set requests in the application YAMLs, like so:

client git repo:

resources:
  requests:
    cpu: 500m

and then ignore changes made to the requests by the VPA inside the cluster only (say the VPA changes cpu from 500m to 100m).
I want Argo CD to be able to ignore those changes, but not ignore changes made to the YAMLs in git.

client git repo (after an update):

resources:
  requests:
    cpu: 600m

With ignoreResourceUpdates.managedFieldsManagers, Argo CD should sync this change; with ignoreDifferences.managedFieldsManagers, Argo CD won't sync it, since the resources.requests.cpu field is managed by the VPA manager.
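
For reference, this is roughly how the existing ignoreDifferences.managedFieldsManagers variant is written on an Application. This is an abridged sketch; vpa-recommender is a placeholder for whatever field manager the VPA actually records in managedFields:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app        # placeholder
  namespace: argocd
spec:
  # source/destination omitted for brevity
  ignoreDifferences:
    - group: apps
      kind: Deployment
      # Ignores every field owned by this manager, regardless of whether
      # the change came from the cluster or from git, hence the problem above.
      managedFieldsManagers:
        - vpa-recommender   # placeholder manager name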

@agaudreault-jive I get that this feature wasn't implemented as ignoreClusterDifferences, but practically it does this for jsonPointers and jqPathExpressions (from the tests I've done on my cluster; maybe you can describe a scenario where it would behave completely differently). You're right that implementing it for managedFieldsManagers in the same manner as it is implemented for the other specifiers is impossible, since we would have to "know" the managed fields in the live object in advance.
I think this implementation gives the closest alternative to implementing ignoreResourceUpdates.managedFieldsManagers.

@zachaller
Contributor

The documentation is quite sparse on this, but it sounds like another way to think about "defaults", instead of using the requests from the resource itself, is to use a policy. They mention this here:

When setting limits VPA will conform to [resource policies](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L100). It will maintain limit to request ratio specified for all containers.

VPA will try to cap recommendations between min and max of [limit ranges](https://kubernetes.io/docs/concepts/policy/limit-range/). If limit range conflicts and VPA resource policy conflict, VPA will follow VPA policy (and set values outside the limit range).

To disable getting VPA recommendations for an individual container, set mode to "Off" in containerPolicies.

This eventually leads us to this:

type ContainerResourcePolicy struct {
	// Name of the container or DefaultContainerResourcePolicy, in which
	// case the policy is used by the containers that don't have their own
	// policy specified.
	ContainerName string `json:"containerName,omitempty" protobuf:"bytes,1,opt,name=containerName"`
	// Whether autoscaler is enabled for the container. The default is "Auto".
	// +optional
	Mode *ContainerScalingMode `json:"mode,omitempty" protobuf:"bytes,2,opt,name=mode"`
	// Specifies the minimal amount of resources that will be recommended
	// for the container. The default is no minimum.
	// +optional
	MinAllowed v1.ResourceList `json:"minAllowed,omitempty" protobuf:"bytes,3,rep,name=minAllowed,casttype=ResourceList,castkey=ResourceName"`
	// Specifies the maximum amount of resources that will be recommended
	// for the container. The default is no maximum.
	// +optional
	MaxAllowed v1.ResourceList `json:"maxAllowed,omitempty" protobuf:"bytes,4,rep,name=maxAllowed,casttype=ResourceList,castkey=ResourceName"`

	// Specifies the type of recommendations that will be computed
	// (and possibly applied) by VPA.
	// If not specified, the default of [ResourceCPU, ResourceMemory] will be used.
	ControlledResources *[]v1.ResourceName `json:"controlledResources,omitempty" patchStrategy:"merge" protobuf:"bytes,5,rep,name=controlledResources"`

	// Specifies which resource values should be controlled.
	// The default is "RequestsAndLimits".
	// +optional
	ControlledValues *ContainerControlledValues `json:"controlledValues,omitempty" protobuf:"bytes,6,rep,name=controlledValues"`
}

It would seem, then, that instead of updating the resource requests/limits on the object in question (Deployment, etc.), you could define the minimum as a policy and adjust that in git, as sketched below.
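
A sketch of what that could look like (all names are placeholders): minAllowed keeps recommendations at or above a floor tracked in git, and mode: "Off" opts a container out of recommendations entirely:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa            # placeholder
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        # Floor for VPA recommendations, versioned in git instead of
        # editing the Deployment's own requests.
        minAllowed:
          cpu: 500m
      - containerName: sidecar   # placeholder
        mode: "Off"              # disable recommendations for this container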

@zachaller
Contributor

zachaller commented Sep 14, 2023

Although, after re-reading the docs, it might actually still require the limits/requests on the resource in order to figure out the ratios. Is this true? If so, I feel that the VPA should also let you configure said ratio in the policy, rather than Argo CD trying to handle multiple inputs.

It will maintain limit to request ratio specified for all containers.

@zachaller
Contributor

This issue also seems to be semi-related: kubernetes/autoscaler#5934

@leoluz leoluz left a comment
Collaborator

From the discussion we had in the contributors' meeting, it was clarified that this is not the direction we want to go in Argo CD, for a number of reasons:

  • It changes the semantics of ignoreResourceUpdates by adding extra behavior in order to address a use case it wasn't originally designed for.
  • It creates competition for field ownership that could potentially lead to situations where Argo CD is in a perpetual sync state, depending on how the application is configured.

Please consider the suggestions given during the meeting and look for alternative, better approaches to achieve the behavior you want.

If you still think this use case is something Argo CD should handle, please submit a proposal detailing how it would behave in different edge cases.
