ServerSideApply fails with "conversion failed" #11136
Which version of k8s are you using?
We use 1.22.14.
@Dbzman Please inspect your Argo CD controller logs and see if you find an entry with this message:
If so, can you provide the full message from the log?
@leoluz We didn't see any of those errors. We have the log level configured to info; not sure if the error is supposed to show there.
We noticed a very strange behavior here. We saved the affected CronJob manifest locally, deleted it on Kubernetes and re-created it again (so it's the exact same manifest, just re-created). After that, Argo was able to sync the application.
Thanks for the additional info. That actually makes sense. What is strange to me is that, from your error message, it seems that Argo CD is trying to convert to an older API version. I'll try to reproduce this error locally anyway.
Thanks for checking. Indeed, it's really weird that it tries to convert to an older version. We had this issue on 60 of our 400 apps. Yesterday we fixed them all with the above-mentioned workaround. Today all of those 60 apps show the error again, so it seems that it has nothing to do with old manifests that were upgraded.
@Dbzman just confirming: are the steps to reproduce still valid with your latest findings?
@leoluz I would say yes.
Using version 2.5.1 and having similar issues.
Same here with
Same behavior when adding an Ingress, in case someone hits the issue with that resource.
Just to provide some direction for users that might get into this error, the current workaround is disabling SSA in the failing resources by adding the annotation:
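For readers landing here, Argo CD's per-resource sync options are set through the `argocd.argoproj.io/sync-options` annotation; a minimal sketch of opting a single resource out of Server-Side Apply (the resource name is a hypothetical example):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob   # hypothetical resource name
  annotations:
    # Opt this one resource out of Server-Side Apply during syncs
    argocd.argoproj.io/sync-options: ServerSideApply=false
```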
Hi @leoluz, I added the annotation but it didn't work; still the same problem (HorizontalPodAutoscaler case).
fwiw the same occurs with
We run into similar issues when enabling SSA for our apps. However, the issue isn't consistent between clusters/apps (the same app/resource might work on one but not the other).
@leoluz I believe this might be caused by stale `managedFields` entries that still reference the old API version. Managed fields of an affected `Ingress` resource:

metadata:
managedFields:
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:alb.ingress.kubernetes.io/actions.ssl-redirect: {}
f:alb.ingress.kubernetes.io/certificate-arn: {}
f:alb.ingress.kubernetes.io/listen-ports: {}
f:alb.ingress.kubernetes.io/scheme: {}
f:alb.ingress.kubernetes.io/ssl-policy: {}
f:alb.ingress.kubernetes.io/target-type: {}
f:labels:
.: {}
f:app.kubernetes.io/instance: {}
manager: kubectl
operation: Update
time: "2021-05-28T16:20:40Z"
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers: {}
manager: controller
operation: Update
time: "2021-08-02T09:10:54Z"
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:ingressClassName: {}
manager: argocd-application-controller
operation: Update
time: "2021-08-02T09:18:03Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
v:"group.ingress.k8s.aws/argo-ingresses": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: controller
operation: Update
time: "2022-03-21T15:25:24Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:alb.ingress.kubernetes.io/group.name: {}
f:alb.ingress.kubernetes.io/load-balancer-attributes: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:rules: {}
manager: argocd-application-controller
operation: Update
time: "2022-08-15T11:22:05Z"
name: argocd
namespace: argocd
resourceVersion: "206036857"
uid: 3df56465-962b-42bb-9075-e61740b636cc

Managed fields of the corresponding resource (same name / namespace) on a different cluster (just a different cluster / app age):

metadata:
managedFields:
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:alb.ingress.kubernetes.io/actions.ssl-redirect: {}
f:alb.ingress.kubernetes.io/certificate-arn: {}
f:alb.ingress.kubernetes.io/group.name: {}
f:alb.ingress.kubernetes.io/listen-ports: {}
f:alb.ingress.kubernetes.io/scheme: {}
f:alb.ingress.kubernetes.io/ssl-policy: {}
f:alb.ingress.kubernetes.io/target-type: {}
f:labels:
.: {}
f:app.kubernetes.io/instance: {}
f:spec:
f:ingressClassName: {}
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-05-05T15:11:18Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"group.ingress.k8s.aws/argo-ingresses": {}
f:status:
f:loadBalancer:
f:ingress: {}
manager: controller
operation: Update
time: "2022-05-05T15:11:20Z"
- apiVersion: networking.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:alb.ingress.kubernetes.io/load-balancer-attributes: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
manager: argocd-application-controller
operation: Update
time: "2022-08-15T11:21:51Z" It also explains why recreating works - it clears the Sadly, it does not help me yet to resolve this issue without recreating the resources (I haven't found a way to clear/edit the managedFields). |
This is not a "might" - this is the definitive issue. 😓
@leoluz perhaps this will help: https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/555-server-side-apply/README.md Links that are useful in the readme are: |
Do we know why Argo CD does not respect ".Capabilities.APIVersions" but uses the "managedFields" (assuming that is the reason; I don't know which internal component does this) as the way to decide which API group/version to use?
We are seeing this in 2.8 with HPA, ClusterRole, ClusterRoleBinding and Roles, on clusters that have all been properly upgraded and had their resource manifests updated, but the clusters were created back when these beta API versions were still in k8s; those versions are now removed.
We're seeing the same issue with ClusterRole, ClusterRoleBinding. |
The K8s docs note that you can clear managed fields with a JSON patch. We've been employing that to get past this issue, but it is really tiresome. Not sure if Argo can somehow handle it, which would be great. The errors in the Argo CD sync panel aren't helpful enough because they don't tell us which resource had the conversion error.
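For illustration, a sketch of such a patch, following the "clearing managedFields" example in the Kubernetes server-side apply docs (overwriting the field with a single empty entry strips it). The resource name and namespace below are taken from the Ingress shown earlier and will need adjusting:

```sh
# Overwrite managedFields with a single empty entry, which the API server
# interprets as "strip all managedFields from this object"
kubectl patch ingress argocd -n argocd --type=json \
  -p='[{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]'
```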
@msw-kialo fyi ^
The ServerSide Diff feature is merged and available in Argo CD 2.10-RC1. If enabled, it should address this and other diff problems when ServerSide Apply is used. I am closing this for now; feel free to reopen if the issue persists.
Ran into a similar issue failing to calculate diff for ClusterRole
Enabling server side diff on the application resolved the issue for me. |
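For anyone looking for the switches: per the Argo CD docs, Server-Side Diff can be enabled controller-wide via `controller.diff.server.side: "true"` in the argocd-cmd-params-cm ConfigMap, or per application with an annotation. A minimal sketch of the per-app form (application name is hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app   # hypothetical application name
  annotations:
    # Opt this application into Server-Side Diff
    argocd.argoproj.io/compare-options: ServerSideDiff=true
```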
We are using 2.10.5 and have this problem when we try to enable server-side apply. I deleted all the HPAs, but it didn't help.
Deleting the old HPAs and an old secret solved the issue in my case.
Checklist:
- Output of `argocd version` included.

Describe the bug
Using ServerSideApply, configured in an Application via Sync Options, fails with the "conversion failed" error from the issue title.
Using it only with the "Sync" button, without having it configured for the app, works, though.
To Reproduce
- A CronJob with apiVersion batch/v1, or an HPA with apiVersion autoscaling/v2beta2, that was synced without SSA

Expected behavior
ServerSideApply should work in both cases (app config + manual sync)
Screenshots
Application configuration which breaks:
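The screenshot itself is not reproduced here. For context, enabling SSA for a whole Application is done through `syncPolicy.syncOptions`, roughly like this (application name, repository and destination are hypothetical placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                                   # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/manifests.git   # hypothetical repo
    path: .
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    syncOptions:
      - ServerSideApply=true   # the option this issue is about
```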
Using it only with the Sync button works:
Version