make install fails with metadata.annotations too long .. #1140

Closed · champak opened this issue Oct 30, 2019 · 30 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@champak commented Oct 30, 2019

I saw an issue with the "make install" step that complains:
...
The CustomResourceDefinition "..." is invalid: metadata.annotations: Too long: must have at most 262144 characters

It looked like the issue came from:
kustomize build config/crd | kubectl apply -f -

Changing it to
kustomize build config/crd | kubectl create -f -
worked OK.

I did not see any corrupt or overly long annotation in the kustomize build config/crd output.

Versions:
Kustomize version: 3.0.3 (tried more recent versions too)
Kubebuilder version: 2.0.1

Could not identify the controller-gen version used; go.mod specifies:
sigs.k8s.io/controller-tools v0.2.1 // indirect

The CRD spec contains fairly standard stuff.
Has anyone else seen/addressed this?

@champak champak added the kind/support Categorizes issue or PR as a support question. label Oct 30, 2019
@camilamacedo86 (Member)

Hi @champak,

Are you able to provide the steps for a PoC to reproduce this issue using the latest version of kubebuilder? I mean, after following the Quick Start, what additional steps are required to hit this issue?

@camilamacedo86 (Member)

/assign @camilamacedo86

@champak (Author) commented Nov 14, 2019

I suspect this issue is caused by the size of the annotation that kubectl apply creates. I checked the total size of the YAML generated by kustomize that gets piped to kubectl, and it is past 650K bytes. I think kubectl maintains state for an apply/merge. To support these cases, could "make install" use kubectl replace <> instead of create?
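
(For reference, a quick way to check the size of what gets piped to kubectl; a minimal sketch, assuming the standard kubebuilder layout:)

# total size in bytes of the CRD manifests that kubectl would receive
kustomize build config/crd | wc -c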

@rockmenjack

Same here... changing it to kustomize build config/crd | kubectl create -f - fixed the issue.

@camilamacedo86 (Member)

Hi @rockmenjack and @champak,

Could you help us by sharing the steps to reproduce it and/or a link to a project where it can be observed?

@champak (Author) commented Nov 27, 2019

I think the root cause is that if we include a native k8s type like a PodSpec in the spec for a kubebuilder operator, the entire spec gets inlined, which blows up the size. I noted a couple of other issues that relate to this: #962 and #82292. @DirectXMan12's comments seemed to indicate there is not a good way around this at present. Given that, could we consider replacing the kubectl apply with a few additional steps that allow "make install/deploy" to work for large CRDs?

@camilamacedo86 (Member) commented Nov 27, 2019

Hi @rockmenjack and @champak,

You are saying that the issue occurs because of kustomize build config/crd | kubectl apply -f - and was solved when you replaced it with kustomize build config/crd | kubectl create -f -. Am I right?

However, note that apply will create or update the resources, while create will show an error if the resources already exist instead of updating them. So this does NOT address the root cause of the issue at all. Maybe something was not updated on apply and, because of that, the error did not occur. However, I am unable to reproduce this scenario.

So, could you please provide the steps, starting from the quick start, to reproduce it? I mean, if I followed the quick start, what additional steps should I take, and when, in order to hit this scenario? Without steps to reproduce the issue it is very complicated to identify what is wrong and solve it properly.

@champak (Author) commented Nov 27, 2019

@camilamacedo86 I added a trivial example at: https://github.com/champak/kbuildercrdtoobig

The spec contains 3 pods (https://github.com/champak/kbuildercrdtoobig/blob/master/example/api/v1alpha1/toobig_types.go) and the CRD gets too big to be installed via the kustomize build config/crd | kubectl apply -f - method. Hope that helps illustrate it. Thanks for taking a look!
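
(A possible way to reproduce from that repo; a sketch under the assumption that the kubebuilder project lives under example/ as the path above suggests:)

git clone https://github.com/champak/kbuildercrdtoobig
cd kbuildercrdtoobig/example
make install   # runs kustomize build config/crd | kubectl apply -f - and should fail with "Too long"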

@champak (Author) commented Dec 3, 2019

FWIW, here's what I did to get around the issue. Since we cannot always use replace:

(kustomize build config/crd | kubectl replace -f -) || (kustomize build config/crd | kubectl create -f -)

replace will fail if the CRD is not already in place, in which case create does the job.

This is merely a workaround. The root cause, I think, is the inclusion of the entire native spec(s) in the CRD spec. I have not dug into why an indirect reference to the native spec is not used instead; it does seem to be a significant limitation. If anyone knows why this approach was chosen, please post.
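
(For anyone wanting to wire this into the scaffolded Makefile, a minimal sketch of an install target based on the fallback above; the target name and layout follow the default kubebuilder scaffold:)

# Install CRDs into the cluster without client-side apply, avoiding the
# last-applied-configuration annotation and its 262144-byte limit.
install: manifests
	(kustomize build config/crd | kubectl replace -f -) || (kustomize build config/crd | kubectl create -f -)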

@camilamacedo86 (Member) commented Dec 4, 2019

Hi @champak,

It happens because when we use apply, kubectl adds the kubectl.kubernetes.io/last-applied-configuration annotation containing the whole object:

"This sets the kubectl.kubernetes.io/last-applied-configuration: '{...}' annotation on each object. The annotation contains the contents of the object configuration file that was used to create the object."

To know more about it, see the documentation: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-create-objects
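
(For reference, a quick way to see how large that annotation is on an already-installed CRD; a minimal sketch, where <crd-name> is a placeholder:)

kubectl get crd <crd-name> -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' | wc -c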

So, yes, using create instead of apply is a valid workaround in this scenario. Also, IMHO it suggests the need to re-check the CRD's definition and project design, since it shows that they may break the domain layers and/or carry unnecessary data in the spec, which can hurt concepts such as single responsibility, for example.

Then, I think it could be closed as sorted out. WDYT?

@rockmenjack

Also, IMHO it suggests the need to re-check the CRD's definition and project design, since it shows that they may break the domain layers and/or carry unnecessary data in the spec, which can hurt concepts such as single responsibility, for example.

Well, in my case, bringing in PodSpec makes the CRD definition resemble what users usually write when defining a Pod, which provides a smooth experience when using the CRD.

@camilamacedo86 (Member) commented Dec 5, 2019

Hi @rockmenjack,

Well, in my case, bringing in PodSpec makes the CRD definition resemble what users usually write when defining a Pod, which provides a smooth experience when using the CRD.

I do not agree with this approach at all. Just to clarify: could you imagine having an interface that asks a user to put a whole Pod YAML in a field? IMHO the CRD should only have specs/attributes that are part of its own domain, so as not to hurt concepts such as encapsulation, the single responsibility principle and, almost certainly, cohesion, which may cause unexpected side effects such as difficulty reading, maintaining/improving and extending it, to mention a few.

Is the Pod info really part of the Kind's domain, or is the Pod one of the secondary resources required for the Kind to be installed/configured? What data from a Pod is actually part of the Kind's domain? So, instead of passing the Pod spec, you could expose, for example, just the relevant data that can be customized, such as the image, the policy, etc., and then create the object programmatically from the info provided.

Example

Along these lines, I'd recommend you pursue DDD (domain-driven design) concepts when designing your solutions. For further information, check the book.

I hope that helps you.

@champak (Author) commented Dec 5, 2019

@camilamacedo86 Could we consider adding a separate make target that allows large CRDs to be installed? I used a combination of replace and create, but there may be a better way. Given the verbose declarative syntax, we could blow past the limit even without using Pod specs.

With regard to being able to embed native k8s objects like Pods in CRDs, I think this is a very useful feature of kubebuilder and allows for re-use/composition of CRD specs in interesting and powerful ways. I presume this particular issue will eventually be solved by server-side apply, where we will not be limited by the current kubectl implementation. My 2 cents. Given past related threads on this topic, it would be good to get input from @DirectXMan12.

@DirectXMan12 (Contributor)

It's pretty common to embed a PodSpec in your CRD; that's definitely a feature we want to support.

The workaround is to not use kubectl apply when installing, OR to set maxDescLen on controller-tools when generating the CRD.

@DirectXMan12 (Contributor)

we introduced maxDescLen to work around this problem until we get rid of client-side apply

@champak (Author) commented Dec 6, 2019

@DirectXMan12 Thanks much for weighing in. Would it make sense to have a kubectl workaround as a make target? I looked around and could not tell in what timeframe we can expect server-side apply to be available.

If you have a pointer on why we need to include complete component specs in the CRD spec instead of indirect references, please post. Thanks.

@DirectXMan12 (Contributor)

You can't use references in CRD validation. It was an upstream decision. They've gone back and forth on it. I bring it up at meetings every once in a while.

@DirectXMan12 (Contributor)

IIRC their logic was that it might cause objects to become unexpectedly valid or invalid, but it's been up for debate for a while now

@camilamacedo86 (Member) commented Jan 7, 2020

Hi @DirectXMan12,

Sorry, it is still not clear to me what should be done in kubebuilder regarding this scenario.

Reasons:

So, if some change needs to be made, it seems it should be in the K8s API instead of kubebuilder. Am I right? Or should the scaffolded Makefile be changed with this approach as well? If not, would it not make more sense to close this issue and raise a new one in the specific project, since it is not a kubebuilder thing at all?

@champak (Author) commented Jan 15, 2020

@DirectXMan12 Regarding the maxDescLen workaround: adding the option as CRD_OPTIONS ?= "crd:trivialVersions=true,maxDescLen=0" works. Is this Makefile variable CRD_OPTIONS a good place for the workaround? Some controller-gen options are set with the // + marker syntax, so I was wondering. I think limiting maxDescLen is better than the kubectl delete/recreate path, since it allows non-disruptive patching of the operator to keep working.
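
(For context, this is roughly how the scaffolded Makefile of that era consumes CRD_OPTIONS; a sketch from memory, so treat the exact paths as approximate:)

CRD_OPTIONS ?= "crd:trivialVersions=true,maxDescLen=0"

# Generate manifests e.g. CRD, RBAC etc.
manifests: controller-gen
	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases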

@camilamacedo86 (Member) commented Jan 15, 2020

Hi @champak,

IMHO what could be done here is to document the scenario, the reason, and possible solutions to avoid it. However, I would not change the scaffolded Makefile. WDYT @DirectXMan12? Is there any place, like a FAQ, where it could be added?

@champak (Author) commented Jan 16, 2020

@camilamacedo86 Thanks for your input. Can maxDescLen be hinted to controller-gen by means other than passing it in directly as a CLI option, like the kubebuilder Makefile does? It would have been nice if there were a way to skip descriptions for included stuff like PodSpecs but keep descriptions for the new attributes that we create, but there did not seem to be a way. In any case, glad the workaround exists :)

@jpkrohling

@champak, I couldn't find any evidence in the code that would make it possible to give a hint to the operator-sdk and/or controller-gen to skip the descriptions. In the end, I just moved from using the operator-sdk to using controller-gen directly, as you can see in this PR: jaegertracing/jaeger-operator#932 .

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 27, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bilalcaliskan

Hello @camilamacedo86,

I am facing the same issue, and I fixed it by replacing kubectl apply with kubectl create as @champak suggested. I am using the versions below:

Kubebuilder: 2.3.1
sigs.k8s.io/controller-runtime: v0.6.4
sigs.k8s.io/controller-tools: v0.4.1

As @DirectXMan12 mentioned maxDescLen: where do I put the tag to change the validation with maxDescLen? Neither the CRD Generation nor the CRD Validation docs provide such information about maxDescLen. Also, the error log says metadata.annotations: Too long: must have at most 262144 bytes, but metadata.annotations is managed by metav1.ObjectMeta, so how do I put a generation tag on it? I have tried but could not succeed in extracting metav1.ObjectMeta.Annotations.

Currently I have fixed my issue with the workaround from @champak's answer, but it does not feel right. I would appreciate it if you could suggest a more stable solution.

@bilalcaliskan commented Dec 2, 2020

@camilamacedo86 I thought that I had fixed my issue, but I was wrong. kubectl create works on the first run, but when you want to upgrade your controller with make deploy, it causes an error because the resource already exists. So a better workaround is still required. When I check the api-server logs, there is nothing there; I think it is a client-side problem somehow.

By the way, I hit this issue during the second tutorial of the Kubebuilder book.

adutra added a commit to adutra/k8ssandra-operator that referenced this issue Nov 12, 2021
This is a workaround for the following issue:
kubernetes-sigs/kubebuilder#1140
@camilamacedo86 (Member) commented Nov 1, 2022

Hi @bilalcaliskan,

The workaround that I am aware of so far is the one provided via controller-gen; see how we sort it out in the samples:

manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
# Note that the option maxDescLen=0 was added to the default scaffold in order to sort out the issue
# "Too long: must have at most 262144 bytes". When kubectl apply is used to create/update resources, an annotation
# is created to store the latest applied version of the resource (kubectl.kubernetes.io/last-applied-configuration).
# However, it has a size limit, and if the CRD is too big, with many long descriptions as in this one, it will cause the failure.
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd:maxDescLen=0 webhook paths="./..." output:crd:artifacts:config=config/crd/bases

However, it seems that a more ideal solution would be to move from client-side to server-side apply; see:
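
(For illustration, a minimal sketch of what the install step could look like with server-side apply, assuming a kubectl and cluster version that support it; this is not the current scaffold:)

kustomize build config/crd | kubectl apply --server-side -f -

Server-side apply tracks field ownership in managedFields instead of the kubectl.kubernetes.io/last-applied-configuration annotation, so the 262144-byte annotation limit no longer applies.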
