Too long CRD definition #873

Closed
haf opened this issue Jan 27, 2020 · 25 comments · Fixed by #932
Labels
bug Something isn't working duplicate This issue or pull request already exists

Comments

haf commented Jan 27, 2020

Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "jaegers.jaegertracing.io" is invalid: metadata.annotations: Too long: must have at most 262144 characters

From master, right now, onto Docker Desktop on macOS:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Repro:

curl --silent -L https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml \
                > k8s/base/jaegertracing.io_jaegers_crd.yaml
kubectl apply -f k8s/base/jaegertracing.io_jaegers_crd.yaml
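For context, kubectl apply stores the full manifest in the kubectl.kubernetes.io/last-applied-configuration annotation, and the 262144-character cap in the error applies to the object's annotations. A minimal sketch of the size check at play (the helper name is illustrative, not part of kubectl):

```python
import json

# kubectl apply records the full manifest in the
# kubectl.kubernetes.io/last-applied-configuration annotation;
# the API server rejects objects whose annotations exceed 262144
# characters, which is the limit quoted in the error above.
ANNOTATION_LIMIT = 262144

def fits_in_annotation(manifest: dict) -> bool:
    """True if the JSON-serialized manifest fits under the annotation cap."""
    return len(json.dumps(manifest, separators=(",", ":"))) <= ANNOTATION_LIMIT

small = {"apiVersion": "v1", "kind": "ConfigMap", "metadata": {"name": "x"}}
huge = {"spec": {"validation": "x" * 300_000}}  # stands in for a 1.3 MB CRD
print(fits_in_annotation(small), fits_in_annotation(huge))  # True False
```

kubectl create skips this annotation entirely, which is why it succeeds where apply fails.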
@ghost ghost added the needs-triage New issues, in need of classification label Jan 27, 2020
@jpkrohling jpkrohling added duplicate This issue or pull request already exists and removed needs-triage New issues, in need of classification labels Jan 27, 2020
jpkrohling (Contributor) commented Jan 27, 2020

See #854. In short, use kubectl create -f ..., or remove the validation node if you absolutely have to use kubectl apply.

haf (Author) commented Jan 27, 2020

This is not a good enough solution if kustomize uses apply and I'm expected to continuously re-run apply from a scheduled job (as I am). Could you at least ship a CRD definition without the validation node, so that the kubectl tooling works as expected?

jpkrohling (Contributor) commented:

If there are more people in need of this, we can certainly consider providing a CRD that shouldn't be used for production purposes.

haf (Author) commented Jan 27, 2020

Could you suggest how to do GitOps with your solution, then?

jpkrohling (Contributor) commented:

What prevents you from storing your own modified version of the CRD? Or from piping the result of cURL through a JSON converter and jq? I can think of quite a few possibilities, but as you own your environment, you'd be the best person to come up with solutions ;-)

haf (Author) commented Jan 28, 2020

  • Repeatability.
  • Documentation requirements.
  • Reproducibility: jq is not a default tool on Windows, which is part of our use case for developer environments, and even less so yq or python, which are commonly used to transform YAML to JSON and then filter. Your expectations of our environment don't match the real world; it has to run on Linux, Windows and macOS.
  • Resilience to change: if I customise here, you'll break me in the future for sure. This is an anti-pattern; I've worked in places where most things boiled down to "just add a button for ops to click" — it could be everything from clearing a cache to triggering a fetch of some external resource — and in the end everyone ended up afraid of changing the system. There's a lot of work on this under resilience engineering.
  • Expectations for people unfamiliar with this limitation.
  • Expectations: it wouldn't make sense to include validation in the schema if it's as utterly useless as you make it sound.
  • Expectations: as it stands, I don't know what I'm missing out on. Does this mean that the validation hooks don't validate the Jaeger schema as stringently? This is not explained in the issue you link to, so I have no basis for deciding to cut out what seems to me to be a critical bit of config.
  • Expectations: if you want me to customise the CRD declaration, why doesn't everyone? It's not only in Jaeger that I've seen PVC/volume support — why doesn't e.g. ECK/Elastic's operator have the same limitation?

Anecdotally: I'm storing modified versions of tooling output, e.g. for the Istio mesh we have configured, because they have a bug (istio/istio#20082) which seems to be extra problematic when its output is applied to already-applied k8s state, triggering a much worse bug, a grey failure (istio/istio#20454). Every bug report in that repo is preceded by a 1-3 week triage phase where they ensure it's not "operator error"; sometimes it is, but as in the above bugs, it also makes the DX suck.

But why am I stupid enough to apply the output of the tooling (istioctl) straight to the cluster? Because the documentation says that's how it's done, and it doesn't explain the link: the Helm chart is inlined in the tool and can actually be used to generate k8s manifests. They've made what was previously explicit, implicit, and this makes people make mistakes. Your suggestion is exactly this: let's work around a bug in an upstream project with a hack, and it will bite someone in the ass sooner or later ;)

jpkrohling (Contributor) commented Jan 28, 2020

By suggesting that we keep an alternate version of the CRD for your use case, you are putting the burden of maintenance on us. If there are enough users with the same use case as yours, I don't have a problem keeping the special version in our repository, but this is the first report we've received saying that kubectl create can't be used instead of kubectl apply.

haf (Author) commented Jan 28, 2020

this is the first case that we're receiving

In fact, almost no one I've worked with in enterprises (presumably your market/audience) writes issues on GitHub.

If you think that "being able to deploy with kustomize" is only my use case, I think you need to take a look around you ;) It's built into kubectl nowadays!

haf (Author) commented Jan 28, 2020

Also, I think this is the first issue since I'm pulling from master and not your latest deploy tag. Canaries, and what not ;)

haf (Author) commented Jan 28, 2020

Thirdly, I count this as the second issue, right? (It's labelled a duplicate.)

jpkrohling (Contributor) commented:

The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.

In any case, could you provide the kustomize commands you are using?

jpkrohling (Contributor) commented:

For reference, here are the upstream issues relevant to this:
operator-framework/operator-sdk#2285 (comment)
kubernetes-sigs/kubebuilder#1140

haf (Author) commented Jan 28, 2020

.PHONY: view
view:
	@kustomize build k8s/dev

.PHONY: view_test
view_test:
	@kustomize build k8s/test

.PHONY: crds
crds:
	sh -c '[ -f k8s/base/crd-full.yaml ]' || curl --output k8s/base/crd-full.yaml https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
	kubectl get crd | grep jaegers.jaegertracing.io || kubectl create -f k8s/base/crd-full.yaml

.PHONY: template
template:
	curl --output k8s/base/crd.yaml $(BASE_URI)/crds/jaegertracing_v1_jaegers_crd.yaml
	curl --output k8s/base/sa.yaml $(BASE_URI)/service_account.yaml
	curl --output k8s/base/role.yaml $(BASE_URI)/role.yaml
	curl --output k8s/base/role-binding.yaml $(BASE_URI)/role_binding.yaml
	curl --output k8s/base/operator.yaml $(BASE_URI)/operator.yaml

.PHONY: deploy_dev
deploy_dev: crds
	kustomize build k8s/dev | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system

.PHONY: deploy_test
deploy_test: crds
	kustomize build k8s/test | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system

pavolloffay (Member) commented:

One idea I have tried is to remove repetitive nodes and use YAML anchors/aliases: https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd

However, even after handling the volumes nodes, the file is still too large.

crd-yaml.txt

nouney commented Jan 29, 2020

The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.

kubectl create and kubectl apply are not the same. I only use kubectl apply and now my workflow is broken because of the CRD.

haf (Author) commented Jan 29, 2020

@pavolloffay I don't think that will help, since the k8s API server writes the annotation (the problem area here) as JSON, which doesn't have pointers.
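A minimal sketch of this point, assuming the parsed YAML yields shared objects for anchor/alias pairs (the names are illustrative):

```python
import json

# A node referenced twice, as a YAML anchor/alias pair would produce
# after parsing: both keys point at the same Python object.
volumes_schema = {"type": "array", "items": {"type": "object"}}
doc = {"collector": {"volumes": volumes_schema},
       "query": {"volumes": volumes_schema}}

serialized = json.dumps(doc)
# JSON has no aliasing, so the shared node is written out in full at
# each reference; anchors save nothing once the object lands in the
# last-applied-configuration annotation.
print(serialized.count('"type": "array"'))  # 2
```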

jpkrohling (Contributor) commented:

I only use kubectl apply and now my workflow is broken because of the CRD.

What does your workflow look like? Are you also using kustomize?

@jpkrohling jpkrohling added the needs-info We need some info from you! If not provided after a few weeks, we'll close this issue. label Feb 5, 2020
jpkrohling (Contributor) commented:

I'm reopening, but marking this as "needs-info" (see my last comment). I have also yet to test kustomize myself, as I'm really not familiar with the tool.

@jpkrohling jpkrohling reopened this Feb 5, 2020
adamhosier commented:

We've worked around this by removing all "description" fields from the provided CRDs, based on a suggestion in the upstream Kubebuilder issue: kubernetes-sigs/kubebuilder#1140

Would you consider publishing the CRDs without the description fields to allow them to be compatible with kubectl apply out of the box?
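A minimal sketch of this kind of post-processing, assuming the CRD has already been parsed into plain data (strip_descriptions is a hypothetical helper, not part of any released tooling):

```python
import json

def strip_descriptions(node):
    """Recursively drop every 'description' key from a parsed CRD."""
    if isinstance(node, dict):
        return {k: strip_descriptions(v)
                for k, v in node.items() if k != "description"}
    if isinstance(node, list):
        return [strip_descriptions(item) for item in node]
    return node

# Toy stand-in for a parsed CRD validation schema.
schema = {
    "openAPIV3Schema": {
        "description": "Jaeger is the Schema for the jaegers API",
        "properties": {
            "spec": {"description": "JaegerSpec ...", "type": "object"},
        },
    }
}
slim = strip_descriptions(schema)
print('"description"' in json.dumps(slim))  # False
```

Since descriptions carry no validation semantics, dropping them shrinks the file without loosening the schema.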

jpkrohling (Contributor) commented:

Absolutely. Is it required only for fields in the JaegerCommonSpec, or should it be disabled for all types?

adamhosier commented:

We have it disabled for all fields, but happy to work with a subset of this if it can drop the size down enough.

jpkrohling (Contributor) commented:

I agree in principle, especially because we can build a document elsewhere with the documentation for those fields. Would you like to send in a PR with the required changes?

jpkrohling (Contributor) commented:

Another related issue: kubernetes/kubernetes#82292

Given the number of people that seem to be affected by this one, I'll try to apply the suggested workaround by disabling the description fields.

@jpkrohling jpkrohling added bug Something isn't working and removed needs-info We need some info from you! If not provided after a few weeks, we'll close this issue. labels Feb 28, 2020
jpkrohling (Contributor) commented:

@haf, @adamhosier, @nouney, and others in this issue: would you be able to test the CRD that is part of #932? It is now about 400 KB, down from 1.3 MB. Locally, I'm able to run kubectl apply with this CRD, but I need your help to confirm that it works fine with your workflow/kustomize as well.

adamhosier commented:

@jpkrohling thanks for this! I've tested the updated CRD and it works fine with kustomize + apply.
