Too long CRD definition #873
Comments
See #854. In short, use `kubectl create` instead of `kubectl apply`.
This is not a good enough solution, if kustomize uses apply and if I'm expected to continuously re-run apply from a scheduled job (as I am). Can you at least ship a CRD definition without the validation node, so that the kubectl tooling works as expected?
If there are more people in need of this, we can certainly consider providing a CRD that shouldn't be used for production purposes.
Could you give a suggestion on how to do GitOps with your solution, then?
What prevents you from storing your own modified version of the CRD? Or from piping the result of … ?
Anecdotally: I'm storing modified versions of tooling output, e.g. for the Istio mesh that we have configured, because they have a bug (istio/istio#20082) which seems to be extra problematic when its output is applied to already-applied k8s state, triggering a much worse bug — grey failure — istio/istio#20454. Every bug report in that repo is preceded by a 1-3 week triage phase where they ensure it's not "operator error"; which it sometimes is, but as in the above bugs, it also makes the DX suck.

But why am I stupid enough to apply the output of the tooling (istioctl) straight to the cluster? Because the documentation says that's how it's done, and it doesn't explain the link: that the Helm chart is inlined in the tool and can actually be used to generate k8s manifests. They've made what was previously explicit implicit, and that makes people make mistakes. Your suggestion is exactly this: let's work around a bug in an upstream project with a hack, and it will bite someone in the ass sooner or later ;)
By suggesting that we should keep an alternate version of the CRD for your use case, you are putting the maintenance burden on us. If there are enough users with the same use case as you, I don't have a problem keeping the special version in our repository, but this is the first report we're receiving saying that `kubectl create` can't be used instead of `kubectl apply`.
In fact, everyone I've worked with in enterprises (presumably your market/audience) doesn't write issues on GitHub. If you think that "being able to deploy with kustomize" is only my use case, I think you need to take a look around you ;) It's built into kubectl nowadays!
Also, I think this is the first issue since I'm pulling from master and not your latest deploy tag. Canaries, and what not ;) |
Thirdly, I count this as the second issue, right? # duplicate
The issue has been reported a few times in different channels, but everyone else seems to be happy with … In any case, could you provide the … ?
For reference, here are the upstream issues relevant to this: |
```makefile
.PHONY: view
view:
	@kustomize build k8s/dev

.PHONY: view_test
view_test:
	@kustomize build k8s/test

.PHONY: crds
crds:
	sh -c '[ -f k8s/base/crd-full.yaml ]' || curl --output k8s/base/crd-full.yaml https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
	kubectl get crd | grep jaegers.jaegertracing.io || kubectl create -f k8s/base/crd-full.yaml

.PHONY: template
template:
	curl --output k8s/base/crd.yaml $(BASE_URI)/crds/jaegertracing_v1_jaegers_crd.yaml
	curl --output k8s/base/sa.yaml $(BASE_URI)/service_account.yaml
	curl --output k8s/base/role.yaml $(BASE_URI)/role.yaml
	curl --output k8s/base/role-binding.yaml $(BASE_URI)/role_binding.yaml
	curl --output k8s/base/operator.yaml $(BASE_URI)/operator.yaml

.PHONY: deploy_dev
deploy_dev: crds
	kustomize build k8s/dev | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system

.PHONY: deploy_test
deploy_test: crds
	kustomize build k8s/test | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system
```
One idea I have tried is to remove repetitive nodes and use pointers (YAML anchors and aliases): https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd However, after handling …
@pavolloffay I don't think that will help since the k8s API server writes the annotation (the problem area here) as JSON, which doesn't have pointers. |
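A minimal sketch of why anchors don't help here (Python, with made-up field names): YAML anchors deduplicate repeated nodes in the YAML text, but once the object is stored as JSON — as in the `last-applied-configuration` annotation — every reference is serialized in full, so the savings vanish.

```python
import json

# A schema fragment that YAML anchors/aliases would let you write once.
shared = {"type": "string", "description": "a long description " * 50}

# Reference the same object from many fields, as YAML aliases would.
fragment = {"properties": {f"field{i}": shared for i in range(10)}}

# JSON has no anchors or pointers: each reference is expanded in full,
# so the serialized size grows with every use of the shared fragment.
print(len(json.dumps(shared)))    # size of a single copy
print(len(json.dumps(fragment)))  # roughly 10x the single-copy size
```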
How does your workflow look like? Are you also using kustomize? |
I'm reopening, but marking this as "needs-info" (see my last comment). I'm also yet to test kustomize myself, as I'm really not familiar with the tool. |
We've worked around this by removing all "description" fields from the provided CRDs, based on a suggestion in the upstream Kubebuilder issue: kubernetes-sigs/kubebuilder#1140. Would you consider publishing the CRDs without the description fields, to make them compatible with … ?
Absolutely. Is it required only for fields in the |
We have it disabled for all fields, but happy to work with a subset of this if it can drop the size down enough. |
I agree in principle, especially because we can build a document elsewhere with the doc for those fields. Would you like to send in a PR with the required changes? |
Another related issue: kubernetes/kubernetes#82292. Given the number of people that seem affected by this one, I'll try to apply the suggested workaround by disabling the description fields.
@haf, @adamhosier, @nouney, and others in this issue: would you be able to test the CRD that is part of #932? It is now about 400k, down from 1.3M. Locally, I'm able to issue …
@jpkrohling thanks for this! I've tested the updated CRD and it works fine with kustomize + apply.
From master, right now, onto docker-desktop on macOS.

Repro:

```shell
curl --silent -L https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml \
  > k8s/base/jaegertracing.io_jaegers_crd.yaml
kubectl apply -f k8s/base/jaegertracing.io_jaegers_crd.yaml
```