BUG: data too long issue rendering #923
Thanks for submitting this issue! There are two things that cause this bug:
We used tar + gz to create some of these resources, and that might be necessary here... unless even that is too big.
We have a couple of options to choose from (thanks @varshaprasad96 and @komish!):
For (2) we do that for the test registry:
It seems really strange to me that we are offloading CRD creation/management to Helm. Should we possibly look into a 4th option of managing the CRDs ourselves, or is that out of scope (effort/timeline)?
Adding some thoughts based on initial findings and discussion with @itroyano:

Option 3:

Option 2: The reason it works in the e2e, I think, is because Kaniko uncompresses the tarball from its context. Based on a quick glance, when we provide a tarball as the context for Kaniko, it extracts the contents of the tarball and then proceeds to build the Docker image using the extracted files. That means the manifests passed for the chart's creation are still uncompressed. The other option here, as alluded to by @joelanford, was to reimplement the whole secret driver - the code available here: https://github.com/helm/helm/blob/1a500d5625419a524fdae4b33de351cc4f58ec35/pkg/storage/driver/secrets.go. Re-implementing it with additional compression, or even creating shards, would probably take more effort on our side, and maintaining it could be an additional problem.

Option 1: Helm by default does not manage the lifecycle of CRDs. If the CRDs are stored in a separate

a. Handle CRDs on our own - with a kubectl apply

Implementing (a) and (b) are synonymous imo. The only caveat is that there shouldn't be an edge case where the CRDs themselves exceed the allowable size in Helm. I'm not sure that is even a best practice, given the other practical concerns such huge CRDs have on performance, caching, and (probably) etcd limits. Both of these methods - which have us manage CRDs separately from the manifests - bring in two concerns:
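To make Option 1 concrete: handling CRDs ourselves implies partitioning the rendered multi-document manifest into CRD documents (applied directly, e.g. via kubectl or server-side apply) and everything else (left to Helm). A minimal sketch of that partitioning, assuming a hypothetical `splitCRDs` helper; it matches on a substring purely for brevity, whereas real code should decode each YAML document properly:

```go
package main

import (
	"fmt"
	"strings"
)

// splitCRDs partitions a multi-document YAML manifest into CRD
// documents and all remaining documents. Illustrative only: it
// relies on naive string matching instead of a YAML parser.
func splitCRDs(manifest string) (crds, others []string) {
	for _, doc := range strings.Split(manifest, "\n---\n") {
		doc = strings.TrimSpace(doc)
		if doc == "" {
			continue
		}
		if strings.Contains(doc, "kind: CustomResourceDefinition") {
			crds = append(crds, doc)
		} else {
			others = append(others, doc)
		}
	}
	return crds, others
}

func main() {
	manifest := "apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n---\napiVersion: v1\nkind: Service"
	crds, others := splitCRDs(manifest)
	fmt.Println(len(crds), len(others))
}
```

This also illustrates the concern raised above: the split only helps if the non-CRD remainder (and the CRDs themselves) each fit under the size limit.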
Both options (2) and (3) (i.e. reimplementing the secret driver or separating CRDs out into another chart) come with their own maintenance challenges. The decision is to choose the one that is easier for us to implement and manage.
I tinkered on this today and came up with a custom driver in helm-operator-plugins that:
In theory, it could handle up to
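The core of such a sharding driver is splitting the (already compressed) release blob into chunks that each fit in one Secret, then reassembling them on read. A sketch of the chunking step; the chunk size and the Secret-per-chunk naming scheme are assumptions for illustration, not the actual driver in helm-operator-plugins:

```go
package main

import "fmt"

// maxSecretChunk is a conservative per-Secret payload size, chosen
// (as an assumption) to leave headroom under etcd's default ~1 MiB
// object limit for the Secret's own metadata.
const maxSecretChunk = 512 * 1024

// shard splits release data into chunks of at most size bytes.
// A custom storage driver could store chunk i in a Secret named
// e.g. "<release>.v<N>.<i>" and concatenate the chunks on Get.
func shard(data []byte, size int) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		n := size
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	blob := make([]byte, 3*maxSecretChunk+100)
	chunks := shard(blob, maxSecretChunk)
	fmt.Println(len(chunks))
}
```

The trade-off, as noted earlier in the thread, is that this reimplements behavior Helm's built-in secrets driver already owns, and we would carry the maintenance burden for it.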
User story

Issue created due to a mention in Slack.

As a recurrent tester on this project (because I like it), I test the operators I want on my stack. At the moment I have an issue with the following manifest on 0.10: an OLMList of issues, with the error message rendered.