
newline discrepancy in CRDs generated by controller-gen #514

Closed

marcoderama opened this issue Nov 4, 2020 · 17 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@marcoderama

Using controller-gen v0.4.0 (and older versions) for CRD generation on Ubuntu, long descriptions are all on one line. On a Mac, they're broken across multiple lines. While both are equivalent from a YAML perspective, they're very different as far as git is concerned. This means that if dev A on Ubuntu commits a CRD, and dev B on macOS then makes what is effectively a small change to the YAML, the git diff is huge and completely obscures the actual change. (And obviously the same happens if the order is dev B then dev A.)

This makes it very difficult to review/understand CRD diffs and confuses devs.

Is there a way to make the outputs consistent? If not, then I'd consider this a bug.

As an example, here's what happens to one of the descriptions in our CRD when going from generated-on-Mac to generated-on-Ubuntu. There is no difference YAML-wise, but the git diff includes:

           kind:
-            description: 'Kind is a string value representing the REST resource this
-              object represents. Servers may infer this from the endpoint the client
-              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
@marcoderama (Author) commented Nov 4, 2020

Fwiw, I tried
controller-gen crd:maxDescLen=1000 ...
on a Mac and the line breaks were still there.

@derekperkins

We have the same issues in v0.3.0

@marcoderama (Author)

I guess we can just run the controller-gen output through a YAML pretty-printer before committing it. It would seem reasonable to assume that's not necessary though.
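
For illustration, a minimal normalization sketch along those lines; this is not part of controller-gen, and the file path and the choice of gopkg.in/yaml.v2 are assumptions. The idea is to round-trip the generated CRD through one pinned YAML encoder, so the committed file has the same formatting no matter which machine ran the generator.

// normalize.go: round-trip a generated CRD through a single pinned YAML
// encoder so the committed output is identical across machines. Illustrative
// sketch only; "config/crd/bases/example.yaml" is a placeholder path.
package main

import (
	"os"

	yaml "gopkg.in/yaml.v2" // pin one version in go.mod so formatting is fixed
)

func main() {
	path := "config/crd/bases/example.yaml"

	in, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Decode into MapSlice so key order is preserved, then re-encode.
	// The rewritten file reflects only this encoder's wrapping behavior,
	// not that of whichever encoder the local controller-gen was built with.
	var doc yaml.MapSlice
	if err := yaml.Unmarshal(in, &doc); err != nil {
		panic(err)
	}
	out, err := yaml.Marshal(doc)
	if err != nil {
		panic(err)
	}

	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}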

@marcoderama (Author)

This happens after running make manifests, as set up by kubebuilder. I assumed it was due to controller-gen, but it turns out the culprit is kustomize.
cf. kubernetes-sigs/kustomize#947

Sorry for casting aspersions your way.

(And working around it isn't a simple matter of piping things through yq (yet). See mikefarah/yq#573)

@zhangtbj commented Mar 17, 2021

Hi all,

I also hit this problem on my Mac (no line breaks), but my colleagues all generate output normally, WITH line breaks.

I also tried on my Ubuntu VM; it generates with line breaks as well.

I don't know what's happening on my Mac. I upgraded my Go to 1.15 and tried upgrading/downgrading the controller-gen version, but none of that worked.

Do you have any other idea?

Thanks!

@PaulsBecks

Hey all,

I'm hitting this problem on my Mac too, even though my colleagues are not. @zhangtbj, did you find a solution?

Thanks

@zhangtbj

Hi @PaulsBecks ,

I didn't fix it on my Mac; I switched to generating on my Ubuntu VM... :(

@imjasonh

This still seems to be happening, without involving kustomize. On macOS, YAML is generated with line wrapping, and on Ubuntu it's generated without line wrapping.

Does anybody have any ideas what might cause this discrepancy?

@dims (Member) commented Jul 12, 2021

/reopen

@k8s-ci-robot (Contributor)

@dims: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@akram commented Sep 21, 2021

This requires kubernetes-sigs/yaml to bump its gopkg.in/yaml.v2 dependency to v2.4.0. Actually, v2.3.0 would already fix it, but it would introduce many regressions into Kubernetes, so v2.4.0 reverted the fix and instead introduced the method yaml.FutureLineWrap to disable line wrapping.
Once kubernetes-sigs/yaml publishes a new version, controller-gen will need to call yaml.FutureLineWrap to globally disable line wrapping.
https://github.com/go-yaml/yaml/blob/7649d4548cb53a614db133b2a8ac1f31859dda8c/yaml.go#L476

Edit: kubernetes-sigs/yaml released v1.3.0, which allows us to fix this issue. Could someone please review #623.
cc @joelanford @dims @imjasonh
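
For reference, a minimal sketch of what a yaml.FutureLineWrap call looks like with gopkg.in/yaml.v2 v2.4.0 or later. This only illustrates the API described above, not the actual change proposed in #623; the sample description string is taken from the diff earlier in this issue.

// Once FutureLineWrap is called, the yaml.v2 encoder stops wrapping long
// scalars at ~80 columns, so long CRD description strings are emitted on a
// single line regardless of which machine runs the generator.
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// Global, process-wide switch; affects all subsequent Marshal calls.
	yaml.FutureLineWrap()

	doc := map[string]string{
		"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to.",
	}

	out, err := yaml.Marshal(doc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // one long line, no wrapping
}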

@antonu17

I can confirm the issue on my MacBook. I managed to mitigate it like this:

  1. update gopkg.in/yaml.v2 to the latest
  2. rebuild controller-gen

Example:

go get gopkg.in/yaml.v2
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.1

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 14, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
