
Abandon checked-in vendor directory in Cluster Autoscaler #4878

Closed

x13n opened this issue May 10, 2022 · 25 comments
Labels

area/cluster-autoscaler
area/core-autoscaler: Denotes an issue that is related to the core autoscaler and is not specific to any provider.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@x13n (Member) commented May 10, 2022

Which component are you using?:

cluster-autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Even though Go modules are used in the cluster-autoscaler repository, we still track the vendor/ directory in git source control, which can lead to issues like #4875, where a bug was fixed directly in vendor/ instead of bumping the module version. Also, reviewing vendor version bumps is not really feasible for humans, with thousands of lines of code changing all at once. If we relied on go.mod alone, we wouldn't have these issues.

Describe the solution you'd like.:

Drop the vendor/ dir; use go.mod only.

Describe any alternative solutions you've considered.:

A presubmit check verifying vendor/ contents. This is my second choice if it turns out we really need to keep the vendor dir for some reason.
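
For illustration, a minimal sketch of what such a presubmit check could look like, assuming standard Go and git tooling; the script name hack/verify-vendor.sh is hypothetical, not an existing Cluster Autoscaler script:

```sh
#!/usr/bin/env bash
# hack/verify-vendor.sh (hypothetical name): fail the presubmit if vendor/ drifts from go.mod.
set -euo pipefail

# Regenerate vendor/ from go.mod, then check whether anything changed or was added.
go mod vendor

if [ -n "$(git status --porcelain -- vendor/ go.mod go.sum)" ]; then
  echo "vendor/ is out of sync with go.mod; run 'go mod vendor' and commit the result." >&2
  git status --short -- vendor/ go.mod go.sum >&2
  exit 1
fi
```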

Additional context.:

x13n added the kind/feature label May 10, 2022
@MaciekPytel (Contributor)

We needed vendor/ back when we used our own custom-made dependency management system. But I think you have a good point here - I don't really see any reason why we need a materialized vendor/ anymore. +1 for this idea.

@towca @feiskyer WDYT?

@x13n (Member Author) commented May 25, 2022

/help

@k8s-ci-robot (Contributor)

@x13n:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the help wanted label May 25, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Aug 23, 2022
@x13n (Member Author) commented Aug 23, 2022

/priority important-longterm
/remove-lifecycle stale

k8s-ci-robot added the priority/important-longterm label and removed the lifecycle/stale label Aug 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Feb 8, 2023
@x13n (Member Author) commented Feb 8, 2023

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label Feb 8, 2023
@fmuyassarov (Member)

@x13n Can I help with this issue?

@x13n (Member Author) commented May 24, 2023

Yes, definitely! Let me know if you need any help.

@elmiko (Contributor) commented Jun 2, 2023

i suppose it's not a big deal to the sig autoscaling community, but removing the vendor folder does cause headaches for downstream maintainers who rely on these artifacts in their build pipelines.

can we at least give a deprecation warning of some sort, perhaps 1 release, before we take this away?

@x13n (Member Author) commented Jun 2, 2023

Thanks @elmiko for chiming in. Could you elaborate a bit on what the use case here looks like? go.mod should be the authoritative source of deps already, with vendor dir being derived and subject to change with go.mod updates.

@elmiko (Contributor) commented Jun 2, 2023

go.mod should be the authoritative source of deps already, with vendor dir being derived and subject to change with go.mod updates.

yes, totally agree. the go.mod should be fine, but it will take us time to change our automation so that we can download the vendor files at the proper state in our release process. currently, we rely on the vendor folder present in the upstream to do our builds; in the future we will need to ensure that we vendor those dependencies on our own before we do a final build, since our build system does not allow downloading external dependencies during the release process.
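
for context, a sketch of how a locked-down build can stay fully offline without a checked-in vendor/ folder, using standard Go module tooling (the two-phase split here is illustrative of the kind of automation change meant above, not our actual pipeline):

```sh
# Phase 1 (network allowed): pre-fetch all dependencies into the local module cache.
go mod download

# Phase 2 (release build, network forbidden): GOPROXY=off makes any download attempt
# fail, so the build can only use what is already in the module cache.
GOFLAGS=-mod=mod GOPROXY=off go build ./...
```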

i have a feeling that other downstreams who have tight build-control processes to prevent supply-chain injection will do something similar. for us, we will need to ensure that the mirror-fork we keep of the upstream has a vendor folder present before we send it to our release mechanisms. this is not a problem in itself, but it will take us some time to fix our processes. if we have a release cycle to plan for it, i'm sure that would be enough for us to make the changes.

my main concern here is removing the vendor folder without giving the community some advance notice about it, since it will have transitive effects. in general, i don't have an objection to removing the folder (even though it does mean a little more work for us), if the community has a strong desire to remove it from the repository. we certainly deal with other upstreams that have done something similar.

@x13n (Member Author) commented Jun 2, 2023

Ack, a heads up sounds reasonable. I wonder what would be the best channel for this, though - release notes are not the best place, I think, since there's no way of opting in or out of such a change, and we'd need to wait 1.5 releases with this change here. SIG mailing list?

we will need to ensure that the mirror-fork we keep of the upstream has a vendor folder present before we send it to our release mechanisms

Sounds like running go mod vendor after fetching upstream changes would suffice here?

@elmiko (Contributor) commented Jun 2, 2023

Ack, a heads up sounds reasonable. I wonder what would be the best channel for this, though - release notes are not the best place, I think, since there's no way of opting in or out of such a change, and we'd need to wait 1.5 releases with this change here. SIG mailing list?

mailing list is good, also maybe an echo statement in the make build target warning about the impending vendor removal?
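
something along these lines, perhaps (a rough sketch only; the target name and wording are illustrative, not the actual cluster-autoscaler Makefile):

```makefile
# Hypothetical fragment of the build target: warn users who still rely on vendor/.
build:
	@echo "WARNING: the checked-in vendor/ directory is deprecated and will be removed"
	@echo "WARNING: in a future release; run 'go mod vendor' if your build depends on it."
	go build -o cluster-autoscaler
```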

Sounds like running go mod vendor after fetching upstream changes would suffice here?

absolutely, we just need to coordinate adding that to our forking/rebasing automation.
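
for example, roughly this kind of step in the automation (the remote and branch names here are hypothetical):

```sh
# After syncing the mirror-fork with upstream, re-materialize vendor/ locally.
git fetch upstream
git rebase upstream/master
cd cluster-autoscaler
go mod vendor      # regenerate vendor/ from go.mod before handing off to release builds
git add vendor/
git commit -m "Re-vendor dependencies for downstream build"
```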

@x13n (Member Author) commented Jun 2, 2023

Thanks, I like the echo statement idea. Since we're over 2 months from 1.28 k8s release, I suggest the following timeline:

  1. [now] Add the echo statement.
  2. [now] Send email to the SIG list.
  3. [in 2 months, before 1.28 release] Merge the PR removing vendor dir (and dropping the echo statement)

Does that make sense?

@elmiko (Contributor) commented Jun 2, 2023

Does that make sense?

yes it does, but i would ask that we do 1 and 2 now, and then do 3 when the 1.29 branch opens. that way we announce deprecation in 1.28, and actually deprecate in 1.29.

fwiw, at red hat (and perhaps other places) we do our rebases for release just after the kubernetes releases are made. removing just before a release adds extra work on our end; removing at the beginning of the release cycle gives us much more time to be prepared.

@x13n (Member Author) commented Jun 2, 2023

Ok, let's make it 3 months then. It means we may hit some incompatibilities in the vendor dir when releasing CA 1.28, but hopefully that'll be the last time.

@fmuyassarov - would you like to add the echo statement? Your #5807 would have to be put on hold for a while (and it should then remove the statement).

@elmiko (Contributor) commented Jun 2, 2023

thank you @x13n , i appreciate the discussion and consideration =)

@fmuyassarov (Member) commented Feb 27, 2024

Thanks, I like the echo statement idea. Since we're over 2 months from 1.28 k8s release, I suggest the following timeline:

  1. [now] Add the echo statement.
  2. [now] Send email to the SIG list.
  3. [in 2 months, before 1.28 release] Merge the PR removing vendor dir (and dropping the echo statement)

Does that make sense?

Hi folks. I'm back on this issue. I have followed the steps discussed above.

@fmuyassarov (Member)

/assign

@x13n (Member Author) commented Mar 6, 2024

Thanks for following up on this!

towca added the area/core-autoscaler label Mar 21, 2024
@fmuyassarov (Member)

Hi @x13n, @elmiko, @vadasambar
We have now reached the consensus date, which was the end of April, and I have rebased #6572. Would you mind taking a look at #6572 and merging it if it looks good to you?

@elmiko (Contributor) commented May 7, 2024

thanks @fmuyassarov !

@fmuyassarov (Member)

I think we can close the issue now, as all the steps are completed.
Feel free to reopen it in case I missed something.
/close

@k8s-ci-robot (Contributor)

@fmuyassarov: Closing this issue.

In response to this:

I think we can close the issue now, as all the steps are completed.
Feel free to reopen it in case I missed something.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
