
Add kep to move kubeproxy out of tree (to staging) #2635

Closed

Conversation

rikatz
Contributor

@rikatz rikatz commented Apr 18, 2021

This PR adds the KEP to move kubeproxy to staging, letting people vendor it without vendoring the whole kubernetes/kubernetes repo
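For background, the vendoring pain this targets looks roughly like the sketch below; the module name `example.com/my-proxier` and the version numbers are illustrative only, not taken from the KEP.

```
// go.mod of a hypothetical out-of-tree consumer today: importing kube-proxy
// internals means depending on the whole k8s.io/kubernetes module and pinning
// every staging module by hand, because k/k's own go.mod points them at
// v0.0.0 replace targets.
module example.com/my-proxier

go 1.16

require k8s.io/kubernetes v1.21.0

replace (
	k8s.io/api => k8s.io/api v0.21.0
	k8s.io/apimachinery => k8s.io/apimachinery v0.21.0
	k8s.io/client-go => k8s.io/client-go v0.21.0
	// ...and so on for every other k8s.io/* staging module that k/k requires.
)

// With the proxy code published as its own staging module, the same consumer
// would ideally only need something like:
//
//	require k8s.io/kube-proxy v0.21.0
```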

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Apr 18, 2021
@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 18, 2021
@rikatz
Contributor Author

rikatz commented Apr 18, 2021

/sig network
/assign @thockin @caseydavenport

Casey, Tim, can you be the reviewer/approver of this?

Thanks!!

@rikatz
Contributor Author

rikatz commented Apr 18, 2021

/cc @aojea @jayunit100

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rikatz
To complete the pull request process, please ask for approval from caseydavenport and additionally assign ehashman after the PR has been reviewed.
You can assign the PR to them by writing /assign @ehashman in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

kube-proxy already have its own repo. The proposal is to:

### Move the internal network dependencies that kube-proxy already has
kube-proxy have some internal dependencies, mostly notable in `kubernetes/pkg/utils`. We
Member

We need a really, really good test suite to make sure the dependencies we move out can be tested independently of Kubernetes. This is the same problem we had with sig-storage's attempt to move the mount utils out of k/k; eventually we had to bring them back into staging.

Member

I think the proposal is the same as for mount utils: create a new staging repo, staging/src/k8s.io/network-utils

Member

@aojea line 87 is a section that says UNRESOLVED repository for network utils :)

Contributor Author

Thanks @dims

@jayunit100 @aojea you are my references for testing things, I could use some help here! :)

Contributor Author

Yes, so here's my point. We need to define where all of those things need to go. My proposal is to move them to network-utils, BUT since we need to create a KEP for moving kube-proxy to staging, I want to use it to also discuss where everything should live.

async is the hardest example: @danwinship opened that PR in utils, but by the utils guidelines async should not be there (because it vendors k8s.io/client-go), so maybe component-helpers?

We can use this KEP to justify why we are moving things the way we are :)
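For context, here is a heavily simplified sketch of the kind of dependency at issue: the async helper wraps client-go's flowcontrol rate limiter, and k8s.io/utils is meant to stay free of dependencies on other Kubernetes modules, so it cannot simply move there. The real BoundedFrequencyRunner API is richer than what is shown.

```go
// Heavily simplified sketch of pkg/util/async's core idea; the client-go
// import below is the dependency that rules out k8s.io/utils as a home.
package async

import (
	"sync"

	"k8s.io/client-go/util/flowcontrol"
)

// BoundedFrequencyRunner runs fn at most as often as a token-bucket rate
// limiter allows (the real type also handles min/max intervals and retries).
type BoundedFrequencyRunner struct {
	mu      sync.Mutex
	fn      func()
	limiter flowcontrol.RateLimiter
}

func NewBoundedFrequencyRunner(fn func(), qps float32, burst int) *BoundedFrequencyRunner {
	return &BoundedFrequencyRunner{
		fn:      fn,
		limiter: flowcontrol.NewTokenBucketRateLimiter(qps, burst),
	}
}

// Run invokes fn if the limiter currently has budget for another run.
func (r *BoundedFrequencyRunner) Run() {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.limiter.TryAccept() {
		r.fn()
	}
}
```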

Contributor Author

I'm a huge +1 to moving the networking utils to k/net-utils

Member

I'd like to avoid more catch-all "utils" packages if we can avoid it, so we should consider each one independently.

Member

I know we KIND OF want to avoid polluting the "k8s.io" space (e.g. "k8s.io/iptables", "k8s.io/ipvs", "k8s.io/conntrack" all feel somewhat "big"). Additionally, we probably don't want to be telling the whole Go ecosystem "hey, use this library" because, frankly, we aren't putting that much thought into generalizing them.

Some alternatives to an omnibus "util" repo:

  1. Make lots of small repos in util.k8s.io/... - we can still keep the source of truth in k/k (the monorepo) and publish to these if we need. "util.k8s.io/iptables" feels less "endorsed".

  2. Make lots of small repos in internal.k8s.io - same as above but even more "warning".

  3. Adopt a convention that makes these libs feel less "big". E.g. "libiptables" or something?

Contributor

Make lots of small repos in internal.k8s.io - same as above but even more "warning".

That seems plausible...

Adopt a convention that makes these libs feel less "big"

I don't understand your comments about big-ness... Why do we care whether they feel big as opposed to caring whether they are big?

Member

This topic is on the agenda for sig-arch code organization next week (I just missed last week's)

@rikatz rikatz changed the title Add kep to move kubeproxy to staging Add kep to move kubeproxy out of tree (to staging) Apr 23, 2021
@wojtek-t wojtek-t self-assigned this May 10, 2021
Contributor

@sftim sftim left a comment

If a project vendors Kubernetes to import kubeproxy code, this will break them.
On the bright side, afterwards, these importers will have a much cleaner path to
include kubeproxy code. Before moving forward with this plan, we will identify
and communicate to these projects.

SIG Docs' blog subproject can help - and so can the Kubernetes contributor comms team? Would someone like to draft a blog article about this change? Would anyone like help communicating this to the community?


## Proposal

kube-proxy already have its own repo. The proposal is to:
Contributor

nit:

Suggested change
kube-proxy already have its own repo. The proposal is to:
kube-proxy already has its own repository. The proposal is to:

Comment on lines +116 to +119
"If a project vendors Kubernetes to import kubeproxy code, this will break them.
On the bright side, afterwards, these importers will have a much cleaner path to
include kubeproxy code. Before moving forward with this plan, we will identify
and communicate to these projects." - Shameless copied from
Contributor

nit:

Suggested change
"If a project vendors Kubernetes to import kubeproxy code, this will break them.
On the bright side, afterwards, these importers will have a much cleaner path to
include kubeproxy code. Before moving forward with this plan, we will identify
and communicate to these projects." - Shameless copied from
> If a project vendors Kubernetes to import kubeproxy code, this will break them.
> On the bright side, afterwards, these importers will have a much cleaner path to
> include kubeproxy code. Before moving forward with this plan, we will identify
> and communicate to these projects.
–Shamelessly copied from

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 10, 2021
@rikatz
Contributor Author

rikatz commented Aug 10, 2021

/remove-lifecycle stale
/lifecycle active

@k8s-ci-robot k8s-ci-robot added lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. labels Nov 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 8, 2021
@thockin thockin removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 17, 2021
to future kube-proxy implementations to import this logic without needing to import the
whole Kubernetes code.

This was also motivated by [kubernetes #92369](https://github.com/kubernetes/kubernetes/issues/92369)
Member

Should we instead send people to kpng and try to make that the "source of truth" for kube-proxy logic?

Or maybe both? I still rather like the idea that kube-proxy gets more encapsulated, but ultimately we don't want multiple implementations of the endpoint selection logic (seeing how complex it has become), so kpng seems like the great leveller.

Contributor Author

No opposition to that (even if I had any, Jay would kill me!), and I also have the feeling that this enhancement would be much more of a buried-pipe change than a real need.

Originally this was created to make it easier for the CNI implementations, but I guess by now every one of the major ones already has its own logic/copy/etc., so does this still make sense?

vendor the whole kubernetes repo.

<<[UNRESOLVED repository for network utils ]>>
Although the majority of utils used by kube-proxy can be moved to component-helpers,
Member

let's NOT just keep dumping garbage into component-helpers


`kubernetes/kube-proxy/pkg/features` and changing on the code, and putting all the feature
gates referenced by kube-proxy there
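A minimal sketch of what such a features package could look like, assuming the standard k8s.io/component-base/featuregate machinery; the gate name below is purely illustrative and not part of the proposal:

```go
// Hypothetical kubernetes/kube-proxy/pkg/features: kube-proxy's feature gates
// declared in one place and registered with the shared default gate so the
// usual --feature-gates flag keeps working.
package features

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

const (
	// ExampleProxyFeature is a placeholder name used only for illustration.
	ExampleProxyFeature featuregate.Feature = "ExampleProxyFeature"
)

// defaultKubeProxyFeatureGates lists the gates kube-proxy would own.
var defaultKubeProxyFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	ExampleProxyFeature: {Default: false, PreRelease: featuregate.Alpha},
}

func init() {
	utilruntime.Must(utilfeature.DefaultMutableFeatureGate.Add(defaultKubeProxyFeatureGates))
}
```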

### Move pkg/proxy to staging
Member

Should we also do cmd/kube-proxy?

Contributor Author

Should we? Not sure. I think cmd/kube-proxy is in a good place right now, following the same approach as kubelet/scheduler/etc. that live there.

But you have been here a lot longer than I have, so I may be missing something really important!!

Member

IMO the split between cmd/kube-proxy and pkg/proxy is pretty much completely artificial. Nothing uses pkg/proxy except cmd/kube-proxy (as far as I can tell). Repeat for kubelet, controllers, etc. and their respective binaries. It just makes the code harder to find.

That said, this is FAR from the most important thing going on right now, so I don't know if it makes sense to do a bunch of busywork?
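For reference, the kube-proxy entrypoint is already roughly just a thin shim over the library code (simplified; the exact wiring differs across releases):

```go
// Approximately cmd/kube-proxy today: main only builds the cobra command from
// the app package and runs it.
package main

import (
	"os"

	"k8s.io/component-base/cli"
	"k8s.io/kubernetes/cmd/kube-proxy/app"
)

func main() {
	command := app.NewProxyCommand()
	os.Exit(cli.Run(command))
}
```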

Contributor Author

Yeah, so lately I'm wondering if it's worth creating a whole new KEP to split something that probably won't get much attention after kpng.

I still think we need to do some cleanup on pkg/proxy, but that can probably be done without a KEP, on a "found this, let's raise a PR and clean it up" basis rather than as a big batch approach discussed in a KEP.

So should we close this KEP?

@thockin
Member

thockin commented Feb 1, 2022 via email

@rikatz
Contributor Author

rikatz commented Feb 1, 2022

/close
👋

@k8s-ci-robot
Contributor

@rikatz: Closed this PR.

In response to this:

/close
👋

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/network Categorizes an issue or PR as relevant to SIG Network. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.