Add kep to move kubeproxy out of tree (to staging) #2635
Conversation
/sig network
Casey, Tim, can you be the reviewer/approver of this? Thanks!!
/cc @aojea @jayunit100
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rikatz
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
> kube-proxy already have its own repo. The proposal is to:
>
> ### Move the internal network dependencies that kube-proxy already has
> kube-proxy have some internal dependencies, mostly notable in `kubernetes/pkg/utils`. We
We need a really, really good test suite to make sure the dependencies we move out can be tested independently of kubernetes. This is the same problem we had with sig-storage's attempt to move mount utils outside of k/k, and eventually we had to bring it back into staging.
I think the proposal is the same as mount utils: create a new staging repo `staging/src/k8s.io/network-utils`.
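For illustration only, a minimal sketch of what such a move would mean for an out-of-tree consumer. The `k8s.io/network-utils` module path and package layout are assumptions (nothing is agreed yet); today the same helper is only reachable by depending on the whole monorepo:

```go
package main

import (
	// Today: the iptables helper lives in the monorepo, so any consumer must
	// require all of k8s.io/kubernetes (plus its replace directives).
	_ "k8s.io/kubernetes/pkg/util/iptables"

	// Hypothetical future: the same code published as a standalone staging
	// module, mirroring how k8s.io/mount-utils is consumed today.
	// _ "k8s.io/network-utils/iptables"
)

func main() {}
```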
@aojea line 87 is a section that says UNRESOLVED repository for network utils
:)
Thanks @dims
@jayunit100 @aojea you are my references for testing things, I could use some help here! :)
Yes, so here's my point. We need to define where all those things need to go. My proposal is to move them to network-utils, BUT as we need to create a KEP for moving kube-proxy to staging, I want to use this to also discuss where everything should live.
async is the hardest example: @danwinship opened that PR in utils, but by the utils guidelines async should not be there (because it vendors k8s.io/client-go), so maybe component-helpers?
We can use this KEP to justify why we are moving things the way we are :)
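For context, a small sketch of the `pkg/util/async` helper in question: `BoundedFrequencyRunner` is what kube-proxy uses to rate-limit its sync loop, and (as noted above) it pulls in client-go's flowcontrol rate limiter, which is why the k8s.io/utils guideline rules it out. The runner name, callback, and bounds below are illustrative:

```go
package main

import (
	"time"

	// BoundedFrequencyRunner internally uses client-go's flowcontrol rate
	// limiter, which is why it can't move to k8s.io/utils as-is.
	"k8s.io/kubernetes/pkg/util/async"
)

func main() {
	// Run syncSomething at most once per second and at least every 10 seconds,
	// roughly how kube-proxy drives its proxier sync loop.
	syncSomething := func() { /* resync state here */ }
	runner := async.NewBoundedFrequencyRunner("example-sync", syncSomething, time.Second, 10*time.Second, 2)

	stopCh := make(chan struct{})
	go runner.Loop(stopCh)

	runner.Run() // request a sync; the runner enforces the frequency bounds
	time.Sleep(2 * time.Second)
	close(stopCh)
}
```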
I'm a huge +1 to move the networking utils to k/net-utils
I'd like to avoid more catch-all "utils" packages if we can, so we should consider each one independently.
I know we KIND OF want to avoid polluting the "k8s.io" space (e.g. "k8s.io/iptables", "k8s.io/ipvs", "k8s.io/conntrack" all feel somewhat "big"). Additionally, we probably don't want to be telling the whole go ecosystem "hey, use this library" because, frankly, we aren't putting that much thought into generalizing them.
Some alternatives to an omnibus "util" repo:
- Make lots of small repos in `util.k8s.io/...` - we can still keep the source of truth in k/k (the monorepo) and publish to these if we need. "util.k8s.io/iptables" feels less "endorsed".
- Make lots of small repos in `internal.k8s.io` - same as above but even more "warning".
- Adopt a convention that makes these libs feel less "big". E.g. "libiptables" or something?
> Make lots of small repos in internal.k8s.io - same as above but even more "warning".

That seems plausible...

> Adopt a convention that makes these libs feel less "big"

I don't understand your comments about big-ness... Why do we care whether they feel big as opposed to caring whether they are big?
This topic is on the agenda for sig-arch code organization next week (I just missed last week's)
> If a project vendors Kubernetes to import kubeproxy code, this will break them.
> On the bright side, afterwards, these importers will have a much cleaner path to
> include kubeproxy code. Before moving forward with this plan, we will identify
> and communicate to these projects.
SIG Docs' blog subproject can help, and so can the Kubernetes contributor comms team. Would someone like to draft a blog article about this change? Would anyone like help communicating this to the community?
> ## Proposal
>
> kube-proxy already have its own repo. The proposal is to:
nit: "kube-proxy already have its own repo. The proposal is to:" → "kube-proxy already has its own repository. The proposal is to:"
"If a project vendors Kubernetes to import kubeproxy code, this will break them. | ||
On the bright side, afterwards, these importers will have a much cleaner path to | ||
include kubeproxy code. Before moving forward with this plan, we will identify | ||
and communicate to these projects." - Shameless copied from |
nit (suggested change):
> If a project vendors Kubernetes to import kubeproxy code, this will break them.
> On the bright side, afterwards, these importers will have a much cleaner path to
> include kubeproxy code. Before moving forward with this plan, we will identify
> and communicate to these projects.
>
> –Shamelessly copied from
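To make the "much cleaner path" concrete, here is a hedged sketch of the difference for an importer. The assumption (not decided by this KEP) is that the moved code would be published through the existing `k8s.io/kube-proxy` staging module the way other staged components are; the "after" import path below is hypothetical:

```go
package main

import (
	// Today: reaching kube-proxy code means requiring the whole k8s.io/kubernetes
	// module in go.mod, together with a replace directive for every k8s.io/*
	// staging component it references.
	_ "k8s.io/kubernetes/pkg/proxy/config"

	// After the move (hypothetical path): only the self-contained staging module
	// would need to be required, e.g. `require k8s.io/kube-proxy v0.x.y`.
	// _ "k8s.io/kube-proxy/pkg/proxy/config"
)

func main() {}
```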
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs; this bot triages them according to its lifecycle rules.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs; this bot triages them according to its lifecycle rules.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs; this bot triages them according to its lifecycle rules.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
> to future kube-proxy implementations to import this logic without needing to import the
> whole Kubernetes code.
>
> This was also motivated by [kubernetes #92369](https://github.com/kubernetes/kubernetes/issues/92369)
Should we instead send people to kpng and try to make that the "source of truth" for kube-proxy logic?
Or maybe both? I still rather like the idea that kube-proxy get more encapsulated, but ultimately we don't want multiple implementations of the endpoint selection logic (seeing how complex it has become), so kpng seems like the great leveller.
No opposition to that (even if I had, Jay would kill me!), and I also have the feeling that this enhancement would be much more a buried-pipe change than a real need.
Originally this was created to make it easier for the CNI implementations, but I guess now every one of the major ones already has their own logic/copy/etc, so does this still make sense?
> vendor the whole kubernetes repo.
>
> <<[UNRESOLVED repository for network utils ]>>
> Although the majority of utils used by kube-proxy can be moved to component-helpers,
let's NOT just keep dumping garbage into component-helpers
> `kubernetes/kube-proxy/pkg/features` and changing on the code, and putting all the feature
> gates referenced by kube-proxy there
>
> ### Move pkg/proxy to staging
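(Aside on the feature-gate item quoted above: a minimal sketch, with an illustrative gate name, of what a kube-proxy-local features package could look like if it follows the same component-base registration pattern the KEP alludes to. The `kubernetes/kube-proxy/pkg/features` location is still only the KEP's proposal, not something that exists today.)

```go
// Package features sketches a kube-proxy-local feature gate registry, mirroring
// how other staged components register their gates against component-base.
package features

import (
	"k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

const (
	// ExampleProxyGate stands in for whatever gates kube-proxy actually
	// references in k8s.io/kubernetes/pkg/features today.
	ExampleProxyGate featuregate.Feature = "ExampleProxyGate"
)

var defaultKubeProxyFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	ExampleProxyGate: {Default: true, PreRelease: featuregate.Beta},
}

func init() {
	// Register the gates with the process-wide default gate so existing
	// utilfeature.DefaultFeatureGate.Enabled(...) call sites keep working.
	runtime.Must(utilfeature.DefaultMutableFeatureGate.Add(defaultKubeProxyFeatureGates))
}
```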
Should we also do cmd/kube-proxy?
Should we? Not sure. I think that right now cmd/kube-proxy is in a good place where it is, following the same approach as kubelet/scheduler/etc that are there.
But you have been here a lot longer than I have, so I may be missing something really important!!
IMO the split between cmd/kube-proxy and pkg/proxy is pretty much completely artificial. Nothing uses pkg/proxy except cmd/kube-proxy (as far as I can tell). Repeat for kubelet, controllers, etc and their respective binaries. It just makes the code harder to find.
That said, this is FAR from the most important thing going on right now, so I don't know if it makes sense to do a bunch of busywork?
Yeah, so I've been wondering lately whether it's worth creating a whole new KEP to split something that probably won't get much attention after kpng.
I still think we need to do some cleanup on pkg/proxy, but probably this can be done without a KEP, on a "found this, let's raise a PR and clean it" basis rather than discussing a big batch approach in a KEP.
So should we close this KEP?
SGTM
/close
@rikatz: Closed this PR. In response to this:
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR adds the KEP to move kubeproxy to staging, letting people vendor it without vendoring the whole kubernetes/kubernetes repo.