refactor: migrate networkoverhead and topologicalsort client to ctrl runtime #522
Conversation
Hi @zwpaper. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@zwpaper a heads-up: I will cut a v0.25.x release next week, so I'll hold off reviewing PRs that migrate plugins to ctrl-runtime. After v0.25.x is cut and master gets bumped to vendor k8s v1.26.x, I will start reviewing these PRs.
And it'd be much appreciated if you could spread the word to the other WIP migrations to ctrl-runtime in plugins. 🙏
Gotcha, let PodGroup be the guinea pig for ctrl-runtime in 0.25.x, and we can migrate the rest in later releases. I will pass this along to the rest of the migrations. @Huang-Wei
Not quite. Even for PodGroup, we did the migration on controllers only, right? (plugins are still using the typed clientset)
Yes, we only migrated the PodGroup controller. I have no particular preference on whether the plugin migration lands in or out of this release; either is fine depending on your opinion.
Yes, I just meant to leave the plugins' migration to 0.26.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Busy days have passed; let's continue the migration! /remove-lifecycle stale
(force-pushed b0f47ff to 108bc91)
@zwpaper we may not be able to finish this in this release (v0.26). Let's move it to v0.27.
(force-pushed 90307d6 to 86e8724)
(force-pushed e456e8f to 56f28f5)
(force-pushed 6e62a5c to c2d873d)
Finally, all tests passed! @Huang-Wei can you take a look, or assign this to the creator of the networkaware plugin?
Thanks @zwpaper ! Some comments below.
Moreover, I'd expect this to eliminate all deps on the typed client of networktopology entirely, but I still see these entries:
⇒ ag "github.com/diktyo-io/appgroup-api/pkg/generated"
test/integration/utils.go
42: agversioned "github.com/diktyo-io/appgroup-api/pkg/generated/clientset/versioned"
test/integration/networkoverhead_test.go
44: agversioned "github.com/diktyo-io/appgroup-api/pkg/generated/clientset/versioned"
test/integration/topologicalsort_test.go
45: agversioned "github.com/diktyo-io/appgroup-api/pkg/generated/clientset/versioned"
could you check if you can remove them?
cc @jpedro1992 for review.
Currently, both the appgroup and networktopology controllers are hosted outside this repo and deployed separately from the sig-scheduling scheduler and controller. Would that break with this PR and, consequently, the networkAware plugins? Or are you planning to integrate the controllers?
No, controllers hosted outside won't be impacted.
Nope ;)
Ok then, I don't see any issues...
OK :)
@zwpaper there are some outstanding comments that need to be resolved.
Thanks for the reminder and the comments! I will take a look and fix them this weekend.
(force-pushed 65d24a6 to cd7aea8)
It should be good to go now! @Huang-Wei PTAL |
Some nits; LGTM otherwise.
if err != nil {
	b.Fatalf("Failed to create Workload %q: %v", p.Name, err)
}
builder.WithObjects(p.DeepCopy())
I'd like to eliminate this for loop via .WithRuntimeObjects():
diff --git a/pkg/networkaware/topologicalsort/topologicalsort_test.go b/pkg/networkaware/topologicalsort/topologicalsort_test.go
index d450cdbd..391b0f22 100644
--- a/pkg/networkaware/topologicalsort/topologicalsort_test.go
+++ b/pkg/networkaware/topologicalsort/topologicalsort_test.go
@@ -25,6 +25,7 @@ import (
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/rand"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/kubernetes/scheme"
@@ -250,13 +251,11 @@ func TestTopologicalSortLess(t *testing.T) {
s := scheme.Scheme
utilruntime.Must(agv1alpha1.AddToScheme(s))
- builder := fake.NewClientBuilder().
+ client := fake.NewClientBuilder().
WithScheme(s).
- WithStatusSubresource(&agv1alpha1.AppGroup{})
- for _, p := range pods {
- builder.WithObjects(p.DeepCopy())
- }
- client := builder.Build()
+ WithRuntimeObjects(pods...).
+ WithStatusSubresource(&agv1alpha1.AppGroup{}).
+ Build()
// Sort TopologyList by Selector
sort.Sort(util.ByWorkloadSelector(tt.appGroup.Status.TopologyOrder))
@@ -427,19 +426,16 @@ func BenchmarkTopologicalSortPlugin(b *testing.B) {
}
for _, tt := range tests {
b.Run(tt.name, func(b *testing.B) {
-
ps := makePodsAppGroup(tt.deploymentNames, tt.agName, tt.podPhase)
s := scheme.Scheme
utilruntime.Must(agv1alpha1.AddToScheme(s))
- builder := fake.NewClientBuilder().
+ client := fake.NewClientBuilder().
WithScheme(s).
+ WithRuntimeObjects(ps...).
WithStatusSubresource(&agv1alpha1.AppGroup{}).
- WithObjects(tt.appGroup)
- for _, p := range ps {
- builder.WithObjects(p.DeepCopy())
- }
- client := builder.Build()
+ WithObjects(tt.appGroup).
+ Build()
ts := &TopologicalSort{
Client: client,
@@ -449,9 +445,6 @@ func BenchmarkTopologicalSortPlugin(b *testing.B) {
pInfo1 := getPodInfos(b, tt.podNum, tt.agName, tt.selectors, tt.deploymentNames)
pInfo2 := getPodInfos(b, tt.podNum, tt.agName, tt.selectors, tt.deploymentNames)
- //b.Logf("len(pInfo1): %v", len(pInfo1))
- //b.Logf("len(pInfo2): %v", len(pInfo2))
-
b.ResetTimer()
for i := 0; i < b.N; i++ {
sorting := func(i int) {
@@ -498,8 +491,8 @@ func Until(ctx context.Context, pieces int, doWorkPiece workqueue.DoWorkPieceFun
workqueue.ParallelizeUntil(ctx, parallelism, pieces, doWorkPiece, chunkSizeFor(pieces))
}
-func makePodsAppGroup(podNames []string, agName string, phase v1.PodPhase) []*v1.Pod {
- pds := make([]*v1.Pod, 0)
+func makePodsAppGroup(podNames []string, agName string, phase v1.PodPhase) []runtime.Object {
+ var pds []runtime.Object
for _, name := range podNames {
pod := st.MakePod().Namespace("default").Name(name).Obj()
pod.Labels = map[string]string{agv1alpha1.AppGroupLabel: agName}
Signed-off-by: Wei Zhang <kweizh@gmail.com>
(force-pushed 7d4e69c to a4ecf1a)
(force-pushed a4ecf1a to 7098da7)
/lgtm
/approve
Thanks @zwpaper !
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Huang-Wei, zwpaper. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Part of #485
Special notes for your reviewer:
Does this PR introduce a user-facing change?