
✨ run work and registration as a single binary #201

Merged

Conversation

qiujian16 (Member)

Summary

This starts the registration and work agents together via the registration-operator agent command.

Related issue(s)

Fixes #

@openshift-ci openshift-ci bot requested review from skeeey and zhiweiyin318 June 28, 2023 09:15
openshift-ci bot (Contributor) commented Jun 28, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: qiujian16

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@qiujian16 qiujian16 changed the title ✨ run work and registration as a single binary [wip]✨ run work and registration as a single binary Jun 28, 2023
elgnay pushed a commit to elgnay/ocm that referenced this pull request Jun 29, 2023
Signed-off-by: ldpliu <daliu@redhat.com>
@qiujian16 qiujian16 force-pushed the singleton branch 3 times, most recently from aa9f250 to 4ad212e on July 3, 2023 01:41
codecov bot commented Jul 4, 2023

Codecov Report

Patch coverage: 61.38% and project coverage change: +0.37 🎉

Comparison is base (c8ec750) 59.91% compared to head (788b5da) 60.28%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #201      +/-   ##
==========================================
+ Coverage   59.91%   60.28%   +0.37%     
==========================================
  Files         128      130       +2     
  Lines       13241    13494     +253     
==========================================
+ Hits         7933     8135     +202     
- Misses       4568     4609      +41     
- Partials      740      750      +10     
Flag Coverage Δ
unit 60.28% <61.38%> (+0.37%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...stermanagercontroller/clustermanager_controller.go 54.14% <0.00%> (ø)
...atuscontroller/clustermanager_status_controller.go 79.59% <0.00%> (ø)
pkg/registration/clientcert/certificate.go 42.61% <0.00%> (ø)
...gistration/spoke/managedcluster/claim_reconcile.go 71.73% <0.00%> (ø)
pkg/registration/spoke/spokeagent.go 13.22% <23.21%> (-17.68%) ⬇️
...sterletcontroller/klusterlet_cleanup_controller.go 55.10% <60.00%> (+0.41%) ⬆️
pkg/registration/spoke/options.go 61.11% <61.11%> (ø)
...usterletcontroller/klusterlet_runtime_reconcile.go 56.93% <65.51%> (-0.09%) ⬇️
pkg/operator/helpers/queuekey.go 84.00% <75.00%> (+14.39%) ⬆️
...llers/ssarcontroller/klusterlet_ssar_controller.go 77.05% <75.00%> (ø)
... and 5 more

... and 3 files with indirect coverage changes


qiujian16 added 3 commits July 6, 2023 12:24
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Signed-off-by: Jian Qiu <jqiu@redhat.com>
@qiujian16 qiujian16 force-pushed the singleton branch 8 times, most recently from 7939442 to e989bfe on July 7, 2023 08:39
@qiujian16 qiujian16 changed the title [wip]✨ run work and registration as a single binary ✨ run work and registration as a single binary Jul 10, 2023
requests:
cpu: 2m
memory: 16Mi
volumes:

qiujian16 (Member Author):

It is not supported in hosted mode; if we want hosted mode, we will need another mode for it.

@@ -174,6 +179,9 @@ func (n *klusterletController) sync(ctx context.Context, controllerContext facto
ExternalManagedKubeConfigWorkSecret: helpers.ExternalManagedKubeConfigWork,
InstallMode: klusterlet.Spec.DeployOption.Mode,
HubApiServerHostAlias: klusterlet.Spec.HubApiServerHostAlias,

RegistrationServiceAccount: serviceAccountName("registration-sa", klusterlet),
WorkServiceAccount: serviceAccountName("work-sa", klusterlet),
Reviewer (Member):

In InstallModeSingleton the registration and work service accounts have the same name, %s-agent-sa. Currently that is fine because they have the same content in the YAML, but it may cause confusion, and if someone changes one of them in the future it could lead to issues.
Could we consider defining a new SA YAML for singleton mode?

qiujian16 (Member Author):

That would require checking the mode in each rolebinding and clusterrolebinding file. I think that is harder to maintain than just mutating the service account name.
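The approach settled on here — mutating the service account name by install mode instead of adding a mode check to every rolebinding — can be sketched roughly as follows. The InstallMode constants and the serviceAccountName signature are illustrative stand-ins, not the actual OCM helpers:

```go
package main

import "fmt"

// InstallMode mirrors the klusterlet deploy-option mode; the real
// constants live in the OCM API types, these names are illustrative.
type InstallMode string

const (
	InstallModeDefault   InstallMode = "Default"
	InstallModeSingleton InstallMode = "Singleton"
)

// serviceAccountName sketches the mutation discussed above: in singleton
// mode both agents collapse onto one shared SA name, otherwise each
// component keeps its own per-component SA.
func serviceAccountName(suffix, klusterletName string, mode InstallMode) string {
	if mode == InstallModeSingleton {
		return fmt.Sprintf("%s-agent-sa", klusterletName)
	}
	return fmt.Sprintf("%s-%s", klusterletName, suffix)
}

func main() {
	// In singleton mode, registration and work resolve to the same SA.
	fmt.Println(serviceAccountName("registration-sa", "klusterlet", InstallModeSingleton))
	fmt.Println(serviceAccountName("work-sa", "klusterlet", InstallModeSingleton))
	// In default mode they stay distinct.
	fmt.Println(serviceAccountName("work-sa", "klusterlet", InstallModeDefault))
}
```

The trade-off matches the thread: one helper with a mode branch, rather than duplicating SA YAML and threading a mode check through every rolebinding file.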

features.SpokeMutableFeatureGate.AddFlag(flags)

// add disable leader election flag
flags.BoolVar(&cmdConfig.DisableLeaderElection, "disable-leader-election", false, "Disable leader election for the agent.")
Reviewer (Member):

is this a common option?

qiujian16 (Member Author):

No, it is a flag provided by library-go, so we cannot set it in the common options.
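For illustration, a minimal sketch of wiring such a flag into a command config. The PR registers it via spf13/pflag on the agent's flag set; this sketch uses the stdlib flag package (which has the same BoolVar shape) to stay self-contained, and agentConfig/parseAgentFlags are hypothetical names:

```go
package main

import (
	"flag"
	"fmt"
)

// agentConfig holds only the option under discussion; the real command
// config in the PR carries many more fields.
type agentConfig struct {
	DisableLeaderElection bool
}

// parseAgentFlags registers the disable-leader-election flag the same way
// the PR does, defaulting to false so leader election stays on unless the
// operator explicitly opts out.
func parseAgentFlags(args []string) (*agentConfig, error) {
	cfg := &agentConfig{}
	fs := flag.NewFlagSet("agent", flag.ContinueOnError)
	fs.BoolVar(&cfg.DisableLeaderElection, "disable-leader-election", false,
		"Disable leader election for the agent.")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	cfg, _ := parseAgentFlags([]string{"--disable-leader-election"})
	fmt.Println(cfg.DisableLeaderElection) // prints true
}
```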

pkg/singleton/spoke/agent.go (outdated review thread, resolved)

workCfg := work.NewWorkAgentConfig(a.agentOption, a.workOption)
// start work agent at first
go func() {
Reviewer (Member):

can the work agent and registration agent share some informers?

qiujian16 (Member Author):

No, they do not share the same informers.

test/integration/operator/klusterlet_singleton_test.go (outdated review thread, resolved)
Signed-off-by: Jian Qiu <jqiu@redhat.com>
qiujian16 (Member Author):

/assign @skeeey

@@ -33,6 +60,10 @@ func (o *AgentOptions) AddFlags(flags *pflag.FlagSet) {
_ = flags.MarkDeprecated("cluster-name", "use spoke-cluster-name flag")
flags.StringVar(&o.SpokeClusterName, "cluster-name", o.SpokeClusterName,
"Name of the spoke cluster.")
flags.StringVar(&o.HubKubeconfigDir, "hub-kubeconfig-dir", o.HubKubeconfigDir,
skeeey (Member) commented Jul 13, 2023:

I think this may cause some confusion with the hub kubeconfig file; this dir is used to store the hub kubeconfig certs. Could we rename it to hub-kubeconfig-secret-dir or hub-kubeconfig-cert-dir?

And could we allow it to be empty? If empty, we would look for the certs in the directory of the hub kubeconfig file.

(We do allow the hub kubeconfig file to be empty.)

qiujian16 (Member Author):

Hrm, I thought we also need the kubeconfig to be put in the same directory for the registration agent?
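The fallback skeeey suggests — when no explicit cert dir is given, look next to the hub kubeconfig file — would amount to something like this sketch. hubKubeconfigCertDir is a hypothetical helper, not code from the PR:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// hubKubeconfigCertDir resolves the directory holding the hub kubeconfig
// certs: an explicitly set dir wins; otherwise fall back to the directory
// containing the hub kubeconfig file itself.
func hubKubeconfigCertDir(dirFlag, kubeconfigFile string) string {
	if dirFlag != "" {
		return dirFlag
	}
	return filepath.Dir(kubeconfigFile)
}

func main() {
	// Fallback: certs are expected alongside the kubeconfig.
	fmt.Println(hubKubeconfigCertDir("", "/spoke/hub-kubeconfig/kubeconfig"))
	// Explicit dir takes precedence.
	fmt.Println(hubKubeconfigCertDir("/custom/certs", "/spoke/hub-kubeconfig/kubeconfig"))
}
```

This matches qiujian16's follow-up concern: with the fallback, the kubeconfig and its certs naturally end up in the same directory for the registration agent.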

zhiweiyin318 (Member):

/lgtm
/hold

skeeey (Member) commented Jul 14, 2023:

/lgtm

skeeey (Member) commented Jul 14, 2023:

/unhold

@openshift-merge-robot openshift-merge-robot merged commit f7cd140 into open-cluster-management-io:main Jul 14, 2023