✨ run work and registration as a single binary #201
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: qiujian16. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: ldpliu <daliu@redhat.com>
Force-pushed from aa9f250 to 4ad212e.
Codecov Report
Additional details and impacted files:
@@            Coverage Diff             @@
##             main     #201      +/-   ##
==========================================
+ Coverage   59.91%   60.28%   +0.37%
==========================================
  Files         128      130       +2
  Lines       13241    13494     +253
==========================================
+ Hits         7933     8135     +202
- Misses       4568     4609      +41
- Partials      740      750      +10
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Force-pushed from 7939442 to e989bfe.
requests:
  cpu: 2m
  memory: 16Mi
volumes:
Should this have a spoke-kubeconfig-secret volume for Hosted mode?
https://github.com/open-cluster-management-io/ocm/blob/main/manifests/klusterlet/management/klusterlet-registration-deployment.yaml#L118-L121
It is not supported in Hosted mode; if we want it in Hosted mode, we will need another mode.
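For reference, the volume referenced in the linked registration deployment is a plain secret-backed volume; expressed with the Kubernetes core/v1 Go types it would look roughly like this (a sketch only; the volume and secret names are assumptions based on the manifest link above):

```go
package main

import corev1 "k8s.io/api/core/v1"

// Sketch of a secret-backed spoke kubeconfig volume, as referenced in the
// linked klusterlet-registration-deployment.yaml (names are assumptions).
var spokeKubeconfigVolume = corev1.Volume{
	Name: "spoke-kubeconfig-secret",
	VolumeSource: corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "spoke-kubeconfig-secret",
		},
	},
}
```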
@@ -174,6 +179,9 @@ func (n *klusterletController) sync(ctx context.Context, controllerContext facto
	ExternalManagedKubeConfigWorkSecret: helpers.ExternalManagedKubeConfigWork,
	InstallMode:                         klusterlet.Spec.DeployOption.Mode,
	HubApiServerHostAlias:               klusterlet.Spec.HubApiServerHostAlias,

	RegistrationServiceAccount: serviceAccountName("registration-sa", klusterlet),
	WorkServiceAccount:         serviceAccountName("work-sa", klusterlet),
In InstallModeSingleton, the registration and work service accounts have the same name, %s-agent-sa. Currently that is OK because they have the same content in the YAML, but it may cause confusion, and if someone changes one of them in the future, issues may arise.
Could we consider defining a new service account YAML for Singleton mode?
That would require checking the mode for each RoleBinding and ClusterRoleBinding file. I think that is harder to maintain than just mutating the service account name.
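A minimal sketch of the name-mutation approach described above (a hypothetical body for illustration; serviceAccountName and the mode check follow the names used in this diff, but this is not the PR's exact code):

```go
package klusterletcontroller

import (
	"fmt"

	operatorapiv1 "open-cluster-management.io/api/operator/v1"
)

// serviceAccountName collapses the registration and work service accounts to
// one shared "<klusterlet>-agent-sa" in Singleton mode, and otherwise keeps
// the per-agent suffix. Hypothetical implementation for illustration.
func serviceAccountName(suffix string, klusterlet *operatorapiv1.Klusterlet) string {
	if klusterlet.Spec.DeployOption.Mode == operatorapiv1.InstallModeSingleton {
		return fmt.Sprintf("%s-agent-sa", klusterlet.Name)
	}
	return fmt.Sprintf("%s-%s", klusterlet.Name, suffix)
}
```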
manifests/klusterlet/management/klusterlet-agent-deployment.yaml (outdated; comment resolved)
features.SpokeMutableFeatureGate.AddFlag(flags)

// add disable leader election flag
flags.BoolVar(&cmdConfig.DisableLeaderElection, "disable-leader-election", false, "Disable leader election for the agent.")
Is this a common option?
No, it is a flag provided by library-go; we cannot set it in the common options.
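For context, the pattern in the diff above just surfaces library-go's ControllerCommandConfig.DisableLeaderElection field as a CLI flag, roughly like this (a sketch; runAgent and the command name are assumptions, not code from the PR):

```go
package main

import (
	"github.com/openshift/library-go/pkg/controller/controllercmd"
	"github.com/spf13/cobra"
	"k8s.io/component-base/version"
)

// newAgentCommand wires library-go's leader-election toggle up to a flag.
// runAgent is a hypothetical controllercmd.StartFunc defined elsewhere.
func newAgentCommand() *cobra.Command {
	cmdConfig := controllercmd.NewControllerCommandConfig("klusterlet-agent", version.Get(), runAgent)
	cmd := cmdConfig.NewCommand()
	cmd.Flags().BoolVar(&cmdConfig.DisableLeaderElection, "disable-leader-election",
		false, "Disable leader election for the agent.")
	return cmd
}
```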
...erator/operators/klusterlet/controllers/klusterletcontroller/klusterlet_runtime_reconcile.go (outdated; comment resolved)
workCfg := work.NewWorkAgentConfig(a.agentOption, a.workOption)
// start work agent at first
go func() {
Can the work agent and registration agent share some informers?
They do not share the same informers.
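Stripped to its shape, the startup pattern under discussion looks like this (a sketch with hypothetical workAgent/registrationAgent runners; the PR itself starts the work agent in a plain goroutine as in the diff above, and each agent builds its own informers):

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// runAgents starts the work and registration agents from one process and
// stops both if either fails. workAgent and registrationAgent are
// hypothetical stand-ins for the agent configs started by this PR.
func runAgents(ctx context.Context, workAgent, registrationAgent interface{ Run(context.Context) error }) error {
	g, gctx := errgroup.WithContext(ctx)
	g.Go(func() error { return workAgent.Run(gctx) })         // work agent first
	g.Go(func() error { return registrationAgent.Run(gctx) }) // then registration
	return g.Wait()
}
```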
Signed-off-by: Jian Qiu <jqiu@redhat.com>
/assign @skeeey
@@ -33,6 +60,10 @@ func (o *AgentOptions) AddFlags(flags *pflag.FlagSet) {
	_ = flags.MarkDeprecated("cluster-name", "use spoke-cluster-name flag")
	flags.StringVar(&o.SpokeClusterName, "cluster-name", o.SpokeClusterName,
		"Name of the spoke cluster.")
	flags.StringVar(&o.HubKubeconfigDir, "hub-kubeconfig-dir", o.HubKubeconfigDir,
I think this may cause some confusion with hubkubeconfigfile: this dir is used to store the hub kubeconfig certs. Could we rename it to hubkubeconfigsecretdir or hubkubeconfigcertdir?
Could we also allow it to be empty? If it is empty, we would find the certs in the directory of hubkubeconfigfile.
We allow hubkubeconfigfile to be empty.
Hrm, I thought we also need to put the kubeconfig in the same directory in the registration agent?
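The fallback suggested above could be as small as this (a sketch of the suggestion, not code from the PR; the AgentOptions fields are named after the flags discussed in this thread):

```go
package main

import "path/filepath"

// Hypothetical fallback on the AgentOptions struct from the diff above:
// if no cert dir is given, use the directory holding the hub kubeconfig file.
func (o *AgentOptions) defaultHubKubeconfigDir() {
	if o.HubKubeconfigDir == "" && o.HubKubeconfigFile != "" {
		o.HubKubeconfigDir = filepath.Dir(o.HubKubeconfigFile)
	}
}
```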
/lgtm
/lgtm
/unhold
Summary
This starts the registration/work agents together within the registration-operator agent command.
Related issue(s)
Fixes #