
✨ Add the ability to specify a drain timeout for machines #3662

Merged: 1 commit, Sep 30, 2020

Conversation

@namnx228 (Contributor) commented Sep 18, 2020:

Add an option nodeDrainTimeout to KCP and to the MachineSpec of MachineDeployment.
nodeDrainTimeout defines how long we want a node to be drained; the node is forcefully removed once this time is exceeded.
Note: leaving this option unset means there is no time limit.

What does this PR do?

  • Adds a nodeDrainTimeout option to KCP and to the MachineSpec field of MachineDeployment.
  • KCP and MachineDeployment create machines that carry this nodeDrainTimeout option.
  • The first time a machine is drained, its DrainingSucceededCondition is set to "False", and the condition's last transition time records when draining started.
  • The machine skips the draining phase once the timeout is exceeded (see the usage sketch below).

Fixes #2331
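For illustration only (not part of the PR itself), here is a minimal Go sketch of how a MachineDeployment template might carry this field. It assumes the *metav1.Duration shape that the review below eventually settles on; everything beyond that comes from the PR description above.

package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

func main() {
	// Machines created from this template are drained for at most 10 minutes;
	// after that, deletion proceeds even if the drain has not finished.
	// Leaving the field unset means draining has no time limit.
	md := &clusterv1.MachineDeployment{
		Spec: clusterv1.MachineDeploymentSpec{
			Template: clusterv1.MachineTemplateSpec{
				Spec: clusterv1.MachineSpec{
					NodeDrainTimeout: &metav1.Duration{Duration: 10 * time.Minute},
				},
			},
		},
	}
	_ = md // in real code this object would be created via a client
}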

@k8s-ci-robot (Contributor):

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot added the labels cncf-cla: no (the PR's author has not signed the CNCF CLA) and needs-ok-to-test (an org member must verify the PR is safe to test) on Sep 18, 2020
@k8s-ci-robot (Contributor):

Welcome @namnx228!

It looks like this is your first PR to kubernetes-sigs/cluster-api 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot (Contributor):

Hi @namnx228. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the label size/M (denotes a PR that changes 30-99 lines, ignoring generated files) on Sep 18, 2020
@namnx228 (Contributor, Author):

Signed CLA

@k8s-ci-robot added the label cncf-cla: yes (the PR's author has signed the CNCF CLA) and removed cncf-cla: no on Sep 18, 2020
@namnx228 (Contributor, Author):

/assign @ncdc @vincepri @detiber

@namnx228 changed the title from "Add option to ignore draining after a while" to "✨ Add option to ignore draining after a while" on Sep 18, 2020
@namnx228 (Contributor, Author):

/assign @neolit123

@vincepri (Member):

/ok-to-test

@k8s-ci-robot added the label ok-to-test (a non-member PR verified by an org member as safe to test) and removed needs-ok-to-test on Sep 18, 2020
@vincepri (Member) left a review:

Overall this looks good, thanks for working on it!

We'll need to add more tests and integration tests in the Machine controller, especially around the logic that checks if the node drain is still allowed or not
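Below is a minimal sketch of the kind of unit test being asked for here. The method name nodeDrainTimeoutExceeded and the *metav1.Duration field follow suggestions made later in this review, so treat both as assumptions rather than as the merged API.

package controllers

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// A machine whose draining started well before its NodeDrainTimeout should
// report the timeout as exceeded.
func TestNodeDrainTimeoutExceeded(t *testing.T) {
	g := NewWithT(t)

	m := &clusterv1.Machine{
		ObjectMeta: metav1.ObjectMeta{Name: "m", Namespace: "default"},
		Spec: clusterv1.MachineSpec{
			NodeDrainTimeout: &metav1.Duration{Duration: 30 * time.Second},
		},
		Status: clusterv1.MachineStatus{
			Conditions: clusterv1.Conditions{{
				Type:   clusterv1.DrainingSucceededCondition,
				Status: corev1.ConditionFalse,
				// Draining started two minutes ago, well past the 30s timeout.
				LastTransitionTime: metav1.NewTime(time.Now().Add(-2 * time.Minute)),
			}},
		},
	}

	r := &MachineReconciler{}
	g.Expect(r.nodeDrainTimeoutExceeded(m)).To(BeTrue())
}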

api/v1alpha3/machine_types.go (outdated)
@@ -89,6 +89,11 @@ type MachineSpec struct {
// Must match a key in the FailureDomains map stored on the cluster object.
// +optional
FailureDomain *string `json:"failureDomain,omitempty"`

// NodeDrainTimeout is the total amount of time for draining a worker node
// Note that this NodeDrainTimeout is different from `kubectl drain --timeout`
Reviewer (Member):

Suggested change
// Note that this NodeDrainTimeout is different from `kubectl drain --timeout`
//
// NOTE: NodeDrainTimeout is different from `kubectl drain --timeout`

Let's also add some color on how it's different?
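One possible wording for that extra color (a sketch only, not the doc text the PR actually adopted):

// NodeDrainTimeout is the total amount of time that the controller will
// spend on draining a node; once it is exceeded, the machine is deleted
// even if the drain has not finished.
//
// NOTE: NodeDrainTimeout is different from `kubectl drain --timeout`:
// the kubectl flag bounds a single drain invocation, while NodeDrainTimeout
// bounds the overall time the Machine controller keeps retrying the drain
// across reconciles before giving up.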

Reviewer (Member):

Should we have a default here?

@@ -299,14 +299,18 @@ func (r *MachineReconciler) reconcileDelete(ctx context.Context, cluster *cluste
conditions.MarkTrue(m, clusterv1.PreDrainDeleteHookSucceededCondition)

// Drain node before deletion and issue a patch in order to make this operation visible to the users.
if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]; !exists {
if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]; !exists && !isNodeDraintimeoutOver(m) {
Reviewer (Member):

Instead of chaining if conditions here, let's add a new if condition after entering this block that checks the timeout
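A rough sketch of the suggested restructuring, as a fragment in the context of reconcileDelete (the helper name follows the rename discussed further down):

// Keep the annotation check on the outer if, and decide about the timeout
// once inside the drain block instead of chaining both conditions.
if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]; !exists {
	if !r.nodeDrainTimeoutExceeded(m) {
		// ... mark DrainingSucceededCondition "False", drain the node, and
		// patch the Machine so the operation is visible to users ...
	}
	// If the timeout is exceeded, draining is skipped and deletion continues.
}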

controllers/machine_controller.go
Comment on lines 370 to 400
func isNodeDraintimeoutOver(machine *clusterv1.Machine) bool {
// if the start draining condition does not exist
if conditions.Get(machine, clusterv1.DrainingSucceededCondition) == nil {
return false
}
// if the NodeDrainTimeout is not set by the user
if machine.Spec.NodeDrainTimeout <= 0 {
return false
}
now := time.Now()
firstTimeDrain := conditions.GetLastTransitionTime(machine, clusterv1.DrainingSucceededCondition)
diff := now.Sub(firstTimeDrain.Time)
return diff.Seconds() >= float64(machine.Spec.NodeDrainTimeout)
}
Reviewer (Member):

Make this a method? Seems everything else is on the MachineReconciler object

@@ -363,6 +367,21 @@ func (r *MachineReconciler) reconcileDelete(ctx context.Context, cluster *cluste
return ctrl.Result{}, nil
}

func isNodeDraintimeoutOver(machine *clusterv1.Machine) bool {
Reviewer (Member):

Suggested change
func isNodeDraintimeoutOver(machine *clusterv1.Machine) bool {
func isNodeDrainAllowed(machine *clusterv1.Machine) bool {

To make it a little bit more generic, in case we want to check more than just a timeout in the future. We can also fold in the annotation check (if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]).
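A sketch of what that more generic helper could look like, written as a method on the reconciler (per the earlier comment) and folding in the annotation check; this is a reconstruction under those assumptions, not the merged code:

// isNodeDrainAllowed returns false if draining should be skipped for this
// machine, either because the exclude annotation is present or because the
// configured drain timeout has already been exceeded.
func (r *MachineReconciler) isNodeDrainAllowed(m *clusterv1.Machine) bool {
	if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]; exists {
		return false
	}
	if r.nodeDrainTimeoutExceeded(m) {
		return false
	}
	return true
}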

@vincepri (Member):

/milestone v0.3.10

@k8s-ci-robot added this to the v0.3.10 milestone on Sep 18, 2020
@namnx228 (Contributor, Author):

Thanks for your review, @vincepri. I will push a fix that responds to your comments soon.
I have two questions:

We'll need to add more tests and integration tests in the Machine controller, especially around the logic that checks if the node drain is still allowed or not

  • Do the tests need to be included in the current PR, or can we have a separate PR for them?
  • It seems like the two failing CI tests are related to the API change. Should I ignore them?

@vincepri (Member):

Do we need the test to be included in the current PR, or we can have a separated PR for the test?

Same PR. Usually changes should be tested before merging them; it has happened in the past that we've merged code without tests and forgot to follow up, so since then we've decided that PRs should come with tests and docs whenever possible.

It seems like the two failing CI tests are related to the API change. Should I ignore them?

For now, yes. If they keep failing after a number of retries we might want to look into it a bit more. We've merged a bunch of fixes for the tests; can you try rebasing?

@k8s-ci-robot added the label size/L (denotes a PR that changes 100-499 lines, ignoring generated files) and removed size/M on Sep 22, 2020
@k8s-ci-robot added the label size/M and removed size/L on Sep 22, 2020
@namnx228 (Contributor, Author):

/retest

@namnx228 (Contributor, Author):

@namnx228: The following test failed, say /retest to rerun all failed tests:
Test name: pull-cluster-api-verify-external-links | Commit: 6239da5 | Rerun command: /test pull-cluster-api-verify-external-links

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Is it an issue in CI that the K8s-CI-robot keeps showing the failed test result of an old commit?

@vincepri (Member):

@namnx228 You can ignore pull-cluster-api-verify-external-links; it was moved to a periodic job and shouldn't actually run anymore.

@vincepri (Member) left a review:

/approve
/assign @CecileRobertMichon @ncdc

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vincepri

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the label approved (the PR has been approved by an approver from all required OWNERS files) on Sep 29, 2020
@vincepri (Member):

/test pull-cluster-api-test

3 similar comments
@namnx228 (Contributor, Author):

/test pull-cluster-api-test

@namnx228 (Contributor, Author):

/test pull-cluster-api-test

@namnx228 (Contributor, Author):

/test pull-cluster-api-test

// The default value is 0, meaning that the node can be drained without any time limitations.
// NOTE: NodeDrainTimeout is different from `kubectl drain --timeout`
// +optional
NodeDrainTimeout int64 `json:"nodeDrainTimeout,omitempty"`
Reviewer (Contributor):

Shouldn't this be a metav1.Duration?

Author (Contributor):

Yes, it can also be metav1.Duration.
Is there a significant advantage to changing it from int64 to metav1.Duration now, given that it would lead to quite a lot of changes in this PR?

Reviewer (Contributor):

We typically use Duration for ... durations. The significant advantage is getting this right early on instead of having to make an API change later. @vincepri @CecileRobertMichon @detiber WDYT?

Reviewer (Member):

Yes please, that's a good point. We should try to be consistent with the rest of the codebase

Author (Contributor):

I see. Fixed!
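For reference, a sketch of the field after the switch to a duration type; the pointer shape and exact doc text are assumptions here, not copied from the final diff:

// NodeDrainTimeout is the total amount of time that the controller will
// spend on draining a node. The default value is 0, meaning that the node
// can be drained without any time limitations.
// NOTE: NodeDrainTimeout is different from `kubectl drain --timeout`
// +optional
NodeDrainTimeout *metav1.Duration `json:"nodeDrainTimeout,omitempty"`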

@@ -299,14 +299,20 @@ func (r *MachineReconciler) reconcileDelete(ctx context.Context, cluster *cluste
conditions.MarkTrue(m, clusterv1.PreDrainDeleteHookSucceededCondition)

// Drain node before deletion and issue a patch in order to make this operation visible to the users.
if _, exists := m.ObjectMeta.Annotations[clusterv1.ExcludeNodeDrainingAnnotation]; !exists {
if drainAllowed := r.isNodeDrainAllowed(m); drainAllowed {
Reviewer (Contributor):

Suggested change
if drainAllowed := r.isNodeDrainAllowed(m); drainAllowed {
if r.isNodeDrainAllowed(m) {

Author (Contributor):

Fixed!

return false
}

if timeout := r.isNodeDrainTimeoutOver(m); timeout {
Reviewer (Contributor):

Suggested change
if timeout := r.isNodeDrainTimeoutOver(m); timeout {
if r.isNodeDrainTimeoutOver(m) {

Author (Contributor):

Fixed! Thanks


}

func (r *MachineReconciler) isNodeDrainTimeoutOver(machine *clusterv1.Machine) bool {
Reviewer (Contributor):

I'd probably call this something like nodeDrainTimeoutExceeded - @vincepri ?

Reviewer (Member):

SGTM, was struggling to find a better name the other day

Author (Contributor):

Changed! Thanks for the suggestion

Comment on lines 390 to 394
// if the NodeDrainTimeout is not set by the user
if machine.Spec.NodeDrainTimeout <= 0 {
return false
}
Reviewer (Contributor):

I think this check is less expensive than conditions.Get - maybe consider making it the first check?

Author (Contributor):

Fixed! Thanks
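Putting these review threads together, a hedged sketch of the resulting helper: the cheap field check comes first, the method name follows the rename above, and the field is treated as a *metav1.Duration. This is a reconstruction, not a copy of the merged code.

func (r *MachineReconciler) nodeDrainTimeoutExceeded(machine *clusterv1.Machine) bool {
	// Cheapest check first: if the user did not set a timeout, it can never
	// be exceeded.
	if machine.Spec.NodeDrainTimeout == nil || machine.Spec.NodeDrainTimeout.Duration <= 0 {
		return false
	}

	// If draining has not started yet, there is no transition time to compare.
	if conditions.Get(machine, clusterv1.DrainingSucceededCondition) == nil {
		return false
	}

	// The condition's last transition time records when draining first started.
	firstTimeDrain := conditions.GetLastTransitionTime(machine, clusterv1.DrainingSucceededCondition)
	diff := time.Now().Sub(firstTimeDrain.Time)
	return diff >= machine.Spec.NodeDrainTimeout.Duration
}

With this in place, isNodeDrainAllowed (sketched earlier) only needs the annotation check plus a call to this method.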

@@ -419,7 +453,6 @@ func (r *MachineReconciler) drainNode(ctx context.Context, cluster *clusterv1.Cl
}
return errors.Errorf("unable to get node %q: %v", nodeName, err)
}

Reviewer (Contributor):

Please undo

Comment on lines 798 to 810
m1 := &clusterv1.Machine{
TypeMeta: metav1.TypeMeta{
Kind: "Machine",
},
ObjectMeta: metav1.ObjectMeta{
Name: "m1",
Namespace: "default",
Labels: map[string]string{
clusterv1.ClusterLabelName: "test-cluster",
},
},
}
objs = append(objs, m1)
Reviewer (Contributor):

Do you need this?

Author (Contributor):

Right, we don't need it. Deleted. Thanks!

@k8s-ci-robot (Contributor) commented Sep 30, 2020:

@namnx228: The following test failed, say /retest to rerun all failed tests:

Test name: pull-cluster-api-verify-external-links | Commit: 6239da5 | Rerun command: /test pull-cluster-api-verify-external-links

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@namnx228 (Contributor, Author):

@ncdc all the required changes have been made. Please take a look again. Thanks!

Commit message: Add an option `nodeDrainTimeout` to KCP and the MachineSpec of MachineDeployment.
`nodeDrainTimeout` defines how long we want a node to be drained; the node is forcefully removed once this time is exceeded.
Note: leaving this option unset means there is no time limit.
@ncdc (Contributor) commented Sep 30, 2020:

/lgtm

@namnx228 thanks for your patience w/the reviews, and thanks for doing this!

@k8s-ci-robot added the label lgtm ("looks good to me", the PR is ready to be merged) on Sep 30, 2020
@k8s-ci-robot merged commit 7523f71 into kubernetes-sigs:master on Sep 30, 2020
@namnx228 (Contributor, Author):

@ncdc @vincepri Thank you very much for your review and suggestions. I will open a follow-up PR to provide e2e tests for this PR soon.

Labels: approved, cncf-cla: yes, lgtm, ok-to-test, size/L
Successfully merging this pull request may close: Allow a user specifiable node draining timeout (#2331).
6 participants