The current Finalizer handling seems a little complicated to use. #2082

Closed
cuisongliu opened this issue Dec 7, 2022 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@cuisongliu commented Dec 7, 2022

The current finalizer handling is discussed here:

#1453 (comment)

I think each controller should own a single finalizer. Handling multiple finalizers in one controller goes somewhat against the usual rules for using finalizers and makes the controller more complicated. Isn't it better for a controller to handle exactly one finalizer? We can also encapsulate the add/remove logic, so that when using it we only need to care about the controller-specific logic and not about how the finalizer itself is modified.

Finalizer helper code:

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// Finalizer manages a single named finalizer on behalf of one controller.
type Finalizer struct {
	client        client.Client
	finalizerName string
}

func NewFinalizer(client client.Client, finalizerName string) *Finalizer {
	return &Finalizer{
		client:        client,
		finalizerName: finalizerName,
	}
}

// DefaultFunc is a no-op cleanup callback for controllers that have no
// external resources to release.
func DefaultFunc(ctx context.Context, obj client.Object) error {
	return nil
}

// AddFinalizer registers the finalizer on objects that are not being
// deleted. It returns true when the object is not under deletion.
func (f *Finalizer) AddFinalizer(ctx context.Context, obj client.Object) (bool, error) {
	var notDelete bool
	if obj.GetDeletionTimestamp().IsZero() {
		// The object is not being deleted, so if it does not have our
		// finalizer, add it and update the object. This is equivalent to
		// registering our finalizer.
		notDelete = true
		if !controllerutil.ContainsFinalizer(obj, f.finalizerName) {
			controllerutil.AddFinalizer(obj, f.finalizerName)
			if err := f.updateFinalizers(ctx, client.ObjectKeyFromObject(obj), obj, obj.GetFinalizers()); err != nil {
				return notDelete, err
			}
		}
	}
	return notDelete, nil
}

// RemoveFinalizer runs the cleanup callback and then removes the finalizer
// on objects that are being deleted. It returns true when the object is
// under deletion.
func (f *Finalizer) RemoveFinalizer(ctx context.Context, obj client.Object, fun func(ctx context.Context, obj client.Object) error) (bool, error) {
	var deleteBool bool
	if !obj.GetDeletionTimestamp().IsZero() {
		deleteBool = true
		if controllerutil.ContainsFinalizer(obj, f.finalizerName) {
			// Our finalizer is present, so handle any external dependency first.
			if err := fun(ctx, obj); err != nil {
				return deleteBool, err
			}

			controllerutil.RemoveFinalizer(obj, f.finalizerName)
			if err := f.updateFinalizers(ctx, client.ObjectKeyFromObject(obj), obj, obj.GetFinalizers()); err != nil {
				return deleteBool, err
			}
		}
	}
	return deleteBool, nil
}

// updateFinalizers re-fetches the object as unstructured and writes the
// desired finalizer list, retrying on conflicts.
func (f *Finalizer) updateFinalizers(ctx context.Context, objectKey client.ObjectKey, obj client.Object, finalizers []string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		gvk := obj.GetObjectKind().GroupVersionKind()
		fetchObject := &unstructured.Unstructured{}
		fetchObject.SetAPIVersion(gvk.GroupVersion().String())
		fetchObject.SetKind(gvk.Kind)
		if err := f.client.Get(ctx, objectKey, fetchObject); err != nil {
			return err
		}
		fetchObject.SetFinalizers(finalizers)
		return f.client.Update(ctx, fetchObject)
	})
}

How to use it:

func (r *SettingsReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	settings := &appsv1beta2.Setting{}
	if err := r.Get(ctx, req.NamespacedName, settings); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Object is being deleted: run cleanup and drop the finalizer.
	if ok, err := r.finalizer.RemoveFinalizer(ctx, settings, controllerlib.DefaultFunc); ok {
		return ctrl.Result{}, err
	}

	// Object is live: make sure the finalizer is registered, then reconcile.
	if ok, err := r.finalizer.AddFinalizer(ctx, settings); ok {
		if err != nil {
			return ctrl.Result{}, err
		}
		// Your controller logic goes here.
		return r.reconcile(ctx, settings)
	}
	// Exactly one of the two branches above always matches, so this return
	// is effectively unreachable.
	return ctrl.Result{}, errors.New("reconcile error from Finalizer")
}

func (r *SettingsReconciler) SetupWithManager(mgr ctrl.Manager) error {
	if r.Client == nil {
		r.Client = mgr.GetClient()
	}
	// Initialize the finalizer helper.
	if r.finalizer == nil {
		r.finalizer = controllerlib.NewFinalizer(r.Client, "finalizers.sigs.k8s.io/testfinalizer")
	}
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1beta2.Setting{}).
		Complete(r)
}

Is this more convenient to use?
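
For comparison, the stock inline pattern from the kubebuilder book (the approach linked above) would look roughly like this inside Reconcile; this is only a sketch, and deleteExternalResources is a hypothetical cleanup helper:

myFinalizerName := "finalizers.sigs.k8s.io/testfinalizer"

if settings.GetDeletionTimestamp().IsZero() {
	// Not being deleted: register our finalizer if it is missing.
	if !controllerutil.ContainsFinalizer(settings, myFinalizerName) {
		controllerutil.AddFinalizer(settings, myFinalizerName)
		if err := r.Update(ctx, settings); err != nil {
			return ctrl.Result{}, err
		}
	}
} else {
	// Being deleted: clean up external resources, then release the finalizer.
	if controllerutil.ContainsFinalizer(settings, myFinalizerName) {
		if err := r.deleteExternalResources(ctx, settings); err != nil {
			return ctrl.Result{}, err
		}
		controllerutil.RemoveFinalizer(settings, myFinalizerName)
		if err := r.Update(ctx, settings); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}

With the helper, all of that boilerplate collapses into the two calls shown earlier.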

@fanux commented Dec 7, 2022

RemoveFinalizer(ctx context.Context, obj client.Object, fun func(ctx context.Context, obj client.Object) error) (bool, error)

Doing whatever you want in the callback function is a nice idea ~ pretty simple
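
For example, instead of passing controllerlib.DefaultFunc, the per-controller cleanup just becomes the callback. A sketch, where cleanupExternal is a hypothetical helper that releases whatever the Setting owns outside the cluster:

cleanup := func(ctx context.Context, obj client.Object) error {
	settings, ok := obj.(*appsv1beta2.Setting)
	if !ok {
		return fmt.Errorf("unexpected object type %T", obj)
	}
	// Release external resources before the finalizer is removed.
	return cleanupExternal(ctx, settings) // hypothetical helper
}

if ok, err := r.finalizer.RemoveFinalizer(ctx, settings, cleanup); ok {
	return ctrl.Result{}, err
}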

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 7, 2023
@cuisongliu (Author)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 8, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 6, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 6, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jan 19, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
