
feat: Improve flexibility of the MaxRetriesFailureHandler #554

Open · wants to merge 2 commits into master

Conversation

beilCrxmarkets (Contributor)

Enhance Flexibility in MaxRetriesExceededHandler Configuration

This pull request modifies the MaxRetriesFailureHandler to give the maxRetriesExceededHandler more flexibility in the actions it can take. Previously, executionOperations.stop() was invoked unconditionally, even when a maxRetriesExceededHandler was defined, which ruled out actions such as rescheduling the task.
Key Changes:

  • The maxRetriesExceededHandler can now decide on its own whether to remove the task, which allows for more nuanced handling such as rescheduling (see the sketch right after this list).
  • This is a breaking change to the behavior of the MaxRetriesFailureHandler. If no handler is defined, the behavior remains unchanged: a default handler executes executionOperations.stop().
  • We considered adding a RetriesFailureHandler that would behave like MaxRetriesFailureHandler but without the automatic invocation of executionOperations.stop(). To avoid confusing users about which handler to choose, this was not pursued.
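
To illustrate the difference, here is a minimal sketch of the intended usage. It assumes the two-argument constructor and the three-argument constructor with a BiConsumer-style maxRetriesExceededHandler as they appear on master; exact signatures may differ:

```java
import java.time.Duration;
import java.time.LocalTime;
import com.github.kagkarlsson.scheduler.task.FailureHandler;
import com.github.kagkarlsson.scheduler.task.schedule.Schedules;

// No maxRetriesExceededHandler supplied: behavior is unchanged, the default
// handler still calls executionOperations.stop() once the retries are exhausted.
FailureHandler<Void> stopWhenExhausted =
    new FailureHandler.MaxRetriesFailureHandler<>(
        3, new FailureHandler.OnFailureRetryLater<>(Duration.ofMinutes(5)));

// Handler supplied: it now decides on its own what happens, e.g. rescheduling
// the task for its next regular run instead of removing it.
FailureHandler<Void> rescheduleWhenExhausted =
    new FailureHandler.MaxRetriesFailureHandler<>(
        3,
        new FailureHandler.OnFailureRetryLater<>(Duration.ofMinutes(5)),
        (executionComplete, executionOperations) ->
            executionOperations.reschedule(
                executionComplete,
                Schedules.daily(LocalTime.of(8, 0)).getNextExecutionTime(executionComplete)));
```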

Fixes

#538

Reminders

  • Added/ran automated tests
  • Update README and/or examples
  • Ran mvn spotless:apply

cc @kagkarlsson

@beilCrxmarkets changed the title from "Improve flexibility of the MaxRetriesFailureHandler" to "feat: Improve flexibility of the MaxRetriesFailureHandler" on Nov 29, 2024
Comment on lines -96 to 99
executionOperations.stop();
maxRetriesExceededHandler.accept(executionComplete, executionOperations);
kagkarlsson (Owner)

I am unsure about the original intent of the maxRetriesExceededHandler. The handler should not have taken executionOperations as a parameter since the execution had already been stopped.

@kagkarlsson (Owner)

What is your use-case if I might ask? (what operation rather than stop() are you considering)

@beilCrxmarkets (Contributor, Author)

What is your use-case if I might ask? (what operation rather than stop() are you considering)

If the execution is not stopped yet, we could pass new FailureHandler.OnFailureReschedule<Void>(schedule)::onFailure or new FailureHandler.OnFailureRetryLater<Void>(schedule)::onFailure as the maxRetriesExceededHandler.

Our RecurringTask is triggered once a day. If it fails, we want to repeat it. If we exceed the maxRetries, we still want the recurring task to be triggered the next day.
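
A minimal sketch of that idea, assuming the three-argument constructor shape from this PR's diff; the schedule variable is a placeholder for the task's regular schedule:

```java
import java.time.Duration;
import java.time.LocalTime;
import com.github.kagkarlsson.scheduler.task.FailureHandler;
import com.github.kagkarlsson.scheduler.task.schedule.Schedule;
import com.github.kagkarlsson.scheduler.task.schedule.Schedules;

// Placeholder for the task's regular schedule.
Schedule schedule = Schedules.daily(LocalTime.of(8, 0));

// Retry with a delay until maxRetries is exceeded; afterwards the method reference
// OnFailureReschedule::onFailure puts the task back on its regular schedule
// instead of executionOperations.stop() removing it.
FailureHandler<Void> handler =
    new FailureHandler.MaxRetriesFailureHandler<>(
        5,
        new FailureHandler.OnFailureRetryLater<>(Duration.ofMinutes(10)),
        new FailureHandler.OnFailureReschedule<Void>(schedule)::onFailure);
```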

@kagkarlsson (Owner)

Not sure I understand. Why are you using MaxRetriesFailureHandler if you do not want the "max"-behavior?

You can always implement your own fully custom FailureHandler.

@beilCrxmarkets (Contributor, Author)

Not sure I understand. Why are you using MaxRetriesFailureHandler if you do not want the "max"-behavior?

You can always implement your own fully custom FailureHandler.

At the moment we use our own FailureHandler implementation to get the desired behavior (a sketch is included below).

However, I assume we are not the only ones who need the described behavior, which is why it makes sense to provide it in a central place.

I agree with you that the name “MaxRetriesFailureHandler” implies that the instance is terminated once the maximum number of attempts is reached. We therefore also considered proposing an additional FailureHandler named “RetriesFailureHandler”. However, having both a “MaxRetriesFailureHandler” and a “RetriesFailureHandler” could confuse beginners at first glance, as the difference is not immediately obvious.

If we were starting this project from scratch, it would make sense to add only the “RetriesFailureHandler” instead of the “MaxRetriesFailureHandler”, as it offers more flexibility. Since the library, and thus the “MaxRetriesFailureHandler”, has been around for a while and is used as-is by many users, the compromise would be to adapt the “MaxRetriesFailureHandler” so that it offers more flexibility.
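
For reference, a hypothetical stand-in for the kind of custom handler we use today (not our exact code; it relies on FailureHandler, ExecutionOperations.reschedule and Execution.consecutiveFailures from db-scheduler's public API as I understand them):

```java
import java.time.Duration;
import java.time.Instant;
import com.github.kagkarlsson.scheduler.task.ExecutionComplete;
import com.github.kagkarlsson.scheduler.task.ExecutionOperations;
import com.github.kagkarlsson.scheduler.task.FailureHandler;
import com.github.kagkarlsson.scheduler.task.schedule.Schedule;

// Retry a few times shortly after a failure, then fall back to the task's
// regular schedule instead of stopping (removing) the execution.
public class RetryThenRescheduleFailureHandler<T> implements FailureHandler<T> {

  private final int maxRetries;
  private final Duration retryDelay;
  private final Schedule schedule;

  public RetryThenRescheduleFailureHandler(int maxRetries, Duration retryDelay, Schedule schedule) {
    this.maxRetries = maxRetries;
    this.retryDelay = retryDelay;
    this.schedule = schedule;
  }

  @Override
  public void onFailure(ExecutionComplete executionComplete, ExecutionOperations<T> executionOperations) {
    // consecutiveFailures does not yet include the current failure
    // (assumption, mirroring how MaxRetriesFailureHandler reads it).
    int totalFailures = executionComplete.getExecution().consecutiveFailures + 1;
    if (totalFailures <= maxRetries) {
      // Retry shortly after the failure.
      executionOperations.reschedule(executionComplete, Instant.now().plus(retryDelay));
    } else {
      // Retries exhausted: keep the task and go back to its regular schedule.
      executionOperations.reschedule(executionComplete, schedule.getNextExecutionTime(executionComplete));
    }
  }
}
```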

@kagkarlsson (Owner)

Are you looking for behavior like:

  • up until X retries, use this FailureHandler
  • after X retries, use this other FailureHandler?

@beilCrxmarkets (Contributor, Author)

Are you looking for behavior like:

  • up until X retries, use this FailureHandler
  • after X retries, use this other FailureHandler?

Yes, our task will be triggered once a day. If this fails, we want to try again 5 times shortly after (RetriesFailureHandler). After all 5 attempts are used up, we want to try again tomorrow at the usual time (OnFailureReschedule).

However, with the current implementation of the MaxRetriesFailureHandler, we cannot do this because the task is deleted from the DB after the 5 attempts are exhausted.

We also combine this with the ExponentialBackoffFailureHandler to get a growing delay between the 5 additional attempts. This does not cause any problems. A sketch of the full setup is below.
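
To make the target setup concrete, here is a sketch of the full configuration. The task name and times are made up, the three-argument MaxRetriesFailureHandler constructor is the one touched by this PR, and it assumes the recurring-task builder's onFailure hook:

```java
import java.time.Duration;
import java.time.LocalTime;
import com.github.kagkarlsson.scheduler.task.FailureHandler;
import com.github.kagkarlsson.scheduler.task.helper.RecurringTask;
import com.github.kagkarlsson.scheduler.task.helper.Tasks;
import com.github.kagkarlsson.scheduler.task.schedule.Schedule;
import com.github.kagkarlsson.scheduler.task.schedule.Schedules;

// The usual daily trigger (time is an example).
Schedule daily = Schedules.daily(LocalTime.of(6, 0));

RecurringTask<Void> dailyTask =
    Tasks.recurring("daily-job", daily)
        .onFailure(
            new FailureHandler.MaxRetriesFailureHandler<>(
                5,
                // up to 5 extra attempts with a growing delay between them
                new FailureHandler.ExponentialBackoffFailureHandler<>(Duration.ofMinutes(1)),
                // once the 5 attempts are used up, try again tomorrow at the usual
                // time instead of deleting the task (only possible with this change)
                new FailureHandler.OnFailureReschedule<Void>(daily)::onFailure))
        .execute((taskInstance, executionContext) -> {
          // the actual daily work goes here
        });
```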
