Quartz - allow bean based jobs to be interruptable #42072
Conversation
extensions/quartz/runtime/src/main/java/io/quarkus/quartz/runtime/CdiAwareJob.java
Looks good!
if (handle.getBean().getScope().equals(Dependent.class)) {
    handle.destroy();
}
Hi, is it OK to destroy the bean while the Quartz worker thread is still running Job.execute()? It will do this in the finally block of the latter anyway.
destroy() is a no-op for anything other than @Dependent beans.
TBF, I am not sure what the best way to handle dependent beans here would be. Currently, the idea is to have CdiAwareJob create a new bean instance for every invocation of the job (each execute() call), and there might be multiple triggers for it. So at the moment the interrupt() implementation also spawns a new instance and handles it separately, hence the destroy logic.
interrupt() implementation also spawns a new instance and handles it separately
I did not notice this. Maybe spawn only one instance inside execute(), remember it, and call interrupt() on that exact instance? Consider the example https://github.com/quartz-scheduler/quartz/blob/main/examples/src/main/java/org/quartz/examples/example7/DumbInterruptableJob.java. If it is also @Dependent, the _interrupted flag will be set on an instance that is not the one actually executing and checking the flag.
I did not notice this. Maybe spawn only one instance inside execute(), remember it, and call interrupt() on that exact instance?
Yeah, but can't you use the same job definition with several triggers? That would make you keep the wrong references.
I am not sure that Quartz can execute() the same instance of the job class (obtained via JobFactory.newJob()) in different threads simultaneously. Consider this answer: https://stackoverflow.com/a/10463309

Quartz creates new instance of your job class every time it wants to trigger that job.

And also the implementation:
- The instance is stored in JobRunShell.initialize(),
- which is then passed to the ThreadPool,
- which is then passed to only one worker thread,
- and there are no other references to the instances of Job and JobRunShell.
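For reference, a single JobDetail can indeed be scheduled with several triggers, yet each fire still obtains a fresh Job instance from the JobFactory. A minimal plain-Quartz sketch (names are illustrative):

```java
import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class MultiTriggerExample {

    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // Each fire of either trigger receives a brand new MyJob instance
            // created by the JobFactory.
            System.out.println(context.getTrigger().getKey()
                    + " fired on instance " + System.identityHashCode(this));
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // One job definition...
        JobDetail job = newJob(MyJob.class)
                .withIdentity("myJob", "demo")
                .storeDurably()
                .build();
        scheduler.addJob(job, false);

        // ...with two independent triggers referencing it.
        Trigger every5s = newTrigger()
                .withIdentity("every5s", "demo")
                .forJob(job)
                .startNow()
                .withSchedule(simpleSchedule().withIntervalInSeconds(5).repeatForever())
                .build();
        Trigger every7s = newTrigger()
                .withIdentity("every7s", "demo")
                .forJob(job)
                .startNow()
                .withSchedule(simpleSchedule().withIntervalInSeconds(7).repeatForever())
                .build();

        scheduler.scheduleJob(every5s);
        scheduler.scheduleJob(every7s);
        scheduler.start();
    }
}
```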
Hm, yes, I think that we could just keep the Instance.Handle<? extends Job> handle reference instead of Instance<? extends Job> jobInstance. A job instance should not be shared.
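A sketch of that direction, with illustrative names (not the final implementation that landed via #42096): the wrapper keeps a single Instance.Handle rather than the Instance, so execute() and interrupt() work with the same underlying bean instance. This relies on Quartz creating a new wrapper per trigger fire, so the instance is never shared between threads.

```java
import jakarta.enterprise.context.Dependent;
import jakarta.enterprise.inject.Instance;
import org.quartz.InterruptableJob;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.UnableToInterruptJobException;

class HandleKeepingJobSketch implements InterruptableJob {

    private final Instance.Handle<? extends Job> handle;

    HandleKeepingJobSketch(Instance<? extends Job> jobInstance) {
        // One contextual instance per wrapper, i.e. per trigger fire.
        this.handle = jobInstance.getHandle();
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            handle.get().execute(context);
        } finally {
            // Only @Dependent instances actually need to be destroyed.
            if (handle.getBean().getScope().equals(Dependent.class)) {
                handle.destroy();
            }
        }
    }

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        if (handle.get() instanceof InterruptableJob interruptable) {
            // Same instance as the one running execute(), so its interrupted
            // flag is visible to the executing code.
            interruptable.interrupt();
        }
    }
}
```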
This should be resolved via #42096
Fixes #42034