
[10.x] Queue worker is getting stopped by timeout when no new jobs available #352

Closed
codercms opened this issue Jul 29, 2020 · 3 comments · Fixed by #355
codercms (Contributor) commented Jul 29, 2020

  • Laravel/Lumen version: 6.x
  • RabbitMQ version: doesn't matter
  • Package version: 10.2.2

Describe the bug

When running the worker with the --timeout argument, the worker is stopped after processing a job if no new jobs are available.

Steps To Reproduce

Just run the queue worker with the --timeout argument, for example 10.
Send any job to the worker and wait until the job has been processed.
After 10 seconds the worker will be terminated by the timeout.

Additional context
The behavior described above is caused by this line of code:
https://github.com/vyuldashev/laravel-queue-rabbitmq/blob/master/src/Consumer.php#L96

As you can see, a timeout handler is registered on that line, but the handler is not cleared after the job has been processed.
The original Laravel worker does clear it - https://github.com/illuminate/queue/blob/master/Worker.php#L164
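The fix can be sketched in Python (the package itself is PHP and uses pcntl_signal/pcntl_alarm; Python's POSIX-only signal.alarm behaves the same way, so this is an illustration, not the package's code). The key step is disarming the alarm once the job completes, which is what was missing:

```python
import signal
import time

fired = {"timeout": False}

def timeout_handler(signum, frame):
    # In the real worker this handler would kill the process.
    fired["timeout"] = True

def process_job(job, timeout):
    # Register the timeout handler and arm the alarm before the job runs,
    # as the consumer does before handing the message to the worker.
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(timeout)
    job()            # the actual work
    signal.alarm(0)  # the missing step: disarm the alarm after the job

process_job(lambda: time.sleep(0.1), timeout=1)
time.sleep(1.2)          # idle with no new jobs; without alarm(0) SIGALRM would fire here
print(fired["timeout"])  # prints False: the worker keeps waiting instead of dying
```

Without the `signal.alarm(0)` line, the alarm armed for the finished job would fire while the worker sits idle waiting for the next message, which is exactly the reported behavior.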

@codercms codercms changed the title [10.x] Queue worker is getting stopped by timeout when no jobs available [10.x] Queue worker is getting stopped by timeout when no new jobs available Jul 29, 2020
codercms (Contributor, Author) commented Oct 9, 2020

@vyuldashev hi! Could you review my PR?

adm-bome (Collaborator) commented Dec 6, 2020

This also prevents the worker/consumer from stacking up signals.

When running multiple jobs/messages, the worker is always killed 10 seconds after the first message was processed.
I could be wrong... I think registering more signal handlers without cleaning up after a job has been processed successfully just leaves a pile of callbacks.

adm-bome (Collaborator) commented Dec 8, 2020 via email
