Struggling to process 100,000+ jobs #1091
-
Replies: 7 comments 11 replies
-
hmm, that seems unexpected to me. Do you have Heroku + Rails console access? If you're able to run that experiment, I think it would help diagnose whether the bottleneck is in dequeuing a job, or whether it's in the concurrency.
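(The exact console command appears to have been cut off in this copy of the thread; based on the follow-up comments, the experiment is executing queued jobs inline from a Rails console. A minimal sketch, assuming a standard Heroku setup and the queue names mentioned later in the thread:)

```ruby
# Open a Rails console on a Heroku dyno:
#   heroku run rails console
#
# Then execute queued jobs inline in the console process (bypassing the
# worker dynos) to see how quickly a single process can dequeue and run them.
# The queue string here is an assumption based on the queues named below:
GoodJob.perform_inline("default,low")
```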
-
Thank you so much for responding so quickly. 🙏 Yes, I do have Heroku Rails console access. Running
-
These are being processed so quickly that it's noticeable: when I spun up a worker (I had it at zero) to just one server, the console slowed way down. The worker wasn't doing much either, so I spun it down and then the console sped up again. 🤔
-
Just a few minutes and I've already processed over 13,000 jobs from the console. The previous hour it had only processed 1,914. Here is how the process is defined in my Procfile:
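(The Procfile contents didn't come through in this copy of the thread. For a Heroku app running GoodJob, the worker entry typically looks something like the sketch below; the web process, thread count, and queue names are illustrative placeholders, not this app's actual configuration.)

```
# Illustrative Procfile sketch, not the original
web: bundle exec puma -C config/puma.rb
worker: bundle exec good_job start --max-threads=5 --queues=default,low
```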
And database.yml...
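(The database.yml contents are likewise missing here. The part most relevant to GoodJob is usually the connection pool size, since it has to cover the worker's threads; a minimal sketch of a typical production block, with placeholder values:)

```yaml
# Illustrative config/database.yml sketch, not the original
production:
  adapter: postgresql
  # GoodJob's worker threads each check out an Active Record connection,
  # so the pool is usually sized to at least the configured max thread count.
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
```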
-
They all seem to be pretty quick.
-
fyi, I have to go to bed for the night 🛌🏻 I'll pick this up tomorrow and we'll get to the bottom of it 🙇🏻
-
Yep, that's a very good sign!

I have one more experiment for you to try. Could you try running `GoodJob.perform_inline("+default,low")`? That will dequeue in the queue order you have configured, and I have a hypothesis that using the `+` results in a not very performant query. I could be wrong though. Regardless, I'd still recommend that you run separate threadpools for your queues, e.g. `GOOD_JOB_QUEUES=default:2;low:2` (with the semicolon), because that will give you better quality of service.
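(For completeness: with the semicolon syntax, GoodJob runs each queue in its own thread pool, so a burst of `low` jobs can't starve `default` and vice versa. On Heroku, that setting would be applied with something like the command below; the values are just the ones from the suggestion above.)

```shell
# Quote the value so the shell doesn't treat ';' as a command separator
heroku config:set GOOD_JOB_QUEUES="default:2;low:2"
```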