Dispatcher not killed on runner.stop() #2938
Comments
Hi Patrik! Right now, "stopping" workers is a bit weird: workers send a message to the master saying they have stopped, and then send a "client_ready" message, which is, in the master's eyes, pretty much the same as a worker connecting. I think if we started by cleaning that up a little, it would be easier to make this work.
In the meantime, I have realized that the "STOPPED" worker state might actually mean different things, and I probably assumed one of them automatically. After scrolling through the code over and over, I am not sure there is an actual consensus on what STOPPED should mean. In the case of this "bug", it acts as if it means "PAUSED", with the aforementioned auto-unpausing added right after. The biggest issue I have is that when I hit STOP, I expect everything to truly stop, including the dispatcher, and all data from the current run, like HTTP users, to be purged. If we want to persist the current state and just temporarily halt it, adding a separate PAUSED state might be an option.
Are we planning this optimisation? If so, please assign the task to me; I would like to contribute.
Nice! Assigned it to you!
I want to check whether the thought process is aligned. Currently, once the master sends a stop message, each worker does the following steps in sequence:
Change: we can remove steps 2, 3 and 4, so that once the master sends the stop message, the worker kills its greenlets and sets its state to ready. The master will still retain the client list and the worker node list in the dispatcher.
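A rough sketch (not Locust's actual implementation) of what this proposed worker-side handling could look like; the names `user_greenlets` and `send_to_master` are hypothetical stand-ins for the worker's internals:

```python
# Hypothetical sketch of the proposed worker-side stop handling.
# `user_greenlets` and `send_to_master` are illustrative, not Locust's API.
from locust.runners import STATE_INIT  # Locust's "ready" state constant


class ProposedWorkerStopHandling:
    def __init__(self, user_greenlets, send_to_master):
        self.user_greenlets = user_greenlets  # gevent Group holding the running Users
        self.send_to_master = send_to_master  # callable sending a message to the master
        self.state = STATE_INIT

    def on_stop_message(self):
        # Proposed behaviour: kill the running User greenlets and go straight
        # back to "ready", skipping the intermediate steps discussed above.
        self.user_greenlets.kill(block=True)
        self.state = STATE_INIT
        self.send_to_master("client_ready")
```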
I don't think we can remove all of that, because the master still needs to know when the workers have completed their stopping procedure (shutting down all Users, for example). This issue was originally about the dispatcher not being stopped; perhaps that could be fixed by ensuring it is properly stopped (somehow) before throwing away the instance (setting it to None). What I was talking about is more about changing the communication to make it clearer to the master (sending something like "stopping_finished" instead of "client_ready", if you understand what I mean). Maybe this isn't a huge priority...
Understood, thanks.
Prerequisites
Description
Hello there.
I am building a custom wrapper on top of Locust. Most of the upstream Locust functionality is preserved; some parts are monkey-patched. I mention this even though it shouldn't have an impact on the issue I am about to describe.
I am trying to implement a custom failure rate algorithm/monitor that STOPS the test run prematurely, before --run-time elapses, if the failure rate goes beyond a threshold.
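For context, here is a minimal sketch of such a monitor using only Locust's public API; the 10% threshold and 1 s polling interval are illustrative, not the reporter's actual values:

```python
# Minimal failure-rate monitor sketch; threshold and polling interval are
# illustrative. It calls runner.stop(), which is where the dispatcher
# behaviour described below becomes a problem in distributed mode.
import gevent
from locust import events
from locust.runners import STATE_CLEANUP, STATE_STOPPED, STATE_STOPPING, WorkerRunner

FAIL_RATIO_THRESHOLD = 0.10  # hypothetical threshold


def monitor_failure_rate(environment):
    while environment.runner.state not in (STATE_STOPPING, STATE_STOPPED, STATE_CLEANUP):
        if environment.runner.stats.total.fail_ratio > FAIL_RATIO_THRESHOLD:
            environment.runner.stop()
            break
        gevent.sleep(1)


@events.init.add_listener
def on_locust_init(environment, **kwargs):
    # Only the master (or a standalone runner) should drive the stop decision.
    if not isinstance(environment.runner, WorkerRunner):
        gevent.spawn(monitor_failure_rate, environment)
```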
A standalone runner has no issues.
A distributed run has no issues if all users are already spawned before we reach the threshold.
However, the real issue appears when I set, let's say, --users=300 and my custom event stops the runner while only a subset of the users has been spawned. The worker is stopped and immediately sends a message that it is ready again. What the master does, though, is spawn its start() method in a greenlet, which spawns additional greenlets in a separate gevent group that is not stored anywhere in the runner, so I can't see or kill it. On top of that, MasterRunner does NOT actually kill the user dispatcher in its stop() method; it merely sets the dispatcher to None:
self._users_dispatcher = None
This means the gevent group hooks back to the worker and keeps spawning users.
What's strange is that it looks like it spawns exactly 60 users before runner.quit() is called and both the master and worker runners quit...
What I would like to do is simply stop both the master AND the worker runners, reset the user dispatcher and user variables in both runners, and be able to restart the test from the web UI.
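A hedged sketch, using only public runner APIs, of the stop-and-reset behaviour described above; it does not address the orphaned dispatch greenlets, which is the subject of this issue:

```python
# Sketch of the desired "stop and reset" behaviour; it relies only on the
# public runner API and does not fix the orphaned dispatch greenlets.
def stop_and_reset(environment):
    runner = environment.runner
    runner.stop()             # on the master this currently just drops _users_dispatcher
    runner.stats.reset_all()  # purge request stats from the current run
```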
Being forced to straight-up quit the worker is a workaround I would rather not resort to.
All help would be appreciated.
Command line
mylocust --host 127.0.0.1 --root . -L DEBUG -f locustfile.py --autostart --autoquit 1 --users 200 --spawn-rate 1 -t 40 --master; mylocust --host 127.0.0.1 --root . -L DEBUG -f locustfile.py --autostart --autoquit 1 --users 200 --spawn-rate 1 -t 40 --worker
Locustfile contents
Python version
3.12
Locust version
2.31.8
Operating system
Ubuntu 22.04.5 LTS