Slave count stuck at 3 instead of decreasing to 1 due to "missing" #1158
Slaves are not supposed to go missing, and if you get "missing" slaves I think that's a sign something is broken. It could happen because a slave process gets hard-killed. However, it could also happen if a slave node hits the CPU usage roof (and the heartbeat message gets delayed), in which case the state could go back from "missing" once heartbeats get through again.
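Roughly, the master-side bookkeeping behind "missing" can be pictured like the sketch below; the constant names, class, and numbers are illustrative placeholders, not Locust's actual implementation:

```python
import time

# Illustrative values, not Locust's real settings.
HEARTBEAT_INTERVAL = 3   # seconds between heartbeat checks on the master
HEARTBEAT_LIVENESS = 3   # missed checks tolerated before a slave is "missing"


class SlaveNode:
    def __init__(self, client_id):
        self.id = client_id
        self.state = "ready"
        self.heartbeat = HEARTBEAT_LIVENESS

    def on_heartbeat(self):
        # A heartbeat from the slave resets its liveness counter, so a slave
        # that was starved of CPU can come back from "missing" once it recovers.
        self.heartbeat = HEARTBEAT_LIVENESS
        if self.state == "missing":
            self.state = "ready"


def heartbeat_check(slaves):
    # Called periodically by the master: decrement every slave's counter and
    # flag the ones whose heartbeats stopped arriving.
    for slave in slaves.values():
        slave.heartbeat -= 1
        if slave.heartbeat <= 0:
            slave.state = "missing"


if __name__ == "__main__":
    slaves = {"slave-1": SlaveNode("slave-1")}
    for _ in range(5):
        heartbeat_check(slaves)
        time.sleep(HEARTBEAT_INTERVAL)
    print({s.id: s.state for s in slaves.values()})
```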
OK cool. If locust receives SIGTERM it should shut down and deregister gracefully, right? Perhaps related to #1159?
Correct!
Ah, yes, that sounds very plausible.
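A graceful SIGTERM path could look roughly like this; `client`, `stop_runner`, and the "quit" message are stand-ins for whatever messaging layer the slave actually uses, not Locust's real API:

```python
import signal
import sys


def install_sigterm_handler(client, stop_runner):
    """Shut down cleanly on SIGTERM (e.g. when Kubernetes scales a slave down),
    so the master removes the slave instead of eventually marking it "missing".
    `client` and `stop_runner` are placeholders for the real messaging/runner objects."""

    def handle_sigterm(signum, frame):
        client.send({"type": "quit"})  # tell the master this slave is leaving
        stop_runner()                  # stop generating load
        sys.exit(0)

    signal.signal(signal.SIGTERM, handle_sigterm)
```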
@max-rocket-internet Is this still an issue for you?
Yes, with version 0.13.5 I still see this in the logs:
Have you checked the logs on the slaves? Could this be some kind of networking issue? Or is there anything else you can think of that makes your tests "special"?
Hmmm, I see it now and again but can't reproduce it reliably. I tried scaling up and down during, before and after a load test but only see it sometimes.
On AWS EC2 I don't think so.
Yes. So for whatever reason, the slave that goes missing is gracefully stopped by k8s but doesn't log:
It just stops.
I'll reopen when I can reproduce it.
Hi,
Master logs after scaling the slaves down:
The response from the `/stats/requests` endpoint looks like this for a very long time:
I think the master is not removing the "missing" slaves?
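For watching what the master reports, a small poller against the web endpoint can help; the `slaves` and `state` field names below are based on the 0.13-era JSON and may differ between versions:

```python
import time

import requests  # third-party: pip install requests

MASTER_URL = "http://localhost:8089"  # assumed address of the master's web UI


def watch_slave_count(interval=5, rounds=10):
    """Poll /stats/requests and print how many slaves the master still reports.

    The 'slaves' list and its 'state' entries reflect the 0.13-era response
    layout and may be named differently in other Locust versions.
    """
    for _ in range(rounds):
        data = requests.get(f"{MASTER_URL}/stats/requests").json()
        slaves = data.get("slaves", [])
        states = [s.get("state") for s in slaves]
        print(f"slave_count={len(slaves)} states={states}")
        time.sleep(interval)


if __name__ == "__main__":
    watch_slave_count()
```

If entries stay in the list with state "missing" long after the pods are gone, that matches the behaviour described above.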