OpenShift config #13
Comments
Emil! Good job on setting up the OpenShift config. What I noticed (on k8s) is that Locust does not cope well at all when a dynamic change to the setup (number of workers) comes into play. I haven't seen the stats resetting myself, although I've seen a miscalculated number of workers and users... I think we should start reporting these things to the Locust team and try to resolve them. For now it looks like the problem of "dynamic" scaling is a somewhat neglected one.
Yes, I also saw that new slaves are registered automatically, but as they got destroyed the counter still showed the old number of slaves. I believe there is a business opportunity there that they are missing out on, as distributed load testing is here to stay. On a side note: have you been able to determine something like a golden ratio between ...
Issue reported: locustio/locust#1100
Check out the response from the locustio team.
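Until the dynamic-scaling behaviour is addressed upstream, one practical takeaway from this exchange is to keep the worker count fixed for the duration of a run rather than letting it change mid-test. A minimal sketch of what that could look like, assuming a worker Deployment named `locust-slave` in a `locust` namespace (both names are assumptions, not taken from the repo linked below):

```
# Pin the number of Locust workers before starting the test run.
# Kubernetes:
kubectl -n locust scale deployment locust-slave --replicas=5

# OpenShift (if the workers are managed by a DeploymentConfig):
oc -n locust scale dc/locust-slave --replicas=5
```

Scaling like this before the run starts avoids the mid-test worker churn that the comments above describe.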
Thank you for the article about Locust and Kubernetes.
Following your yaml files, I put together a similar setup, but for OpenShift (I also tested it in MiniShift).
https://github.com/emilorol/locust-openshift
One thing keeps bugging me. I added the option to auto-scale up to 10 slaves, but what I noticed is that every time a new slave is added, Locust resets all the other slaves to redistribute the load, which leaves me with only the option of allocating the number of slaves manually before running the test. Any ideas?
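For reference, a HorizontalPodAutoscaler is the usual way the "auto-scale up to 10 slaves" behaviour described above would be wired up. This is only a hedged sketch: the resource names, namespace, and CPU threshold are assumptions and would need to match the actual deployment in the linked repo:

```yaml
# Hypothetical HPA for the Locust worker deployment (names and thresholds assumed).
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: locust-slave
  namespace: locust
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-slave
  minReplicas: 1
  maxReplicas: 10                      # "auto scale up to 10 slaves"
  targetCPUUtilizationPercentage: 75   # assumed trigger; tune for the workload
```

Each scale-up event produced by an autoscaler like this registers a new slave with the master mid-run, which appears to be what triggers the re-balancing and reset behaviour reported here.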