OpenShift config #13

Open
emilorol opened this issue Sep 30, 2019 · 4 comments

emilorol commented Sep 30, 2019

Thank you for the article about Locust and Kubernetes.

Following your YAML files, I put together a similar setup, but for OpenShift (I also tested it in Minishift).

https://github.com/emilorol/locust-openshift

One thing keeps bugging me. I added the option to autoscale up to 10 slaves, but I noticed that every time a new slave is added, Locust resets all the other slaves to redistribute the load. That leaves me with only the manual option of allocating the number of slaves before running the test. Any ideas?
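
For reference, a minimal sketch of the kind of autoscaler I mean, assuming a standard Kubernetes HorizontalPodAutoscaler and a slave Deployment named locust-slave (object names and the CPU target are illustrative, not taken from the repo above):

```yaml
# Sketch: autoscale the Locust slave deployment up to 10 replicas based on CPU usage.
# Names and the 80% target are illustrative only.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: locust-slave
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-slave
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```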

@karol-brejna-i
Owner

Emil! Good job on setting up the OpenShift config.

What I noticed (on k8s) is that Locust does not handle dynamic changes to the setup (number of workers) well at all. I haven't seen the stats being reset myself, although I've seen a miscalculated number of workers and users...

I think we should start reporting these things to the Locust team and try to resolve them.

For now it looks like the problem of "dynamic" scaling is a bit of a neglected one.

@emilorol
Author

Yes, I also saw that new slaves get registered automatically, but as they got destroyed the counter still showed the old number of slaves.

I believe there is a business opportunity there that they are missing out on, as distributed load testing is here to stay.

On a side note: have you been able to determine a golden ratio between CPU and memory for the slave containers? I started with 0.5 CPU and 512 MB, spinning up a new slave as soon as they hit 80% CPU, but in a couple of cases that was too late and the container crashed. I played with the numbers and ended up with 0.5 CPU and 1 GB, with the CPU threshold at 70% before scaling up.
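
For concreteness, those final numbers correspond to something like the following resources block on the slave container, with the autoscaler's CPU target lowered from 80% to 70% (container name and image are illustrative):

```yaml
# Sketch of the settings described above: 0.5 CPU and 1 GB per slave container.
# The HorizontalPodAutoscaler's targetCPUUtilizationPercentage would be set to 70.
containers:
  - name: locust-slave
    image: locustio/locust   # illustrative image reference
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 500m
        memory: 1Gi
```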

@emilorol
Author

Issue reported: locustio/locust#1100

@emilorol
Author

Check out the response from the locustio team.
