Distributed load test k8s and openshift #1100
Is there a question or issue here?
An issue. I am using Locust in OpenShift with autoscaling on. Every time a new slave is added to a running test, the test resets. Also, after the slaves are destroyed they are reported as missing instead of simply being removed. The intent is to scale up at the start of the test and scale down when it is done, all automatically.
This functions as designed.
@emilorol I also tried to start a discussion about autoscaling slaves: #1066. That issue was also abruptly closed. I think the way work is handed to the slaves from the master would need to change fundamentally. Currently the number of clients and the hatch rate are simply divided by the number of slaves, and then they start. To enable even rudimentary autoscaling, this process would need to be more synchronised. For example, the master would need to adjust the number of clients running on each slave whenever a slave joins or leaves.
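The division described above can be sketched roughly like this. This is a simplified illustration, not Locust's actual code; `split_clients` is a hypothetical name:

```python
# Hypothetical sketch of a master splitting work evenly across slaves.
# Not Locust's real implementation; names and structure are illustrative.

def split_clients(total_clients: int, hatch_rate: float, num_slaves: int):
    """Divide clients across slaves as evenly as possible.

    The remainder goes to the first few slaves; the hatch rate is
    divided equally. Once these messages are sent, each slave spawns
    its share independently -- there is no later adjustment, which is
    why a joining slave forces a restart today.
    """
    base, extra = divmod(total_clients, num_slaves)
    return [
        {
            "num_clients": base + (1 if i < extra else 0),
            "hatch_rate": hatch_rate / num_slaves,
        }
        for i in range(num_slaves)
    ]
```

For example, 10 clients at hatch rate 2.0 over 3 slaves yields per-slave counts of 4, 3, and 3, each hatching at about 0.67 clients per second.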
Just by looking at the project's main page I noticed that there is no financial support, not even a "donation" button, and that might be the real reason behind the feature freeze. It is a shame, given the potential this project has to become a real company and offer paid services, but it does nothing about it.
I don't think that's it. There's plenty of open-source projects that are actively developed without donations.
We are on different pages here 😅 I really don't want Locust to become a company with paid services! If you want that, you can check out Load Impact and their tool k6.
I agree with you, but the reality is that new features are not even on the back burner. I really want to be wrong here.
Description of issue
When running a distributed test on k8s or OpenShift, the autoscaling feature that brings up more slaves based on load resets the running test.
Expected behavior
The Locust master should distribute load to the new slaves without resetting the existing ones.
Actual behavior
The test is reset whenever the master adds a new slave to the pool.
Environment settings
Steps to reproduce (for bug reports)
openshift: https://github.com/emilorol/locust-openshift
k8s: https://github.com/karol-brejna-i/locust-experiments