Add flag to run with uniform weights #1838
Have you considered using an environment variable to set all weights to 1? e.g.:

```python
import os

from locust import task
from locust.contrib.fasthttp import FastHttpUser

class MyUser(FastHttpUser):
    @task(1 if os.getenv("SET_ALL_TASKS_WEIGHT_TO_1", "false") == "true" else 3)
    def my_task(self):
        ...
```
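With that pattern, a run like `SET_ALL_TASKS_WEIGHT_TO_1=true locust -f locustfile.py` (assuming your locustfile is named `locustfile.py`) would flatten every weight to 1, while a normal run keeps the original weights.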
While this could work, my main concern is that we assign task weights in a few different ways depending on the load test being performed. Some are like your example, some are assigned in a dictionary, and some are assigned in other files through indirection and downstream computation. As a result, I think that using an env var in a few different ways across hundreds of tasks is non-ideal. What do you think?
I see. Would using tags be an option? When only a few tasks have to be tested to validate that they work as expected, simply add tags on them and then run locust with the `--tags` option. Would that work? Otherwise, the logic you want probably needs to be implemented in the locustfile itself.
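For illustration, the tags workflow looks roughly like this (tag names and endpoints here are made up):

```python
from locust import HttpUser, tag, task

class MyUser(HttpUser):
    @tag("under-test")  # hypothetical tag name
    @task(1)
    def rarely_run_task(self):
        self.client.get("/rare")

    @task(100)
    def common_task(self):
        self.client.get("/common")
```

Running `locust --tags under-test` then executes only the tagged task, regardless of the declared weights.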
Hi @shekar-stripe ! Interesting question. When I want to run a specific User I use locust-plugins's run_single_user(): https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/debug_ex.py (I should probably add this to locust core at some point). Before I start guessing too much about your configuration, am I correct in assuming that you have large User classes with multiple tasks, some of which are rarely executed because of low task weights? Would it be an option for you to have smaller Users? I don't think having an option to weight all tasks to 1 makes much sense (because it is kind of an internal thing), but an option to weight all Users might.
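A minimal sketch of that debugging pattern, assuming a recent Locust version where run_single_user ships in core (with locust-plugins the import path may differ):

```python
from locust import HttpUser, run_single_user, task

class DebugUser(HttpUser):
    host = "http://localhost:8080"  # hypothetical target host

    @task
    def check_endpoint(self):
        self.client.get("/health")  # hypothetical endpoint

# Runs a single instance of the User directly, without the Locust runner,
# which makes breakpoints and print-debugging straightforward.
if __name__ == "__main__":
    run_single_user(DebugUser)
```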
Thanks @mboutet for the tags suggestion! Unfortunately, for a similar reason to the env var, I don't think tags would be an ideal solution either, because of the many ways we end up assigning task weights.

Hi @cyberw! Yep, we do have very large Users with lots of tasks, some of which are TaskSets in their own right with O(dozens) of tasks, some of which are themselves TaskSets... For that reason, it isn't feasible for us to create smaller Users (due to the necessary complexity of attempting to load test across the entire API). In addition, I don't completely understand why tasks are considered an "internal thing", since they are explicitly declared in our code. Do you mind elaborating? Thanks so much!
What I mean is that they are internal to the Users, and even more so if they are nested in TaskSets. Most command line parameters work on a "global" level. I'm not a fan of TaskSets myself and prefer having more Users (with more complex tasks as needed, using "regular python" for abstraction), but maybe that is just me :) But I definitely see why you would need this feature, and welcome a PR. @mboutet's suggestion of implementing the override in the locustfile itself could also work.
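As a rough illustration of that kind of locustfile-level override, here is a sketch that flattens weights by deduplicating the task list Locust builds from `@task(weight)`. It leans on the internal detail that weights are realized by repeating entries in `User.tasks` (true in recent releases, but still an implementation detail), and the `UNIFORM_TASK_WEIGHTS` variable is made up:

```python
import os

from locust import HttpUser, task

class MyUser(HttpUser):
    @task(10)
    def common_operation(self):
        self.client.get("/common")

    @task(1)
    def rare_operation(self):
        self.client.get("/rare")

# Hypothetical opt-in switch: Locust expands @task(n) into n repeated
# entries in MyUser.tasks, so dropping duplicates makes every task
# equally likely to be picked. dict.fromkeys preserves declaration order.
if os.getenv("UNIFORM_TASK_WEIGHTS") == "1":
    MyUser.tasks = list(dict.fromkeys(MyUser.tasks))
```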
That sounds good! Thanks for the advice.
Hi @cyberw, are there any settings that the maintainers need to change for me to be able to push a branch to the repo? I'm getting a bunch of git issues and I'm not sure if they're because of some repo setting. I had to go through a somewhat complicated process to create a mirror git account to my work account, so it's definitely possible something is just messed up on my end. Thanks!
No worries :) You should fork the locust repo, push to a branch (in your own fork), and open a PR from that branch to the upstream (locustio) master branch.
You're the second person to ask that this week, so I've tried to improve the documentation: https://docs.locust.io/en/latest/developing-locust.html#install-locust-for-development
Cool, thank you!
Hi! I work at Stripe. Our reliability infrastructure team uses Locust for load testing. Oftentimes there are tasks that occur with extremely low probability, so it is infeasible to simply rely on chance for a task to be run when testing a locustfile locally. Instead, engineers switch all task weights to 1, then switch them back once they're confident their locustfile is correct. This is proving untenable for large locustfiles.
Is your feature request related to a problem? Please describe.
Switching weights often takes place across multiple files (under many layers of indirection). As a result, engineers have to remember past weights, sift through multiple files, etc. simply to test their implementation.
Describe the solution you'd like
This process would be made much easier if there were a flag that simply overrode task weights to an even distribution, or even guaranteed that each task was run sequentially. Somewhat like a "test" mode. I'm happy to open a PR for this.
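For the sequential variant, Locust's existing SequentialTaskSet is the closest primitive today; a minimal sketch (endpoints are made up):

```python
from locust import HttpUser, SequentialTaskSet, task

class CheckEveryTask(SequentialTaskSet):
    # Tasks in a SequentialTaskSet run in declaration order, so each one
    # is exercised exactly once per cycle regardless of any weights.
    @task
    def first_step(self):
        self.client.get("/step-one")  # hypothetical endpoint

    @task
    def second_step(self):
        self.client.get("/step-two")  # hypothetical endpoint

class MyUser(HttpUser):
    tasks = [CheckEveryTask]
```

This doesn't flatten weights on an existing locustfile, but it does guarantee that every declared task gets hit, which is the property the "test" mode is after.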
Describe alternatives you've considered
We've considered two alternatives. Of these options, the team thinks that adding a flag would be the cleanest solution.