Re-introduce -n option to specify the number of requests #1085
Comments
I think that's a less common use case... I'm -1 on supporting both options. |
I think it was quite useful (I was actually looking for it the other day, and had to go thru the git history to find out that it had been removed). It is only a few lines of code... |
I am also in favor of bringing this back. My use case is for setting and comparing against a baseline, which in my case is easier to do with # of requests versus just runtime. |
Hmm.. I think max number of requests makes some sense, but I think max iterations is more useful. I was in the process of resurrecting this functionality, but now I am unsure... |
@heyman If I reimplemented this to count the number of task iterations instead of the number of requests, does that sound ok to you? As per our discussion earlier, I think it would make sense to calculate and distribute the desired number of requests to each slave at the beginning of the test (so we can ensure the right number of iterations are run, at the expense of getting a "ramp down" at the end of the test). I'm thinking the new parameter name could be -i/--iterations. |
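For illustration only, the up-front split described here could look roughly like the following sketch; the helper function and the worker count are hypothetical, not part of Locust:

```python
# Hypothetical helper illustrating the idea above: divide a fixed iteration
# budget across workers before the test starts.
def split_iterations(total_iterations, num_workers):
    base, remainder = divmod(total_iterations, num_workers)
    # The first `remainder` workers each take one extra iteration.
    return [base + 1 if i < remainder else base for i in range(num_workers)]

print(split_iterations(100, 8))  # -> [13, 13, 13, 13, 12, 12, 12, 12]
```

Each worker would then stop after finishing its share, which is what produces the "ramp down" at the end of the test mentioned above.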
Hello @cyberw, I stumbled upon the -n option coming from a different use case: writing / testing new scripts. This is a different use case for sure but it may be taken into account as well. |
Iteration count makes more sense. Please provide this feature. |
We are using Locust (partially) in our Python framework. |
This is how I handled the problem of sending a fixed number of requests: |
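A rough sketch of one way such a workaround can look (assuming the current HttpUser API; the class name, endpoint, and per-user budget are placeholders, not necessarily what the commenter did): each simulated user stops itself after a fixed number of task runs.

```python
from locust import HttpUser, task, between
from locust.exception import StopUser

ITERATIONS_PER_USER = 10  # illustrative per-user budget

class FixedIterationUser(HttpUser):
    wait_time = between(1, 2)
    _iterations = 0

    @task
    def index(self):
        self.client.get("/")  # placeholder request
        self._iterations += 1
        if self._iterations >= ITERATIONS_PER_USER:
            # Stop this simulated user once its budget is spent.
            raise StopUser()
```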
I'd consider this a critical/must-have feature for any test framework (and incidentally it is present in every other testing framework I use or have seen). The particular use-case I have for it at the moment is what is known as a "synthetic health check", which resembles a "ping" in that it's a single test iteration, but is done with a testing framework in order to leverage the sophisticated techniques available for generating a specific sequence of virtual-user actions in order to reach the required state in the application. Bottom line: we need to execute exactly 1 iteration, and have no way of knowing how long it will take, so we can't rely on time-to-execute, nor allow time-to-execute to interfere/disrupt. While it seems there may be ways to "coerce" termination, it would certainly be preferable to have this feature easily accessible as an execution option. |
I have created a branch that reintroduces -n, similar to how it was before. But the old feature was weird, counting requests instead of iterations. If someone were to create a PR that introduces the flag but counting iterations I’d be happy to merge it. |
I'm still against reintroducing this feature (for either HTTP requests or task iterations). The reason for this is that it would be hard to implement in a good way when running Locust distributed (IIRC, in the previous implementation, this feature didn't work when running distributed).
However, if we were to reintroduce the feature, I think alternative 1 would be the preferred solution. We would then have to make it very clear that the max-requests limit isn't a hard limit and that the test could result in more than exactly N requests. |
Maybe it is two different features? Your "alternative 1" makes sense for tests where you just want constant load, but specify the stop time in terms of iterations instead of seconds. This fits well with the Locust model, so I have no issues with someone adding it, although it must be clearly documented that it only guarantees a lower bound on the iteration count. But "alternative 2" would also be very useful, it just needs to be documented that the load will drop off at the end and that new slaves cannot connect during the test (but fall back to just not giving them any work in case they do). I think what most people in this ticket have asked for is an actual hard limit on the number of iterations, and alternative 2 is the only one that provides that. |
I don't think that's an acceptable level of quality for us to include it in Locust itself. In that case I think it would be better to make it so that our 1.0 changes (#1266) make it fairly easy for users to implement this themselves (by providing a hook where they can retrieve the |
I would argue that it is not a lower level of quality at all, it is just a different compromise (accurate on the number of iterations, but not accurate on having constant load at the very end of the test). But I don't need this feature myself so I won't argue further :) |
Ok :). For the record, I think the fact that new slaves can't connect during a test is the worse of the two mentioned issues. Currently, it's possible to deploy an autoscaling Locust cluster on Kubernetes. Accepting this compromise would change that. |
My $0.02 on this aspect of the conversation: surely it must be considered that the option is actually an "option" - i.e. if the end user decides to supply the option, the caveats would be accepted. Eliminating the option to prevent such caveats is what I suspect a lot of us desiring this feature would find "unfair". Anecdotally, I can attest that I require the feature and have no concern about the caveats mentioned. In particular, we don't actually use Locust's distributed mode, but instead have our own execution implementation that provides that capability across multiple technologies (JMeter, Gatling, Locust, etc ...). It's also worth mentioning that the "overlap" of the feature with a scenario like adding slaves to a test is unlikely, due to the intrinsic nature of the feature itself (if I'm bottling a test up in a specific number of iterations, it doesn't make much sense to also want to add slaves mid-test). So, for example: I would be perfectly elated to accept the feature with caveats like "not supported in distributed mode, nor with adding slaves mid-test". |
How about a compromise: a feature allowing only a single execution before termination. This would eliminate the concerns about complexity in distributed execution while facilitating the use of Locust for synthetic health checks. |
This should be discussed in a separate issue I think, but my take on it is that Locust is made for load testing and not synthetic health checks; focusing on too many different use-cases will result in software that isn't good at any use-case. |
If a lot of people want that feature, and it doesn't make Locust significantly more complicated, then I think it makes sense to include it. I think it is great that we have a vision for what Locust should be, but if a significant proportion of our users want a feature, and it is not in direct conflict with something else, then I think it deserves to be included. Ignoring the community and saying "use something else then" is not a good idea (unless implementing the feature would require too many compromises or too much complexity, of course). That being said, I think the feature should be "run X iterations", not just "run one iteration". |
In order to not go too much off topic, let's open a new issue if you want to continue discussing synthetic health checks :). Maybe it would be a good idea to have a separate issue for "run X iterations" as well, since it's different from max number of requests? I'm not even 100% sure what the exact definition of an "iteration" is. I think you mean number of tasks executed? |
Good question. I hadn't thought about exactly what makes the most sense. I think the number of tasks executed is the best unit of execution (because I guess you can't really count "TaskSet iterations" in a meaningful way, right?). I think we can keep it as the same issue though. I think limiting the number of task executions is a more meaningful feature than limiting the number of requests (terminating in the middle of task executions is not very useful in most cases). |
I actually think that (if we were to re-introduce the feature) it would make more sense to put the limit on the number of requests. The reasons for this are:
One could make it so that the |
I personally prefer counting iterations (because in a user-behaviour-centric tool, tasks are what you want to run, not individual requests). But I'll take what I can get :) If we do reintroduce -n as it was before, we shouldn't implement it as naively as it was previously done (stopping the test after a certain number of request_success/request_failure events), because it will tend to overshoot if there are a number of requests "in transit".
Tasks shouldn't be unfamiliar to users of other tools; in fact, a lot of the time I see questions about how to do things at the task level (e.g. https://stackoverflow.com/questions/58962517/how-to-interpret-locustios-output-simulate-short-user-visits/) rather than the request level.
Sure, but then the -n becomes a very "strange" parameter. "Do N number of requests + the ones that were already in transit + finish all the task runs that were in progress" is not at all as precise as "Do N task runs" :) |
I believe iterations is the most relevant/salient construct ... "requests", meaning round-trip invocations to an app-under-test endpoint, are something visible/managed in the test's own code and easily scoped under an iteration. My vote would be that iterations at the task level is the desired construct for the feature, and if a test wants to micro-manage its own requests, that's its purview in its own task implementation. I'd also note that it's likely some people use the terms "request" and "iteration" interchangeably, so we'll have to try to retain clarity around that. |
I think that - due to all reasons previously stated on why it would be very hard to implement it in a good way when running distributed if we are not allowed to "overshoot" - it would be much better to clearly document that |
My simple question is: I have a system that cannot take more than 100 requests, and if we fire more than 100 requests at that system, all subsequent calls will fail. So, how can I make Locust stop when it has hit 100 requests? Otherwise my report is a complete mess, as it also includes the 101+ requests, which are useless to the test. If we have a workaround for this, I am happy to say no to -n. But please provide me something so that I can use Locust in such a scenario. Please, please, please help!! |
@anshumangoyal Do you run Locust distributed? In that case a work-around is complicated (though still possible). If not, you can just wrap the requests in a function where you increase a counter for every request, and when you reach 100 you simply turn all subsequent calls into no-ops. If and when we merge #1266 you should also be able to grab a reference to the runner and call its |
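A rough sketch of that wrapper approach, assuming a single (non-distributed) Locust process and the current HttpUser API; the class name, endpoint, and the 100-request budget are placeholders:

```python
from locust import HttpUser, task, between

MAX_REQUESTS = 100   # illustrative hard budget for the system under test
request_count = 0    # module-level counter, shared by all users in this process

class CappedUser(HttpUser):
    wait_time = between(1, 2)

    def capped_get(self, path):
        # Wrap every request: count it, and once the budget is spent turn
        # all further calls into no-ops so they never reach the target.
        global request_count
        if request_count >= MAX_REQUESTS:
            return None
        request_count += 1
        return self.client.get(path)

    @task
    def index(self):
        self.capped_get("/")  # placeholder endpoint
```

Because the counter lives in a single process, this only helps when running Locust standalone, which matches the caveat above about distributed mode.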
That's why I advocate implementing the feature without distributed support. The feature doesn't even make that much sense in scenarios where distributed makes the most sense, to wit: if one is using distributed, one is trying to scale beyond what a single instance delivers, which in turn doesn't really match the "I want to run only N iterations" concept. As far as functionality being tightly confined to only what some definition of "load testing" would prescribe, I would rather observe that the core functionality could also be described more as "mechanisms to invoke application behavior, for the purpose of testing" ... whether that's used to drive load or simply hit the app once doesn't seem like any compromise of product purpose. |
```python
from locust import HttpLocust, TaskSequence

class ReservationApp(TaskSequence): ...

class ReservatonLocust(HttpLocust): ...

if __name__ == "__main__": ...
```
Can we have both -t and -n to end the test? And can I fix the RPS to a specific value during the execution? I am running Locust in distributed mode in a Kubernetes cluster and face a lot of challenges controlling the end of the test, because the pods (both master and slaves) restart once --run-time is reached, which makes the whole test run indefinitely. I have come up with an additional pod called "Locust-Monitor" that reads the logs written by the Locust master. When we get the "teardown" message in the master logs, we end the test by removing the Locust master and slaves using Kubernetes commands. Any other suggestions would also be welcome. |
I have added the option to locust-plugins. It requires setting the parameter for worker processes and makes no effort to "distribute" the number of iterations across workers, so there is definitely room for improvement, but it works well enough for me. |
I know this ticket is very old now, but is the implementation in locust-plugins good enough for you? I intend to close this ticket soon... |
Looks good to me - thanks :) |
(Marking as invalid, because there is no fix made in Locust itself. But I don't want to say "wontfix", because it would sound like there is no solution.) |
Are you guys introducing this option anytime soon? |
It already has a solution in locust-plugins that works for most cases, as mentioned above. If someone were to take the time to make a PR introducing that feature and adding tests, then I could be convinced to add it to Locust core. |
What is the name of the plug-in exactly? I don't see any use of -n anywhere. I installed locust-plugins using pip. |
It's called |
Locust is not recognizing it. |
Your locustfile must import locust_plugins to get the added options. See https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/cmd_line_examples.sh |
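For illustration, a minimal locustfile along those lines could look like the sketch below; the user class and endpoint are placeholders, and the iteration-limit flag itself (added by the plugin) is shown in the linked cmd_line_examples.sh:

```python
# Minimal sketch: importing locust_plugins is what registers its extra
# command-line options with Locust's argument parser.
import locust_plugins  # noqa: F401 -- imported for its side effects only
from locust import HttpUser, task, between

class MyUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def index(self):
        self.client.get("/")  # placeholder endpoint
```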
Description of issue
Until Locust v0.8, there was a -n option to specify the number of requests. However, this was replaced by the -t option in v0.9. We have a use case where we need to run a set of requests a certain number of times and then take the median. To achieve this, we would want to limit the number of requests to a certain value (using the -n option). The time taken for each task would vary, so we cannot depend on time (the -t option). Hence, we are requesting that the -n option be re-introduced.
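For context, the underlying workflow is roughly the following standalone sketch (an illustration assumed from the description above, not part of the original report; the URL and request count are placeholders):

```python
import statistics
import time

import requests  # third-party HTTP client, used here only to illustrate the idea

NUM_REQUESTS = 100              # the fixed budget the -n flag used to provide
URL = "http://localhost:8080/"  # placeholder target

durations_ms = []
for _ in range(NUM_REQUESTS):
    start = time.perf_counter()
    requests.get(URL)
    durations_ms.append((time.perf_counter() - start) * 1000)

# A time-based limit (-t) would give a varying number of samples, so the
# median would not be comparable between runs; a fixed request count is.
print(f"median over {NUM_REQUESTS} requests: {statistics.median(durations_ms):.1f} ms")
```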
Expected behavior
TBD
Actual behavior
TBD
Environment settings
Steps to reproduce (for bug reports)
TBD - please provide example code