
Feature request: "run through" each test once. #98

Closed
cheshire opened this issue Sep 30, 2013 · 7 comments
Closed

Feature request: "run through" each test once. #98

cheshire opened this issue Sep 30, 2013 · 7 comments

Comments

@cheshire

Hey Locust team,

While I understand the primary goal of the locust project is load testing, a useful by-product of a load testing suite is integration testing: if we can check all the routes in our app, chances are it is working fine.

It seems unwise to duplicate the same code in both locust tests and python unit tests, and I think it would be really useful to have a runner which, instead of spawning millions of locusts, simply calls each task function once (and calls on_start before that).

Currently I have a hacky solution outside of locust which relies on locust internals. If the core team agrees that this could be useful, I'd be glad to contribute a more thought-out patch adding a command-line option to "run through" all tasks just once.
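
To make the idea concrete, here is roughly what my hacky external runner does (a minimal sketch that pokes at locust internals; MyLocust and MyTaskSet are placeholder names from a hypothetical locustfile, and it assumes MyLocust sets a host):

```python
# run_through.py -- hypothetical external runner, not part of locust itself.
# Relies on locust internals: the class-level `tasks` list and the fact that
# a TaskSet is constructed with its parent locust instance.
from locustfile import MyLocust, MyTaskSet  # placeholder names

def run_through(locust_class, task_set_class):
    locust = locust_class()              # assumes `host` is set on the class
    task_set = task_set_class(locust)
    if hasattr(task_set, "on_start"):
        task_set.on_start()
    for task in task_set.tasks:
        task(task_set)                   # call every task exactly once

if __name__ == "__main__":
    run_through(MyLocust, MyTaskSet)
```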

@Jahaja
Member

Jahaja commented Sep 30, 2013

Hi cheshire,

I actually experimented with this some time ago. It's available in the now quite obsolete testrunner branch.
The basic idea was to add a -t switch to the locust runner which just executed each task once and displayed a simple dotted result, like Python's unittest.

I reckon our own use case, basically a lot of tasks distributed among quite a few people, could benefit from this being available as well.

@cheshire
Author

My current implementation is slightly more intelligent (I sort of cloned the nose API): by default it runs all tests; if a module is specified on the command line, it runs all task sets from that module; if a module and a class are both specified, it runs the tasks in that class; and if a module, class, and task are all specified, it runs only that task. It's really hacky though; I'll look into making it cleaner.
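
For reference, the selector resolution is roughly this (a sketch; the module[.TaskSetClass[.task_name]] format is just my own convention, and for simplicity it assumes a flat module name):

```python
import importlib

def resolve(selector=None):
    """Map "module[.TaskSetClass[.task_name]]" to (module, task_set_class, task).

    None in any position means "run everything at that level".
    """
    if not selector:
        return None, None, None
    parts = selector.split(".")          # simplification: no dotted packages
    module = importlib.import_module(parts[0])
    task_set_cls = getattr(module, parts[1]) if len(parts) > 1 else None
    task = getattr(task_set_cls, parts[2]) if len(parts) > 2 else None
    return module, task_set_cls, task
```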

@Jahaja
Member

Jahaja commented Sep 30, 2013

Cool, sounds great.

@cheshire
Author

Do you mind having a separate entry point, e.g. locust_test_runner? I feel that the flags in locust are not applicable to the test mode, and vice versa.

@Jahaja
Member

Jahaja commented Sep 30, 2013

If we can get a decent match, I'd prefer using the same entry point with a -t switch or similar.

@yonnig

yonnig commented Sep 21, 2015

+1 This feature is a must-have. It would really help us.
@Jahaja: would you be willing to share the code you wrote some time ago?

@justiniso
Member

Sorry to say, but I don't support adding a special configuration option for this. It takes focus away from the main goal and adds complexity we'll have to support. If you want to hack, it's possible today without any changes to locust core code. To avoid duplicating code, put your test behavior into a function and have that function be imported by both your test code and your locustfile. At the end of the task function in the locustfile, include a sys.exit(0).
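
A minimal sketch of that approach, with placeholder file names and routes (and assuming a Locust version with HttpLocust/TaskSet):

```python
# shared.py -- the behavior imported by both your unit tests and your locustfile
def exercise_routes(client):
    client.get("/")
    client.get("/about")

# locustfile.py -- wraps the shared behavior and exits after one pass
import sys
from locust import HttpLocust, TaskSet, task
from shared import exercise_routes

class RunOnce(TaskSet):
    @task
    def run_through(self):
        exercise_routes(self.client)
        sys.exit(0)  # gevent propagates SystemExit, so the run stops here

class WebsiteUser(HttpLocust):
    task_set = RunOnce
    min_wait = max_wait = 0
```

Your unit tests can then import exercise_routes and call it with whatever HTTP client they already use.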

The only valuable feature I see here is having locust read a unittest- or pytest-style module and use those test functions to describe the locusts. I often hear of people wanting to run their test suite repeatedly in parallel to construct a complex load test. However, a plugin model is absolutely necessary for that, and that logic would live in a plugin. But if you just want to run a one-user scenario, pytest will do that just fine.
