add the --concurrency option to launch multiple runs of *.hurl files instead of one #88
This one is not that easy, there are many ways to do it! Maybe we could distribute another binary that would also process hurl files but dedicated to performance testing.
Completely agree with your proposal.
@lepapareil I agree with @fabricereix, there are for sure several ways to implement such a feature. Nevertheless, I would also like to see a "concurrent" and "bulk" (related to #1139) feature in hurl itself.

Using a generic mechanism like this would allow you to write almost all of the combinations necessary for proper test and benchmark plans per hurl file. Introducing this feature first would also allow, in an additional step, e.g. the "nesting" of sub-runs.

Since sequential execution is the natural default, a repetition keyword alone covers the sequential case, a parallelism keyword covers the concurrent one, and combining the two multiplies out; the sketch after this list illustrates the idea:

- 1000 runs in sequence of the whole file
- 1000 runs in parallel of the whole file
- 1000 runs, as 100 sequential iterations of 10 parallel runs, of the whole file
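A minimal sketch of what such keywords might look like (hypothetical syntax; the `repeat` and `parallel` names here only illustrate the proposal and are not actual Hurl options):

```hurl
# Hypothetical file-level options, sketching the three variants above.

# 1000 runs in sequence of the whole file:
[Options]
repeat: 1000

# 1000 runs in parallel of the whole file:
[Options]
parallel: 1000

# 1000 runs total: 100 sequential iterations, each of 10 parallel runs:
[Options]
repeat: 100
parallel: 10
```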
Similar to the …
@fabricereix sounds good to me for the repetition use case. What about the "load testing" use case for parallel request scenarios?
Like I stated above, it would be nice to specify the …
The whole specification is repeated twice, where each run executes 2 threads in parallel (not synchronized, except at the end). A visual of the example test specification would be:

```mermaid
flowchart LR
s0(hurl) -->|run| s1{{fork}}
s1 --> s2
s1 --> s3
s6{{join}}
s7{{end}}
s2 --> s4
s3 --> s5
s4 --> s6
s5 --> s6
s6 --> s7
subgraph run
direction LR
subgraph s2[1st thread, 1st repetition]
direction LR
subgraph s20[repeat 3]
direction LR
s200[POST] --> s201
s201[POST] --> s202
s202[POST]
end
subgraph s21[thread 3]
direction LR
s210[GET]
s211[GET]
s212[GET]
s213{{fork}}
s214{{join}}
s213 --> s210
s213 --> s211
s213 --> s212
s210 --> s214
s211 --> s214
s212 --> s214
end
s20 --> s21
end
subgraph s3[2nd thread, 1st repetition]
s30[same as 1st thread, 1st repetition]
end
subgraph s4[1st thread, 2nd repetition]
s40[same as 1st thread, 1st repetition]
end
subgraph s5[2nd thread, 2nd repetition]
s50[same as 1st thread, 1st repetition]
end
end
```
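In the hypothetical keyword syntax sketched earlier, the outer structure of this diagram would read something like the following (sketch only; the inner repeat/parallel blocks would need the sub-run nesting mentioned above):

```hurl
# Hypothetical: repeat the whole file twice in sequence,
# each repetition forking two parallel threads.
[Options]
repeat: 2
parallel: 2
```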
For the concurrency use case, it would be interesting to try to do it first with an external script using parallel.
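A minimal sketch of that approach, assuming GNU parallel and three existing Hurl files:

```sh
# Run each file as its own hurl process, at most 3 concurrently.
parallel -j 3 hurl --test ::: a.hurl b.hurl c.hurl
```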
I agree on a global concurrency-related stress/load test ... but not a systematic one, because the threads run out of sync and …
This feature would help me out as well. I was looking for something simple to just hammer our Rails apps a bit to test some NGINX configs. JMeter is overkill and Apache Bench is a bit too simple; hurl seems to be just right. I spent a bit of time getting a simple … Is there a beta branch or something that I could test with?
@rnhurt sorry for this, we know it's a very important feature and we haven't begun to work on it (Hurl is unfortunately just a side project). I can assure you that it's in our top priorities, and, more importantly, we really want to make it "correct". In a word, we need some time...
@jcamiel how far along is the design or development of this feature? Because I could provide a proposed solution, as sketched above in #88 (comment).
Hi @ppaulweber, we have just begun thinking about it. I would prefer that it be tackled by the core maintainers, just because we want to discuss how to do this: …
I can't prevent you from working on it, of course!! But this feature will certainly be very structural and we want to be totally sure how to address it in code. Instead of coding, maybe we're going to put our thoughts in a working document to discuss how to design the code. We've done this for the Hurl grammar and this kind of specification / preparation has really helped us to write "robust" code. As soon as we've started such a document, I'll ping you so you can contribute your inputs.
Initialisation of an architecture document here => /docs/spec/runner/parallel.md
Just some notes on my attempt at this with hurl. This is the first project I've used hurl on for some testing, and when performance testing came up I immediately reached for hurl, as all the use cases for the perf testing were already defined, which is how I ended up in this thread... I pre-processed a set of tests into one combined hurl file. This is basically:

```sh
# Concatenate the individual tests (append after the first, so nothing is overwritten).
cat test1.hurl  > perf-single.hurl
cat test2.hurl >> perf-single.hurl
cat test3.hurl >> perf-single.hurl
cat test4.hurl >> perf-single.hurl
cat test2.hurl >> perf-single.hurl
cat test3.hurl >> perf-single.hurl
cat test2.hurl >> perf-single.hurl

# Then to elongate the test.
cat perf-single.hurl perf-single.hurl perf-single.hurl perf-single.hurl > perf.hurl
```

The test sequence is the same for each thread:

```sh
dt=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
n=20
outdir="output/run-${dt}-${n}/"
mkdir "${outdir}"
printf 'writing perf results to "%s"\n' "${outdir}"

# GNU parallel ({1..$n} does not brace-expand with a variable, hence seq)
parallel -j$n hurl --continue-on-error --json --output "${outdir}/thread-{1}.json" perf.hurl ::: $(seq 1 $n)
# or moreutils parallel
parallel -j$n -i hurl --continue-on-error --json --output "${outdir}/thread-{}.json" perf.hurl -- $(seq 1 $n)
```

Then post-process the timings out of the json files, which basically amounts to jq filters like the sketch below. The json output is a bit verbose for a load test; my simple 2000 request run …
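A sketch of such post-processing, assuming the `--json` result exposes per-call timings under `entries[].calls[].timings` (the exact field names depend on the hurl version):

```sh
# Hypothetical jq filter: pull every call's total time out of each thread's JSON result.
jq '.entries[].calls[].timings.total' output/run-*/thread-*.json
```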
Nice things to have: interactive output doesn't really matter much to me, possibly a test count and error count so a test run can be killed when the server blows up, and a quick summary of totals at the end.
@mhio thank you very much for the feedback, it's very interesting and useful. I've started a spec documentation for the parallel feature here => https://github.com/Orange-OpenSource/hurl/blob/master/docs/spec/runner/parallel.md
For the moment, it's just a list of various things to address before the implementation. In the past, we've worked on Hurl with a spec document before implementation and it helped us a lot.
Hi all,

This is much overdue, but we've implemented the --parallel option. The model used is similar to GNU parallel:
The parallelism used is multithreaded sync: a thread pool is instantiated for the whole run, and each Hurl file is run in its own thread, synchronously. We've not gone the full multithreaded async route for implementation simplicity. Moreover, there is no additional dependency, only the standard Rust lib. @ppaulweber we've chosen not to expose a kind of "thread affinity" inside a Hurl file, once again for simplicity of implementation. The only user option is the number of workers (--jobs). Regarding stdout/stderr, we've, once again, followed the GNU parallel model: … One can use debugging for a particular file with …
In test mode, the progress bar is a little different from the non-parallel run; it will be harmonised for the official release (the sequential test progress will look like the parallel one). Regarding reports, the HTML, TAP and JUnit reports are not affected: reported tests, in parallel or in sequential mode, are in the same order as the files on the command line. For instance:

```sh
$ hurl --test --report-tap a.hurl b.hurl c.hurl
```

will always produce this TAP report, in this order, no matter which file is executed first:
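A sketch of that report, assuming all three tests pass (the exact lines come from Hurl's TAP writer):

```
1..3
ok 1 - a.hurl
ok 2 - b.hurl
ok 3 - c.hurl
```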
What's next: …
@mhio I'm interested to know if this could potentially replace your usage of parallel! (same announcement made on #87)
@jcamiel thanks for the update, can't wait in this case for …
I'm closing this issue; we'll open new, more specific issues from now on.
Hi guys 😄

In the context of our project we see two huge advantages in using hurl: … customer paths on our web application. This allows us to guarantee the non-regression between versions.

However, we cannot "yet" use hurl for our metrology tests, forcing us to duplicate all our scenarios in other tools that are less accessible to non-developers... 😢

Currently, when I launch this command:
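A hypothetical reconstruction of the command (the file names are taken from the sentence below):

```sh
hurl a.hurl b.hurl c.hurl
```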
hurl runs each *.hurl file one time: first a.hurl, then b.hurl, then c.hurl.

I propose adding a --concurrency option allowing hurl to launch multiple runs of each *.hurl file at a time, for example as sketched below. What do you think about it?
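A sketch of the proposed invocation (the --concurrency flag is the proposal itself, and the value 3 is just an example):

```sh
# Hypothetical: launch 3 concurrent runs of each *.hurl file instead of one.
hurl --concurrency 3 a.hurl b.hurl c.hurl
```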