
808/about executors #974

Merged
merged 29 commits into from
Jan 16, 2023
Conversation

MattDodsonEnglish
Contributor

@MattDodsonEnglish MattDodsonEnglish commented Jan 5, 2023

Part of work in #808 .

As I tried to add explanatory text about VU allocation and dropped metrics, I struggled to put the content in place. Gradually, I moved the content from being grouped together in the top-level executors page, to separate topics, to a new section for explanatory texts about scenarios. The organization still seemed incoherent, so I moved the other explanatory pages to a new section. This information architecture is the best design I could come up with to make information easy to find and to make positioning new topics much easier.


This work has a few steps:

  • Get technical review
  • Get info arch and support review
  • Write link fixes and redirects
  • Merge

@MattDodsonEnglish
Contributor Author

@na-- I am requesting your review first.

I only request your review on the two new pages, related to dropped iterations and VU allocation. If you want to comment on the structural changes of the docs, feel free, but I really just want your expert opinion on the accuracy and usefulness.

@github-actions
Contributor

github-actions bot commented Jan 5, 2023

There's a version of the docs published here:

https://mdr-ci.staging.k6.io/docs/refs/pull/974/merge

It will be deleted automatically in 30 days.

Contributor Author

@MattDodsonEnglish MattDodsonEnglish left a comment


Proofreading


Different scenario configurations can affect many different aspects of your system,
including the generated load, utilized resources, and emitted metrics.
If you know a bit about how scenarios work, you'll both design better tests for resources and goals, and interpret test results with more understanding.
Member


I am not sure I understand "design better tests for resources and goals"

Contributor Author


The thinking was that if you understand scenarios better, you can:

  • Make better decisions for your test goals, because certain scenarios correspond better to certain test designs. Spike testing a single component for raw throughput probably requires an arrival-rate executor. To just see how quickly your system can churn through x number of iterations, shared iterations is a simpler choice.
  • Use resources better, because knowing when not to use maxVUs means you'll use CPU cycles more efficiently.
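For illustration, the two choices above might look like this in a k6 options object. This is a sketch, not text from the docs under review; the scenario names and numbers are made up:

```javascript
// Sketch of the two executor choices described above (values are made up).
export const options = {
  scenarios: {
    // Raw-throughput spike test: open model, iterations start at a set rate.
    spike: {
      executor: 'constant-arrival-rate',
      rate: 200,            // iterations to start per timeUnit
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 100, // VUs initialized up front to run those iterations
    },
    // "Churn through x iterations": closed model, simplest configuration.
    churn: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 1000,     // total iterations, shared among the 10 VUs
    },
  },
};
```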

Of course now I realize that this is an enormous amount of implied information.

Will change to:

If you know a bit about how scenarios work, you'll design better tests and interpret test results with more understanding.

@@ -0,0 +1,88 @@
---
title: VU allocation
Member


"VU allocation" seems a bit misleading, since we are only talking about arrival-rate scenarios

Suggested change
title: VU allocation
title: Arrival-rate VU allocation

or maybe even

Suggested change
title: VU allocation
title: Arrival-rate configuration

Contributor Author


Need to think about this.
"Arrival-rate configuration" probably opens the door to more topics, like using options. That's not bad, but it may mean there's a better place to put this (not blocking for this PR). Is there any reason readers should know about non-arrival rate allocation? If so, maybe we could add it. If not, maybe it doesn't matter to document.

Not sure. It's a good point though.

Contributor Author


Now that I've thought about it, I don't want to go with "Arrival-rate configuration" because that should include Graceful stop and maybe more, which opens a whole new round of content structure. It's nice to keep info atomic.

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Either way, here are my proposed new titles, ranked by preference. You can pick one, and that's what we'll go with:

  1. VU allocation
  2. VU pre-allocation
  3. Arrival-rate VU allocation

Member


because that should include Graceful stop and maybe more

graceful stop is not specific to arrival-rate executors and already has its own dedicated page that explains it: https://k6.io/docs/using-k6/scenarios/graceful-stop/

Is VU allocation in non-arrival-rate ever important? If so, we could just add it to the doc later.

Well, considering the configuration of non-arrival-rate executors is specified in terms of VUs, there isn't really anything complicated to explain there 😅

Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.
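As a rough sketch of that balancing act (a rule of thumb, not an official k6 formula, and `estimatePreAllocatedVUs` is a hypothetical helper): the number of VUs an arrival-rate scenario needs is roughly the target rate multiplied by the average iteration duration.

```javascript
// Rule-of-thumb estimate (hypothetical helper, not part of k6):
// VUs needed ≈ iterations per second × average iteration duration in seconds.
function estimatePreAllocatedVUs(ratePerSecond, avgIterationSeconds) {
  return Math.ceil(ratePerSecond * avgIterationSeconds);
}

// 100 iterations/s, with iterations that take ~0.5s each, keeps ~50 VUs busy.
console.log(estimatePreAllocatedVUs(100, 0.5)); // 50
```

If the system under test slows down and iterations take longer, the same rate needs more VUs, which is exactly when dropped iterations start to appear.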

Contributor Author


Again, the complexity with arrival-rate is not just how VUs are allocated, but how to balance the VUs and the desired rate and how to find the right values for the former based on the latter and on iteration duration.

Is this not encompassed in VU Pre-allocation? Basically, I'm looking for the shortest way to say the most in the most accurate way.

Member


"VU Pre-allocation" only makes sense to you because you already know it applies for arrival-rate executors. A new user won't know that fact and won't click on that menu entry at all, even if this is exactly the information they are looking for.

Comment on lines 25 to 26
- `rate` determines how many iterations k6 starts.
- `timeUnit` determines how frequently it starts the number of iterations.
Member


Having these 2 in different lines, with a separate explanation for each, is more confusing than helpful in my opinion. The current disjointed explanation is longer, more confusing, and less correct than "k6 will try to start rate iterations evenly spread across a timeUnit (default 1s) interval of time"
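To make that concrete (a sketch with made-up scenario names and numbers, not text from the PR): both of these scenarios start 10 iterations per second, because k6 spreads `rate` iterations evenly across each `timeUnit`.

```javascript
// Two made-up configurations that produce the same arrival rate:
// k6 tries to start `rate` iterations evenly spread across each `timeUnit`.
export const options = {
  scenarios: {
    per_second: {
      executor: 'constant-arrival-rate',
      rate: 10,
      timeUnit: '1s',   // 10 iterations spread across every second
      duration: '30s',
      preAllocatedVUs: 20,
    },
    per_minute: {
      executor: 'constant-arrival-rate',
      rate: 600,
      timeUnit: '1m',   // 600 iterations spread across every minute
      duration: '30s',
      preAllocatedVUs: 20,
    },
  },
};
```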


<Blockquote mod="attention" title="">

In cloud tests, **both `preAllocatedVUs` and `maxVUs` count against your subscription.**
Member


Another problem: you mention maxVUs here, but we haven't mentioned maxVUs anywhere before in this document... This was one of the reasons for my longer explanation that you discarded; it had a paragraph with a cohesive explanation of both preAllocatedVUs and maxVUs

Contributor Author


I'm still somewhat of the opinion that maxVUs shouldn't be mentioned anywhere :-). What I'll do is make the first admonition only about preAllocatedVUs, and add a second admonition in the maxVUs section. It's not very elegant, but it doesn't cram so much information together.

Member


Again, if I could rewrite history, maxVUs probably would not exist 😅 But now that it exists, we should try to make it as clear as possible how it functions and why it might not be a good idea to use.

An admonition only for preAllocatedVUs doesn't make sense; this is how cloud subscriptions normally work, i.e. no admonition is needed just for it.

And if we want to tuck maxVUs only at the end of the document, somewhat out of sight (which I don't necessarily mind), then we should only have an admonition there.

Contributor Author

@MattDodsonEnglish MattDodsonEnglish Jan 13, 2023


Here's how I've handled it: two admonitions. No points for subtlety, but I don't think anyone can say we're being sneaky.

EDIT: These are at the top and bottom of the page. In the GitHub UI it looks joined together.


Member

@na-- na-- Jan 13, 2023


Nobody will read the text you have highlighted in the middle of that paragraph 😅 People will just see the two admonitions and get the very wrong impression that in the cloud we will charge them for preAllocatedVUs + maxVUs

Contributor Author


Point taken. With 19784d4, it looks like this:


MattDodsonEnglish and others added 4 commits January 13, 2023 06:37
Co-authored-by: na-- <n@andreev.sh>
…s/00 About scenarios/02 VU allocation.md

Co-authored-by: na-- <n@andreev.sh>
- More explicit title
- Double admonition
- Join rate and timeUnit in one list item
- More explicit example
- Update list page for new page URI
@MattDodsonEnglish
Contributor Author

@ppcano , na-- and I have done a pretty big job on the two new docs, and I don't think anyone has the desire to get into them anymore, but would you mind doing a quick review of the info architecture changes?

There's no doubt that there's more work to do to organize this information, but I think it can wait for more PRs.

- Whether VU traffic stays constant or changes
- Whether to model traffic by iteration number or by VU arrival rate.

Your scenario object must define the `executor` property with one of the predefined executors names.
Along with the generic scenario options, each executor object has additional options specific to its workload.
For the list of the executors, refer to the [Executor guide](/using-k6/scenarios/executors/).
For the list of the executors, refer to [Executors](/using-k6/scenarios/executors/).
Collaborator


@MattDodsonEnglish I think we should list here all the executors. If not, readers might move to Concepts without getting the "general" idea of the different executor options.

I suggest adding something similar to the Executors table. For example:

Contributor Author


@ppcano I deleted that list in #936. I didn't like duplicating content in this way.

I'll make a new PR to put it back in, maybe just as a summary:

You can configure executors to distribute the workload according to:

  • Iterations. Either shared by the VUs, or distributed across them.
  • VUs. Either a constant number or a ramping number.
  • Iterations per second. Either constant or ramping.

Contributor Author


I'll make a new PR to put it back in

#985

Collaborator

@ppcano ppcano left a comment


added a comment with a suggestion

@folex

folex commented Oct 25, 2023

Could you please clarify a little bit regarding connection between Open Model and Dropped Iterations?

The Test lifecycle doc states the following:

The default function life-cycle
A VU executes the default() function from start to end in sequence. Once the VU reaches the end of the function, it loops back to the start and executes the code all over.

But doesn't it contradict the Open Model? Which states:

Compared to the closed model, the open model decouples VU iterations from the iteration duration. The response times of the target system no longer influence the load on the target system.

As I understand it, to avoid coordinated omission, the VU's default() function should be called on a strict schedule, every N ms, without waiting for default() to finish.

Now, there's also a question of what it means for a VU to be "free" or "busy".

The Dropped Iterations doc states the following:

With constant-arrival-rate and ramping-arrival-rate, iterations drop if there are no free VUs.

Here I don't really understand what "free VU" or "busy VU" means. Is a VU that has an iteration in progress considered "busy"? How does that interplay with Open Model, which must run VU's function on a strict schedule?


Another question about free/busy arises from the fact that default() is an async function. So there's some time taken to create the Promise, and then some time taken for that Promise to complete. Which of these is considered to be "iteration time"?

Dropped Iterations article sheds some light on that:

The reasons for these dropped iterations vary:
The SUT response has become so long that k6 starts dropping scheduled iterations from the queue.
The SUT iteration duration has become so long that k6 needs to schedule more VUs to reach the target arrival rate, exceeding the number of scheduled iterations.

So there's clearly a separation between "iteration duration" and "response". But how exactly are these defined? Is it related to default()'s Promise creation and Promise completion, or is it something else entirely?


Hope my questions make sense! And thank you very much for the amazing work you are doing here; systematizing and documenting benchmark approaches is super important!

@MattDodsonEnglish
Contributor Author

MattDodsonEnglish commented Oct 25, 2023

Hey @folex , these are great questions; maybe you should make an issue. I no longer work on the k6 docs, and I don't know if anyone else will see this thread.

But I'm still a k6 fan, and I did write the sentences you quoted, so I'll try to comment on a few things.

  • About the lifecycle: I think you're definitely right: that sentence describes the closed model. Maybe it could be re-written: "Depending on the configuration, VUs either execute in loops from start to finish, or, in the case of arrival-rate executors, at set rates of time."

  • About "busy" VUs. I think this just means they are running an iteration. Have you by chance read this doc? It might help a little. https://k6.io/docs/using-k6/scenarios/concepts/arrival-rate-vu-allocation/

    Before an arrival-rate scenario starts, k6 first initializes the number of preAllocatedVUs. When the test runs, the number of available preAllocatedVUs determines how many iterations k6 can start. k6 tries to reach the target iterations per second, and one of two things can happen: [they are idle; or all are busy and k6 emits a drop]

I can't remember the difference between iteration time and response time, but I think each iteration could make many requests, all with unique response durations. The iteration duration covers the entire VU code run, including all requests, sleep time, console logs, etc.
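To illustrate that distinction with made-up numbers (all the timings below are hypothetical, just to show the arithmetic): an iteration's duration spans everything the VU does, so it can be much longer than any single response time.

```javascript
// Made-up timings for one iteration that makes three requests.
const responseTimesMs = [120, 80, 200]; // per-request response durations
const sleepMs = 1000;                   // time spent in sleep() calls
const scriptOverheadMs = 5;             // JS execution, logging, etc.

// The iteration duration spans all of it, not just one response.
const iterationDurationMs =
  responseTimesMs.reduce((sum, t) => sum + t, 0) + sleepMs + scriptOverheadMs;

console.log(iterationDurationMs); // 1405
```

So here no single response took longer than 200 ms, yet the iteration lasted about 1.4 s, which is the number that matters for arrival-rate VU allocation.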

About the async stuff, I have no idea. I defer to @mstoykov 🙇

@mstoykov
Contributor

Hi @folex,

A busy VU is one that is executing an iteration. A free one is one that isn't executing one ;).

An iteration, once started, ends only once it has no async jobs left to finish, regardless of whether the default function is async, uses setTimeout, or uses pure promises with no async/await.

This is predominantly because anything else would've been really confusing.

```javascript
import http from "k6/http";

export default function () {
  // someurl is a placeholder for the target URL
  http.asyncRequest("GET", someurl).then((res) => {
    // do something with res
  });
}
```

Should this code with ... ramping-vus executor just keep starting asyncRequest requests as it gets to the end of the iteration?

The answer we (me specifically) chose is that this isn't really a good idea; instead, k6 waits for everything async to finish. It already needed to wait for them at the end of k6 run anyway, so it made sense to not "contaminate" between iterations.

So in this case, a busy VU is busy even if it is just waiting on an async operation to finish.

As such, an iteration's duration is the whole time between the iteration starting and the last async operation (if any) finishing.
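That rule can be sketched in plain Node (not k6; `runIteration` and `track` are hypothetical stand-ins for what the runtime does internally): the "iteration" only counts as finished once every async job it started has settled.

```javascript
// Sketch in plain Node of the rule described above: an "iteration"
// ends only when all of its async jobs have settled.
const log = [];

async function runIteration(body) {
  const pending = [];
  const track = (p) => { pending.push(p); return p; };
  body(track);                 // body may return before its async work does...
  await Promise.all(pending);  // ...so wait for every tracked job to settle
  log.push('iteration end');
}

const runDone = runIteration((track) => {
  // Analogous to http.asyncRequest(...).then(...) in the snippet above:
  track(Promise.resolve().then(() => log.push('async job done')));
  log.push('function returned');
});

runDone.then(() => console.log(log.join(', ')));
// prints: function returned, async job done, iteration end
```

Note that the function returns first, but the iteration is still "busy" until the dangling promise resolves.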

As @MattDodsonEnglish pointed out, iterations can have multiple responses, or none. k6 just happens to be primarily used in cases where there are things such as requests and responses, but this is not really a requirement.

As I understand it, to avoid coordinated omission, the VU's default() function should be called on a strict schedule, every N ms, without waiting for default() to finish.

While in theory this is true, in reality it's impossible to just keep "starting" stuff, which is why, when we can't, we tell users that we dropped iterations, which means that the test should probably be marked failed.

Even outside of k6 this isn't really possible: you will run out of resources, such as memory, CPU, or file descriptors.

In the case of k6, another limiting factor is that k6 runs JS, and JS is a single-threaded language by specification. So in order for k6 to run an iteration, it needs a JS VM that is not running one at the moment, which is basically what a VU is.

In theory, you can keep making VUs, but:

  1. you then hit the problems above
  2. my experience helping users has shown that the moment you start doing that, you usually end up doing fewer and fewer iterations, as more and more resources go toward creating VUs rather than actually doing the requests.
  3. it usually shows that something is very wrong.

Part of what k6 does is run the same test repeatedly so you can compare and contrast. As such, if the test used to pass with X preallocatedVUs and is now dropping a bunch of iterations, that is probably pretty bad.

On the other hand, there was some discussion around letting arrival-rate executors specifically run multiple async default functions. But that has not been implemented and will likely need some limits as well, as just running 2000 default iterations in the same VU will likely not go great.

Additionally, at the time this was discussed, we were just adding async code and still mostly did not have many uses for it.

Not certain if I answered all your questions, but I hope this helps. 🙇

Also, commenting on merged PRs isn't the best idea; maybe try the community forum. I only saw that someone asked me something on a merged PR very much by chance.
