Optimize ramping-vus #1499

Open
na-- opened this issue Jun 9, 2020 · 9 comments
Labels
evaluation needed (proposal needs to be validated or tested before fully implementing it in k6), executors, performance, refactor

Comments

@na--
Member

na-- commented Jun 9, 2020

As mentioned in #1496 (comment), currently the ramping-vus executor needlessly generates its raw execution steps twice.

Or rather, it generates them at least 4 times...

This is a problem, because the execution requirements of this executor can easily contain thousands of steps...

With a little effort, we can probably combine the first 2 uses (by splitting up GetFullExecutionRequirements() and keeping the individual scenario requirements) and the last 2 uses (by adding a few more private helper methods, so we can reuse data).

So while that would reduce the number of requirement generations to 2, I feel like even that's unnecessary. It seems to me that the real problem is that we pass the ExecutionTuple around way too much... That is, even though we know its value pretty early in the k6 execution, and that value is a complete constant, we don't "give" it to the executor/scenario configs to keep. So they can't perform all of these heavy-duty calculations once, in advance, and just reuse the results. Instead, we pass it to the various functions all of the time, for no apparent benefit...
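As a very rough sketch of what I mean (all of the names below are hypothetical and simplified, not the current k6 API) - the config gets the tuple once, caches the heavy step calculation, and every later requirements query reuses the cached slice:

// Hypothetical sketch: cache the ExecutionTuple and the derived raw steps
// on the scenario config, instead of recomputing them on every call.
package executor

import "time"

// ExecutionStep stands in for a (time offset, planned VUs) pair.
type ExecutionStep struct {
	TimeOffset time.Duration
	PlannedVUs uint64
}

// ExecutionTuple stands in for k6's execution segment/sequence information.
type ExecutionTuple struct{ /* segment, sequence, ... */ }

// RampingVUsConfig is a simplified stand-in for the real executor config.
type RampingVUsConfig struct {
	// ... startVUs, stages, gracefulRampDown, etc.

	et    *ExecutionTuple // set once, early in the k6 run
	steps []ExecutionStep // computed once per tuple, then reused
}

// SetExecutionTuple hands the constant tuple to the config, so every later
// requirements query can reuse the same precomputed steps.
func (c *RampingVUsConfig) SetExecutionTuple(et *ExecutionTuple) {
	c.et = et
	c.steps = nil // drop any cache from a previous tuple
}

// getRawExecutionSteps runs the heavy calculation at most once per tuple.
func (c *RampingVUsConfig) getRawExecutionSteps() []ExecutionStep {
	if c.steps == nil {
		c.steps = c.computeSteps(c.et)
	}
	return c.steps
}

// computeSteps is a placeholder for the real per-stage interpolation logic.
func (c *RampingVUsConfig) computeSteps(et *ExecutionTuple) []ExecutionStep {
	_ = et
	return []ExecutionStep{}
}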

@na-- na-- added the performance, refactor and evaluation needed labels Jun 9, 2020
@na-- na-- added this to the v0.27.0 milestone Jun 9, 2020
@mstoykov
Collaborator

mstoykov commented Jun 9, 2020

I think part of what is said here is basically #1427

@na--
Member Author

na-- commented Jun 10, 2020

Something else to fix with this: #1496 (comment)

@na-- na-- modified the milestones: v0.27.0, v0.28.0 Jun 10, 2020
na-- added a commit that referenced this issue Jul 6, 2020
This was an initial attempt to solve the first point from #1514, but there were a few blockers:
- #1499
- #1427
- #1386 (comment)
@na--
Member Author

na-- commented Jul 15, 2020

A potentially bigger problem is the requirements calculation when we have gracefulRampDown - the current algorithm seems to be very inefficient when there is a long, sustained ramp-down period, for example at the end of the test run.

@na-- na-- modified the milestones: v0.28.0, v0.30.0 Sep 9, 2020
@na-- na-- removed this from the v0.30.0 milestone Jan 13, 2021
@na--
Member Author

na-- commented May 10, 2021

Another way to optimize the requirement calculations would be to combine steps. If we have to rapidly increase/decrease the number of active VUs (e.g. more than 1 VU starting/stopping in under 50ms), we should probably combine those steps into a single one.
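Something like this, as a sketch (coalesceSteps and the ExecutionStep shape here are made up for illustration, not the actual k6 types):

// Hypothetical helper, not part of k6 today: collapse steps that fall within
// a small time window (e.g. 50ms) into a single step, keeping the earliest
// offset in the window and the latest planned VU count.
package executor

import "time"

type ExecutionStep struct {
	TimeOffset time.Duration
	PlannedVUs uint64
}

func coalesceSteps(steps []ExecutionStep, window time.Duration) []ExecutionStep {
	if len(steps) == 0 {
		return steps
	}
	out := []ExecutionStep{steps[0]}
	for _, s := range steps[1:] {
		last := &out[len(out)-1]
		if s.TimeOffset-last.TimeOffset < window {
			// Within the window: merge into the previous step by just
			// updating its target, so a steep ramp becomes one step
			// per window instead of one step per VU change.
			last.PlannedVUs = s.PlannedVUs
			continue
		}
		out = append(out, s)
	}
	return out
}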

@mstoykov
Collaborator

While writing #2256 (comment) I thought about this somewhat and I think it might be better to completely rewrite it to work more like ramping-arrival-rate:

  1. emit the steps lazily and buffer only the next few (rough sketch at the end of this comment).
  2. for the calculation of maxVUs, we could represent maxVUs as a function for each scenario, add those functions up, and then find the maximum of the result. This will be more CPU intensive for small tests, but for something a lot more complicated, where we would otherwise have to generate millions of steps, it should be a lot faster and definitely less memory intensive.

This definitely will need more research.
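A rough illustration of the first point, with made-up names (rampSteps isn't a real k6 function) - produce the steps of a linear ramp one at a time via a closure, so the executor can buffer only the next few instead of holding the whole slice in memory:

// Hypothetical lazy step iterator, just to illustrate the idea.
package executor

import "time"

type ExecutionStep struct {
	TimeOffset time.Duration
	PlannedVUs uint64
}

// rampSteps returns an iterator over the steps of a linear ramp from `from`
// to `to` VUs over `dur`. Each call yields the next step; the second return
// value is false once the ramp is exhausted.
func rampSteps(from, to uint64, dur time.Duration) func() (ExecutionStep, bool) {
	cur := from
	return func() (ExecutionStep, bool) {
		if cur == to {
			return ExecutionStep{}, false
		}
		if to > from {
			cur++
		} else {
			cur--
		}
		// Time at which the ramp reaches `cur` VUs, by linear interpolation.
		frac := float64(int64(cur)-int64(from)) / float64(int64(to)-int64(from))
		return ExecutionStep{
			TimeOffset: time.Duration(frac * float64(dur)),
			PlannedVUs: cur,
		}, true
	}
}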

@na--
Member Author

na-- commented Nov 24, 2021

for the calculation of maxVUs, we could represent maxVUs as a function for each scenario, add those functions up, and then find the maximum of the result. This will be more CPU intensive for small tests, but for something a lot more complicated, where we would otherwise have to generate millions of steps, it should be a lot faster and definitely less memory intensive.

Unfortunately this probably won't help... 😞 The problem is that you can have something like this:

export let options = {
    scenarios: {
        up: {
            executor: 'ramping-vus',
            startVUs: 0,
            stages: [
                { target: 10, duration: '10s' },
            ],
            gracefulRampDown: 0,
            gracefulStop: 0,
        },
        down: {
            executor: 'ramping-vus',
            startVUs: 10,
            stages: [
                { target: 0, duration: '10s' },
            ],
            gracefulRampDown: 0,
            gracefulStop: 0,
        },
    },
};

@mstoykov
Collaborator

Yes @na--, and this can be calculated as the first scenario having the function f(x) = x for x between 0 and 10, and the second being f(x) = 10 - x for x between 0 and 10, so when you combine them you get f(x) = 10.
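Or, written out for the example above (t in seconds over the shared 10s window):

f_{up}(t) = t, \qquad f_{down}(t) = 10 - t, \qquad 0 \le t \le 10
f_{up}(t) + f_{down}(t) = 10 \;\Rightarrow\; \max_{t \in [0,10]} \big(f_{up}(t) + f_{down}(t)\big) = 10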

@na--
Member Author

na-- commented Nov 24, 2021

You've probably forgotten how the calculation of the execution requirements for ramping-vus is implemented 😕 You can't have a simple f(t) function that calculates the execution requirements without state, since they actually depend on the previous steps because of gracefulRampDown. See the dots and the stars here:

// ^
// |
// VUs 6| *..............................
// 5| ***.......*..............................
// 4|*****.....***.....**..............................
// 3|******...*****...***..............................
// 2|*******.*******.****..............................
// 1|***********************..............................
// 0--------------------------------------------------------> time(s)
// 012345678901234567890123456789012345678901234567890123 (t%10)
// 000000000011111111112222222222333333333344444444445555 (t/10)
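Roughly speaking (just to illustrate the dependence on history, this isn't a formula from the code), with a graceful ramp-down window g, the VUs that have to stay allocated at time t are not simply the planned value at t, but something like the worst case over the preceding window:

\mathrm{needed}(t) = \max_{s \in [\max(0,\; t - g),\; t]} \mathrm{planned}(s)

So any per-scenario f(t) would first have to go through a sliding-window maximum before the scenario functions could simply be added up.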

@mstoykov
Collaborator

I haven't forgotten how it works, I just didn't want to go into explaining how those need to be modelled if we decide to try this approach. All that gracefulRampDown/shutdown does is make the function change a bit "later" - you obviously drew something in that graph, so we can fit a function to it ;).

The bigger problem I just realized is that, while this seems to work really nicely with continuous functions, in reality these are step functions. Although, given that they have a name, I would expect there are also nice ways to define and add multiple step functions that will work as well, but again, this will need to be looked into.
