ramping-arrival-rate: Avoid to spawn a new goroutine for every iteration #1957
Conversation
Codecov Report
@@ Coverage Diff @@
## master #1957 +/- ##
==========================================
+ Coverage 71.43% 71.47% +0.03%
==========================================
Files 183 183
Lines 14247 14251 +4
==========================================
+ Hits 10178 10186 +8
+ Misses 3438 3434 -4
Partials 631 631
No improvements 😟
The current code adds 2×MaxVUs goroutines; a future refactor could consider re-using the goroutines spawned from
I noted the problem in an inline comment, but to reiterate: this change breaks gracefulStop and the assurance that VUs are done executing JS when they are returned to the common pool, which may cause a data race if we try to run another scenario on the same VU (JS runtimes are single-threaded).
Added a test to cover the graceful case, and polished the solution a bit to fix the graceful issue.
LGTM besides some minor naming nitpicks (for which I don't really have any better ideas 😅)
lib/executor/ramping_arrival_rate.go (Outdated):

    // runningVUs controls the activeVUs
    // executing the received requests for iterations.
    type runningVUs struct {
        Iterations chan struct{}
I ❤️ the approach of splitting this into a separate small component, though I'm not 100% sure about the names... my ideas aren't necessarily any better... 😅 `VUPool`? `ActiveVUPool`?
In any case, `Iterations` probably should not be exported, and the comment looks a bit wonky.
I would prefer `vuPool`, as `runningVUs` IMO is "wrong" since they are not all running... maybe `runnableVUs`, but that seems more like `activeVUs`, and also neither name communicates that the VUs might be running already or might be waiting for something to run... so I think `vuPool` is better IMO.
`activeVUPool` sounds slightly better to me.
Also `TryRunIteration` or `RunIterationMaybe` instead of `Run`. Not sure about the `Try`/`Maybe`, but `Iteration` should be part of it.
LGTM in general, good job!
I have some nitpicks, one of which grew while I was writing it out 😅 , but nothing blocking IMO ... but we are in code freeze so ;)
lib/executor/ramping_arrival_rate.go (Outdated):

    type runningVUs struct {
        Iterations chan struct{}

        listening sync.WaitGroup
nitpick: I prefer `sync.WaitGroup`s to be called `wg` unless there are many; in general the name should say (as in this case) why the wait group exists. Also, I consider "listening" to be something to do with networking rather than listening on a channel... I would in general call them "waiting" or "blocked" on a channel, but again (as in the pool above) they might not actually be waiting/listening/blocked on a channel but instead running. In combination with the actual `running` counter below, this makes it seem like `listening` is a wait group of only the currently "listening" VUs, while `running` is the number of actually running VUs. The last part is actually true: `running` is the number of currently running VUs.
lib/executor/ramping_arrival_rate.go (Outdated):

    iterations := make(chan struct{})
    iterators := runningVUs{
        Iterations: iterations,
    }

    defer func() {
        close(iterations)
I would prefer that you encapsulate `iterations` as well and have `newRunningVUs` (or IMO `newVUPool`) which makes the `iterations` channel, plus a `stop` method that closes `iterations` and effectively stops the "running" of VUs.
LGTM, don't have much to add besides the naming issues already mentioned, and that none of the `runningVUs` fields/methods need to be exported.
Even though the main performance issue was addressed in #1955, from a quick profile run this does get rid of the excessive stack creation mentioned in #1386 (comment), so it's definitely an improvement. 👍
Create a dedicated goroutine to process iterations when a new ActiveVU is created for the ramping arrival rate executor. This changes the previous behaviour, where a new goroutine was created for every iteration. The previous solution impacted the expected rate (iterations/s) when the number of PreAllocatedVUs was around a thousand. Added a benchmark to check the rate with a set of PreAllocatedVUs cases. Closes #1944
Codecov Report
@@ Coverage Diff @@
## master #1957 +/- ##
==========================================
- Coverage 71.84% 71.41% -0.44%
==========================================
Files 182 177 -5
Lines 14237 14095 -142
==========================================
- Hits 10229 10066 -163
- Misses 3367 3387 +20
- Partials 641 642 +1
Done and rebased. Renamed the struct and the run method, unexported.
LGTM, just noticed some search/replace typos :)
Co-authored-by: Ivan Mirić <ivan@imiric.com>
    vusPool.Close()
    // Make sure all VUs aren't executing iterations anymore, for the cancel()
    // below to deactivate them.
    <-returnedVUs
    cancel()
    activeVUsWg.Wait()
    }()
This should prevent the race from https://github.com/k6io/k6/pull/1957/checks?check_run_id=2631773502
==================
WARNING: DATA RACE
Read at 0x00c000062078 by goroutine 128:
internal/race.Read()
/opt/hostedtoolcache/go/1.16.4/x64/src/internal/race/race.go:37 +0x206
sync.(*WaitGroup).Add()
/opt/hostedtoolcache/go/1.16.4/x64/src/sync/waitgroup.go:71 +0x219
go.k6.io/k6/lib/executor.(*activeVUPool).AddVU()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:519 +0x64
go.k6.io/k6/lib/executor.RampingArrivalRate.Run.func3()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:361 +0x290
go.k6.io/k6/lib/executor.RampingArrivalRate.Run.func4()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:379 +0x571
Previous write at 0x00c000062078 by goroutine 34:
internal/race.Write()
/opt/hostedtoolcache/go/1.16.4/x64/src/internal/race/race.go:41 +0x125
sync.(*WaitGroup).Wait()
/opt/hostedtoolcache/go/1.16.4/x64/src/sync/waitgroup.go:128 +0x126
go.k6.io/k6/lib/executor.(*activeVUPool).Close()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:535 +0x77
go.k6.io/k6/lib/executor.RampingArrivalRate.Run.func1()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:339 +0x54
go.k6.io/k6/lib/executor.RampingArrivalRate.Run()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate.go:481 +0x20d0
go.k6.io/k6/lib/executor.(*RampingArrivalRate).Run()
<autogenerated>:1 +0x127
go.k6.io/k6/lib/executor.TestRampingArrivalRateRunUnplannedVUs()
/home/runner/work/k6/k6/lib/executor/ramping_arrival_rate_test.go:189 +0x7ee
testing.tRunner()
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/testing.go:1193 +0x202
As far as I can see, the waitgroup was empty at the time, as a VU wasn't executing yet but was just starting to, and it raced between the `Add` and the `Close`. This change makes certain we no longer try to `Add` while `Close` is waiting.
Suggested change (move `vusPool.Close()` after `<-returnedVUs` and `cancel()`):

    // Make sure all VUs aren't executing iterations anymore, for the cancel()
    // below to deactivate them.
    <-returnedVUs
    cancel()
    vusPool.Close()
    activeVUsWg.Wait()
    }()
Switched the ramping-arrival-rate executor from a goroutine per iteration to a goroutine per VU. Refactoring the Run method is not part of this PR; it is scheduled for the original issue, so I tried to impact the code in a minimal way.
However, the main performance issue was already resolved by #1955, as the following tests should assert:
using this config:
the result with v0.31.1, still affected by the problem:
and the result with master:
Closes #1944