
[Refactor] Slight changes to engine and executors tests to improve readability #2268

Closed · wants to merge 4 commits · Changes from 1 commit

Diff view
173 changes: 106 additions & 67 deletions core/engine_test.go
@@ -208,6 +208,7 @@ func TestEngineStopped(t *testing.T) {

func TestEngineOutput(t *testing.T) {
t.Parallel()

testMetric := stats.New("test_metric", stats.Trend)

runner := &minirunner.MiniRunner{Fn: func(ctx context.Context, out chan<- stats.SampleContainer) error {
@@ -221,7 +222,8 @@ func TestEngineOutput(t *testing.T) {
Iterations: null.IntFrom(1),
})

assert.NoError(t, run())
err := run()
require.NoError(t, err)
wait()

cSamples := []stats.Sample{}
@@ -241,76 +243,82 @@ func TestEngineOutput(t *testing.T) {
}
}

func TestEngine_processSamples(t *testing.T) {
func TestEngineProcessSamplesOfMetric(t *testing.T) {
t.Parallel()

// Arrange
metric := stats.New("my_metric", stats.Gauge)
e, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{})
defer wait()

t.Run("metric", func(t *testing.T) {
t.Parallel()
e, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{})
defer wait()
// Act
Collaborator:

I'm against having the code with a lot of these comments; if it isn't clear where the arrange, act, and assert steps start and end, then it probably means the test needs to be refactored again.

Member:

Me too, these comments don't really add anything besides noise for me 😕

Member Author (@oleiade, Dec 13, 2021):

I wholeheartedly disagree with you folks, but I don't have a strong enough opinion about it to argue 😄

To expand on my view: tests are also documentation. They need to be easy to read and parse, structured, and explicit about their intent and the behavior they outline. I personally had a hard time reading through these tests and understanding them, since it required quite a bit of overhead. One of the main reasons, I found, was that it was not clear to me what was setup code (arrange), what was actually being tested (act), and what the expected output state was (assert).
AAA testing is a technique I was introduced to a while ago and have followed ever since; in general it goes further than just having comments to outline the different sections of the tests. Another best practice on that front is to have want-prefixed local variables outlining the expected outcome state, and got-prefixed variables to compare them against, outlining the output we actually got.

Member:

Googling "AAA testing" leads me to

Abdominal aortic aneurysm (AAA) screening is a way of checking if there's a bulge or swelling in the aorta

😅 The second result is https://medium.com/@pjbgf/title-testing-code-ocd-and-the-aaa-pattern-df453975ab80 though - do you have a better introduction to the concept? Because I think that's written by someone who likes the concept, like you, but also has the same opinion as @codebien and me about the explicit comments:

Test code, in the same way as production code, is supposed to be clean and self-explanatory. Preempting what you are going to do just adds unnecessary clutter. You can still have a clear distinction of the three steps by using spaces and indentation. Which are ultimately the key features the compiler provides you to structure your code.
It does look silly having a comment preceding each paragraph in this text, it tires the reader, as it has to constantly go through the obvious (or skip it). So why would you do that with your code?

Couldn't have written it better myself 😅 👍 for refactoring tests to be cleaner and easier to read, ideally self-explanatory. The same as any other piece of code. 👎 for comments that should be obvious from the context.

Member Author (@oleiade, Dec 13, 2021):

Sorry I used an acronym, I'm generally the one chasing people telling them not to do it, and here I go losing at my own game 🟥 ...

AAA stands for Arrange, Act, Assert. It's a very common pattern in software testing, specifically because it helps outline and make the natural flow of unit tests explicit: arrange some input, act upon them, assert the produced output.

I don't mean to play the "someone is wrong on the internet" card, but I found the article you point at quite judgmental; it left me disagreeing rather than convinced. Here are a few counterexamples I would like to point you to: from Microsoft, a Manning book about unit testing, and a blog post I often stumble upon when I need a refresher.

Here I end up arguing about it, although I actually wanted to avoid that. All I can say is that I was introduced to this technique years ago by a colleague. I was really skeptical at first, but I never went back once I started. I spend 90% of my time reading code rather than writing it, and anything that makes my life "easier" (I acknowledge this means different things to different people 🙏🏻) as a reader is a gain from my perspective. It took me a lot of effort to make sense of these tests; I found them (too) long and complicated, and it was unclear to me at first glance what was setup code, what was "act" code, and what was assertion code. You might have a different experience, but that was mine 😇

Now I acknowledge everyone has a different experience, a different brain, and that's exactly why I'm not especially trying to convince you folks. What becomes apparent, though, is that I would like to have some document somewhere where we gather, discuss, and decide on best practices we follow on this repository to be able to quickly sort out these disagreements in the future 😄

Member:

It took me a lot of effort to make sense out of these tests, I found them (too) long, complicated, and it was unclear to me at first glance what was actually setting up code, actually "act" code, and assertion code. You might have a different experience, but that was mine 😇

I don't disagree with this part, a lot of the tests can definitely be improved, substantially 😅 I don't even disagree that tests should be split into these 3 sections whenever possible, that's the natural flow for most unit tests. However, I disagree that we should add these // Arrange, // Act, // Assert comments in the tests. Instead, I think the tests should be refactored to be so clean and easy to read that we don't need such comments, i.e. in such a way that the sections are obvious without them 🙂

e.processSamples(
[]stats.SampleContainer{stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1"})}},
)

e.processSamples(
[]stats.SampleContainer{stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1"})}},
)
// Assert
assert.IsType(t, &stats.GaugeSink{}, e.Metrics["my_metric"].Sink)
}

assert.IsType(t, &stats.GaugeSink{}, e.Metrics["my_metric"].Sink)
})
t.Run("submetric", func(t *testing.T) {
t.Parallel()
ths, err := stats.NewThresholds([]string{`1+1==2`})
assert.NoError(t, err)
func TestEngineProcessSamplesOfSubmetric(t *testing.T) {
t.Parallel()

e, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{
Thresholds: map[string]stats.Thresholds{
"my_metric{a:1}": ths,
},
})
defer wait()
// Arrange
metric := stats.New("my_metric", stats.Gauge)
thresholds, err := stats.NewThresholds([]string{`1+1==2`})
require.NoError(t, err)

sms := e.submetrics["my_metric"]
assert.Len(t, sms, 1)
assert.Equal(t, "my_metric{a:1}", sms[0].Name)
assert.EqualValues(t, map[string]string{"a": "1"}, sms[0].Tags.CloneTags())
engine, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{
Thresholds: map[string]stats.Thresholds{
"my_metric{a:1}": thresholds,
},
})
defer wait()

e.processSamples(
[]stats.SampleContainer{stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1", "b": "2"})}},
)
// Act
submetrics := engine.submetrics["my_metric"]
engine.processSamples(
[]stats.SampleContainer{stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1", "b": "2"})}},
)

// Assert
assert.Len(t, submetrics, 1)
assert.Equal(t, "my_metric{a:1}", submetrics[0].Name)
assert.EqualValues(t, map[string]string{"a": "1"}, submetrics[0].Tags.CloneTags())
assert.IsType(t, &stats.GaugeSink{}, engine.Metrics["my_metric"].Sink)
assert.IsType(t, &stats.GaugeSink{}, engine.Metrics["my_metric{a:1}"].Sink)

assert.IsType(t, &stats.GaugeSink{}, e.Metrics["my_metric"].Sink)
assert.IsType(t, &stats.GaugeSink{}, e.Metrics["my_metric{a:1}"].Sink)
})
}

func TestEngineThresholdsWillAbort(t *testing.T) {
t.Parallel()
metric := stats.New("my_metric", stats.Gauge)

ths, err := stats.NewThresholds([]string{"1+1==3"})
assert.NoError(t, err)
ths.Thresholds[0].AbortOnFail = true

thresholds := map[string]stats.Thresholds{metric.Name: ths}

e, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{Thresholds: thresholds})
// Arrange
metric := stats.New("my_metric", stats.Gauge)
thresholds, err := stats.NewThresholds([]string{"1+1==3"})
require.NoError(t, err)
thresholds.Thresholds[0].AbortOnFail = true
engine, _, wait := newTestEngine(t, nil, nil, nil, lib.Options{Thresholds: map[string]stats.Thresholds{metric.Name: thresholds}})
defer wait()

e.processSamples(
// Act
engine.processSamples(
[]stats.SampleContainer{stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1"})}},
)
assert.True(t, e.processThresholds())
shouldAbort := engine.processThresholds()

// Assert
assert.True(t, shouldAbort)
}

func TestEngineAbortedByThresholds(t *testing.T) {
t.Parallel()
metric := stats.New("my_metric", stats.Gauge)

ths, err := stats.NewThresholds([]string{"1+1==3"})
assert.NoError(t, err)
ths.Thresholds[0].AbortOnFail = true

thresholds := map[string]stats.Thresholds{metric.Name: ths}

// Arrange
metric := stats.New("my_metric", stats.Gauge)
thresholds, err := stats.NewThresholds([]string{"1+1==3"})
require.NoError(t, err)
thresholds.Thresholds[0].AbortOnFail = true
done := make(chan struct{})
runner := &minirunner.MiniRunner{Fn: func(ctx context.Context, out chan<- stats.SampleContainer) error {
out <- stats.Sample{Metric: metric, Value: 1.25, Tags: stats.IntoSampleTags(&map[string]string{"a": "1"})}
@@ -319,13 +327,15 @@ func TestEngineAbortedByThresholds(t *testing.T) {
return nil
}}

_, run, wait := newTestEngine(t, nil, runner, nil, lib.Options{Thresholds: thresholds})
_, run, wait := newTestEngine(t, nil, runner, nil, lib.Options{Thresholds: map[string]stats.Thresholds{metric.Name: thresholds}})
defer wait()

// Act
go func() {
assert.NoError(t, run())
}()

// Assert
select {
case <-done:
return
@@ -549,6 +559,8 @@ func TestSentReceivedMetrics(t *testing.T) {

func TestRunTags(t *testing.T) {
t.Parallel()

// Arrange
tb := httpmultibin.NewHTTPMultiBin(t)

runTagsMap := map[string]string{"foo": "bar", "test": "mest", "over": "written"}
@@ -627,17 +639,6 @@ func TestRunTags(t *testing.T) {
InsecureSkipTLSVerify: null.BoolFrom(true),
})

errC := make(chan error)
go func() { errC <- run() }()

select {
case <-time.After(10 * time.Second):
t.Fatal("Test timed out")
case err := <-errC:
require.NoError(t, err)
}
wait()

systemMetrics := []string{
metrics.VUsName, metrics.VUsMaxName, metrics.IterationsName, metrics.IterationDurationName,
metrics.GroupDurationName, metrics.DataSentName, metrics.DataReceivedName,
@@ -652,6 +653,19 @@ func TestRunTags(t *testing.T) {
return "the rainbow"
}

// Act
errC := make(chan error)
go func() { errC <- run() }()

// Assert
select {
case <-time.After(10 * time.Second):
Collaborator:

I think we could decrease this timeout, probably to get feedback faster in case we ever hit it.

Member:

why? the timeout is here just in case, so we don't hit the global 120s one 🤷‍♂️

t.Fatal("Test timed out")
case err := <-errC:
require.NoError(t, err)
}
wait()

for _, s := range mockOutput.Samples {
for key, expVal := range runTagsMap {
val, ok := s.Tags.Get(key)
@@ -668,6 +682,8 @@ func TestRunTags(t *testing.T) {

func TestSetupTeardownThresholds(t *testing.T) {
t.Parallel()

// Arrange
tb := httpmultibin.NewHTTPMultiBin(t)

script := []byte(tb.Replacer.Replace(`
@@ -720,9 +736,11 @@ func TestSetupTeardownThresholds(t *testing.T) {
})
defer wait()

// Act
errC := make(chan error)
go func() { errC <- run() }()

// Assert
select {
case <-time.After(10 * time.Second):
t.Fatal("Test timed out")
@@ -735,7 +753,8 @@ func TestSetupTeardownThresholds(t *testing.T) {
func TestSetupException(t *testing.T) {
t.Parallel()

script := []byte(`
// Arrange
srcScript := []byte(`
import bar from "./bar.js";
export function setup() {
bar();
@@ -744,20 +763,22 @@ func TestSetupException(t *testing.T) {
};
`)

memfs := afero.NewMemMapFs()
require.NoError(t, afero.WriteFile(memfs, "/bar.js", []byte(`
fsScript := []byte(`
export default function () {
baz();
}
function baz() {
throw new Error("baz");
}
`), 0x666))
`)
memfs := afero.NewMemMapFs()
require.NoError(t, afero.WriteFile(memfs, "/bar.js", fsScript, 0x666))

registry := metrics.NewRegistry()
builtinMetrics := metrics.RegisterBuiltinMetrics(registry)
runner, err := js.New(
testutils.NewLogger(t),
&loader.SourceData{URL: &url.URL{Scheme: "file", Path: "/script.js"}, Data: script},
&loader.SourceData{URL: &url.URL{Scheme: "file", Path: "/script.js"}, Data: srcScript},
map[string]afero.Fs{"file": memfs},
lib.RuntimeOptions{},
builtinMetrics,
@@ -773,9 +794,11 @@ func TestSetupException(t *testing.T) {
})
defer wait()

// Act
errC := make(chan error)
go func() { errC <- run() }()

// Assert
select {
case <-time.After(10 * time.Second):
t.Fatal("Test timed out")
@@ -792,6 +815,7 @@ func TestVuInitException(t *testing.T) {
func TestVuInitException(t *testing.T) {
t.Parallel()

// Arrange
script := []byte(`
export let options = {
vus: 3,
@@ -829,8 +853,11 @@ func TestVuInitException(t *testing.T) {

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// Act
_, _, err = engine.Init(ctx, ctx) // no need for 2 different contexts

// Assert
require.Error(t, err)

var exception errext.Exception
Expand All @@ -844,6 +871,8 @@ func TestVuInitException(t *testing.T) {

func TestEmittedMetricsWhenScalingDown(t *testing.T) {
t.Parallel()

// Arrange
tb := httpmultibin.NewHTTPMultiBin(t)

script := []byte(tb.Replacer.Replace(`
@@ -892,9 +921,11 @@ func TestEmittedMetricsWhenScalingDown(t *testing.T) {
mockOutput := mockoutput.New()
engine, run, wait := newTestEngine(t, nil, runner, []output.Output{mockOutput}, lib.Options{})

// Act
errC := make(chan error)
go func() { errC <- run() }()

// Assert
select {
case <-time.After(12 * time.Second):
t.Fatal("Test timed out")
@@ -1107,8 +1138,9 @@ func TestMinIterationDurationInSetupTeardownStage(t *testing.T) {

func TestEngineRunsTeardownEvenAfterTestRunIsAborted(t *testing.T) {
t.Parallel()
testMetric := stats.New("teardown_metric", stats.Counter)

// Arrange
testMetric := stats.New("teardown_metric", stats.Counter)
ctx, cancel := context.WithCancel(context.Background())

runner := &minirunner.MiniRunner{
@@ -1127,7 +1159,9 @@ func TestEngineRunsTeardownEvenAfterTestRunIsAborted(t *testing.T) {
VUs: null.IntFrom(1), Iterations: null.IntFrom(1),
})

assert.NoError(t, run())
// Act
err := run()
require.NoError(t, err)
Comment on lines +1163 to +1164
Collaborator:

assert.NoError(t, run())

we often use this form in the codebase; do you think it is not readable?

Member:

For me, the single line variant is much more readable... It's the test equivalent of if err := something; err != nil {}. Why would we have err in a global variable if we're not going to use it down the line?

Member Author (@oleiade, Dec 13, 2021):

As in my previous comment: I wholeheartedly disagree with you folks, but I don't have a strong enough opinion about it to argue 😄

This change was in line with AAA testing, and I should probably have renamed err to gotErr to be more explicit. Personally I find it better practice to be as explicit and stateful as possible in the context of a test.

Regarding the if err := something; err != nil {} format, I think it's a matter of personal preference, and I don't care enough to argue about it; personally I find that syntax hard to read and always avoid it in my code. Once again it's personal and likely linked to our own experiences, but I do find:

err := something()
if err != nil {
  log.Fatal(err)
}

easier to read, because it creates a sequence of visual blocks that are easy for me to parse visually. Each line does only one thing. I personally tend to prefer explicit over shorter.

Member:

Explicit > shorter is fine, but I personally prefer to have each variable live in the smallest possible scope, whenever possible, to avoid "pollution". It is also easier to reason about, at least for me, because with require.NoError(t, someFunc()) and if err := something; err != nil {} I don't need to wonder if err will be used below.

wait()

var count float64
@@ -1136,12 +1170,15 @@ func TestEngineRunsTeardownEvenAfterTestRunIsAborted(t *testing.T) {
count += sample.Value
}
}

// Assert
assert.Equal(t, 1.0, count)
}

func TestActiveVUsCount(t *testing.T) {
t.Parallel()

// Arrange
script := []byte(`
var sleep = require('k6').sleep;

@@ -1214,9 +1251,11 @@ func TestActiveVUsCount(t *testing.T) {
run, waitFn, err := engine.Init(ctx, ctx) // no need for 2 different contexts
require.NoError(t, err)

// Act
errC := make(chan error)
go func() { errC <- run() }()

// Assert
select {
case <-time.After(15 * time.Second):
t.Fatal("Test timed out")