Make test suite exit early if beforeEach hook fails (jest-jasmine2) #8654

Closed

Conversation

lucasfcosta
Contributor

Summary

This PR fixes #6527 by not running the tests inside a test suite if one of its beforeEach hooks fails.

This is already the default behaviour in jest-circus and has been confirmed to be the desired behaviour as per this comment.

Why this problem happens

  1. jest-jasmine2's Env starts building the hierarchy of blocks and their children. This means adding each describe block to its respective parent to build a tree.

  2. If it finds any beforeEach hooks, jest-jasmine2 will add them to an array in the Suite.

  3. Each it (Spec) is added as a child to the describe block it is in.

  4. jest-jasmine2 will start executing tests through the treeProcessor.

  5. For each test (it/Spec) the beforeEach and afterEach hooks will be put into an array in the order they should run and queueRunner will run the functions in the array sequentially.

  6. If any of the functions in the promise chain throws an error, the onException callback will be invoked, but all the following functions will still run because the chain won't be discontinued.
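The last two steps can be illustrated with a minimal, self-contained sketch (hypothetical names; this is not jest-jasmine2's actual code). Each queued function is appended to a single promise chain; a failure invokes `onException`, but the chain is never discontinued, so the test still runs:

```javascript
// Sketch of a queueRunner-style loop: one promise chain over all queued
// functions. A rejection calls onException, but the chain keeps going.
const ran = [];

function queueRunner(fns, onException) {
  return fns.reduce(
    (chain, fn) => chain.then(fn).catch(onException),
    Promise.resolve(),
  );
}

const queue = [
  () => ran.push('beforeEach 1'),
  () => { throw new Error('beforeEach 2 failed'); },
  () => ran.push('test'),      // still runs despite the failure above
  () => ran.push('afterEach'),
];

const run = queueRunner(queue, err => ran.push(`onException: ${err.message}`));

run.then(() => console.log(ran));
// → ['beforeEach 1', 'onException: beforeEach 2 failed', 'test', 'afterEach']
```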

How the fix was implemented

I essentially broke the single chain of promises with all the hooks and the test into two chains. The first chain contains only the beforeEach hooks and the second chain contains the test and the afterEach hooks. Then, if the chain for beforeEach fails I will turn on a flag which will be used to determine whether the continuation (the second chain with tests and afterEach hooks) should run.
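A rough sketch of the idea (hypothetical names, not the actual implementation): run the beforeEach hooks as their own chain, flip a flag if any of them fails, and consult that flag to decide whether the continuation should include the test:

```javascript
// Sketch of the fix: one chain for the beforeEach hooks, a second chain
// for the test and the afterEach hooks, joined by a failure flag.
const ran = [];
let hookFailed = false;

function runChain(fns) {
  return fns.reduce(
    (chain, fn) => chain.then(fn).catch(err => {
      hookFailed = true; // flag consulted by the continuation
      ran.push(`failed: ${err.message}`);
    }),
    Promise.resolve(),
  );
}

const beforeEachHooks = [() => { throw new Error('beforeEach'); }];
const runTest = () => ran.push('test');
const afterEachHooks = [() => ran.push('afterEach')];

const result = runChain(beforeEachHooks).then(() => {
  // Skip the test, but keep the afterEach hooks, if a beforeEach failed.
  const continuation = hookFailed ? afterEachHooks : [runTest, ...afterEachHooks];
  return runChain(continuation);
});

result.then(() => console.log(ran)); // → ['failed: beforeEach', 'afterEach']
```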

Test plan

I have added extensive E2E tests for this change to ensure that multiple scenarios are covered.

I covered the following cases:

  • For the global context
    • Not running tests if a global beforeEach hook fails
    • Still running global afterEach hooks if a global beforeEach fails
    • Still running global afterAll hooks if a global beforeEach fails
  • Inside a single suite
    • Not running a single test if a beforeEach hook fails
    • Not running any tests in the same suite if a beforeEach hook fails
    • Trying to run all beforeEach hooks even if one fails
    • Still running the afterEach hooks if beforeEach fails
    • Still running the afterAll hooks if beforeEach fails
  • Inside nested suites
    • Canceling tests only for the suite the beforeEach hook is in
    • Still running the upper afterEach hooks if the nested beforeEach fails
    • Still running the upper afterAll hook if the nested beforeEach fails

I don't know if the way I organised these tests is adequate; any ideas on how to improve this are more than welcome.

I thought about writing tests for more scenarios, but came to the conclusion that anything I could add would end up being redundant, since the behaviour they'd exercise is already covered.

I also made the runAgainstSnapshot function generic since that's what I do in each test anyway.

@lucasfcosta lucasfcosta force-pushed the beforeEach-aborts-tests branch 2 times, most recently from f8c024f to 58c95c0 Compare July 7, 2019 16:00
@lucasfcosta lucasfcosta force-pushed the beforeEach-aborts-tests branch from 58c95c0 to 0b69ab0 Compare July 7, 2019 16:01
runAgainstSnapshot('singleBlock/multipleTests.test.js');
});

test('runs all of the beforeEach hooks if one fails but does not run the tests', () => {
Contributor Author

This test fails in jest-circus because it stops running the beforeEach hooks once one of them fails.

I didn't fix this because I'm not sure if that's the desired behaviour.

IMO if all afterEach and afterAll hooks still run I think that all beforeEach hooks should run too, otherwise the behaviour is not symmetric. Given that beforeAll/beforeEach hooks will usually setup tests and afterAll/afterEach will do clean-up I'd argue that if we're running all the "teardown" part we should run all the "setup" part.

I'm happy to either change the behaviour in jest-circus if you agree with the explanation above or update my PR to make jest-jasmine2 have the same behaviour as jest-circus if you prefer.

Member

beforeEach and afterEach are just sugar for

function beforeEach() {
  console.log('before');
}

function afterEach() {
  console.log('after');
}

function test() {
  beforeEach();
  expect(1).toBe(1);
  afterEach();
}

(they can be async and Jest waits, but conceptually this is what happens).

I'm not sure we should assign any semantic "setup" and "teardown" meaning to the lifecycle functions. So (IMO) we should just bail if before fails and not run after.


@thymikee @cpojer @scotthovestadt @jeysal @mattphillips do you agree or disagree?

Collaborator

@thymikee thymikee Jul 9, 2019

I'm not sure if there's a good solution to that. Both have issues. before may start some listeners before failing and:

  1. they won't be cleared by after causing a leak (may surface in watch mode)
  2. after will try to run after failing before and fail as well, because the functionality expected to be there is not present

Since Jest already has a mechanism to listen for open connections after test run, I'd lean towards @SimenB solution. So don't treat after as a "teardown".

Contributor

@lucasfcosta Just to make sure I’m understanding correctly: you are saying that if one before hook fails then the rest are skipped, but all of the after hooks are still run?

If this is the case then I agree; we probably do not want to run any after hooks whose equivalent before hook hasn’t already run.

If we do run the after hook without running the equivalent before we can end up blowing up because something the after depends on isn’t there.

Contributor Author

@lucasfcosta lucasfcosta Jul 9, 2019

I see. I agree with you both. I think the way @SimenB described it is an interesting way to see how the mechanism actually runs, but I'd be careful when translating these concepts into alternative pieces of code, as they might become inaccurate. In the excerpt above, for example, a failing beforeEach throws and takes us out of the normal execution flow, but if the same happened in an afterEach followed by another, the behaviour would be different.

I think you are correct, especially due to item 2 mentioned by @thymikee. Even if we treated after as a teardown, it should indeed fail if its setup did not complete or was interrupted somehow.

tl;dr I'm convinced that the adequate behaviour should be as both of you preferred: a failing beforeEach should interrupt the next ones in the sequence.

Contributor

I think the major points have already been mentioned. Side note: The before, after API that we've had even a long time before Jest seems flawed to me because of things like this (and because it does not help structure tests well). The API should look more like React's useEffect, associating a teardown operation with its setup by having it as a return value or second parameter.
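One way such an API could look (purely hypothetical, not a real Jest API): the setup function returns its own teardown, so the two are always paired and a teardown can never run unless its setup succeeded:

```javascript
// Hypothetical useEffect-style test helper: setup returns its teardown.
const ran = [];

function withSetup(setup, testBody) {
  return async () => {
    const teardown = await setup(); // setup may return a cleanup function
    try {
      await testBody();
    } finally {
      // teardown only exists if setup completed, so it never runs "orphaned"
      if (typeof teardown === 'function') await teardown();
    }
  };
}

const run = withSetup(
  () => {
    ran.push('setup');
    return () => ran.push('teardown'); // co-located cleanup
  },
  () => ran.push('test'),
)();

run.then(() => console.log(ran)); // → ['setup', 'test', 'teardown']
```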

Contributor Author

Hi @mattphillips, sorry it seems like there was a bit of a race condition in this discussion 😆
I ended up posting without seeing your comment since you submitted after I started writing but before I finished 😅

Please see my last comment in this PR, thanks 😊

@@ -0,0 +1,435 @@
// Jest Snapshot v1, https://goo.gl/fbAQLP
Contributor Author

@lucasfcosta lucasfcosta Jul 7, 2019

I also considered only using stdout’s console.logs to determine what has been run and what hasn’t, but I decided it was important to check the failures that are actually displayed.

An example of this alternative approach can be found here. It looks very nice and a lot more concise, but I’m not sure it would be the optimal approach for this PR itself.

I’d appreciate any guidance on how to make these more concise/elegant.

@SimenB
Member

SimenB commented Jul 9, 2019

Woah, what a great PR @lucasfcosta, thanks!

@lucasfcosta
Contributor Author

lucasfcosta commented Jul 9, 2019

Thank you for the review, I've had a great time so far exploring the codebase and the repo. I learned a lot so far. Thanks for the great work @SimenB @thymikee and all the team 💖

I'll address the point you mentioned about interrupting the beforeEach sequence as soon as I can.

@lucasfcosta
Contributor Author

Hello, everyone 😊

I was thinking about this on my way back home today, and I actually think that I'd prefer to go with the current approach and change jest-circus instead because:

Reason 1

  • The current execution flow for each test is beforeEach[] -> test -> afterEach[]
  • If any of the hooks or the test fails, the subsequent "tasks" will still run
  • Changing the current behaviour of jest-jasmine2 so that the beforeEach hooks stop running when one of them fails would be a more surprising change. It's very different from what currently happens; what people have requested is simply that the test won't run if a beforeEach hook fails, a goal this PR accomplishes as is.

Reason 2

  • So far the vast majority of users use jest-jasmine2, so I think it would be better to adapt jest-circus to behave more like jest-jasmine2 in order to facilitate the transition, rather than the other way around.

Reason 3

  • Semantically speaking, it still seems to me that the order in which beforeEach hooks are declared shouldn't matter since they are "hooks" and therefore should be "atomic".
  • Following the same logic, one could argue that we should also fail the afterEach sequence if one of the previous afterEach fails. I think the behaviour of these two hooks should be consistent.

I definitely buy @thymikee's point that afterEach is not really a "teardown", but due to the reasons above I think that going with the current approach would be more desirable.

I don't think there's a solution which is clearly better here, but I think that going with the less risky and drastic one would be desirable (especially considering jest-circus is not yet the default runner).


TL;DR: Keeping it as is would be a change that deviates less from current behaviour while still addressing users' needs, and since the vast majority of users are still on jest-jasmine2, I think it's better to change jest-circus instead to facilitate the transition.

@SimenB
Member

SimenB commented Jul 11, 2019

Thanks for the detailed writeup!

Reason 1

Not sure I follow the reasoning. Just because the behavior will be different doesn't mean it'll be wrong or worse. How it works currently shouldn't trump any design decisions we make if we think it'll improve clarity of test's lifecycles.

Reason 2

I disagree that Jasmine's behavior should "win" just because it's the current default behavior. We should figure out what behavior is desirable, regardless of what's currently in either Circus or Jasmine, and implement that in Circus. At that point we can figure out what makes sense to also implement in Jasmine.

Reason 3

While I agree they should be atomic, I can guarantee you that a lot of the lifecycle methods people define are not. And yes, I also think we should bail on subsequent afterEach if one fails. I find multiple lifecycle methods to be confusing to reason about, but it is what it is 🙂


I really like @jeysal's idea in a comment above here, with co-locating setup and teardown. I think that'd better encourage correct usage. We could possibly pass some sort of test state into the teardown so it could know if everything went well or if something went wrong? This PR is not really the place to discuss it, but I think it'd be super interesting to explore an alternative lifecycle API.


(random idea to end this rambling response with)

const testWithSetup = jest.setup(
  async () => {
    const context = await fetchData();

    return context;
  },
  context => {
    // do some teardown here, if you want
  },
);

testWithSetup('title', context => {
  expect(context.prop).toBeDefined();
});

And maybe a setupOnce or something that would only run the provided function once, instead of once per test.
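A runnable simulation of that sketch (with the hypothetical helper `makeSetup` standing in for the proposed `jest.setup`, and a stub in place of `fetchData`): the setup result is passed to both the test body and the teardown:

```javascript
// Simulation of the proposed (hypothetical) jest.setup API.
const log = [];

function makeSetup(setup, teardown) {
  // Returns a test function: run setup, pass the context to the body,
  // then always run teardown with the same context.
  return (title, body) => (async () => {
    const context = await setup();
    try {
      await body(context);
    } finally {
      if (teardown) await teardown(context);
    }
  })();
}

const done = makeSetup(
  async () => {
    log.push('setup');
    return { prop: 42 }; // stands in for fetchData()
  },
  context => log.push(`teardown:${context.prop}`),
)('title', context => log.push(`test:${context.prop}`));

done.then(() => console.log(log)); // → ['setup', 'test:42', 'teardown:42']
```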

@lucasfcosta
Contributor Author

lucasfcosta commented Jul 11, 2019

Hi @SimenB, thank you for taking the time to write such a well thought and detailed answer too 💖

I agree with what you said, but I should've been more explicit that my argument is focused on finding the best solution for now, and on having a separate, in-depth discussion about what the adequate behaviour should be.

Since this PR solves the most acute problem described in the issue, I think it should be as conservative as possible so that we can get the problem solved while we discuss a better overall approach. I also quite like what @jeysal proposed, but we would need to discuss that separately too.

To address a few of the things you mention specifically:

How it works currently shouldn't trump any design decisions we make if we think it'll improve clarity of test's lifecycles.

I 100% agree with this, but I think that running all the beforeEach hooks is what users expect at the moment, since it's closer to the current behaviour. We should, of course, discuss what the future of hooks will be, but that's going to require more time and maybe a mechanism that's very different from what we currently have. Thus I'd advocate discussing it separately (I'm happy to open an issue for that and summarise the discussion we've had in this PR so far).

I disagree that Jasmine's behavior should "win" just because it's the current default behavior.

I also agree with this. The behaviour people agree is more desirable should win, regardless of which runner it's in, but at this moment I reckon it's more valuable to have a fix that is as similar to the current approach as possible, to keep users happy while we think about better alternatives.

While I agree they should be atomic, I can guarantee you that a lot of the lifecycle methods people define are not. And yes, I also think we should bail on subsequent afterEach if one fails. I find multiple lifecycle methods to be confusing to reason about, but it is what it is

I also feel very inclined to say the same, and I think your reasoning is 100% correct, but it needs further discussion so that we can consider all the minutiae involved; maybe solving this problem will require a completely different approach, as we've mentioned.

@SimenB sorry, I should've made it clearer that my point is more about the current context than about what the final behaviour should be. Thank you very much for the quality feedback. Please let me know what you think about the points above.

tl;dr: I don't think we can get to a solution that addresses the semantics of hooks as a whole without further discussion in a separate issue. Therefore I think we should go with the simplest approach, the one with the least drastic changes, while we discuss what the final desired behaviour should be.


@ all

Given this, the approach I'd advocate for would be:

  • Open a separate issue to evaluate the behaviour of hooks and when their execution should be interrupted (there we can also explore @jeysal's idea a bit and come up with a separate proposal once we know what users' pain points are)
  • Update jest-circus behaviour to conform with the new jest-jasmine2 behaviour

Please let me know what's preferred. I, of course, don't mind if you disagree with the approach described above and I'd be happy to implement the changes you think are the most desirable.

I'm looking forward to hearing from you.

Thanks everyone for your kindness and such great input and discussion ✨

Best,
Lucas

@lucasfcosta
Contributor Author

Hello everyone,

I know this has been a long discussion and that you are all very busy, especially given everything happening in Jest, but I just thought I'd ping some of you to check whether there's anything I can do to push this forward. If there's anything I can help you with, please just let me know and I'll do my best to answer ASAP.

I'd still advocate for this approach, but I'm happy to change the behaviour of jest-jasmine2 instead in case that's what you prefer.

I'd be happy to follow what you prefer because I think that the most important thing to do atm is getting #6527 fixed as it seems to have a reasonable impact on many users.

All the best,
Lucas

@jeysal
Contributor

jeysal commented Jul 28, 2019

I'm with @lucasfcosta on the better short-term solution. The after hook failing because the before hook didn't perform setup correctly doesn't seem so bad (the user can probably identify the causation), but potential confusion about why hooks don't run, or missed cleanup causing leaks, does.

@lucasfcosta
Contributor Author

@jeysal I'm then going to proceed with changing jest-circus so that we can push this forward. We may want to revert this in the future, but, in the meantime, I think it's better to get the fix in and have consistent behaviour.

We can then decide later what's the actual behaviour which we will implement for both runners.

If anyone disagrees with this please let me know.

Thank you all 😊

@fungiboletus

Hello, what is missing to get this merged?

@lucasfcosta
Contributor Author

Hi @fungiboletus, in the past few months I've been writing a book and I've been quite busy with other professional matters, so I doubt I'll be able to push this forward before September. I'd be happy to introduce others to the code and explain my concerns on this PR so that they can continue the work.
I'm sorry for that.
If anyone's interested in pushing this forward, please send me an email at opensource@lucasfcosta.com and we can book 30-60 minutes to go through this.

@github-actions

github-actions bot commented Sep 8, 2022

This PR is stale because it has been open 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions

github-actions bot commented Oct 8, 2022

This PR was closed because it has been stalled for 30 days with no activity. Please open a new PR if the issue is still relevant, linking to this one.

@github-actions github-actions bot closed this Oct 8, 2022
@piranna
Contributor

piranna commented Oct 9, 2022

What's missing to get this merged?

@SimenB
Member

SimenB commented Oct 9, 2022

Dunno, haven't looked at it in a loooong time 😅

@SimenB SimenB reopened this Oct 9, 2022
@SimenB SimenB removed the Stale label Oct 9, 2022
@github-actions

github-actions bot commented Oct 9, 2023

This PR is stale because it has been open 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions github-actions bot added the Stale label Oct 9, 2023

github-actions bot commented Nov 8, 2023

This PR was closed because it has been stalled for 30 days with no activity. Please open a new PR if the issue is still relevant, linking to this one.

@github-actions github-actions bot closed this Nov 8, 2023

github-actions bot commented Dec 9, 2023

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 9, 2023
Successfully merging this pull request may close these issues.

Jest should fail fast and exit early (change request for --bail)