Make test suite exit early if beforeEach hook fails (jest-jasmine2) #8654
Conversation
```js
runAgainstSnapshot('singleBlock/multipleTests.test.js');
});

test('runs all of the beforeEach hooks if one fails but does not run the tests', () => {
```
This test fails in `jest-circus` because it stops running the `beforeEach` hooks once one of them fails. I didn't fix this because I'm not sure if that's the desired behaviour.

IMO, if all `afterEach` and `afterAll` hooks still run, then all `beforeEach` hooks should run too; otherwise the behaviour is not symmetric. Given that `beforeAll`/`beforeEach` hooks will usually set up tests and `afterAll`/`afterEach` will do clean-up, I'd argue that if we're running all of the "teardown" part we should run all of the "setup" part.

I'm happy to either change the behaviour in `jest-circus` if you agree with the explanation above, or update my PR to make `jest-jasmine2` have the same behaviour as `jest-circus` if you prefer.
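The symmetry argument above can be illustrated with a toy hook runner (plain Node; every name here is invented for illustration, not Jest internals) contrasting the two behaviours:

```javascript
// Toy hook runner contrasting the two behaviours under discussion:
// jest-jasmine2 keeps running the remaining beforeEach hooks after one fails,
// while jest-circus bails as soon as one fails.
function runHooks(hooks, bailOnFailure) {
  const ran = [];
  let failed = false;
  for (const hook of hooks) {
    if (failed && bailOnFailure) break; // circus-style: skip the rest
    ran.push(hook.name);
    try {
      hook();
    } catch (err) {
      failed = true;
    }
  }
  return {ran, failed};
}

const hooks = [
  function first() {},
  function second() { throw new Error('setup failed'); },
  function third() {},
];

console.log(runHooks(hooks, false).ran); // jasmine2-style: [ 'first', 'second', 'third' ]
console.log(runHooks(hooks, true).ran);  // circus-style:   [ 'first', 'second' ]
```

Either way the failure is reported; the question is only whether `third` still runs.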
`beforeEach` and `afterEach` are just sugar for:

```js
function beforeEach() {
  console.log('before');
}
function afterEach() {
  console.log('after');
}
function test() {
  beforeEach();
  expect(1).toBe(1);
  afterEach();
}
```

(they can be async and Jest waits, but conceptually this is what happens).
I'm not sure we should assign any semantic "setup" and "teardown" meaning to the lifecycle functions. So (IMO) we should just bail if `before` fails and not run `after`.
@thymikee @cpojer @scotthovestadt @jeysal @mattphillips do you agree or disagree?
I'm not sure if there's a good solution to that. Both have issues. `before` may start some listeners before failing, and:

1. they won't be cleared by `after`, causing a leak (which may surface in watch mode)
2. `after` will try to run after the failing `before` and fail as well, because the functionality expected to be there is not present

Since Jest already has a mechanism to listen for open connections after a test run, I'd lean towards @SimenB's solution. So don't treat `after` as a "teardown".
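A tiny sketch of the leak described in the first issue above, using hypothetical hook functions (not Jest code): `before` acquires a resource and then throws, so if the matching `after` is skipped the resource is never released.

```javascript
// Hypothetical hooks (not Jest internals): before() acquires a resource and
// then throws; if after() never runs, the resource leaks across test runs.
let timer = null;

function before() {
  timer = setInterval(() => {}, 1000); // resource acquired (keeps the process alive)
  throw new Error('setup failed');     // ...and then the hook fails
}

function after() {
  if (timer !== null) {
    clearInterval(timer);
    timer = null;
  }
}

try {
  before();
} catch (err) {
  // Bailing here without calling after() would leak the interval,
  // exactly the kind of leak that surfaces in watch mode.
}

after(); // running after() despite the failure releases the resource
console.log(timer === null); // → true
```

The flip side, the second issue, is that a real `after` often assumes `before` completed, so running it unconditionally can fail too.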
@lucasfcosta Just to make sure I'm understanding correctly: you are saying that if one `before` hook fails then the rest are ignored, but all of the `after` hooks are still run?

If this is the case then I agree; we probably do not want to run any `after` hooks that haven't had their equivalent `before` hook run already. If we do run the `after` hook without running the equivalent `before`, we can end up blowing up because something the `after` depends on isn't there.
I see. I agree with you both. I think the way @SimenB described it was a very interesting way to see how the mechanism actually runs, but I'd be very careful when translating these concepts into alternative pieces of code, as they might become inaccurate. In the excerpt above, for example, a failing `beforeEach` would mean a `throw` occurred and we're out of the normal execution flow, but if the same happened in an `afterEach` followed by another, the behaviour would be different.

I think you are correct, especially due to item 2 mentioned by @thymikee. Even if we treated `after` as a teardown, it should indeed fail if its setup has not been made or was interrupted somehow.

tl;dr I'm convinced that the adequate behaviour should be the one you both prefer: a failing `beforeEach` should interrupt the next ones in the sequence.
I think the major points have already been mentioned. Side note: the `before`/`after` API, which we've had since long before Jest, seems flawed to me because of things like this (and because it does not help structure tests well). The API should look more like React's `useEffect`, associating a teardown operation with its setup by having it as a return value or a second parameter.
Hi @mattphillips, sorry it seems like there was a bit of a race condition in this discussion 😆
I ended up posting without seeing your comment since you submitted after I started writing but before I finished 😅
Please see my last comment in this PR, thanks 😊
```js
// Jest Snapshot v1, https://goo.gl/fbAQLP
```
I also considered only using `stdout`'s `console.log`s to determine what has been run and what hasn't, but I thought it was important to check the failures that are actually displayed. An example of this alternative approach can be found here. It looks very nice and a lot more concise, but I'm not sure it would be the optimal approach for this PR itself.

I'd appreciate any guidance on how to make these more concise/elegant.
Woah, what a great PR @lucasfcosta, thanks!
Hello, everyone 😊 I was thinking about this on my way back home today, and I actually think that I'd prefer to go with the current approach (and change the other runner to match), for the reasons below.

Reason 1
Reason 2
Reason 3

I definitely buy @thymikee's point that there isn't a solution which is clearly better here, but I think that going with the less risky and less drastic one would be desirable.

TL;DR: Keeping it as is would be a change which is less different from the current behaviour but still addresses users' needs, and the vast majority of users is still on `jest-jasmine2`.
Thanks for the detailed writeup!
Not sure I follow the reasoning. Just because the behavior will be different doesn't mean it'll be wrong or worse. How it works currently shouldn't trump any design decisions we make if we think they'll improve the clarity of tests' lifecycles.

I disagree that Jasmine's behavior should "win" just because it's the current default behavior. We should figure out what behavior is desirable, regardless of what's currently in either Circus or Jasmine, and implement that in Circus. At that point we can figure out what makes sense to also implement in Jasmine.
While I agree they should be atomic, I can guarantee you that a lot of the lifecycle methods people define are not. And yes, I also think we should bail on subsequent hooks.

I really like @jeysal's idea in a comment above here, with co-locating setup and teardown. I think that'd better encourage correct usage. We could possibly pass some sort of test state into the teardown so it could know if everything went well or if something went wrong?

This PR is not really the place to discuss it, but I think it'd be super interesting to explore an alternative lifecycle API. (random idea to end this rambling response with)

```js
const testWithSetup = jest.setup(
  async () => {
    const context = await fetchData();
    return context;
  },
  context => {
    // do some teardown here, if you want
  },
);

testWithSetup('title', context => {
  expect(context.prop).toBeDefined();
});
```

And maybe a …
Hi @SimenB, thank you for taking the time to write such a well-thought-out and detailed answer too 💖

I agree with what you said, but I should've been more explicit about the fact that my argument is focused on getting the best solution for now, and having a separate, in-depth discussion about what the adequate behaviour should be. Since this PR solves the most acute problem described in the issue, I think it should be as un-drastic as possible so that we can get the problem solved in the meantime while we discuss a better overall approach. I also quite like what @jeysal proposed, but we would also need to discuss that separately. To address a few of the things you mention specifically:

I 100% agree with this, but I think that running all the …

I also agree with this. The behaviour which people agree is the more desirable should win, regardless of which runner it's in, but at this moment I reckon it's more valuable to have a fix which is as similar to the current approach as possible, to keep users happy while we think about better alternatives.

I also feel very inclined to say the same, and I think your reasoning is 100% correct, but it needs further discussion so that we can consider all the minutiae involved, and maybe solving this problem will require a completely different approach, as we've mentioned.

@SimenB sorry, I should've made clearer that my point is more related to the current context than to what the final behaviour should be. Thank you very much for the quality feedback. Please let me know what you think about the points above.

tl;dr: I don't think we can get to a solution which addresses the semantics of hooks as a whole without further discussion in a separate issue, and therefore I think we should go with the simplest approach, the one which includes the least drastic changes, while we discuss what the final desired behaviour should be.

@ all: Given this, the approach I'd advocate for would be:

Please let me know what's preferred. I, of course, don't mind if you disagree with the approach described above, and I'd be happy to implement the changes you think are the most desirable. I'm looking forward to hearing from you.

Thanks everyone for your kindness and such great input and discussion ✨

Best,
Hello everyone, I know this has been a long discussion and that you are all very busy, especially given all that happens in Jest, but I just thought I'd ping some of you to check whether there's anything I can do to push this forward. If there's anything I can help with, please just let me know and I'll do my best to answer ASAP.

I'd still advocate for this approach, but I'm happy to change the behaviour of either runner. I'd be happy to follow what you prefer, because I think that the most important thing to do atm is getting #6527 fixed, as it seems to have a reasonable impact on many users.

All the best,
I'm with @lucasfcosta on the better short-term solution. The after hook failing because the before hook didn't perform setup correctly doesn't seem so bad (the user can probably identify the causation), but potential confusion over why hooks don't run, or missed cleanup causing leaks, does.
@jeysal I'm then going to proceed with changing the behaviour accordingly. We can then decide later what's the actual behaviour which we will implement for both runners. If anyone disagrees with this, please let me know. Thank you all 😊
Hello, what is missing to get this merged?
Hi @fungiboletus, in the past few months I've been writing a book and I've been quite busy with other professional matters, so I doubt I'll be able to push this forward before September. I'd be happy to introduce others to the code and explain my concerns on this PR so that they can continue the work.
This PR is stale because it has been open 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

This PR was closed because it has been stalled for 30 days with no activity. Please open a new PR if the issue is still relevant, linking to this one.

What's missing to get this merged?

Dunno, haven't looked at it in a loooong time 😅

This PR is stale because it has been open 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

This PR was closed because it has been stalled for 30 days with no activity. Please open a new PR if the issue is still relevant, linking to this one.

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Summary
This PR fixes #6527 by not running tests inside of a test suite if one of its `beforeEach` hooks fails. This is already the default behaviour in `jest-circus` and has been confirmed to be the desired behaviour as per this comment.

Why this problem happens
1. Jest jasmine's `Env` will start building the hierarchy of blocks with children. This means adding the `describe` blocks to their respective parents to build a tree.
2. If it finds any `beforeEach` hooks, `jest-jasmine2` will add them to an array in the `Suite`.
3. Each `it` (`Spec`) is added as a child to the `describe` block it is in.
4. `jest-jasmine2` will start executing tests through the `treeProcessor`.
5. For each test (`it`/`Spec`), the `beforeEach` and `afterEach` hooks will be put into an array in the order they should run, and `queueRunner` will run the functions in the array sequentially.
6. If any of the functions in the promise chain throws an error, the `onException` callback will be invoked, but all the following functions will still run because the chain won't be discontinued.

How the fix was implemented
I essentially broke the single chain of promises with all the hooks and the test into two chains. The first chain contains only the `beforeEach` hooks, and the second chain contains the test and the `afterEach` hooks. Then, if the chain for `beforeEach` fails, I turn on a flag which is used to determine whether the continuation (the second chain with the test and `afterEach` hooks) should run.

Test plan
I have added extensive E2E tests to this to ensure that multiple scenarios would be covered.
I covered the following cases:

- does not run tests if a global `beforeEach` hook fails
- still runs the `afterEach` hooks if a global `beforeEach` fails
- still runs the `afterAll` hooks if a global `beforeEach` fails
- does not run tests if a `describe` block's `beforeEach` hook fails
- does not run nested tests if a parent `beforeEach` hook fails
- runs all the `beforeEach` hooks even if one fails
- still runs the `afterEach` hooks if a `beforeEach` fails
- still runs the `afterAll` hooks if a `beforeEach` fails
- does not run the tests of the `describe` block a failing `beforeEach` hook is in
- still runs the `afterEach` hooks if the nested `beforeEach` fails
- still runs the `afterAll` hook if the nested `beforeEach` fails

I don't know if the way I organised these tests is adequate; any ideas on how to improve this are more than welcome.
I thought about writing more tests for more scenarios, but then I came to the conclusion that whatever I could write would end up being redundant, since the basis of them is already covered.
I also made the `runAgainstSnapshot` function generic since that's what I do in each test anyway.
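The single-chain problem and the two-chain fix described in the "Why this problem happens" and "How the fix was implemented" sections can be sketched roughly like this. This is an illustrative model with simplified hook arrays, not the actual `jest-jasmine2` source, and it follows the test plan's behaviour: when a `beforeEach` fails, all remaining `beforeEach` hooks still run, the test is skipped, and the `afterEach` hooks still run.

```javascript
// Illustrative model of the fix: the beforeEach hooks run in their own chain
// first; if any fails, a flag is set, the test is skipped, and the afterEach
// hooks still run so cleanup happens.
async function runSpec(beforeEachHooks, testFn, afterEachHooks, onException) {
  let beforeEachFailed = false;

  // First chain: every beforeEach hook runs, even after one of them fails.
  for (const hook of beforeEachHooks) {
    try {
      await hook();
    } catch (err) {
      beforeEachFailed = true; // the flag consulted by the second chain
      onException(err);
    }
  }

  // Second chain: the test is gated on the flag; afterEach hooks always run.
  if (!beforeEachFailed) {
    try {
      await testFn();
    } catch (err) {
      onException(err);
    }
  }
  for (const hook of afterEachHooks) {
    try {
      await hook();
    } catch (err) {
      onException(err);
    }
  }

  return !beforeEachFailed;
}
```

In the old single-chain behaviour, the test function sat in the same queue as the hooks, so `onException` reported the `beforeEach` failure but the test still ran; splitting the chains lets the flag skip the test without touching the cleanup.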