
jsdom or babel-polyfill cause memory leak (?) #1893

Closed
quentin- opened this issue Oct 7, 2016 · 52 comments


@quentin-

quentin- commented Oct 7, 2016

Hello.

First of all, thanks for this project, it's been great so far. I'm running into an issue where, when Jest can resolve babel-polyfill, the Node memory heap keeps growing, resulting in memory allocation errors on Node 5, or extremely slow tests on Node 6 once Node starts to reach the memory limit.

This very simple repository reproduces the issue: https://github.com/quentin-/jest-test

describe('test', function() {
  beforeAll(function() {
    global.gc();
  });

  it('work', function() {
    expect(1).toBe(1);
  });
});
# heap size is trending upward
# tests become unbearably slow after a while
npm run test -- --logHeapUsage

node: 6.7.0

@quentin- quentin- changed the title babel-polyfill cause memory leak (?) memory leak (?) Oct 9, 2016
@quentin- quentin- changed the title memory leak (?) babel-polyfill cause memory leak (?) Oct 9, 2016
@turadg

turadg commented Oct 16, 2016

Some more specific data on the heap growth in the example repo, from running it on my machine.

npm install
node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage

The heap size after the first test is 45MB and it grows linearly with a slope of about 0.48MB per test.

rm -rf node_modules/babel-polyfill
node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage

The heap size after the first test is 48MB and it grows linearly with a slope of about 1.03MB per test.

Is that to be expected? This is 1MB growth on the heap for this test:

describe('test', function() {
  it('works', function() {
    expect(1).toBe(1);
  });
});

@thymikee
Collaborator

thymikee commented Dec 8, 2016

I've done a little research on this one, but not deep enough.

Indeed we seem to leak some memory here. But it's not because of babel-polyfill for sure (it just makes the heap grow faster, like @turadg noticed).

I tried to profile this with Chrome DevTools, but it didn't show a timeline for Allocation Timelines, and I didn't have much time to figure out why.

node --expose-gc --inspect `which jest` --logHeapUsage -i

Any help on this one is highly appreciated!

@jednano

jednano commented Dec 14, 2016

@thymikee, what really confuses me is that his repo requires babel-polyfill as a dev dependency, but he doesn't actually load it anywhere in the project. How/why would this make it worse?

@cpojer
Member

cpojer commented Dec 14, 2016

When using babel-jest and when babel-polyfill is available, Jest will load it automatically.

@jednano

jednano commented Dec 14, 2016

@cpojer, but he's not using babel-jest, just jest.

@thymikee
Collaborator

babel-jest is used as a dependency of jest-runtime.

@turadg

turadg commented Dec 14, 2016

FWIW, I repeated the run of Quentin's jest-test repo with Node 7.2.1 and Jest 17.0.3.

With babel-polyfill: 44MB + 1.00MB * specs run
Without babel-polyfill: 41MB + 0.52MB * specs run

@thymikee
Collaborator

Further investigation: the suspect is jsdom.

When running Jest with the jsdom environment, the Runtime object doesn't get swept by GC.
On the other hand, with --env=node, Runtime is properly garbage collected.

Below are the results after running a couple hundred tests:
--env=jsdom:

[screenshot: heap inspector results with --env=jsdom]

--env=node:

[screenshot: heap inspector results with --env=node]

Since jest-environment-node and jest-environment-jsdom are nearly identical, it makes it even more probable that jsdom is to blame here, although I'm not 100% sure.

@cpojer
Member

cpojer commented Dec 14, 2016

Do you know what exactly is being retained or what kind of stuff may lead to cycles? We may want to unset more things in the dispose method here: https://github.com/cpojer/jest/blob/master/packages/jest-environment-jsdom/src/index.js#L19
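
For illustration, "unsetting more things" in dispose might look roughly like the sketch below. This is only a sketch: the property names (global, moduleMocker, fakeTimers) are assumptions for illustration, not necessarily the actual fields of jest-environment-jsdom.

const jsdom = require('jsdom');

class JSDOMEnvironment {
  constructor() {
    // Assumed shape: the environment keeps the jsdom window plus a few helpers.
    this.global = jsdom.jsdom('<html><body></body></html>').defaultView;
    this.moduleMocker = {};
    this.fakeTimers = {};
  }

  dispose() {
    if (this.global) {
      this.global.close(); // tear down the jsdom window
    }
    // Null out every reference the environment holds so the window (and anything
    // reachable from it) can be garbage collected between test files.
    this.global = null;
    this.moduleMocker = null;
    this.fakeTimers = null;
  }
}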

@thymikee thymikee changed the title babel-polyfill cause memory leak (?) jsdom cause memory leak (?) Dec 20, 2016
@thymikee
Collaborator

Update: I managed to isolate the case with jsdom only and filed an issue there: jsdom/jsdom#1682
Feel free to help debug this 😄

@romansemko

See my comment in jsdom/jsdom#1682: the memory leak in jsdom only happens when window.close() is called synchronously after jsdom.jsdom() (see the sketch after this list). I see two options here:

  1. Try fixing jsdom. There is obviously something asynchronous happening in jsdom.jsdom that leads to a reference to the window being held beyond the window.close() call.
  2. Try fixing jest-environment-jsdom. If it were rewritten to return a promise instead of directly setting a window, we could use jsdom.env, which solves the problem by being asynchronous.
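
To make the two patterns concrete, here is a rough sketch using the pre-v10 jsdom API discussed in this thread (the HTML string and variable names are illustrative):

const jsdom = require('jsdom');

// Pattern reported as leaking: create the window and close it synchronously,
// in the same tick.
const doc = jsdom.jsdom('<html><body></body></html>');
doc.defaultView.close();

// Pattern reported as safe: jsdom.env builds the window asynchronously, and
// close() runs in a later tick, so no reference is held past the callback.
jsdom.env('<html><body></body></html>', (err, window) => {
  window.close();
});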

@romansemko

I did some further research. It seems that jsdom is just a drop in the ocean. If I apply a fix for jsdom locally, I still get huge memory leaks. However, as soon as I remove babel-polyfill, the leaks drop by 95%. It seems that certain shims in core-js (the Object getters/setters, for example) are causing the leaks when combined with Jest.

A solution would be disabling babel-polyfill and starting jest with

node --harmony jest --no-polyfill...
  1. I would change the title of this ticket back to babel-polyfill, since it seems to be a far bigger memory issue than jsdom.
  2. I would add a config option to disable the automatic babel-polyfill import.

@thymikee
Collaborator

thymikee commented Jan 17, 2017

A 95% drop is interesting; I have experienced some drops, but not this big, and the heap was still growing fast.
I did a quick test and it looks like on --env=node the heap grows in a similar fashion when paired with babel-polyfill (comparing the Node env + polyfill with the JSDOM env without), so maybe there's something shared between these two?

@thymikee thymikee changed the title jsdom cause memory leak (?) jsdom or babel-polyfill cause memory leak (?) Jan 17, 2017
@cpojer
Member

cpojer commented Jan 17, 2017

I'm wondering: do we even need babel-polyfill in recent versions of Node? Shall we just not load it (except for regenerator, which we may still need)?

@romansemko

romansemko commented Jan 17, 2017

...unless you want to test with exactly the same setup as the "real" thing. The Node and core-js implementations of the ES6/7 features could be different. ;-)

Still, it's better to get rid of it than to have OOM problems in bigger test suites. Or, at least, give an option to disable auto-loading of babel-polyfill.

@romansemko

Okay, here are my findings so far:

  1. A huge leak happens when using babel-polyfill.
  2. jsdom does not correctly destroy the window when jsdom.jsdom() and window.close() are called synchronously one after the other.

If I remove babel-polyfill, the memory usage still grows during the tests, since all the windows are still in memory. How much it grows depends on what your tests are and how you use the windows. Right after the tests (probably on the next tick), the memory drops since all windows are destroyed.

babel-polyfill makes the whole thing even worse, since (1) its memory leak makes memory usage grow even faster from test to test and (2) the windows are not destroyed, even after the tests complete.

Just removing babel-polyfill alleviated the problem in one of our projects (memory almost does not grow). However, we are still having issues on another, bigger one (500 tests), where the tests OOM randomly just because the windows are not destroyed right after each test completes.

How do we solve it? Well, just removing babel-polyfill alleviates the problem for smaller projects/test suites. Fixing the jsdom issue while keeping babel-polyfill on board would not fix anything, since the windows still would not get destroyed because of that leak. That means we need to tackle both issues at once:

  1. Remove babel-polyfill (or at least give an option to disable it in jest).
  2. Fix jsdom or change the way jest calls jsdom (use jsdom.env) so that the windows get destroyed after each test and not at the end of all tests. Otherwise we might hit memory limits during the tests.

@thymikee
Collaborator

Are you sure jsdom.env fixes the problem for you? I changed jest-environment-jsdom locally to use it once and didn't see better results. Jest doesn't dispose jsdom (call window.close()) in the same tick. Also, wrapping the close() call in setTimeout didn't help.

BTW, I appreciate your help on investigating that so much, thank you!

@romansemko

romansemko commented Jan 18, 2017

It's in all of our interest to get this issue fixed, so you are most welcome! ^^

Check out the code in my comment on the jsdom issue:
jsdom/jsdom#1682 (comment)

If I use the jsdom.env version of spawnWindow and print the heap usage, I barely see any memory spikes at all. Maybe my methodology is wrong. Can you try it?

@thymikee
Collaborator

Sure, I have tried that and it indeed worked for me (though I didn't check in the inspector). I need to find some time to test it better in Jest, because last time it was a really quick check where I could have missed something!

@romansemko

romansemko commented Jan 25, 2017

Okay. We have fixed this issue in our project with a small workaround. As said earlier, both issues (jsdom and babel-polyfill) have to be tackled in order to keep memory usage constant.
These steps are necessary:

  1. Remove babel-polyfill from your project. This is essential. Otherwise jsdom windows will still leak.
  2. Create a new setup file in your project that overrides the describe method and automatically creates/destroys the JSDom environment before/after each test. See the example below.
  3. Run jest with node --harmony (for babel-polyfill replacement), --env node (instead of the default jsdom environment) and --setupTestFrameworkScriptFile path/to/your/setup.js

Here is an example of such a setup file:

import jsdom from 'jsdom'
 
const FAKE_DOM_HTML = `
    <html>
    <body>
    </body>
    </html>
`
 
/*
 * First, we override Jasmine's describe method so that we can automatically add beforeAll and afterAll.
 * In the beforeAll and afterAll we asynchronously setup and tear down the jsdom instance.
 * This solves the synchronous window.close() issue with jsdom described here:
 *
 * https://github.com/tmpvar/jsdom/issues/1682#issuecomment-270310752
 *
 * Be aware that if your project has "babel-polyfill" in its node_modules, you will have
 * problems with leaking windows. If you need the polyfills for your build process, use a cloned version
 * that is renamed to something other than babel-polyfill.
 */
const desc = global.describe
global.describe = (title, test) => {
    /* We need to keep reference to the window to destroy the correct one afterwards.
     * Otherwise, some tests might fail. This is probably related to this fact here:
     * https://github.com/airbnb/enzyme/blob/master/docs/guides/jsdom.md#describewithdom-api-and-clearing-the-document-after-every-test
     */
    let testWindow = null
    desc(title, () => {
        beforeAll((done) => {
            // We wait until JSDom has finished building the window, before continuing with the tests.
            jsdom.env(FAKE_DOM_HTML, (err, window) => { // eslint-disable-line
                testWindow = window
                global.window = window
                done()
            })
        })
        afterAll((done) => {
            // After the test suite completes, we ASYNCHRONOUSLY close the appropriate window, making sure it is released correctly.
            setImmediate(() => {
                if (testWindow) {
                    testWindow.close()
                }
                done()
            })
        })
        const testRunner = test.bind({})
        testRunner()
    })
}
global.describe.skip = desc.skip

Now you can run jest with:

node --harmony node_modules/.bin/jest --env node --setupTestFrameworkScriptFile path/to/your/setup.js

Profit!

In the Jest project, the following could be done to make life easier:

  1. Make polyfills optional. For example, if babel-jest is provided but a --no-polyfills flag is set, run the code with the --harmony flag.
  2. Rewrite jest-environment-jsdom to create/destroy the window asynchronously as shown above.

@jakubzitny

jakubzitny commented Jan 30, 2017

Hi there, let me add some observations from our project. I am trying to migrate our Mocha stack to Jest and I encountered similar problems.

I've tried removing babel-polyfill and adding the afterAll hacks as @romansemko suggested; however, I got the same results in all cases: around 7 extra MB of heap memory per describe. That makes the heap grow past 1GB after 1500 tests and eventually crashes Node with a "JavaScript heap out of memory" error. We have over 3k tests for now, and the main problem with the leakage is that the tests take 25 minutes to finish. With Mocha it is under 3 minutes, for comparison.

Our test environment is quite complicated: we mock the canvas rendering context in jsdom, we still have half of our codebase in coffee-script, and we have a large number of dependencies, any of which might have other leaks of their own.

I realized that besides transforming coffee-script via the transform config option, I had require('coffee-script/register') in one of our setupFiles as well. So I removed it and, tada 🎉, the memory usage dropped. The main culprit in our case was the global coffee-script/register.

We have some async/await stuff in our codebase, so we need babel-polyfill. I added it back and could see the heap rise again. This time it was around 50% of the original growth seen with the coffee-script + babel-polyfill leaks. It is interesting that when I removed only babel-polyfill and kept coffee-script, there were similar amounts of leaks as with coffee-script + babel-polyfill. But babel-polyfill by itself accounted for half of them too.

Since we don't really need the whole babel-polyfill, I replaced it with regenerator-runtime/runtime and was able to get almost the same results as without it. I also tried Node 7 with --harmony --harmony-async-await but was not able to make it work; I was still getting ReferenceError: regeneratorRuntime is not defined errors.
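
A minimal sketch of that substitution, assuming Jest's setupFiles config option (the file name is illustrative):

// setup/regenerator.js: pulls in only the regenerator runtime needed for
// transpiled async/await, instead of the full babel-polyfill.
require('regenerator-runtime/runtime');

// package.json (excerpt)
{
  "jest": {
    "setupFiles": ["<rootDir>/setup/regenerator.js"]
  }
}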

Anyway, I think the best solution would be to solve this at the Jest level, not in jsdom or babel-polyfill, and not in the testEnv or testRunner (in afterAlls). Currently we run our tests with Mocha and everything works (with possibly hidden leaks present) quite fast. I guess there must be a more thorough environment reset in Mocha. Yes, we can delete/unset more things in the dispose method as @cpojer suggested, but it shouldn't be a jsdom or babel-polyfill thing only; it should be something more radical, as the leaks might be in a variety of dependencies. Maybe it should be even higher up in the environment creation hierarchy? Could there be a particular reason, maybe a design decision, that leads to leaks from the specs being retained "globally" (compared to Mocha), @cpojer?

@obukspan

@thymikee any idea when this will be released? We're also hitting memory leak issues on CircleCI, and the setImmediate change appears to do the trick. Thanks for tracking it down!

@thymikee
Collaborator

It will land in Jest 19. We plan to release it in a week.

@cpojer
Member

cpojer commented Feb 25, 2017

This should be fixed now for the most part, right? :)

@cpojer cpojer closed this as completed Feb 25, 2017
@jednano

jednano commented Feb 27, 2017

We upgraded to Jest 19 in the hopes that this memory leak would be fixed, but unfortunately that's not the case. We've been using the --runInBand flag to run tests on individual folders to dodge this memory leak up until this point, but now (with Jest 19) the memory leak has resurfaced. You can see how the test times are snowballing here:

[screenshot: test run times snowballing]

As a result, Circle CI gives us this failure:

[screenshot: CircleCI build failure]

Is anyone else still plagued by this memory leak in Jest 19?

@mute
Contributor

mute commented Feb 27, 2017

Yes -- after upgrading to Jest 19, we were still seeing a memory leak with babel-polyfill present in node_modules alongside Jest in our project.

Removing babel-polyfill addressed the issue for us.

@jednano

jednano commented Feb 27, 2017

FWIW, it seems the issue only presents itself when tests run in parallel, as the --runInBand flag completely avoids it. Is that helpful at all?

@jednano

jednano commented Feb 27, 2017

@mute I did an npm uninstall babel-polyfill right before our npm test command and it did get us a little bit farther (22 tests instead of 8) before crashing, but it still crashes very early on in the batch of tests we have to run (about 300).

@cpojer
Member

cpojer commented Feb 27, 2017

I apologize. I forgot to merge #2755 into the release (honestly, I thought I did); it will be part of the next release.

@jednano

jednano commented Feb 27, 2017

All good @cpojer. I'm just glad it's fixed!

@turadg

turadg commented Feb 27, 2017

@cpojer when do you expect the release with #2755 to be on NPM? If it's more than a week, would you be open to cutting a prerelease? This memory leak is costing us hours of CI time with all the garbage collection it triggers.

@jednano

jednano commented Feb 27, 2017

@turadg if it takes a while, you can always do what we're doing (for now) so you don't waste any more time. Look at our npm run test:ci command below:

{
  "scripts": {
    "test": "jest",
    "test:ci": "suites=\"fooFolder barFolder bazFolder\"; STATUS=0; for i in $suites; do npm test -- \"${i}/*\" --runInBand; STATUS=$(( $? > $STATUS ? $? : $STATUS )); done; exit $STATUS"
  }
}

@jednano

jednano commented Mar 6, 2017

@cpojer it's been a week since the last comment and still no release. Any idea when the next one will be?

@cpojer
Member

cpojer commented Mar 6, 2017

No, there is no fixed release schedule here. There may be one next week.

@msteitle

msteitle commented Apr 5, 2017

Greetings. It's April 5th and we are still experiencing this issue on Jest 19.0.2. Originally, when running Jest, I was up to 500s of elapsed time and 1.6GB of RAM per concurrent test when I ran out of patience and killed Jest.

After removing babel-polyfill from our project, running Jest now finishes, but it's still slow and the heap size still continuously increases. But now when Jest completes, the heap size is only 500MB.

Now, if I run Node with --expose-gc, the heap suddenly stops growing. The heap size per spec is around 50MB. In our case, we are migrating from Karma, where our test time was around 35 seconds for 550 test specs.

Unfortunately, replacing jsdom with node env is not a feasible solution currently.

It appears that this issue was supposed to be fixed in v19. But is it?

@thymikee
Collaborator

thymikee commented Apr 5, 2017

Can you try jest@next for now and see if it helps?

@jednano

jednano commented Apr 5, 2017

What's the hold up on a patch release?

@thymikee
Collaborator

thymikee commented Apr 5, 2017

Mainly @cpojer's time off. There's a chance we could roll out 19.1, but @DmitriiAbramov is the one to decide.

@msteitle

msteitle commented Apr 5, 2017

@thymikee I'll give it a try.

Update: @next (19.1.0-alpha.eed82034), with babel-polyfill re-installed in node_modules (dependency of another package).

node ./node_modules/.bin/jest - Heap size growing, running slow
node --expose-gc ./node_modules/.bin/jest --logHeapUsage [--runInBand] - Heap size stable, running slow

Note: "running slow" here means:

Test Suites: ...52 of 550 total
Snapshots:   0 total
Time:        50s, estimated 74s

After removing babel-polyfill from node_modules:

node ./node_modules/.bin/jest
(heap size growing)

Test Suites: ...61 of 550 total
Snapshots:   0 total
Time:        50s, estimated 74s

@danieljuhl

@thymikee I was facing the same memory issue and then very slow tests. I made a fresh install of node_modules and then installed jest@next - my tests went from ~750s to ~125s, and memory was stable at around 800MB per worker (previously hitting the limit of 1.45GB per worker).

@jednano

jednano commented May 14, 2017

Still getting the memory leak on Jest 20.0.1, but it only happens in our Travis-CI environment. Any ideas on why that would be?

[screenshot: Travis CI test run timings]

The same tests run in just under a minute on my local machine:

[screenshot: local test run finishing in under a minute]

@mlanter

mlanter commented May 15, 2017

@jedmao We still had the same issue after updating to Jest 20.0.1 with CircleCI. We were able to fix it by using --maxWorkers=1 (more info on that flag)

Testing showed that one worker was the best number (fastest tests). --env=node also sped up the tests when not using --maxWorkers, but it didn't seem to make a difference once maxWorkers was set, so we went with --maxWorkers only.
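
For example, a CI script pinned to a single worker could look roughly like this (a sketch; the script name is illustrative, the flag is the one described above):

{
  "scripts": {
    "test:ci": "jest --maxWorkers=1"
  }
}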

@thymikee
Collaborator

@mlanter @jedmao I encourage you to debug your tests with the --inspect flag for the Chrome debugger and try to figure out where the leak may come from (it's likely something in your tests). To simulate a slow CI environment, you can throttle the CPU a bit:

[screenshot: Chrome DevTools CPU throttling setting]
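
For reference, attaching the inspector typically looks something like this (shown as an illustration; as noted in the following comments, it generally needs --runInBand as well):

node --inspect ./node_modules/.bin/jest --runInBand --logHeapUsage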

@jednano

jednano commented May 24, 2017

@thymikee thanks for the tip. It appears that I cannot --inspect unless I also use --runInBand, which defeats the purpose, because the memory leak does not present itself with --runInBand. Have you been able to --inspect w/o --runInBand?

That said, I ran the tests @ 20x slowdown with --runInBand and everything ran swimmingly.

@jednano

jednano commented May 25, 2017

@mlanter I have a feeling that --maxWorkers=1 is doing the exact same thing as --runInBand. Either way, thanks for the tip! Both methods worked just fine in my CI environment, even though they had memory leak issues before Jest 20.

I think I'm fixed here, folks!

@github-actions

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 13, 2021