
Memory leak when running multiple tests, especially with --runInBand #7311

Closed
nikolai-netskope opened this issue Oct 31, 2018 · 21 comments

@nikolai-netskope

nikolai-netskope commented Oct 31, 2018

🐛 Bug Report

Hi. I'm seeing a memory leak while running basic autogenerated Angular component tests (created with "ng g c my-test" and "ng g c my-test-2"). If I run Jest with --detectLeaks, I get messages saying that there are leaks. If I run Jest with --runInBand, I see the heap size going up; when I add enough of those "blank" tests, or reduce the available memory with --max_old_space_size=, I run out of memory. From what I was able to see, the teardown between tests does not happen properly until the very last test finishes. Any suggestions on what to do?

Versions: jest 23.6.0, jest-preset-angular 6.0.1, jest-leak-detector 23.6.0, zone.js 0.8.26

To Reproduce

Steps to reproduce the behavior:

  1. Create a bunch of Angular components with tests using "ng g c my-test-cmp-1", "...cmp-2", etc.
  2. Run "jest --runInBand --logHeapUsage components/" so it runs all the tests.
  3. Observe the heap size grow with each test. If you take a heap dump, you will also see one "global > proto :: Window" entry per test.
  4. Observe that the environment for each test is not torn down until all tests complete. If you have enough of those tests, or if you lower the allocated memory with --max_old_space_size, you'll see it run out of memory.
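The growth described in step 3 can be mimicked outside Jest with a toy sketch (this illustrates the retention pattern only; it is not Jest's actual internals):

```javascript
// Toy model of the reported behavior: each "test environment" stays
// reachable (like the leaked `Window` objects), so heapUsed only grows.
const retainedEnvironments = [];

function runFakeTest() {
  // simulate an environment that is never torn down
  retainedEnvironments.push(new Array(1000000).fill(0));
  return process.memoryUsage().heapUsed;
}

const first = runFakeTest();
let last = first;
for (let i = 0; i < 5; i++) last = runFakeTest();

console.log(last > first); // heap keeps growing while environments are retained
```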

Expected behavior

I think the proper behavior would be to tear down each test file's environment as soon as that file is done, allowing the garbage collector to do its job.

Link to repl or repo (highly encouraged)

Please provide either a repl.it demo or a minimal repository on GitHub.

Issues without a reproduction link are likely to stall.

Run npx envinfo --preset jest

Paste the results here:

System:
OS: macOS High Sierra 10.13.6
CPU: x64 Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
Binaries:
Node: 8.12.0 - ~/.nvm/versions/node/v8.12.0/bin/node
npm: 6.4.1 - ~/.nvm/versions/node/v8.12.0/bin/npm
npmPackages:
jest: 23.6.0 => 23.6.0

@malthoff

We are facing the same problems: the heap size keeps growing until Jest crashes. It seems to have to do with graceful-fs.

@ozziexsh

+1 here. We thought it had something to do with our DB drivers, but even after tearing them down properly the heap still increases, both with --runInBand and in parallel.

@malthoff

malthoff commented Jan 30, 2019

We were struggling with JS heap size problems for the last month, with karma/jasmine and Jest respectively. After several failed attempts we found one major problem in our test suites: we imported a module in almost all tests which itself imported lots of Angular Material modules. That caused all tests to run slowly (compared to now) and led to the heap size problems. We restructured our test suites (do not import large modules, mock everything that is not part of the current class under test, use TestBed only if necessary, do not compile the components, add NO_ERRORS_SCHEMA, etc.). Now we no longer have heap space problems under karma/jasmine, and the tests run, as they should, very fast.

Unfortunately, we still get heap space crashes in Jest. We don't know why.

@SimenB
Member

SimenB commented Feb 6, 2019

Possibly related: #6814

A repository reproducing this would be a good idea, so that whoever picks this up does not have to spend time putting a reproduction together 🙂

@w0rp

w0rp commented Jun 21, 2019

I can't easily provide example code that reproduces this issue, as the code isn't licensed such that I can share it, but my team sees a similar issue: Jest consumes tons of memory and keeps eating more until it runs into swap space. We've been able to fix this by manually running global.gc() in afterAll blocks.

It looks like something is stopping the garbage collector from running. It might be nice to have a setting in Jest that forces the garbage collector to run when only a specified percentage of available memory remains, or something like that.

@seyfer

seyfer commented Nov 13, 2019

@w0rp how did you expose gc() to Jest? I get "global.gc is not a function".

@jeysal
Contributor

jeysal commented Nov 13, 2019

--expose-gc

@seyfer

seyfer commented Dec 2, 2019

@jeysal well, this is actually not so obvious, because you can normally run Jest like this:
"test": "jest"

but in order to use this option, it has to be run this way:
"test": "node --expose-gc ./node_modules/.bin/jest"
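For reference, the full scripts entry then looks like this in package.json (assuming the standard local install path for the Jest binary):

```json
{
  "scripts": {
    "test": "node --expose-gc ./node_modules/.bin/jest"
  }
}
```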

@jeysal
Contributor

jeysal commented Dec 2, 2019

Yeah, sorry, I should have mentioned it's a Node option, not a Jest option.

@antgonzales

Our team is experiencing these issues as well. I also can't post the repo because of proprietary tech. It appears to be related to the setup/teardown of the internal jasmine runner. I'm seeing the same objects populate memory over and over again.

https://share.getcloudapp.com/4guxKGwn
https://share.getcloudapp.com/kpumJvLL

Steps to reproduce:

  1. Run the node inspector with Jest and log the usage: node --inspect-brk --expose-gc ./node_modules/jest/bin/jest.js --runInBand --logHeapUsage
  2. Open chrome://inspect
  3. Click the link to inspect the node instance
  4. From the Sources tab, click the Play icon in the top right to Resume script execution. You will now see the Jest console begin running the tests. You can also pause the script execution with the Pause icon to stop Jest
  5. Take heap snapshots
  6. Profit

@antgonzales

Our team was able to stabilize heap size using this particular approach:
integrations/slack#895

We definitely have a memory leak but the leaks are not as significant as the ones we were previously experiencing. I hope this helps until we get a long term approach 👍

@uptown

uptown commented Jan 27, 2020

If your problem is the same as ours, I resolved it by importing every test from one file and running that file for the whole test run.

Because of the crash, I dug into our code and diffed the heap dumps between tests, and I realized that somehow my global constant values were not being released between tests.

So basically it is a memory leak caused by our code structure. However, every global variable in our code is a singleton object and should stay alive until the process dies.

So we just collect every test in one file and let Jest use the same environment for all tests.

@SimenB
Member

SimenB commented Jan 27, 2020

While the leak is in user code and not Jest's code, I wonder if we should run the GC automatically after a run is complete? We already have code for it: https://github.com/facebook/jest/blob/084508290f4a90b28c3190021d3358ab1f753c3f/packages/jest-leak-detector/src/index.ts#L57-L68

Thoughts @jeysal @thymikee?

@thymikee
Collaborator

If it doesn't affect performance negatively, then I'm in

@SimenB
Member

SimenB commented Jan 27, 2020

It shouldn't have much of an impact as long as it runs after all tests are done, not after every individual test. I might be wrong, though, and it could take seconds?

@stuart-clark-45

stuart-clark-45 commented Jan 27, 2020 via email

@dav1app

dav1app commented Jan 27, 2020

To run global.gc() after each test, you just need to put an afterEach (or afterAll) hook inside a file listed in setupFilesAfterEnv. This way the function will be called after each test, once the Jest framework is initialized.

Inside jest.config.js:

module.exports = {
  setupFilesAfterEnv: [
    '<rootDir>/force-gc'
  ]
};

Inside force-gc.js:

afterEach(() => {
  if (global.gc) global.gc()
})

There is no need to add a dedicated option to Jest for this.

@antgonzales

Turns out this issue is not caused by our test suite. There's something odd going on with Jest retaining multiple instances of files:
#7874

@github-actions

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.

@github-actions github-actions bot added the Stale label Feb 25, 2022
@SimenB
Member

SimenB commented Mar 3, 2022

See #7874 (comment) (and the comment below it)

@SimenB SimenB closed this as completed Mar 3, 2022
@github-actions

github-actions bot commented Apr 3, 2022

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 3, 2022