Fatal memory error with large response #1700
Comments
@mtoothman As we can see from your shared CLI snippet, the 21.79MB response is not being handled all that well. As a workaround, could you try the following command and confirm that the collection runs correctly?

NODE_OPTIONS="--max-old-space-size=2048" newman run ./collections/$COLLECTION_NAME.json -e ./env/$ENV_NAME.json

Note that you may have to adjust the value of --max-old-space-size, which is in megabytes. A proper long-term fix is to allow ignoring responses from the underlying collection run layer, similar to #1516.
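[Editor's note: a quick way to confirm the flag is actually being applied — not from this thread, just a sanity check using Node's built-in v8 module; the file name is a placeholder:]

// check-heap-limit.js: prints the effective V8 heap ceiling in MB.
// Run as: NODE_OPTIONS="--max-old-space-size=2048" node check-heap-limit.js
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log('V8 heap size limit: ' + limitMb.toFixed(0) + ' MB');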
Running this command with the suggested --max-old-space-size value works; using a smaller value results in the same fatal error.
@mtoothman Closer inspection of your snippet reveals the following:

While we think of a better way to handle this problem, I've marked this issue as a bug.
Node v9 and above increase these limits. Please try this out and let us know.
@shamasis The mentioned problem is still reproducible, even with Node v11.
Another customer has experienced a similar issue. They have tried the following and the problem still persists. I have attached a .txt of the error output as well.
We've made significant performance improvements to handle large response payloads.
Closing this issue as we are not able to reproduce it internally and we haven't seen other users facing the same problem. Please feel free to reopen if the issue persists or if you can help us with steps to reproduce it.
I am seeing a similar issue when running a collection using Newman:
Using Newman 4.6.1 or Newman 4.5.5 (we don't have Node > 10 yet):

<--- Last few GCs --->
...n 2643 steps since start of marking, biggest step 9.7 ms, walltime since start of marking 1759 ms) finalize incremental marking via stack guard GC i
[13346:0x31c3840] 38894 ms: Mark-sweep 1046.6 (1081.3) -> 551.2 (564.9) MB, 3.5 / 0.0 ms (+ 40.3 ms in 8 steps since start of marking, biggest step 11.2 ms, walltime since start of marking 12114 ms) finalize incremental marking via stack guard GC in o
[13346:0x31c3840] 38926 ms: Scavenge 551.9 (564.9) -> 551.4 (564.9) MB, 1.3 / 0.0 ms allocation failure

<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x35e132d25729
FATAL ERROR: invalid array length Allocation failed - JavaScript heap out of memory
 1: node::Abort() [node]
 2: 0x11dd81c [node]
 3: v8::Utils::ReportOOMFailure(char const*, bool) [node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [node]
 5: v8::internal::Heap::AllocateRawFixedArray(int, v8::internal::PretenureFlag) [node]
 6: v8::internal::Heap::AllocateFixedArrayWithFiller(int, v8::internal::PretenureFlag, v8::internal::Object*) [node]
 7: v8::internal::Factory::NewUninitializedFixedArray(int) [node]
 8: 0xde8473 [node]
 9: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [node]
10: 0xd135a8842fd
Aborted (core dumped)
script returned exit code 134
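[Editor's note: the trace shows the crash happening while V8 grows a plain JS array during allocation, consistent with a response body being accumulated in memory chunk by chunk. Below is a minimal illustration of the same failure mode — not Newman's actual code path — that aborts the same way when run with a deliberately small heap; the file name is a placeholder:]

// oom-repro.js: illustration only. Retains ever more heap-allocated strings
// until V8 aborts with "JavaScript heap out of memory".
// Run with a small heap to see the crash quickly:
//   node --max-old-space-size=64 oom-repro.js
const chunks = [];
while (true) {
  // Each iteration keeps another ~1.3 MB string reachable, mimicking a
  // response body being accumulated in memory chunk by chunk.
  chunks.push(Buffer.alloc(1024 * 1024).toString('base64'));
}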
The issue is happening for us too; it happens even before we attempt to run any tests.
This issue still exists. This is a VM with 10 GB of RAM, running with NODE_OPTIONS="--max-old-space-size=8192". The response is a file of 137.98MB. The file is downloaded, but then we get the fatal memory error. Smaller responses are fine.

$ newman -v
$ node -v
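[Editor's note: one possible workaround while this remains open is to pull very large downloads out of the collection and stream them to disk, so the body never has to live in the JS heap. A sketch using only Node core modules; the URL and output path are placeholders:]

// stream-download.js: streams a large response straight to disk instead of
// buffering it in memory. The URL and output path below are placeholders.
const https = require('https');
const fs = require('fs');

https.get('https://example.com/large-file', (res) => {
  res.pipe(fs.createWriteStream('./large-file.bin'))
     .on('finish', () => console.log('download complete'));
}).on('error', (err) => console.error('download failed:', err));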
Same issue here. 64-bit VM with 32 GB of available RAM, NODE_OPTIONS="--max-old-space-size=8192".

$ node -v

Responses below 107.55MB are passing fine.
Same issue here.
newman -v

Setting node options to 800 MB. A GET API call which downloads a file larger than 700MB doesn't work:

FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
Newman Version (can be found via newman -v): 4.1.0
OS details (type, version, and architecture):
Alpine Linux 3.7.0, in a Docker container
Are you using Newman as a library, or via the CLI?
CLI
Did you encounter this recently, or has this bug always been there:
Recently
Expected behaviour:
No timeouts during long collection or test script execution when options are omitted.
No out-of-memory errors.
Command / script used to run Newman:
newman run ./collections/$COLLECTION_NAME.json -e ./env/$ENV_NAME.json
Sample collection, and auxiliary files (minus the sensitive details):
Screenshots (if applicable):
Steps to reproduce the problem:
episode-editor.json.txt
I noticed this problem initially when a job was failing with "Script execution timed out", but adjusting timeout settings and upgrading to 4.10.0 to use its default Infinite setting didn't help.
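[Editor's note: for readers adjusting the same timeouts programmatically, here is a minimal sketch using Newman's documented library options timeoutScript and timeoutRequest; the file paths and the 10-minute values are placeholders, not settings from this thread:]

// run-collection.js: runs a collection with explicit timeouts via the
// Newman library API. File paths are placeholders.
const newman = require('newman');

newman.run({
  collection: './collections/episode-editor.json', // placeholder path
  environment: './env/local.json',                 // placeholder path
  timeoutScript: 600000,  // ms allowed per script before "Script execution timed out"
  timeoutRequest: 600000  // ms allowed per request/response
}, (err) => {
  if (err) { throw err; }
  console.log('collection run complete');
});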