
issue #1728 Using const or let in global scope have problem in nodejs… #1929

Merged: 1 commit merged into apache:master from tardieu:issue-1728 on Mar 10, 2017

Conversation

@tardieu (Contributor) commented Mar 1, 2017

Fixes issue #1728 by combining the evaluation of message.code and message.main into a single eval scope (similar to Node.js require, but without the need for a temporary file).
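For context, a minimal sketch of the failure mode behind issue #1728 (an assumed repro, not the runner code): a let or const declaration inside one eval call is scoped to that eval and invisible to a later eval, whereas var and function declarations escape into the enclosing scope.

// Assumed repro for issue #1728, not the actual runner code.
function init(code, main) {
    eval(code);        // let/const bindings stay inside this eval call
    return eval(main); // so a second eval cannot see them
}

console.log(init("function main() { return 42; }", "main")); // works: function decl escapes the eval
try {
    init("const main = () => 42;", "main");
} catch (e) {
    console.log(e.message); // "main is not defined"
}

Combining both evals into one scope, as this patch does, makes the lexical bindings visible to the return expression.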

@@ -67,8 +67,7 @@ function NodeActionRunner() {
     } else {
         // The code is a plain old JS file.
         try {
-            eval(message.code);
-            thisRunner.userScriptMain = eval(message.main);
+            thisRunner.userScriptMain = eval('(function(){\n' + message.code + '\nreturn ' + message.main + '})()');
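Expanded with a sample action (the code string here is hypothetical), the single string passed to eval looks like the following, so the let/const bindings and the return expression share one function scope:

(function () {
    const main = function () { return { payload: 'hello' }; }; // message.code
    return main;                                               // 'return ' + message.main
})();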
Member

Hmm, I think we don't want to continue to use eval.

I would prefer to write message.code to a file and then have the same flow as zip actions by doing a require on the file.
This means appending exports.main = $message.main to the file.
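A minimal sketch of that file-based flow (the helper name and layout are assumptions, not the PR code):

// Hypothetical sketch of the file-based approach suggested above.
const fs = require('fs');
const os = require('os');
const path = require('path');

function loadAction(message) {
    // Write the action source plus an exports line into a fresh temp
    // directory, then require it like a zip action's entry point.
    const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'action-'));
    const file = path.join(dir, 'action.js');
    fs.writeFileSync(file, message.code + '\nexports.main = ' + message.main + ';\n');
    return require(file).main;
}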

Contributor

Another advantage of using a file is that it can be named appropriately. The net effect is that any stack traces resulting from invocation of the action will show up as myAction.js:32,

whereas with eval, stack traces will include the runner.js file name and line numbers, with anonymous:32 as the nested file:line.

Contributor (Author)

I have also prototyped the file-based option, so I can easily switch to the alternative if desired.

However, require basically does the same thing as a function scope/eval, except that it also adds additional parameters to the function, e.g., module and exports. These names did not exist until now. The file-based solution is fine and certainly more consistent with the zip flow, but at the same time a bigger departure from the current code. It is also a bit trickier to get right, as the action code could already be messing with exports and module. Specifically, I don't think exports.main = $message.main is defensive enough.

I don't know OpenWhisk well enough yet to know whether the action file name can reuse the action name (in particular, whether a valid action name implies a valid file name). I suspect a temporary subdirectory is also needed to isolate the arbitrarily named file, and I guess the random folder will show up in the stack trace.
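For reference, require wraps every file in Node's documented module wrapper, which is why names like module and exports appear inside the action:

// Node's module wrapper (documented behavior): require compiles each file as
(function (exports, require, module, __filename, __dirname) {
    // ...file contents...
});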

Member

Yes, use the same approach as zip: mktemp for now, in tmpfs, with a random folder name.
I think just using action.js as the file name is good and simple, instead of trying to make sure the action name is safe to use as a file name and encoding it.

In terms of module.exports.main, I think it's fine and low risk; or use exports.$message.main.

@starpit (Contributor) commented Mar 2, 2017

The folder won't show up in the stack trace if we "cd" to that directory and require('./myAction.js'), right?

@rabbah (Member) commented Mar 2, 2017

Cold start latency?

@csantanapr (Member)

What do you mean, the overhead of the write?

I thought about that, and I think it is negligible; we'd be writing to memory instead of doing IO.
I think it's worth the trade-off.
But we can measure and compare just to be sure.

@tardieu (Contributor, Author) commented Mar 2, 2017

See this branch for a file-based implementation: https://github.com/tardieu/openwhisk/tree/issue-1728-file

The code should probably be refactored, but it should be good enough for performance testing.

@tardieu (Contributor, Author) commented Mar 3, 2017

I have updated this pull request to use a file/module-based implementation for the two flows, as requested.

@ddragosd (Contributor) commented Mar 3, 2017

> but we can measure and compare just to be sure

@csantanapr, are these performance tests at the component level, and are they in the source tree?

@rabbah (Member) commented Mar 3, 2017

Not in OpenWhisk, yet.

@rabbah (Member) commented Mar 7, 2017

I was simply raising the issue of cold start latency, yes. Intuitively, avoiding the write to disk would be prudent, and it's a one-line fix. Do we really get significant benefits from writing to disk and loading the modules?

We don't have performance tests that are easily consumable for benchmarking this change, yet. One suggestion is just to do it natively and re-evaluate later.

@csantanapr (Member)

I will try to do some comparisons locally on the difference between eval and writing the string to memory and then doing a require.
I will repeat myself: there is no disk involved, it's all memory (tmpfs).
I think it's important to fix this problem with let/const and line numbers in exceptions.

@starpit (Contributor) commented Mar 7, 2017

Is this the suggestion, i.e., to avoid going to the filesystem?

const vm = require('vm');

let code = `
exports.main = function() {
  console.log('hello world');
}
`;

global.exports = {};       // give the script an exports object via the shared global
vm.runInThisContext(code); // compile and run the code against the current global

global.exports.main(); // "hello world"

@tardieu (Contributor, Author) commented Mar 7, 2017

On my MacBook, using a ramdisk, I get a 100x performance penalty...

var NodeActionRunner = require("./runner.js"); // master branch
var NodeActionRunnerFile = require("./runner-file.js"); // this pull request
var iterations = 400;

process.chdir('/Users/tardieu/tmp'); // a ramdisk

var message = { code: "function main() { console.log('hello'); }", binary: false, main: "main" };

console.time('master');
for (let i = 0; i < iterations; i++) {
    var runner = new NodeActionRunner().init(message).catch(function (error) { console.log(error); });
}
console.timeEnd('master');

console.time('file');
for (let i = 0; i < iterations; i++) {
    var runner = new NodeActionRunnerFile().init(message).catch(function (error) { console.log(error); });
}
console.timeEnd('file');

Output:

master: 5.813ms
file: 608.743ms

I suspect tmpfs on Linux would do better than this, but this result is not encouraging.

@domdom82 (Contributor) commented Mar 7, 2017

@tardieu, regarding tmpfs: in the past I did some performance testing of tmpfs vs. actual disk using various storage drivers. We are currently using overlay to store container filesystems, which has a write speed of about 250 MB/s, compared to tmpfs, which peaks at about 700 MB/s; so tmpfs is about 3 times faster than writing to a (single) disk.

We haven't used tmpfs for user containers yet because RAM is the most expensive resource you can get on a host. Other than that, it is a great improvement, of course.

@tardieu (Contributor, Author) commented Mar 7, 2017

I have implemented @starpit's solution in: master...tardieu:issue-1728-vm

It fixes the current issue. The performance is similar to the original code, and the error messages are sensible.

FWIW, using vm.runInNewContext instead of vm.runInThisContext is about 30x slower, but still 3-4x faster than using a file on my system.
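For comparison, a minimal sketch of the runInNewContext variant (the sandbox shape here is an assumption): each call gets a fresh object serving as the script's global, so nothing leaks into our own globals.

const vm = require('vm');

// The sandbox acts as the script's global object; we pass in console and an
// empty exports so the action code can attach its entry point to it.
const sandbox = { exports: {}, console: console };
vm.runInNewContext("exports.main = function() { console.log('hello world'); }", sandbox);
sandbox.exports.main(); // "hello world"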

@markusthoemmes (Contributor)

Note that we once used vm.runInNewContext and it caused a huge memory leak. Please keep that in mind.

@rabbah (Member) commented Mar 7, 2017

IIRC there was some discussion about whether we were really seeing a memory leak or just GC overhead, but yeah, what @markusthoemmes said... it bit us before.

@rabbah (Member) commented Mar 7, 2017

Given all this information, I'm inclined to accept the original solution as the best compromise.

@csantanapr (Member)

Good work, @tardieu, on getting to the bottom of this and doing the comparisons.
Based on all the input, especially @domdom82's point about not using tmpfs (which is what I was thinking):

Let's go with the original proposal with the function wrapper, function () { return message.code }() or something like that, I can't remember the details.

I'm OK with the trade-off: this gives us the ability to use let and const, and isolation of the global context, even if it doesn't give us line numbers for exceptions.

@tardieu closed this Mar 7, 2017
@tardieu deleted the issue-1728 branch Mar 7, 2017
@starpit (Contributor) commented Mar 7, 2017

It looks like the runInNewContext leaks have been fixed in the 6-series of Node.js; the 4-series may never have received a backport of the fixes.

Cf. nodejs/node#3113

@tardieu reopened this Mar 7, 2017
@tardieu (Contributor, Author) commented Mar 7, 2017

I force-pushed the original fix.

@rabbah (Member) commented Mar 7, 2017

PG1/1237 approved.

@rabbah merged commit 1083e43 into apache:master Mar 10, 2017
dubee pushed a commit to dubee/openwhisk that referenced this pull request Mar 10, 2017
houshengbo pushed a commit to houshengbo/openwhisk that referenced this pull request Jun 23, 2017
jonpspri pushed a commit to jonpspri/openwhisk that referenced this pull request Jul 4, 2017
houshengbo pushed a commit to houshengbo/openwhisk that referenced this pull request Nov 13, 2017
houshengbo pushed a commit to houshengbo/openwhisk that referenced this pull request Nov 14, 2017