
Cluster module not working? #2885

Closed
nicjansma opened this issue May 17, 2017 · 5 comments

@nicjansma

What's going wrong?

As far as I can tell, when starting a PM2 app in cluster mode, the NodeJS cluster module is not working as expected:

  • require("cluster").isMaster is true but require("cluster").workers is always empty
  • None of the require("cluster").on("fork", ...) or similar events fire
  • As a result, things like node-cluster-cache don't work

How could we reproduce this issue?

Simple repro app:

// cluster.js
var pm2 = require("pm2");
var cluster = require("cluster");

pm2.connect(function() {
    pm2.start({
        name: "test",
        exec_mode: "cluster",
    }, function(err, proc) {
        setInterval(function() {
            // this always reports {} workers
            console.log("Master workers from cluster.js", cluster.workers, cluster.isMaster);
        }, 1000);
    });
});
// app.js
var express = require("express");
var app = express();

app.listen(3002, function() {
    console.log("Listening on port " + 3002);
});
node cluster.js

If I remove PM2 from this, and run app.js (using cluster and .fork() on my own), everything works OK.
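For comparison, this is roughly the PM2-free setup I mean (a minimal sketch; the master script name and the instance count of 2 are just for illustration):

// master.js -- rough sketch of the non-PM2 setup, forking app.js myself
var cluster = require("cluster");

// fork app.js instead of re-running this master script
cluster.setupMaster({ exec: "app.js" });

cluster.on("fork", function(worker) {
    // this event fires here, unlike under PM2
    console.log("forked worker", worker.id);
});

cluster.fork();
cluster.fork();

setInterval(function() {
    // reports the two workers as expected
    console.log("workers:", Object.keys(cluster.workers));
}, 1000);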

Supporting information

PM2 version: `pm2 -v` 2.4.6
Node version: `node -v` 5.1.2 and 7.10.0
Windows? Mac? Linux? Mac
@vmarchaud
Contributor

I believe the only problem here is that cluster.isMaster returns true.
When you start an application with PM2, it's the daemon that actually starts the processes, which means the daemon is the master of the cluster.
That's why you can't retrieve the workers and why the events aren't triggered inside the script that calls the API.

This is intended behavior for the cluster mode: your process is never the master, so it can't run the master-side logic.
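A quick way to see this from inside the app (assuming cluster mode forks app.js through Node's cluster module, so the app instance is a worker; NODE_APP_INSTANCE is the per-instance id PM2 sets, if I remember correctly):

// drop this into app.js and start it with PM2 in cluster mode
var cluster = require("cluster");

// under PM2, the app instance itself is a worker, not the master;
// the master lives in the PM2 daemon process
console.log("isMaster:", cluster.isMaster);   // expected: false
console.log("isWorker:", cluster.isWorker);   // expected: true
console.log("instance id:", process.env.NODE_APP_INSTANCE);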

@nicjansma
Author

@vmarchaud Thanks for the info, that explains things.

Now that I understand the actual "master" in the cluster is the daemon, it makes sense why there are no .workers in the "launch" app (cluster.js). The reason cluster.js reports isMaster=true is that every process is considered a master unless it was started as a fork. cluster.js just doesn't know that another process is the real "master" in this case.

However, it leaves me in a bind, and I'm sure it will cause issues with some other modules too. Any module that makes decisions based on isMaster will not work correctly, right?
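For reference, here's my rough understanding of how the cluster module makes that decision (the NODE_UNIQUE_ID detail comes from reading the Node source, so treat it as an assumption):

// roughly how the cluster module decides master vs. worker:
// .fork() sets NODE_UNIQUE_ID in the child's environment, and a process
// without that variable considers itself the master
var isWorker = ("NODE_UNIQUE_ID" in process.env);
var isMaster = !isWorker;
// cluster.js was started directly (node cluster.js), so it has no
// NODE_UNIQUE_ID and reports isMaster === true even though PM2's daemon
// is the real master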

For this example, node-cluster-cache uses IPC to communicate between the forks and the master. To do this, it has to be initialized on the master as well as on all of the forks. When using PM2, the daemon process that is the cluster master isn't initializing node-cluster-cache, so all operations via node-cluster-cache (e.g. .get()) end up hanging.

I can imagine other modules would have this issue too. A brief look at memored suggests the same issue.
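To illustrate why those calls hang, here's a rough sketch of the IPC pattern these modules rely on (not their actual code; the message names are made up):

var cluster = require("cluster");

if (cluster.isMaster) {
    // master side: must run in the *real* cluster master to answer requests
    cluster.on("fork", function(worker) {
        worker.on("message", function(msg) {
            if (msg && msg.type === "cache:get") {
                // look the value up in the master-side store and reply
                worker.send({ type: "cache:reply", id: msg.id, value: null });
            }
        });
    });
} else {
    // worker side: send the request and wait for the reply
    process.send({ type: "cache:get", id: 1, key: "foo" });
    // if the real master never registered the handler above (as with
    // PM2's daemon), no reply ever comes back and the .get() hangs
}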

Are there ways we could work around this? Some ideas:

  • Can we give a callback that would be run in the daemon process? I could initialize node-cluster-cache in the code it ran.
  • Can we have the "startup" script (in my case cluster.js) be responsible for starting the cluster? That probably means it would be long-living?
  • Can we have the daemon not start the cluster directly, but have it launch the app (app.js), which would be responsible for .fork()ing? It would probably have to pass some environment variables, and a callback for changing the child process count/etc. app.js would then have to add if (cluster.isMaster) logic/etc., but that's basically what I'm doing today instead of using PM2.

The third option is the best for me. That way, I can switch between using PM2 and running directly with no code changes. I think it also makes the most sense and should make the above node modules work. But it probably adds a bit of work for the daemon's instance management.
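For reference, here's roughly the self-forking pattern I use today without PM2, which option 3 would let me keep (WORKER_COUNT is a hypothetical environment variable):

// app.js -- self-forking variant (what I do today instead of PM2)
var cluster = require("cluster");
var express = require("express");

if (cluster.isMaster) {
    // the master only forks workers; cluster-aware modules
    // (e.g. node-cluster-cache) get initialized here too
    var count = parseInt(process.env.WORKER_COUNT || "2", 10);
    for (var i = 0; i < count; i++) {
        cluster.fork();
    }
} else {
    var app = express();
    app.listen(3002, function() {
        console.log("Worker " + process.pid + " listening on port 3002");
    });
}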

@vmarchaud
Contributor

You are right that any module which assumes it is running in a normal cluster environment will fail.
Our view is that if you want to customize the cluster behavior, you should simply implement it yourself. It's far easier for users to implement it themselves than for us to try to satisfy every use case out there.

@nicjansma
Author

nicjansma commented Jun 8, 2017

I might suggest a note in the README listing known incompatible modules, or cases where other cluster-aware modules won't work. It would have saved me quite a bit of debugging time (and I would have decided not to use pm2 earlier).

Known incompatible modules:

  • node-cluster-cache
  • memored

@vmarchaud
Contributor

I would totally merge a PR into our docs here to indicate this.
