ThreadStream.write "worker has exited" error. #1429

Closed
dmitrizagidulin opened this issue May 2, 2022 · 18 comments
@dmitrizagidulin

Hi all. We're getting an error trying to use Pino v7.10.0 (and pino-pretty v7.6.1) in our server (running on Node >=17).

our package.json:

    "pino": "^7.10.0",
    "pino-pretty": "^7.6.1",

The logger config is fairly standard: https://github.com/XRFoundation/XREngine/blob/dev/packages/server-core/src/logger.ts

And this is the error we're getting on startup (using a standard builder deployment inside AWS Kubernetes, Node v18):

[screenshot: "worker has exited" error stack trace at startup]

We ran into a similar error using the Elasticsearch transport (not on Kubernetes; this is just Ubuntu Linux on AWS):

(node:35772) UnhandledPromiseRejectionWarning
Error: the worker has exited
    at ThreadStream.write (/home/ubuntu/XREngine/node_modules/thread-stream/index.js:214:13)
    at Pino.write (/home/ubuntu/XREngine/node_modules/pino/lib/proto.js:208:10)
    at Pino.LOG [as error] (/home/ubuntu/XREngine/node_modules/pino/lib/tools.js:60:21)
    at Function.app.setup (/home/ubuntu/XREngine/packages/server-core/src/sequelize.ts:97:16)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
/home/ubuntu/XREngine/node_modules/thread-stream/index.js:214
      throw new Error('the worker has exited')

We're able to work around it for now by downgrading to pino v6 (a sketch of that v6-style setup is below), but we'd definitely like to take advantage of the multi-threaded transports in v7.

Is there anything we're missing here?
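
A minimal sketch of the v6-style setup we fell back to, assuming pino v6's in-process prettyPrint option rather than our exact code; since nothing runs in a worker thread, the thread-stream error can't occur:

import pino from 'pino'

// pino v6 renders logs in-process through pino-pretty (prettyPrint option),
// so there is no transport worker thread that can exit underneath the logger
const logger = pino({
  prettyPrint: {
    colorize: true,
    translateTime: true
  }
})

export default logger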

@dmitrizagidulin
Author

My initial thought was that this is related to #1400. But here it's pino itself that's throwing the error, not some other error that we happen to be logging during exit.

@mcollina
Member

mcollina commented May 2, 2022

Does that code work correctly in development / on a local machine?

@dmitrizagidulin
Author

Does that code work correctly in development / on a local machine?

That's the mysterious part -- I can't reproduce it locally on macOS. (I'll be able to test it locally on Linux tomorrow.)

@mcollina
Member

mcollina commented May 3, 2022

If you can provide a minimal reproduction code, I'll be happy to test this.

@ashishguptasanu

ashishguptasanu commented May 4, 2022

Hi @mcollina, here is the logger snippet:

import pino from 'pino'

const node = process.env.ELASTIC_HOST || 'http://localhost:9200'

const logger = pino({
  transport: {
    targets: [
      {
        level: 'debug',
        target: 'pino-pretty',
        options: {
          colorize: true,
          translateTime: true
        }
      },
      {
        level: 'debug',
        target: 'pino-elasticsearch',
        options: {
          index: 'xr-engine',
          consistency: 'one',
          node: node
        }
      }
    ]
  }
})

export default logger

We get the "worker has exited" error only when Elasticsearch is not running.

Pino version: 7.3.0

@mcollina
Member

mcollina commented May 4, 2022

This seems to be a duplicate of pinojs/pino-elasticsearch#140.
Would you like to send a PR to pino-elasticsearch to handle this case?
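
A rough sketch of what such handling might look like when using pino-elasticsearch as a plain stream, assuming its documented 'error' and 'insertError' events (this is only a sketch, not the eventual PR):

import pino from 'pino'
import pinoElastic from 'pino-elasticsearch'

const streamToElastic = pinoElastic({
  index: 'xr-engine',
  node: process.env.ELASTIC_HOST || 'http://localhost:9200'
})

// surface connection/insert failures as events instead of letting
// them take down the stream that pino writes to
streamToElastic.on('error', (err) => {
  console.error('Elasticsearch client error:', err)
})
streamToElastic.on('insertError', (err) => {
  console.error('Elasticsearch insert error:', err)
})

const logger = pino(streamToElastic)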

@dmitrizagidulin
Author

@mcollina - this could be a duplicate of pinojs/pino-elasticsearch#140, but we got the same worker-thread error when testing with the pino-pretty transport alone (no Elasticsearch).

@mcollina
Member

mcollina commented May 4, 2022

We get the "worker has exited" error only when Elasticsearch is not running.

So this is not correct, right?

@mcollina
Member

mcollina commented May 4, 2022

How often does this reproduce? Have you got a way to reproduce it reliably?

@mcollina
Member

mcollina commented May 4, 2022

I have a few ideas about why this can happen, but I really need a repro to verify them.
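
A hypothetical repro sketch along the lines described above: point pino-elasticsearch at a node that isn't running and log immediately (the index name here is made up):

import pino from 'pino'

// nothing is listening on localhost:9200 in this scenario
const logger = pino({
  transport: {
    target: 'pino-elasticsearch',
    options: { index: 'repro', node: 'http://localhost:9200' }
  }
})

// if the transport worker dies while failing to connect, a later write
// should throw "the worker has exited" from ThreadStream.write
setInterval(() => logger.info('tick'), 100)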

@dmitrizagidulin
Author

@mcollina thanks for looking at this, much appreciated.

I'm going to work with our devops team to see if we can replicate it. Frustratingly, we can't replicate the error locally on our desktops (macOS or Linux), but it happens consistently on our dev cluster (k8s etc.).
Let me see if I can find more details.

@mcollina
Member

mcollina commented May 5, 2022

Here is my theory: the transport crashes because of a bug. However, the error is not reported correctly on the main thread due to a race condition.
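
For reference, the guard visible in the stack trace above (thread-stream/index.js:214) behaves roughly like this paraphrase (not the actual source): once the worker dies, the next write on the main thread throws synchronously, so the original crash inside the worker is easy to lose.

// paraphrase of ThreadStream.write, not the actual thread-stream source
write (data) {
  if (this.workerHasExited) {        // flag set when the worker thread dies
    throw new Error('the worker has exited')
  }
  // ...otherwise data is handed off to the worker via a shared ring buffer
}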

@dmitrizagidulin
Author

OK, so we may have solved the issue and should now be running on pino 7, via PR EtherealEngine/etherealengine#5946

@mcollina
Member

mcollina commented May 9, 2022

what was it?

@dmitrizagidulin
Author

@mcollina - I'm not entirely sure, but it seems to be fixed by using the pino-multi-stream transport to combine the console & Elasticsearch transports. https://github.com/XRFoundation/XREngine/pull/5946/files#diff-cdb6907467cab0c8b4a5eef09756e52cb54375fd7f03121920d31032c96e1cc6R23

@mcollina
Member

You should use https://getpino.io/#/docs/api?id=pino-multistream, not pino-multi-stream.
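
A minimal sketch of that recommendation, assuming pino.multistream (built into pino v7) combined with a pino-pretty stream and a pino-elasticsearch stream; option names follow the snippet earlier in this thread:

import pino from 'pino'
import pretty from 'pino-pretty'
import pinoElastic from 'pino-elasticsearch'

const node = process.env.ELASTIC_HOST || 'http://localhost:9200'

// both destinations receive every log at level 'debug' and above
const streams = [
  { level: 'debug', stream: pretty({ colorize: true, translateTime: true }) },
  { level: 'debug', stream: pinoElastic({ index: 'xr-engine', node }) }
]

const logger = pino({ level: 'debug' }, pino.multistream(streams))

export default logger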

@dmitrizagidulin
Author

You should use https://getpino.io/#/docs/api?id=pino-multistream, not pino-multi-stream.

Ah, thanks for the heads up!

@github-actions

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

github-actions bot locked as resolved and limited conversation to collaborators Jul 22, 2022