
[ERR_INTERNAL_ASSERTION] with http keepAlive #59

Open · dylantack opened this issue Jun 19, 2019 · 12 comments

dylantack commented Jun 19, 2019

Repro: https://github.com/dylantack/mitm_err_internal_assertion

When running some Jest tests, I got this:

    Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
    Please open an issue with this stack trace at https://github.com/nodejs/node/issues

      at InternalSocket.readData (node_modules/mitm/lib/internal_socket.js:156:8)

This is the entire stack trace. The line generating the error is here:
https://github.com/moll/node-mitm/blob/v1.7.0/lib/internal_socket.js#L156

There are also some warnings:

(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap [as on] (_http_server.js:763:54)
    at first (project/node_modules/ee-first/index.js:43:10)
    at onSocket (project/node_modules/on-finished/index.js:114:16)
    at attachFinishedListener (project/node_modules/on-finished/index.js:119:5)
    at attachListener (project/node_modules/on-finished/index.js:146:5)
    at onFinished (project/node_modules/on-finished/index.js:52:3)
    at Application.handleRequest (project/node_modules/koa/lib/application.js:150:5)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 end listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap (_http_server.js:763:54)
    at connectionListenerInternal (_http_server.js:401:10)
    at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
    at Mitm.connectionListener (_http_server.js:339:3)
    at Mitm.request (project/node_modules/mitm/index.js:159:28)
    at ClientRequest.onSocket (project/node_modules/underscore/underscore.js:748:26)
    at setRequestSocket (_http_agent.js:363:7)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 drain listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap (_http_server.js:763:54)
    at connectionListenerInternal (_http_server.js:403:10)
    at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
    at Mitm.connectionListener (_http_server.js:339:3)
    at Mitm.request (project/node_modules/mitm/index.js:159:28)
    at ClientRequest.onSocket (project/node_modules/underscore/underscore.js:748:26)
    at setRequestSocket (_http_agent.js:363:7)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap [as on] (_http_server.js:763:54)
    at first (project/node_modules/ee-first/index.js:43:10)
    at onSocket (project/node_modules/on-finished/index.js:114:16)
    at attachFinishedListener (project/node_modules/on-finished/index.js:119:5)
    at attachListener (project/node_modules/on-finished/index.js:146:5)
    at onFinished (project/node_modules/on-finished/index.js:52:3)
    at Application.handleRequest (project/node_modules/koa/lib/application.js:150:5)

Thanks!

moll (Owner) commented Jun 19, 2019

I don't have Node.js v12.4 at hand yet to test, but on Travis everything passes except a UCS-2 encoding test (on v12.0.0 everything passes). Would you mind pasting at least some snippets of how Mitm.js is used in your tests? I'll try to skim whether something changed since Node v12.0 that could break Mitm.js.

dylantack (Author) commented

Thanks for the reply! I made a reduced repro; unfortunately it doesn't trigger the error, but it at least demonstrates the context:
https://github.com/dylantack/mitm_err_internal_assertion

I'll update this issue if I'm able to reproduce it independent of my project. It seems to be something specific to the Stripe SDK; if I use the request module to manually POST to api.stripe.com, the issue doesn't occur.

moll (Owner) commented Jun 21, 2019

I briefly skimmed https://github.com/stripe/stripe-node/blob/e542902dd8fbe591fe3c3ce07a7e89d1d60e4cf7/lib/StripeResource.js#L320 and nothing particularly jumped out, so I'll await your update once you manage to isolate it a little.

dylantack (Author) commented

Update: I was able to reproduce it! It just needed more iterations. Curiously, it fails on the 63rd test (could be a coincidence, but this is 2^6-1, which makes me wonder if there's a buffer somewhere that gets full).

The issue occurs on Node v10.16.0, and also v12.6.0.

dylantack (Author) commented Jul 14, 2019

Another interesting observation: the number of test runs needed to induce a failure changes based on the size of the HTTP body. If you make the mock body larger, Mitm will fail sooner, and vice versa.

One more clue I found: by running git-bisect on the Stripe packages, I was able to locate the start of the problem: stripe/stripe-node@e67dd8ce4e

It appears to be related to keepAlive. If I hack the Stripe code to use new http.Agent({keepAlive: false}), the problem goes away.

@dylantack dylantack changed the title Error [ERR_INTERNAL_ASSERTION] Error [ERR_INTERNAL_ASSERTION] with http keepAlive Jul 14, 2019
@dylantack dylantack changed the title Error [ERR_INTERNAL_ASSERTION] with http keepAlive [ERR_INTERNAL_ASSERTION] with http keepAlive Jul 14, 2019

dylantack (Author) commented

Aha! I finally got a "clean" reproduction with no other dependencies. It's definitely keep-alive:
https://github.com/dylantack/mitm_err_internal_assertion

moll (Owner) commented Jul 14, 2019

Thanks for the research! I bet it's to do with the fact that Mitm.js kind-of forces a fresh connection every time (node-mitm/index.js, lines 59 to 63 at 58e0673):

// This will create a lot of sockets in tests, but that's the current price
// to pay until I find a better way to force a new socket for each
// connection.
this.stubs.stub(Http.globalAgent, "maxSockets", Infinity)
this.stubs.stub(Https.globalAgent, "maxSockets", Infinity)

while Keep-Alive keeps old ones open. Though I would have thought I made Mitm.js close connections when the HTTP response is sent and all that. ^_^ I'll have to do some code archeology. :P

moll (Owner) commented Jul 14, 2019

I can confirm it goes haywire when you pass an agent in with keepAlive set to true. In fact, there's a combinatorial explosion with every intercepted connection adding one listener to the reused socket. That doesn't happen with the globalAgent for some reason, which is probably why it hasn't come up so far. Here's an ES5-only test case I'll soon add to Mitm.js's tests that surfaces the problem even in Node v6:

var data = _.times(128, function() { return "deadbeef" }).join("")
data.length.must.equal(1024)

var count = 0
this.mitm.on("request", function(_req, res) {
  ;++count
  res.end(data)
})

var agent = new Http.Agent({keepAlive: true})
var res = _.times(128, noop).reduce(function(res) {
  return res.then(function() {
    return new Promise(function(resolve) {
      var client = Http.request({host: "foo", agent: agent})
      client.end()

      client.on("response", function(res) {
        res.on("data", noop)
        res.on("end", resolve)
      })
    })
  })
}, Promise.resolve())

res.then(function() {
  count.must.equal(128)
  done()
}, done)

I'll see what I can do. Hopefully I can make Mitm.js work with re-used sockets, but for a quick fix until then I suggest disabling keep-alive in tests. :) Thanks again for your debugging!
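
For reference, a minimal sketch of that quick fix (the host name, handler, and response body here are purely illustrative, not from the repro): while Mitm.js is intercepting, make requests through an agent without keep-alive, or through no custom agent at all.

var Http = require("http")
var Mitm = require("mitm")

var mitm = Mitm()
mitm.on("request", function(req, res) { res.end("OK") })

// Quick fix: an agent without keep-alive, so every intercepted request gets
// a fresh socket instead of reusing one across requests.
var agent = new Http.Agent({keepAlive: false})

var client = Http.request({host: "foo", agent: agent}, function(res) {
  res.on("data", function() {})
  res.on("end", function() { mitm.disable() })
})
client.end()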

Oloompa commented Sep 4, 2019

I have a similar issue with the same cause.

I use Mitm with AVA. AVA lets me attach context to each unit test.

test.beforeEach(t => {
    const mitm = Mitm();

    mitm.on("request", function(request, response) {
        switch(t.context.something) {
            case "behavior A":
                response.end("something");
                break;
            case "behavior B":
                response.end("something else");
                break;
        }
    });

    t.context.mitm = mitm;
});

test.afterEach.always("cleanup", t => {
    t.context.mitm.disable();
});

test.serial("test A", async t => {
    t.context.something = "behavior A";
    // calling http request ...
});

test.serial("test B", async t => {
    t.context.something = "behavior B";
    // calling http request ...
});

When I use the keep-alive feature, disabling Mitm has no effect (the tests run serially). The first test is OK, but the second test fails because both request handlers (both instances, even the disabled one) are called; the handler with the A context is called first.

This causes the second test to get the result of the first request.
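
A sketch of a workaround that might help in this setup (an assumption, not something confirmed in this thread, and it requires each test to pass its own agent into the code under test): destroy the agent's pooled sockets in the cleanup hook, so the next test's requests can't be served over a socket the previous test's handler is still attached to. agent.destroy() is standard Node http.Agent API.

const http = require("http");
const test = require("ava");
const Mitm = require("mitm");

test.beforeEach(t => {
    t.context.mitm = Mitm();
    // One keep-alive agent per test; the requests under test must use it.
    t.context.agent = new http.Agent({ keepAlive: true });
});

test.afterEach.always("cleanup", t => {
    // Close this test's pooled sockets before disabling Mitm, so they can't
    // be reused by the next test.
    t.context.agent.destroy();
    t.context.mitm.disable();
});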

papandreou (Contributor) commented

I guess this might be related to this keep-alive problem I ran into a few years ago: #36

I'll just close that one and hope you get to the bottom of it here :)

moll (Owner) commented Sep 4, 2019

Ah, indeed, @papandreou. Thanks for that! I re-opened it to remember to take a look once I jump into this. :)

marcghorayeb commented Nov 22, 2019

Hey - I stumbled upon the same weird issue with keepAlive.
I fixed it on my side by explicitly calling req.destroy() after sending back data in the response.
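
For reference, a sketch of that workaround (the handler and response body are illustrative): destroy the intercepted request once the mocked response has been written, so a keep-alive agent can't reuse the socket for the next request.

var Mitm = require("mitm")

var mitm = Mitm()

mitm.on("request", function(req, res) {
  res.end("{\"mocked\": true}")
  // Tear down the intercepted socket after responding, so a keep-alive agent
  // can't hold on to it and route the next request to this stale handler.
  req.destroy()
})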

jonasgrunert added a commit to jonasgrunert/mockpi that referenced this issue Sep 16, 2020
Mitm has problems when agents like axios want to keep the connection alive; this is probably due to a buffer overflow. This is why we destroy the request upon finishing sending data (moll/node-mitm#59 (comment)).