Mongodb Adaptor won't stop #265
Comments
@cognusion thanks for the detailed report. Would it be possible for you to test a mongo -> mongo pipeline? This would help me confirm whether it's an issue with the file adaptor.
Yeah, it keeps both mongo sides open indefinitely:
INFO[0361] Ping for srcmongodev1:27017 is 24 ms
I've looked through the adaptor, but I'm getting lost tracing where the mgo.Session is and isn't available. It seems like the client never closes the Session itself, just the pipe.
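For reference, the mgo usage pattern under discussion looks roughly like the sketch below (a minimal example using gopkg.in/mgo.v2; the hostnames come from the logs in this issue, everything else is illustrative rather than transporter's actual code):

```go
package main

import (
	"log"
	"time"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	// Dial starts background goroutines (cluster sync, pings) that stay alive
	// for as long as the session's cluster reference is held.
	session, err := mgo.Dial("mongodev1:27017,mongodev2:27017")
	if err != nil {
		log.Fatal(err)
	}
	session.SetSyncTimeout(10 * time.Second)

	// ... iterate the source collection here ...

	// Without this call the ping/sync goroutines and open sockets persist,
	// which is the leak described above: the pipe gets closed but the
	// Session never is.
	session.Close()
}
```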
ok, I'm curious if you are using this as a library or binary? The code that creates the initial session is relevant here as well. I'll try and reproduce this locally.
Library: it's a daemonized queue listener that takes parameters from passed messages and then, based on those parameters, adds nodes, builds a pipeline, and does the right thing. It all works great, it's just leaking awfully on the mongo side. I've only used the mongo, file, and postgresql adaptors so far.
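For context, the overall shape of that listener is roughly the following; every name here is a hypothetical stand-in (the real code would call into compose/transporter where runPipeline is), the point being that any per-run leak accumulates in a daemon like this:

```go
package main

import "log"

// pipelineParams is a hypothetical stand-in for the parameters carried by a
// queue message in the setup described above.
type pipelineParams struct {
	SourceURI string
	SinkURI   string
}

// runPipeline is a placeholder for "add nodes, build a pipeline and run it";
// the real code would call into compose/transporter here.
func runPipeline(p pipelineParams) error {
	log.Printf("running pipeline %s -> %s", p.SourceURI, p.SinkURI)
	return nil
}

// listen runs for the life of the daemon; if each runPipeline call leaks
// sessions or goroutines, they pile up across messages.
func listen(msgs <-chan pipelineParams) {
	for p := range msgs {
		if err := runPipeline(p); err != nil {
			log.Printf("pipeline failed: %v", err)
		}
	}
}

func main() {
	msgs := make(chan pipelineParams, 1)
	msgs <- pipelineParams{SourceURI: "mongodb://mongodev1:27017", SinkURI: "stdout://"}
	close(msgs)
	listen(msgs)
}
```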
ok, this all makes sense then: we never call the session cleanup in that path. @trinchan and I have been discussing some changes that should resolve this issue.
Excellent! Even if it didn't happen on pipeline.Stop(), having it exposed somehow, pipeline.ReallyStopAllTheThings() or something, would be awesome.
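The teardown being asked for would look roughly like the sketch below. This is a hypothetical shape, not transporter's actual API: a pipeline that tracks every resource its adaptors open and closes them all on Stop.

```go
package main

import (
	"fmt"
	"io"
)

// pipeline is a hypothetical stand-in, not the compose/transporter type:
// stop() walks every resource an adaptor opened and closes it, rather than
// only signalling the data path to finish.
type pipeline struct {
	closers []io.Closer // e.g. mgo sessions, file handles
}

func (p *pipeline) track(c io.Closer) { p.closers = append(p.closers, c) }

// stop closes tracked resources in reverse order of creation.
func (p *pipeline) stop() {
	for i := len(p.closers) - 1; i >= 0; i-- {
		if err := p.closers[i].Close(); err != nil {
			fmt.Println("close failed:", err)
		}
	}
	p.closers = nil
}

// fakeSession stands in for a real adaptor resource.
type fakeSession struct{ name string }

func (f *fakeSession) Close() error { fmt.Println("closed", f.name); return nil }

func main() {
	p := &pipeline{}
	p.track(&fakeSession{name: "mongo session"})
	p.stop()
}
```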
@cognusion can you do a fresh pull from master? I just merged #267, which should take care of cleaning up those goroutines and the mgo session.
At first glance, there's no change. I haven't dug into it at all, but will do that tonight/tomorrow. Thanks for the quick turnaround |
ok, I've been able to confirm there's still a leaky connection based on the code in question.
@cognusion can you test again? I merged in some changes earlier today that I'm hoping solved this issue.
@jipperinbham FTW! The sessions definitely close when the pipeline is complete, and the goro count drops a little thereafter, and a little more over time, almost back to baseline. The only two ever-present goros that appear orphaned from compose/transporter afterwards are:
goroutine 116 [chan receive]:
goroutine 117 [chan receive]:
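For anyone reproducing this, a goroutine dump like the one above can be produced with the standard runtime/pprof package (a minimal sketch; where it gets called depends on the program, here simply at the end of main):

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// ... build, run and stop the pipeline here ...

	// debug=2 prints a full stack for every live goroutine, which is how
	// "[chan receive]" stragglers can be attributed to a package.
	pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
}
```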
@cognusion thanks for testing and providing another detailed report, we'll take a look at those two and get them cleaned up soon. |
@cognusion this issue was autoclosed after I merged #280, but if you can test the latest from master, I'm really hoping we've taken care of all the leaky goroutines.
@jipperinbham confirmed: After running a mongo-sourced pipeline the mongo connections are closed, and dumping goros 1 minute later yields nothing directly from compose/transporter. Thanks again for the quick fixes on these! |
👍 thanks for helping us get this issue flushed out and tested. |
Bug report
Using Go to create a transporter pipeline with mongodb as the source and stdout as the sink, the mongo adaptor doesn't stop even when pipeline.Stop() is explicitly called. The adaptor maintains its connections to the mongos, periodically emits ping and sync messages, and so on. For a short-lived program this isn't a big deal, but these leaks are fatal for long-running processes.
System info:
Transporter: latest Git
OS: latest Linux
MongoDB: 2.6, 3.2
What did you expect to happen?
The adaptor would stop, connections to mongo would be closed, and the program could carry on.
What actually happened?
The adaptor claimed to have stopped, but sessions persisted, goroutines leaked, etc. The "Goros: N" line is emitted every second and reports runtime.NumGoroutine() (the number is 7 before the transporter pieces are built and run).
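The counter is essentially the following (a reconstruction of what's described above, not the exact code used):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Sample the live goroutine count once per second, producing the
	// "Goros: N" lines shown in the log below.
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		fmt.Printf("Goros: %d\n", runtime.NumGoroutine())
	}
}
```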
INFO[0008] iterating complete collection=thecollection
INFO[0008] Read completed db=thedb
INFO[0008] adaptor Start finished... path=source
INFO[0008] adaptor Stopping... path=source
INFO[0008] adaptor Stopped path=source
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 21
INFO[0023] Ping for mongodev1:27017 is 23 ms
Goros: 21
INFO[0023] Ping for mongodev2:27017 is 23 ms
Goros: 21
Goros: 21
Goros: 21
Goros: 21
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
Goros: 19
INFO[0038] Ping for mongodev1:27017 is 23 ms
Goros: 19
INFO[0038] Ping for mongodev2:27017 is 24 ms
INFO[0038] SYNC Starting full topology synchronization...
INFO[0038] SYNC Processing mongodev1:27017...
INFO[0038] SYNC Processing mongodev2:27017...
INFO[0038] SYNC Synchronization was complete (got data from primary).
INFO[0038] SYNC Synchronization completed: 1 master(s) and 1 slave(s) alive.
Goros: 19
... etc etc...