
Segfault on close with pending put #157

Closed
lakowske opened this issue Mar 25, 2015 · 53 comments · Fixed by #174 or #597
Labels
bug, segfault

Comments

@lakowske

I am getting a segfault when I run the following unit test. The segfault doesn't happen every time I run the test. Any ideas?

var test        = require('tape');
var level       = require('level');
var path        = require('path');
var through2    = require('through2');
var JSONStream  = require('JSONStream');

function push(db) {

    return through2.obj(function(levelRequest, enc, cb) {

        var self = this;

        db.put(levelRequest.key, levelRequest.value, {sync:true}, function(error) {
            if (error) {
                console.log('encountered an error while putting ' + JSON.stringify(levelRequest) + ' on the database: '
                            + error);
                self.push({result:'error', key: levelRequest.key, msg: error});
            } else {
                self.push({result:'success', key: levelRequest.key});
            }
            cb();
        });

    })

}

test('can push values through a level database', function(t) {
    var db   = level('./test.db');

    var dbify = push(db);
    var stringify = JSONStream.stringify(false);

    dbify.pipe(stringify).pipe(process.stdout);

    dbify.write({key:'hi', value:"wisconsin"});

    db.close(function(er) {
        if (er) throw er;
        t.end();
    })

})
@lakowske (Author)

I also saw the following core dump:

*** Error in `node': free(): invalid pointer: 0x00007f3cbc0002b8 ***
Aborted (core dumped)

I am running Ubuntu 14.04 on an Intel x64.

@juliangruber (Member)

Can you try require('level-11') instead? That's a pre-release version that is likely to work better in newer versions of node.

@lakowske (Author)

I tried using require('level-11') and got the following output:

TAP version 13
# can push values through a level database
Segmentation fault (core dumped)

I suspect

    db.close(function(er) {
        if (er) throw er;
        t.end();
    })

is triggering the issue because when I take it out, I don't get a segfault.

@juliangruber (Member)

The segfault is probably caused by still-open iterators. I think @kesla worked on this?

@lakowske (Author)

Yea, I took a peek at the source code and thought I'd try waiting like so:

    dbify.on('end', function() {
        db.close(function(er) {
            if (er) throw er;
            t.end();
        })
    })

    dbify.end();

Since I started waiting, I haven't received a segfault. Was it rude how I was shutting down? I am new to level and it hadn't occurred to me to do this.

@ralphtheninja (Member)

@lakowske Nope you weren't rude, this is the way it's supposed to work. There's some race condition happening somewhere. I'll dive into it and see if I can find something :)

@ralphtheninja (Member)

@lakowske I'm wondering if the stream hasn't finished yet and is trying to do db.put(). Can you double check this?

@lakowske (Author)

I'm not sure how to test whether the stream has finished, but I tried anyway. Here is what I tried:

test('can push values through a level database', function(t) {
    var db   = level('./test.db');

    var dbify = push(db);
    var stringify = JSONStream.stringify(false);


    dbify.pipe(stringify).pipe(process.stdout);

    dbify.write({key:'hi', value:"wisconsin"});

    db.get('hi', function(error, value) {
        console.log(value);
    })

    db.close(function(er) {
        if (er) throw er;
        t.end();
    })


})

and got the following:

TAP version 13
# can push values through a level database
wisconsin
Segmentation fault (core dumped)

Not sure if that helps, but I have to get to work now.

@ralphtheninja (Member)

Ok, so I think I found something. It seems it's related to { sync: true } in combination with closing the db. If I remove { sync: true } it works each time (for me).

@ralphtheninja (Member)

Tried to strip this sample down to something more minimal, to see if we can reproduce the same kind of error without involving streams and other modules.

This seems to segfault as well. I was using iojs@1.6.2 for this.

var level = require('level')
var db = level('./test.db')

db.put('foo', 'bar', { sync: true }, function (err) {
  console.log('back from put', err)
})

db.close(function (err) {
  console.log('closed the db', err)
})

@ralphtheninja added the bug and help wanted labels Mar 25, 2015
@ralphtheninja (Member)

I can confirm that this segfaults on 0.10.37 as well so it doesn't seem to be related to any specific version of node.

@ralphtheninja (Member)

/cc @rvagg @kesla @mcollina Any ideas on why { sync: true } would matter before I dive into C++? :)

@lakowske (Author)

@ralphtheninja Over lunch I tried your minimalistic version without sync, so:

var level = require('level')
var db = level('./test.db')

db.put('foo', 'bar', function (err) {
    console.log('back from put', err)
})

db.close(function (err) {
    console.log('closed the db', err)
})

and got the following on my third run:

Segmentation fault: 11

This occurred on OS X 10.9.5 (FWIW).

@kesla (Contributor) commented Mar 25, 2015

My guess is that there's an open iterator that's not being closed properly - but I don't have any pointers on where to look really to figure it out. Sorry.

@mcollina (Member)

My gut says this is my fault; I probably introduced a bug when updating leveldown from node 0.10 to node v0.12 and iojs. Basically it's time for lldb/gdb and figuring out what the problem is :(.

@ralphtheninja (Member)

It's impossible for me to reproduce without sync: true. Perhaps it's more apparent when sync is set.

@No9 (Contributor) commented Mar 25, 2015

Hey @mcollina

Using https://github.com/ddopson/node-segfault-handler
I get the output below
Ran this a few times and the output is consistent.
Does it ring any bells?

closed the db undefined
PID 21832 received SIGSEGV for address: 0x62
/home/anton/tests/leveldown-157/node_modules/segfault-handler/build/Release/segfault-handler.node(+0x1175)[0x7fdec0062175]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7fdec18a6340]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZNK7leveldb8MemTable13KeyComparatorclEPKcS3_+0x1a)[0x7fdec0a99efa]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb8SkipListIPKcNS_8MemTable13KeyComparatorEE6InsertERKS2_+0x7a)[0x7fdec0a9a4aa]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb8MemTable3AddEmNS_9ValueTypeERKNS_5SliceES4_+0xf7)[0x7fdec0a9ab67]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(+0x459c4)[0x7fdec0aaa9c4]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZNK7leveldb10WriteBatch7IterateEPNS0_7HandlerE+0x1a7)[0x7fdec0aaac87]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb18WriteBatchInternal10InsertIntoEPKNS_10WriteBatchEPNS_8MemTableE+0x45)[0x7fdec0aaaf35]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb6DBImpl5WriteERKNS_12WriteOptionsEPNS_10WriteBatchE+0x418)[0x7fdec0a92348]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb2DB3PutERKNS_12WriteOptionsERKNS_5SliceES6_+0x52)[0x7fdec0a8d912]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN7leveldb6DBImpl3PutERKNS_12WriteOptionsERKNS_5SliceES6_+0x11)[0x7fdec0a8d941]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN9leveldown8Database13PutToDatabaseEPN7leveldb12WriteOptionsENS1_5SliceES4_+0x26)[0x7fdec0a82696]
/home/anton/tests/leveldown-157/node_modules/level/node_modules/leveldown/build/Release/leveldown.node(_ZN9leveldown11WriteWorker7ExecuteEv+0x4f)[0x7fdec0a8540f]
node[0xe0e939]
node[0xe1bef1]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182)[0x7fdec189e182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fdec15cb47d]
Aborted (core dumped)

The script used:

var level = require('level')
var db = level('./test.db')

var SegfaultHandler = require('segfault-handler');

SegfaultHandler.registerHandler();

db.put('foo', 'bar', { sync: true }, function (err) {
  console.log('back from put', err)
})

db.close(function (err) {
  console.log('closed the db', err)
})

@No9 (Contributor) commented Mar 26, 2015

Dived in a little more this morning and I think @kesla may have a point.

https://github.com/rvagg/node-leveldown/blob/v0.10.4/src/database.cc#L256
is returning false, so the db is being shut down immediately.

My next step would be to find out, regarding WriteBatch in LevelDB, whether:

  1. it should set up an iterator to make L256 true,
  2. there are internals in leveldb that should take care of this, or
  3. we have missed something completely.

Also note that this presents like a race condition: sometimes the sample works, and sometimes the process just hangs (1 in 5 ish) instead of the usual core dump.

@ralphtheninja (Member)

@No9 Yes I've noticed that it hangs as well. Just doesn't end. And on ubuntu I eventually get the system program error dialog:

[screenshot: Ubuntu system program error dialog, 2015-03-26]

@ralphtheninja (Member)

Ok, so I did the following:

$ ulimit -c unlimited
$ cd node_modules/level/node_modules/leveldown
$ node-gyp rebuild --debug
$ cd -
$ node segfault.js
$ gdb node core

And once in gdb:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `node segfault.js'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f88bc160a59 in leveldb::GetVarint32Ptr (p=0x62 <error: Cannot access memory at address 0x62>, limit=0x67 <error: Cannot access memory at address 0x67>, value=0x7f88b77fd924)
    at ../deps/leveldb/leveldb-1.14.0/util/coding.h:93
93      uint32_t result = *(reinterpret_cast<const unsigned char*>(p));
(gdb) bt
#0  0x00007f88bc160a59 in leveldb::GetVarint32Ptr (p=0x62 <error: Cannot access memory at address 0x62>, limit=0x67 <error: Cannot access memory at address 0x67>, value=0x7f88b77fd924)
    at ../deps/leveldb/leveldb-1.14.0/util/coding.h:93
#1  0x00007f88bc1603f1 in leveldb::GetLengthPrefixedSlice (data=0x62 <error: Cannot access memory at address 0x62>) at ../deps/leveldb/leveldb-1.14.0/db/memtable.cc:17
#2  0x00007f88bc160542 in leveldb::MemTable::KeyComparator::operator() (this=0x7f88b0000f68, aptr=0x62 <error: Cannot access memory at address 0x62>, bptr=0x7f88b0001008 "\vfoo\001\001")
    at ../deps/leveldb/leveldb-1.14.0/db/memtable.cc:36
#3  0x00007f88bc161870 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::KeyIsAfterNode (this=0x7f88b0000f68, key=@0x7f88b77fdaf8: 0x7f88b0001008 "\vfoo\001\001", n=0x7f88b00008e0)
    at ../deps/leveldb/leveldb-1.14.0/db/skiplist.h:255
#4  0x00007f88bc161456 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual (this=0x7f88b0000f68, key=@0x7f88b77fdaf8: 0x7f88b0001008 "\vfoo\001\001", prev=0x7f88b77fda50)
    at ../deps/leveldb/leveldb-1.14.0/db/skiplist.h:265
#5  0x00007f88bc1611e0 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Insert (this=0x7f88b0000f68, key=@0x7f88b77fdaf8: 0x7f88b0001008 "\vfoo\001\001") at ../deps/leveldb/leveldb-1.14.0/db/skiplist.h:338
#6  0x00007f88bc1607c2 in leveldb::MemTable::Add (this=0x7f88b0000f20, s=1, type=leveldb::kTypeValue, key=..., value=...) at ../deps/leveldb/leveldb-1.14.0/db/memtable.cc:105
#7  0x00007f88bc171619 in leveldb::(anonymous namespace)::MemTableInserter::Put (this=0x7f88b77fdc30, key=..., value=...) at ../deps/leveldb/leveldb-1.14.0/db/write_batch.cc:118
#8  0x00007f88bc17130b in leveldb::WriteBatch::Iterate (this=0x7f88b77fdd90, handler=0x7f88b77fdc30) at ../deps/leveldb/leveldb-1.14.0/db/write_batch.cc:59
#9  0x00007f88bc171705 in leveldb::WriteBatchInternal::InsertInto (b=0x7f88b77fdd90, memtable=0x7f88b0000f20) at ../deps/leveldb/leveldb-1.14.0/db/write_batch.cc:133
#10 0x00007f88bc152bad in leveldb::DBImpl::Write (this=0x7f88b00008f0, options=..., my_batch=0x7f88b77fdd90) at ../deps/leveldb/leveldb-1.14.0/db/db_impl.cc:1192
#11 0x00007f88bc153b09 in leveldb::DB::Put (this=0x7f88b00008f0, opt=..., key=..., value=...) at ../deps/leveldb/leveldb-1.14.0/db/db_impl.cc:1419
#12 0x00007f88bc152841 in leveldb::DBImpl::Put (this=0x7f88b00008f0, o=..., key=..., val=...) at ../deps/leveldb/leveldb-1.14.0/db/db_impl.cc:1150
#13 0x00007f88bc13e843 in leveldown::Database::PutToDatabase (this=0x13774b0, options=0x13868d0, key=..., value=...) at ../src/database.cc:52
#14 0x00007f88bc146a5f in leveldown::WriteWorker::Execute (this=0x137c430) at ../src/database_async.cc:190
#15 0x00007f88bc13c3ad in NanAsyncExecute (req=0x137c438) at ../node_modules/nan/nan.h:1638
#16 0x0000000000cf90ac in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:91
#17 0x0000000000d06d29 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
#18 0x00007f88bef85182 in start_thread (arg=0x7f88b77fe700) at pthread_create.c:312
#19 0x00007f88becb247d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) 

So now we have at least file names and line numbers.

/cc @mcollina @rvagg

@ralphtheninja (Member)

Digging through some old issues I found #32, reported by @rvagg:

"Without this, a segfault is possible when you close() while an operation is in-flight."

This feels very much related to the problems we are having here.

@ralphtheninja (Member)

@mcollina I don't think this is related to your changes :)

@mcollina (Member)

👯! Big question: is this fixed in 1.0.0?

@ralphtheninja (Member)

@mcollina Nope (I just tested it)

@mcollina (Member)

Unfortunately I have no time now :(, but it is something I would love to solve :).

@ralphtheninja (Member)

We should solve this before releasing 1.0.0 imo.

@ralphtheninja (Member)

Updated stack trace with leveldb-1.17.0:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `node segfault.js'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007fd08ea98ff3 in leveldb::GetVarint32Ptr (p=0x62 <error: Cannot access memory at address 0x62>, limit=0x67 <error: Cannot access memory at address 0x67>, value=0x7fd08e212914)
    at ../deps/leveldb/leveldb-1.17.0/util/coding.h:93
93      uint32_t result = *(reinterpret_cast<const unsigned char*>(p));
(gdb) bt
#0  0x00007fd08ea98ff3 in leveldb::GetVarint32Ptr (p=0x62 <error: Cannot access memory at address 0x62>, limit=0x67 <error: Cannot access memory at address 0x67>, value=0x7fd08e212914)
    at ../deps/leveldb/leveldb-1.17.0/util/coding.h:93
#1  0x00007fd08ea9898b in leveldb::GetLengthPrefixedSlice (data=0x62 <error: Cannot access memory at address 0x62>) at ../deps/leveldb/leveldb-1.17.0/db/memtable.cc:17
#2  0x00007fd08ea98adc in leveldb::MemTable::KeyComparator::operator() (this=0x7fd080000f58, aptr=0x62 <error: Cannot access memory at address 0x62>, bptr=0x7fd080000ff8 "\vfoo\001\001")
    at ../deps/leveldb/leveldb-1.17.0/db/memtable.cc:36
#3  0x00007fd08ea99e0a in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::KeyIsAfterNode (this=0x7fd080000f58, key=@0x7fd08e212ae8: 0x7fd080000ff8 "\vfoo\001\001", n=0x7fd0800008e0)
    at ../deps/leveldb/leveldb-1.17.0/db/skiplist.h:255
#4  0x00007fd08ea999f0 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual (this=0x7fd080000f58, key=@0x7fd08e212ae8: 0x7fd080000ff8 "\vfoo\001\001", prev=0x7fd08e212a40)
    at ../deps/leveldb/leveldb-1.17.0/db/skiplist.h:265
#5  0x00007fd08ea9977a in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Insert (this=0x7fd080000f58, key=@0x7fd08e212ae8: 0x7fd080000ff8 "\vfoo\001\001") at ../deps/leveldb/leveldb-1.17.0/db/skiplist.h:338
#6  0x00007fd08ea98d5c in leveldb::MemTable::Add (this=0x7fd080000f10, s=1, type=leveldb::kTypeValue, key=..., value=...) at ../deps/leveldb/leveldb-1.17.0/db/memtable.cc:105
#7  0x00007fd08eaa9c9b in leveldb::(anonymous namespace)::MemTableInserter::Put (this=0x7fd08e212c20, key=..., value=...) at ../deps/leveldb/leveldb-1.17.0/db/write_batch.cc:118
#8  0x00007fd08eaa998d in leveldb::WriteBatch::Iterate (this=0x7fd08e212d90, handler=0x7fd08e212c20) at ../deps/leveldb/leveldb-1.17.0/db/write_batch.cc:59
#9  0x00007fd08eaa9d87 in leveldb::WriteBatchInternal::InsertInto (b=0x7fd08e212d90, memtable=0x7fd080000f10) at ../deps/leveldb/leveldb-1.17.0/db/write_batch.cc:133
#10 0x00007fd08ea8b1b5 in leveldb::DBImpl::Write (this=0x7fd0800008f0, options=..., my_batch=0x7fd08e212d90) at ../deps/leveldb/leveldb-1.17.0/db/db_impl.cc:1201
#11 0x00007fd08ea8c133 in leveldb::DB::Put (this=0x7fd0800008f0, opt=..., key=..., value=...) at ../deps/leveldb/leveldb-1.17.0/db/db_impl.cc:1434
#12 0x00007fd08ea8ae25 in leveldb::DBImpl::Put (this=0x7fd0800008f0, o=..., key=..., val=...) at ../deps/leveldb/leveldb-1.17.0/db/db_impl.cc:1155
#13 0x00007fd08ea77c8f in leveldown::Database::PutToDatabase (this=0x251c0c0, options=0x251c250, key=..., value=...) at ../src/database.cc:54
#14 0x00007fd08ea7e8c1 in leveldown::WriteWorker::Execute (this=0x251c170) at ../src/database_async.cc:191
#15 0x00007fd08ea757f9 in NanAsyncExecute (req=0x251c178) at ../node_modules/nan/nan.h:1581
#16 0x0000000000cf90ac in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:91
#17 0x0000000000d06d29 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
#18 0x00007fd0918bd182 in start_thread (arg=0x7fd08e213700) at pthread_create.c:312
#19 0x00007fd0915ea47d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

@braydonf (Contributor) commented May 8, 2015

Implemented a writing guard for LevelUP here: Level/levelup#338

@braydonf (Contributor)

btw: #174 ended up not fixing the bug

@juliangruber reopened this Jun 16, 2015
@snowyu commented Jul 20, 2015

The sync option does not make the put function synchronous; it just makes leveldb flush the data to disk immediately. So this is not a bug, it is incorrect usage. If you really want synchronous functions, you have to try my fork snowyu/node-nosql-leveldb; see discussion #136 and Level/abstract-leveldown#49

@ralphtheninja (Member)

I just ran

var levelup = require('levelup');
var leveldown = require('leveldown');
var db = levelup('./test.db');

db.put('foo', 'bar', { sync: true }, function (err) {
  console.log('back from put', err);
});

db.close(function (err) {
  console.log('closed the db', err);
});

on 1.7.1 and can confirm we still have problems with that version

@huan (Contributor) commented Jun 17, 2018

I ran into a segmentation fault while using leveldown v2.0.

I'll try upgrading to leveldown v4 to see if it still happens.

Related to:

@huan (Contributor) commented Aug 6, 2018

After I switched to nosql-leveldb, which is a fork of leveldown by @snowyu, the segfault problem seems to have gone away and has not appeared for a month.

@snowyu Would you like to merge what you did to fix this issue back into the leveldown repository? I believe it would be very valuable for all users.

@snowyu commented Aug 6, 2018

In fact I already submitted a pull request for it (synchronous methods) as early as 4 years ago: #136.
But they (the level community) strongly opposed the synchronous methods; see the discussion in Level/abstract-leveldown#49. So I had to fork it as nosql-leveldb.

@huan (Contributor) commented Aug 6, 2018

Oh, it's a very old and long thread; it takes some time to read.

It seems the community is not interested in a sync API. However, a segfault fix should be very welcome.

To keep it simple, could you please submit a pull request with just the segfault fix, so we can see whether leveldown can be fixed without changing other APIs?

It would be awesome if we could. Thank you very much.

@snowyu commented Aug 6, 2018

No hope; I've changed the low-level core completely, using only sync methods in it. In my opinion, KISS is more important: now it is clear and easy to read and debug. And Node.js has supported worker threads since version 10.5.0 (--experimental-worker). An extra internal thread makes things harder to write (mutexes etc. needed) and debug.

Why not focus on the database business with all your heart and soul, and let Node.js do the other things (coroutine/thread/process scheduling)? Is the sync way really such a dangerous performance killer?

@huan (Contributor) commented Aug 6, 2018

Understood. So your fix was based on the sync modification, right?

It's a pity that we can't get this fix merged.

And about the sync API: I have no bias toward either side. However, I have to agree that if we use a sync API, the event loop has to wait for the sync call to finish before it can do any other work. I believe that's why people do not like sync APIs.

@snowyu commented Aug 6, 2018

Yes.

I do not think you understood my point. First, you can run it in a worker thread (or a generator?) to avoid blocking. Second, do you think this has really gotten you into trouble after running for a month?

@huan (Contributor) commented Aug 7, 2018

nosql-leveldb works like a charm in my environment; I'm using it as a local file cache for protocol payloads in my WeChat bot.

About the worker thread (or generator): could you please provide example code for Node demonstrating that a sync operation will not block the event loop? I'm not familiar with worker threads in Node (nor generators), but I'm very interested in this technique, because sometimes I have to process lots of data in a heavy loop that blocks the event loop for seconds.

@snowyu commented Aug 7, 2018

The concept is to put it (a task) into another process/thread/coroutine (fiber) to run.

@huan (Contributor) commented Aug 8, 2018

Thanks for the information.

However, it seems that generators/fibers (coroutines) will not solve the problem when we have an event-loop-blocking function, and the fork/worker modules would be too complicated.

Let's make the problem simple:

If we have the following code to run, how can we guarantee the setInterval callback is called every 100 ms while the block() function is running?

function block () {
  let i = 0;
  for (i = 0; i < 10000000000; i++) {
    let j = i
    j = (i + j) / (i * j)
  }
}

function main () {
  setInterval(() => { console.log('setInterval runned') }, 100)
  block()
}

main()

The above code will wait around 10 seconds before printing setInterval runned.

@snowyu commented Aug 8, 2018

The thread approach is what leveldown uses internally (a new thread is created for each async operation). The cost of thread/process switching is quite expensive; this is why synchronous performs better than asynchronous here (asynchronous is roughly twice as slow as synchronous). You can run the benchmark in node-nosql-leveldb/bench.

A coroutine/fiber is a lightweight task-scheduling mechanism within one thread/process; it uses cooperative scheduling. Assuming thread switching costs 10 units of time, the coroutine overhead may be just 1. I could yield when doing disk I/O if I wrote leveldb completely in JS, but this is just an adapter for the leveldb library.

And I do not think leveldb is disk-I/O intensive in the general scenario: the table cache, block cache, and write buffer are all used in leveldb, so only a very special scene makes the cache miss. You should benchmark your use case to determine where your bottleneck is.

If you want to know how to write a blocking task with generators, you can read this: https://jiayihu.github.io/sinergia/
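For completeness, the generator idea can be sketched in plain Node: slice the heavy loop and yield between slices so pending timers get a turn. This is an illustration of cooperative scheduling, not code from nosql-leveldb:

```javascript
// Sketch: cooperative scheduling with a generator. The heavy loop is
// cut into slices; between slices we return to the event loop via
// setImmediate, so timers such as setInterval can still fire.
function* blockInSlices(total, sliceSize) {
  let j = 0;
  for (let i = 0; i < total; i++) {
    j = (i + j) % 97;
    if (i % sliceSize === sliceSize - 1) yield; // hand control back
  }
  return j;
}

function drive(gen, done) {
  const step = gen.next();              // run one slice synchronously
  if (step.done) return done(step.value);
  setImmediate(() => drive(gen, done)); // let queued timers run first
}

const timer = setInterval(() => console.log('setInterval ran'), 100);
drive(blockInSlices(1e8, 1e6), (result) => {
  clearInterval(timer);
  console.log('finished with', result);
});
```

The trade-off versus a worker thread is that each slice still blocks the loop for its duration, so the interval fires approximately, not exactly, every 100 ms.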

@huan (Contributor) commented Aug 8, 2018

If I understand correctly, we are simulating async operations in leveldown because internally leveldb is sync.

Thanks for your explanation, I'll look into it later.

@ralphtheninja removed their assignment Aug 10, 2018
@vweevers changed the title from "Segfault using level v0.18.0 and Node v0.12.1" to "Segfault on close with pending put" Mar 2, 2019
@vweevers removed the help wanted label Mar 2, 2019