Put request hangs on 3.11.0 #696
Can you please send a couple more of the last LOG lines? All of these LOG lines are just file deletions. It would be good to get the full output for JOB 134.
Here you go. 2015/08/20-01:42:14.041963 ff547ff080 EVENT_LOG_v1 {"time_micros": 1440034934041934, "job": 132, "event": "flush_started", "num_memtables": 1, "num_entries": 28252, "num_deletes": 0, "memory_usage": 4091941}
Hitting this issue on GET path as well. Thread 48 (Thread 0xffd57fe080 (LWP 1669)):
Your GET call seems to be stuck on reading blocks from files. That shouldn't wait for any lock. Are you running on a normal filesystem? Can you explain your reproduction steps? Also, since you're able to reproduce this reliably, can you check whether it reproduces with 3.13? BTW, your configuration seems suboptimal. Your memtable size is 4MB, but your L1 size is configured to be ~640MB. Usually we recommend setting the L1 size to be similar to the L0 size (which is 4 times the memtable size, or 16MB). You can see from the LOG that your write amplification is 23 (grep for "write-amplify") just for the L0->L1 compaction. We usually see a write amp of ~15 for the total database.
Sorry if my comments caused confusion. GET path hang was hit only once.
If Get was stuck on a call to read in the kernel/filesystem then I suspect
Mark Callaghan
@lookforsandy We did have some hang bugs in the past, but they only happened when RocksDB's stall mechanism kicked in. When stalls are triggered, we actually stop the writes until compaction finishes -- we had a bug where compaction wasn't started and writes were waiting for it to finish. Oops. However, this doesn't seem to be the case here. From your LOG files, it doesn't seem like stalls were triggered at all. Because stalls weren't triggered, nothing should stop the write. It's interesting that Get() was stuck in the call to the FS. We have never seen this before, and we run quite a big deployment at FB. What storage solution do you use (if you're allowed to share)?
@igorcanadi @rven1 do you still remember the bug we found about missing scheduled flush/compaction? I don't remember the details. Can that be related?
@siying I remember the bug. It only triggered when a stall happened, and only when more than one column family was used. Neither is the case here.
@lookforsandy just to confirm, by "hang", you mean hanging forever, or stopping for very long and recovering later?
@siying It is a complete hang.
@igorcanadi @siying In which version was the "missing-scheduled-flush/compaction" bug fixed? I think I hit a similar problem: RocksDB (3.13.1 in Samza on Yarn) hung completely; "Stopping writes" was triggered but no compaction started. Below are the last few lines of my LOG:
Can you please send us the full LOG? |
@igorcanadi I can send you the full LOG but it's 13MB, what's the proper way to send it to you? Below is the first and last 1k lines of it. |
@igorcanadi I've tried using another version of Samza (0.9.0) with RocksDB (3.5.1) on another Yarn (2.4.1), but the hang was still there. The 2.4 Yarn cluster is a mix of bare-metal machines running CentOS 5 and CentOS 6; the hung RocksDB was always on CentOS 6 in all 3 of my test rounds, which is weird. I'll do more tests tomorrow to see if the OS is relevant to the issue. The LOG this time is smaller so I pasted all of it: http://pastebin.com/8i1h9XGs. BTW: There is
In this particular LOG there is no mention of stalling anywhere, so I don't think this was a problem with RocksDB. Can you find a stack trace of the hung threads?
jstack: http://pastebin.com/b8inYP1h It's on a spinning disk, so I did some tuning this time, like a larger SST file size, but it didn't help... And iostat showed a low workload, as below. Please advise.
We started seeing this recently at LinkedIn with a Samza job running RocksDB version 3.13.1 with TTL enabled. The performance degraded by an order of magnitude (not a complete hang) for around 23 hours and then magically fixed itself. The log is 47MB compressed, much too large for pastebin. So here's what I've noticed:
I did some visual (inexact) analysis. Before the performance issue, there are around 60 stalls for every 1 stop. During the performance issue that ratio decreases to about 11 to 1. I'm attaching a file with the config and some samples of the stats. Any help would be appreciated! -Jake
Looking at the compaction stats, you flushed a total of 23GB with 177321 flushes, or 140 KB per flush on average. Why do you flush a memtable with only 140 KB? Are you calling db->Flush() too often?
You can also see this here:
18 L0 files, but total size is only 1MB.
Thanks, @igorcanadi
That's entirely possible. It looks like Samza's cache in front of RocksDB has been liberal with the flushes. Since this job is doing a stream join, I wouldn't expect the access pattern to change much, so I'm a little surprised that the performance was so binary (great/terrible) and that it was able to recover. Do the stats provide any explanation for this?
If you overload RocksDB with work (e.g., do a bunch of writes really fast, or in your case, a bunch of small flushes), it will begin stalling writes while the compactions (deferred work) complete. An interesting thing about RocksDB and the LSM architecture is that the further behind you are on compactions, the more expensive the compactions become (due to increased write amplification and the single-threadedness of L0->L1 compaction). So our write stalls have to be tuned exactly right for RocksDB to behave well under an extremely high write rate.
@jmakes which version did you upgrade from?
Thanks again, Igor! @siying The job was running Samza 10.0; we're upgrading them essentially to HEAD, which is a LinkedIn prerelease of version 10.1.
Closing this via automation due to lack of activity. If discussion is still needed here, please re-open or create a new/updated issue. |
We are experiencing a Put request hang on 3.11.0.
Below is the stack trace:
Thread 58 (Thread 0xfff126f080 (LWP 1332)):
#0 __pthread_cond_wait (cond=0x1010e048, mutex=0x1010e020) at pthread_cond_wait.c:158
#1 0x00000062882e76d8 in ?? () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062879cba4c in start_thread (arg=0xfff126f080) at pthread_create.c:310
#3 0x00000062878fa21c in __thread_start () from /lib64/libc.so.6
Thread 57 (Thread 0xfff0a6f080 (LWP 1333)):
#0 __pthread_cond_wait (cond=0x1010e048, mutex=0x1010e020) at pthread_cond_wait.c:158
#1 0x00000062882e76d8 in ?? () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062879cba4c in start_thread (arg=0xfff0a6f080) at pthread_create.c:310
#3 0x00000062878fa21c in __thread_start () from /lib64/libc.so.6
Thread 56 (Thread 0xffebfff080 (LWP 1334)):
#0 __pthread_cond_wait (cond=0x1010e048, mutex=0x1010e020) at pthread_cond_wait.c:158
#1 0x00000062882e76d8 in ?? () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062879cba4c in start_thread (arg=0xffebfff080) at pthread_create.c:310
#3 0x00000062878fa21c in __thread_start () from /lib64/libc.so.6
Thread 55 (Thread 0xffeb7ff080 (LWP 1335)):
#0 __pthread_cond_wait (cond=0x1010e048, mutex=0x1010e020) at pthread_cond_wait.c:158
#1 0x00000062882e76d8 in ?? () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062879cba4c in start_thread (arg=0xffeb7ff080) at pthread_create.c:310
#3 0x00000062878fa21c in __thread_start () from /lib64/libc.so.6
Thread 52 (Thread 0xffdcffe080 (LWP 1368)):
#0 __pthread_cond_wait (cond=0xffdcffcd08, mutex=0x1013a888) at pthread_cond_wait.c:158
#1 0x0000006288296d08 in rocksdb::port::CondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062882ffce0 in rocksdb::InstrumentedCondVar::WaitInternal() () from /usr/lib64/librocksdb.so.3.11
#3 0x00000062882ffef4 in rocksdb::InstrumentedCondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#4 0x00000062882957b8 in rocksdb::WriteThread::EnterWriteThread(rocksdb::WriteThread::Writer*, unsigned long) () from /usr/lib64/librocksdb.so.3.11
#5 0x000000628820cca8 in rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*) () from /usr/lib64/librocksdb.so.3.11
#6 0x00000062881fd030 in rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#7 0x00000062881fd0c0 in rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#8 0x0000006288214678 in ?? () from /usr/lib64/librocksdb.so.3.11
#9 0x00000062881bea70 in rocksdb_put () from /usr/lib64/librocksdb.so.3.11
Thread 8 (Thread 0xff76fff080 (LWP 1419)):
#0 __pthread_cond_wait (cond=0xff76ffddf8, mutex=0x1013a888) at pthread_cond_wait.c:158
#1 0x0000006288296d08 in rocksdb::port::CondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062882ffce0 in rocksdb::InstrumentedCondVar::WaitInternal() () from /usr/lib64/librocksdb.so.3.11
#3 0x00000062882ffef4 in rocksdb::InstrumentedCondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#4 0x00000062882957b8 in rocksdb::WriteThread::EnterWriteThread(rocksdb::WriteThread::Writer*, unsigned long) () from /usr/lib64/librocksdb.so.3.11
#5 0x000000628820cca8 in rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*) () from /usr/lib64/librocksdb.so.3.11
#6 0x00000062881fd030 in rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#7 0x00000062881fd0c0 in rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#8 0x0000006288214678 in ?? () from /usr/lib64/librocksdb.so.3.11
#9 0x00000062881bea70 in rocksdb_put () from /usr/lib64/librocksdb.so.3.11
Thread 6 (Thread 0xff75fff080 (LWP 1421)):
#0 __pthread_cond_wait (cond=0xff75ffdd08, mutex=0x1013a888) at pthread_cond_wait.c:158
#1 0x0000006288296d08 in rocksdb::port::CondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#2 0x00000062882ffce0 in rocksdb::InstrumentedCondVar::WaitInternal() () from /usr/lib64/librocksdb.so.3.11
#3 0x00000062882ffef4 in rocksdb::InstrumentedCondVar::Wait() () from /usr/lib64/librocksdb.so.3.11
#4 0x00000062882957b8 in rocksdb::WriteThread::EnterWriteThread(rocksdb::WriteThread::Writer*, unsigned long) () from /usr/lib64/librocksdb.so.3.11
#5 0x000000628820cca8 in rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*) () from /usr/lib64/librocksdb.so.3.11
#6 0x00000062881fd030 in rocksdb::DB::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#7 0x00000062881fd0c0 in rocksdb::DBImpl::Put(rocksdb::WriteOptions const&, rocksdb::ColumnFamilyHandle*, rocksdb::Slice const&, rocksdb::Slice const&) () from /usr/lib64/librocksdb.so.3.11
#8 0x0000006288214678 in ?? () from /usr/lib64/librocksdb.so.3.11
#9 0x00000062881bea70 in rocksdb_put () from /usr/lib64/librocksdb.so.3.11
Note: All other threads have stack trace similar to Thread 6 or Thread 58.
Configuration details are as below:
2015/08/19-23:56:47.814093 fff1d93000 RocksDB version: 3.11.0
2015/08/19-23:56:47.814288 fff1d93000 Git sha rocksdb_build_git_sha:812c461c96869ebcd8e629da8f01e1cea01c00ca
2015/08/19-23:56:47.814315 fff1d93000 Compile date Jul 30 2015
2015/08/19-23:56:47.814326 fff1d93000 DB SUMMARY
2015/08/19-23:56:47.815267 fff1d93000 CURRENT file: CURRENT
2015/08/19-23:56:47.815290 fff1d93000 IDENTITY file: IDENTITY
2015/08/19-23:56:47.815399 fff1d93000 MANIFEST file: MANIFEST-000804 size: 1578 Bytes
2015/08/19-23:56:47.815414 fff1d93000 SST files in /rocksdb/user-container-1 dir, Total Num: 7, files: 000184.sst 000186.sst 000187.sst 000189.sst 000191.sst 000192.sst 000194.sst
2015/08/19-23:56:47.815428 fff1d93000 Write Ahead Log file in /rocksdb/user-container-1: 000805.log size: 0
2015/08/19-23:56:47.815529 fff1d93000 Options.error_if_exists: 0
2015/08/19-23:56:47.815543 fff1d93000 Options.create_if_missing: 0
2015/08/19-23:56:47.815553 fff1d93000 Options.paranoid_checks: 1
2015/08/19-23:56:47.815564 fff1d93000 Options.env: 0x62884704e0
2015/08/19-23:56:47.815574 fff1d93000 Options.info_log: 0x10117010
2015/08/19-23:56:47.815585 fff1d93000 Options.max_open_files: 5000
2015/08/19-23:56:47.815595 fff1d93000 Options.max_total_wal_size: 16777192
2015/08/19-23:56:47.815606 fff1d93000 Options.disableDataSync: 0
2015/08/19-23:56:47.815616 fff1d93000 Options.use_fsync: 0
2015/08/19-23:56:47.815627 fff1d93000 Options.max_log_file_size: 0
2015/08/19-23:56:47.815637 fff1d93000 Options.max_manifest_file_size: 18446744073709551615
2015/08/19-23:56:47.815648 fff1d93000 Options.log_file_time_to_roll: 0
2015/08/19-23:56:47.815658 fff1d93000 Options.keep_log_file_num: 1000
2015/08/19-23:56:47.815668 fff1d93000 Options.allow_os_buffer: 1
2015/08/19-23:56:47.815679 fff1d93000 Options.allow_mmap_reads: 0
2015/08/19-23:56:47.815689 fff1d93000 Options.allow_mmap_writes: 0
2015/08/19-23:56:47.815700 fff1d93000 Options.create_missing_column_families: 0
2015/08/19-23:56:47.815710 fff1d93000 Options.db_log_dir:
2015/08/19-23:56:47.815721 fff1d93000 Options.wal_dir: /rocksdb/user-container-1
2015/08/19-23:56:47.815731 fff1d93000 Options.table_cache_numshardbits: 4
2015/08/19-23:56:47.815742 fff1d93000 Options.delete_obsolete_files_period_micros: 21600000000
2015/08/19-23:56:47.815752 fff1d93000 Options.max_background_compactions: 4
2015/08/19-23:56:47.815763 fff1d93000 Options.max_background_flushes: 1
2015/08/19-23:56:47.815773 fff1d93000 Options.WAL_ttl_seconds: 0
2015/08/19-23:56:47.815784 fff1d93000 Options.WAL_size_limit_MB: 0
2015/08/19-23:56:47.815815 fff1d93000 Options.manifest_preallocation_size: 4194304
2015/08/19-23:56:47.815826 fff1d93000 Options.allow_os_buffer: 1
2015/08/19-23:56:47.815836 fff1d93000 Options.allow_mmap_reads: 0
2015/08/19-23:56:47.815847 fff1d93000 Options.allow_mmap_writes: 0
2015/08/19-23:56:47.815857 fff1d93000 Options.is_fd_close_on_exec: 1
2015/08/19-23:56:47.815867 fff1d93000 Options.stats_dump_period_sec: 3600
2015/08/19-23:56:47.815878 fff1d93000 Options.advise_random_on_open: 1
2015/08/19-23:56:47.815888 fff1d93000 Options.db_write_buffer_size: 0
2015/08/19-23:56:47.815899 fff1d93000 Options.access_hint_on_compaction_start: NORMAL
2015/08/19-23:56:47.815910 fff1d93000 Options.use_adaptive_mutex: 0
2015/08/19-23:56:47.815920 fff1d93000 Options.rate_limiter: (nil)
2015/08/19-23:56:47.815931 fff1d93000 Options.bytes_per_sync: 0
2015/08/19-23:56:47.815942 fff1d93000 Options.wal_bytes_per_sync: 0
2015/08/19-23:56:47.816004 fff1d93000 Options.enable_thread_tracking: 0
2015/08/19-23:56:47.816022 fff1d93000 Compression algorithms supported:
2015/08/19-23:56:47.816032 fff1d93000 Snappy supported: 1
2015/08/19-23:56:47.816042 fff1d93000 Zlib supported: 1
2015/08/19-23:56:47.816053 fff1d93000 Bzip supported: 1
2015/08/19-23:56:47.816063 fff1d93000 LZ4 supported: 0
2015/08/19-23:56:47.816334 fff1d93000 Recovering from manifest file: MANIFEST-000804
2015/08/19-23:56:47.816448 fff1d93000 --------------- Options for column family [default]:
2015/08/19-23:56:47.816475 fff1d93000 Options.error_if_exists: 0
2015/08/19-23:56:47.816485 fff1d93000 Options.create_if_missing: 0
2015/08/19-23:56:47.816496 fff1d93000 Options.paranoid_checks: 1
2015/08/19-23:56:47.816506 fff1d93000 Options.env: 0x62884704e0
2015/08/19-23:56:47.816517 fff1d93000 Options.info_log: 0x10117010
2015/08/19-23:56:47.816527 fff1d93000 Options.max_open_files: 5000
2015/08/19-23:56:47.816537 fff1d93000 Options.max_total_wal_size: 16777192
2015/08/19-23:56:47.816548 fff1d93000 Options.disableDataSync: 0
2015/08/19-23:56:47.816558 fff1d93000 Options.use_fsync: 0
2015/08/19-23:56:47.816569 fff1d93000 Options.max_log_file_size: 0
2015/08/19-23:56:47.816579 fff1d93000 Options.max_manifest_file_size: 18446744073709551615
2015/08/19-23:56:47.816590 fff1d93000 Options.log_file_time_to_roll: 0
2015/08/19-23:56:47.816600 fff1d93000 Options.keep_log_file_num: 1000
2015/08/19-23:56:47.816611 fff1d93000 Options.allow_os_buffer: 1
2015/08/19-23:56:47.816621 fff1d93000 Options.allow_mmap_reads: 0
2015/08/19-23:56:47.816631 fff1d93000 Options.allow_mmap_writes: 0
2015/08/19-23:56:47.816642 fff1d93000 Options.create_missing_column_families: 0
2015/08/19-23:56:47.816652 fff1d93000 Options.db_log_dir:
2015/08/19-23:56:47.816662 fff1d93000 Options.wal_dir: /rocksdb/user-container-1
2015/08/19-23:56:47.816673 fff1d93000 Options.table_cache_numshardbits: 4
2015/08/19-23:56:47.816683 fff1d93000 Options.delete_obsolete_files_period_micros: 21600000000
2015/08/19-23:56:47.816694 fff1d93000 Options.max_background_compactions: 4
2015/08/19-23:56:47.816704 fff1d93000 Options.max_background_flushes: 1
2015/08/19-23:56:47.816715 fff1d93000 Options.WAL_ttl_seconds: 0
2015/08/19-23:56:47.816725 fff1d93000 Options.WAL_size_limit_MB: 0
2015/08/19-23:56:47.816757 fff1d93000 Options.manifest_preallocation_size: 4194304
2015/08/19-23:56:47.816767 fff1d93000 Options.allow_os_buffer: 1
2015/08/19-23:56:47.816777 fff1d93000 Options.allow_mmap_reads: 0
2015/08/19-23:56:47.816788 fff1d93000 Options.allow_mmap_writes: 0
2015/08/19-23:56:47.816798 fff1d93000 Options.is_fd_close_on_exec: 1
2015/08/19-23:56:47.816808 fff1d93000 Options.stats_dump_period_sec: 3600
2015/08/19-23:56:47.816819 fff1d93000 Options.advise_random_on_open: 1
2015/08/19-23:56:47.816830 fff1d93000 Options.db_write_buffer_size: 0
2015/08/19-23:56:47.816840 fff1d93000 Options.access_hint_on_compaction_start: NORMAL
2015/08/19-23:56:47.816851 fff1d93000 Options.use_adaptive_mutex: 0
2015/08/19-23:56:47.816861 fff1d93000 Options.rate_limiter: (nil)
2015/08/19-23:56:47.816872 fff1d93000 Options.bytes_per_sync: 0
2015/08/19-23:56:47.816882 fff1d93000 Options.wal_bytes_per_sync: 0
2015/08/19-23:56:47.816892 fff1d93000 Options.enable_thread_tracking: 0
2015/08/19-23:56:47.816903 fff1d93000 Options.comparator: rocksdb.InternalKeyComparator:internal
2015/08/19-23:56:47.816914 fff1d93000 Options.merge_operator: None
2015/08/19-23:56:47.816925 fff1d93000 Options.compaction_filter: None
2015/08/19-23:56:47.816936 fff1d93000 Options.compaction_filter_factory: DefaultCompactionFilterFactory
2015/08/19-23:56:47.816947 fff1d93000 Options.compaction_filter_factory_v2: DefaultCompactionFilterFactoryV2
2015/08/19-23:56:47.817011 fff1d93000 Options.memtable_factory: SkipListFactory
2015/08/19-23:56:47.817028 fff1d93000 Options.table_factory: BlockBasedTable
2015/08/19-23:56:47.817077 fff1d93000 table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x101101c0)
cache_index_and_filter_blocks: 1
index_type: 0
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x1011bd18
block_cache_size: 100000
block_cache_compressed: (nil)
block_size: 16384
block_size_deviation: 10
block_restart_interval: 8
filter_policy: nullptr
format_version: 0
2015/08/19-23:56:47.817092 fff1d93000 Options.write_buffer_size: 4194304
2015/08/19-23:56:47.817102 fff1d93000 Options.max_write_buffer_number: 3
2015/08/19-23:56:47.817114 fff1d93000 Options.compression[0]: NoCompression
2015/08/19-23:56:47.817125 fff1d93000 Options.compression[1]: NoCompression
2015/08/19-23:56:47.817135 fff1d93000 Options.compression[2]: NoCompression
2015/08/19-23:56:47.817146 fff1d93000 Options.compression[3]: NoCompression
2015/08/19-23:56:47.817157 fff1d93000 Options.prefix_extractor: nullptr
2015/08/19-23:56:47.817167 fff1d93000 Options.num_levels: 7
2015/08/19-23:56:47.817178 fff1d93000 Options.min_write_buffer_number_to_merge: 1
2015/08/19-23:56:47.817188 fff1d93000 Options.purge_redundant_kvs_while_flush: 1
2015/08/19-23:56:47.817199 fff1d93000 Options.compression_opts.window_bits: -14
2015/08/19-23:56:47.817209 fff1d93000 Options.compression_opts.level: -1
2015/08/19-23:56:47.817220 fff1d93000 Options.compression_opts.strategy: 0
2015/08/19-23:56:47.817230 fff1d93000 Options.level0_file_num_compaction_trigger: 8
2015/08/19-23:56:47.817241 fff1d93000 Options.level0_slowdown_writes_trigger: 20
2015/08/19-23:56:47.817251 fff1d93000 Options.level0_stop_writes_trigger: 24
2015/08/19-23:56:47.817261 fff1d93000 Options.max_mem_compaction_level: 2
2015/08/19-23:56:47.817272 fff1d93000 Options.target_file_size_base: 67108864
2015/08/19-23:56:47.817282 fff1d93000 Options.target_file_size_multiplier: 1
2015/08/19-23:56:47.817293 fff1d93000 Options.max_bytes_for_level_base: 671088640
2015/08/19-23:56:47.817303 fff1d93000 Options.level_compaction_dynamic_level_bytes: 0
2015/08/19-23:56:47.817314 fff1d93000 Options.max_bytes_for_level_multiplier: 10
2015/08/19-23:56:47.817324 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[0]: 1
2015/08/19-23:56:47.817336 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[1]: 1
2015/08/19-23:56:47.817346 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[2]: 1
2015/08/19-23:56:47.817357 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[3]: 1
2015/08/19-23:56:47.817368 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[4]: 1
2015/08/19-23:56:47.817379 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[5]: 1
2015/08/19-23:56:47.817390 fff1d93000 Options.max_bytes_for_level_multiplier_addtl[6]: 1
2015/08/19-23:56:47.817401 fff1d93000 Options.max_sequential_skip_in_iterations: 8
2015/08/19-23:56:47.817411 fff1d93000 Options.expanded_compaction_factor: 25
2015/08/19-23:56:47.817421 fff1d93000 Options.source_compaction_factor: 1
2015/08/19-23:56:47.817432 fff1d93000 Options.max_grandparent_overlap_factor: 10
2015/08/19-23:56:47.817442 fff1d93000 Options.arena_block_size: 419430
2015/08/19-23:56:47.817453 fff1d93000 Options.soft_rate_limit: 0.00
2015/08/19-23:56:47.817472 fff1d93000 Options.hard_rate_limit: 0.00
2015/08/19-23:56:47.817484 fff1d93000 Options.rate_limit_delay_max_milliseconds: 1000
2015/08/19-23:56:47.817495 fff1d93000 Options.disable_auto_compactions: 0
2015/08/19-23:56:47.817505 fff1d93000 Options.purge_redundant_kvs_while_flush: 1
2015/08/19-23:56:47.817557 fff1d93000 Options.filter_deletes: 0
2015/08/19-23:56:47.817572 fff1d93000 Options.verify_checksums_in_compaction: 0
2015/08/19-23:56:47.817583 fff1d93000 Options.compaction_style: 0
2015/08/19-23:56:47.817593 fff1d93000 Options.compaction_options_universal.size_ratio: 1
2015/08/19-23:56:47.817604 fff1d93000 Options.compaction_options_universal.min_merge_width: 2
2015/08/19-23:56:47.817614 fff1d93000 Options.compaction_options_universal.max_merge_width: 4294967295
2015/08/19-23:56:47.817625 fff1d93000 Options.compaction_options_universal.max_size_amplification_percent: 200
2015/08/19-23:56:47.817635 fff1d93000 Options.compaction_options_universal.compression_size_percent: -1
2015/08/19-23:56:47.817646 fff1d93000 Options.compaction_options_fifo.max_table_files_size: 1073741824
2015/08/19-23:56:47.817657 fff1d93000 Options.table_properties_collectors:
2015/08/19-23:56:47.817668 fff1d93000 Options.inplace_update_support: 0
2015/08/19-23:56:47.817678 fff1d93000 Options.inplace_update_num_locks: 10000
2015/08/19-23:56:47.817689 fff1d93000 Options.min_partial_merge_operands: 2
2015/08/19-23:56:47.817699 fff1d93000 Options.memtable_prefix_bloom_bits: 0
2015/08/19-23:56:47.817710 fff1d93000 Options.memtable_prefix_bloom_probes: 6
2015/08/19-23:56:47.817720 fff1d93000 Options.memtable_prefix_bloom_huge_page_tlb_size: 0
2015/08/19-23:56:47.817730 fff1d93000 Options.bloom_locality: 0
2015/08/19-23:56:47.817741 fff1d93000 Options.max_successive_merges: 0
2015/08/19-23:56:47.817751 fff1d93000 Options.optimize_fllters_for_hits: 0
2015/08/19-23:56:47.818983 fff1d93000 Recovered from manifest file:/rocksdb/user-container-1/MANIFEST-000804 succeeded,manifest_file_number is 804, next_file_number is 806, last_sequence is 2380539, log_number is 0,prev_log_number is 0,max_column_family is 0
2015/08/19-23:56:47.819020 fff1d93000 Column family [default](ID 0), log number is 803
2015/08/19-23:56:47.819935 fff1d93000 EVENT_LOG_v1 {"time_micros": 1440028607819914, "job": 1, "event": "recovery_started", "log_files": [805]}
2015/08/19-23:56:47.820000 fff1d93000 Recovering log #805
2015/08/19-23:56:47.820137 fff1d93000 Creating manifest 807
2015/08/19-23:56:47.878036 fff1d93000 Deleting manifest 804 current manifest 807
2015/08/19-23:56:47.878335 fff1d93000 EVENT_LOG_v1 {"time_micros": 1440028607878319, "job": 1, "event": "recovery_finished"}
2015/08/19-23:56:47.880348 fff1d93000 [DEBUG] [JOB 2] Delete /rocksdb/user-container-1//000805.log type=0 #805 -- OK
2015/08/19-23:56:47.909102 fff1d93000 DB pointer 0x1011bfe0
Last few lines of the LOG are as below:
2015/08/20-01:44:13.806745 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053806729, "job": 134, "event": "table_file_deletion", "file_number": 1097}
2015/08/20-01:44:13.808517 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001097.sst type=2 #1097 -- OK
2015/08/20-01:44:13.808627 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053808612, "job": 134, "event": "table_file_deletion", "file_number": 1095}
2015/08/20-01:44:13.810338 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001095.sst type=2 #1095 -- OK
2015/08/20-01:44:13.810443 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053810427, "job": 134, "event": "table_file_deletion", "file_number": 1093}
2015/08/20-01:44:13.867749 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001093.sst type=2 #1093 -- OK
2015/08/20-01:44:13.867893 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053867874, "job": 134, "event": "table_file_deletion", "file_number": 1092}
2015/08/20-01:44:13.869657 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001092.sst type=2 #1092 -- OK
2015/08/20-01:44:13.869766 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053869751, "job": 134, "event": "table_file_deletion", "file_number": 1090}
2015/08/20-01:44:13.930074 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001090.sst type=2 #1090 -- OK
2015/08/20-01:44:13.930218 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053930200, "job": 134, "event": "table_file_deletion", "file_number": 1089}
2015/08/20-01:44:13.932096 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001089.sst type=2 #1089 -- OK
2015/08/20-01:44:13.932215 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053932199, "job": 134, "event": "table_file_deletion", "file_number": 1087}
2015/08/20-01:44:13.933964 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001087.sst type=2 #1087 -- OK
2015/08/20-01:44:13.934084 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053934069, "job": 134, "event": "table_file_deletion", "file_number": 1085}
2015/08/20-01:44:13.994680 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001085.sst type=2 #1085 -- OK
2015/08/20-01:44:13.994830 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053994812, "job": 134, "event": "table_file_deletion", "file_number": 1084}
2015/08/20-01:44:13.996628 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001084.sst type=2 #1084 -- OK
2015/08/20-01:44:13.996743 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053996727, "job": 134, "event": "table_file_deletion", "file_number": 1082}
2015/08/20-01:44:13.998485 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001082.sst type=2 #1082 -- OK
2015/08/20-01:44:13.998598 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035053998583, "job": 134, "event": "table_file_deletion", "file_number": 1080}
2015/08/20-01:44:14.060166 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001080.sst type=2 #1080 -- OK
2015/08/20-01:44:14.060312 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035054060293, "job": 134, "event": "table_file_deletion", "file_number": 1079}
2015/08/20-01:44:14.062291 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001079.sst type=2 #1079 -- OK
2015/08/20-01:44:14.062582 ffedd91080 EVENT_LOG_v1 {"time_micros": 1440035054062563, "job": 134, "event": "table_file_deletion", "file_number": 1077}
2015/08/20-01:44:14.124123 ffedd91080 [DEBUG] [JOB 134] Delete /rocksdb/user-container-1/001077.sst type=2 #1077 -- OK