
Watchman leaks open files (Linux) #27

Closed
dturner-tw opened this issue Mar 4, 2014 · 11 comments
@dturner-tw
Contributor

I believe that this is actually a Linux kernel bug that watchman is triggering. I'm reporting it here because maybe there's a way to work around it in watchman. I'm also going to report it to LKML; I'll comment with a link to the thread once it's posted.

Here's a script:
https://gist.github.com/dturner-tw/9336584

If you run it on Linux, you'll soon (after a few hundred lines of output) see something like:
{u'version': u'2.9.3', u'error': u'unable to resolve root /tmp/tmpL0H6OT: watch(/tmp/tmpL0H6OT): inotify_init error: Too many open files'}
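
(A rough sketch of the kind of loop the script runs -- an approximation only; see the gist for the real script. It repeatedly watches a fresh temp directory, then deletes it, until watchman starts reporting "Too many open files":)

# Rough reproduction sketch in Python; an approximation of the gist above.
import shutil
import subprocess
import tempfile

while True:
    d = tempfile.mkdtemp()
    p = subprocess.run(['watchman', 'watch', d],
                       capture_output=True, text=True)
    out = (p.stdout or p.stderr).strip()
    print(out)
    if 'error' in out:
        break
    shutil.rmtree(d, ignore_errors=True)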

Here's where it gets really fun: I kill the script. Then I kill watchman. Then I delete my /tmp/.watchman.{username}* files. So I should be ready to go again. But no, next time I run the script, I get the "Too many open files" message much more quickly. And in fact, inotify is messed up generally (not just for watchman):
$ tail -f /etc/hosts >/dev/null
tail: inotify cannot be used, reverting to polling: Too many open files

(I sometimes need to do that in two or three separate windows to trigger the error)

Only rebooting will make inotify work correctly again.

Some other diagnostics:
$ cat /proc/sys/fs/inotify/max_user_instances
128
$ ls -l /proc/*/fd/* 2>/dev/null | grep -c anon_inode:inotify
18
$ sudo ls -l /proc/*/fd/* 2>/dev/null | grep -c anon_inode:inotify
32

System info:
$ uname -a
Linux stross 3.11.0-17-generic #31~precise1-Ubuntu SMP Tue Feb 4 21:25:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I also tried on an Ubuntu 3.8.0-36-generic kernel, and on a stock 3.13.5 that I built from source (this one in virtualbox).

@dturner-tw
Contributor Author

@sunshowers
Contributor

Are there any processes being triggered? Watchman could potentially be leaking fds to children, though I think we use O_CLOEXEC or equivalent everywhere.
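
(For reference, a quick standalone sketch of what O_CLOEXEC buys us -- this uses inotify_init1 via ctypes and is not watchman's actual code: a descriptor created with IN_CLOEXEC is marked close-on-exec, so it can't leak into exec'd children.)

# Sketch (Linux-only, not watchman code): create an inotify instance with
# IN_CLOEXEC and confirm the fd is marked close-on-exec.
import ctypes
import fcntl
import os

libc = ctypes.CDLL('libc.so.6', use_errno=True)
IN_CLOEXEC = os.O_CLOEXEC  # IN_CLOEXEC has the same value as O_CLOEXEC

fd = libc.inotify_init1(IN_CLOEXEC)
if fd < 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

flags = fcntl.fcntl(fd, fcntl.F_GETFD)
print('close-on-exec set:', bool(flags & fcntl.FD_CLOEXEC))
os.close(fd)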

@sunshowers
Contributor

The sudo ls -l /proc/*/fd/* 2>/dev/null | grep -c anon_inode:inotify you ran -- what processes actually held inotify descriptors?

FWIW, we run with a max_user_instances of 128 and a max_user_watches of 1000000.
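
(Those limits live under /proc/sys/fs/inotify; for example, to read them from Python:)

# Sketch: print the kernel's inotify limits.
for knob in ('max_user_instances', 'max_user_watches', 'max_queued_events'):
    with open('/proc/sys/fs/inotify/' + knob) as f:
        print(knob, '=', f.read().strip())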

@dturner-tw
Contributor Author

(I'm going to answer your questions wrt 3.13.5 on virtualbox, because it's annoying to reboot my dev machine).

There are no triggers.

What's holding descriptors:
ps ax | egrep "$(sudo ls -l /proc/*/fd/* 2>/dev/null | cut -d / -f 3 | sort -u | grep -v self | tr '\n' '|' | sed 's/\(.*\)./\1/')"

1519 ? Ss 0:00 init --user
1613 ? Ss 0:00 dbus-daemon --fork --session --address=unix:abstract=/tmp/dbus-EeYjp63CNa
1619 ? Ss 0:00 upstart-event-bridge
1626 ? Ss 0:00 /usr/lib/x86_64-linux-gnu/hud/window-stack-bridge
1633 ? S 0:00 upstart-file-bridge --daemon --user
1635 ? S 0:00 upstart-dbus-bridge --daemon --session --user --bus-name session
1637 ? S 0:00 upstart-dbus-bridge --daemon --system --user --bus-name system
1640 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/bamf/bamfdaemon
1641 ? Ssl 0:08 /usr/bin/ibus-daemon --daemonize --xim
1658 ? Ssl 0:00 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
1664 ? Ssl 0:00 /usr/lib/x86_64-linux-gnu/hud/hud-service
1665 ? Sl 0:00 /usr/lib/gvfs/gvfsd
1667 ? Ssl 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher --launch-immediately
1668 ? Ssl 0:00 gnome-session --session=ubuntu
1673 ? Ssl 0:00 /usr/lib/unity/unity-panel-service
1678 ? S 0:00 /bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
1685 ? Sl 0:00 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
1688 ? Sl 0:00 /usr/lib/ibus/ibus-dconf
1689 ? Sl 0:01 /usr/lib/ibus/ibus-ui-gtk3
1693 ? Sl 0:00 /usr/lib/gvfs//gvfsd-fuse -f /run/user/1000/gvfs
1696 ? Sl 0:00 /usr/lib/ibus/ibus-x11 --kill-daemon
1736 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-printers-service
1745 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-power/indicator-power-service
1746 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-bluetooth/indicator-bluetooth-service
1751 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-application-service
1752 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-sound/indicator-sound-service
1753 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-sync/indicator-sync-service
1758 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-messages/indicator-messages-service
1762 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-session/indicator-session-service
1765 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-datetime-service
1770 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/indicator-keyboard-service --use-gtk --use-bamf
1776 ? S<l 0:00 /usr/bin/pulseaudio --start --log-target=syslog
1788 ? Sl 0:02 /usr/lib/ibus/ibus-engine-simple
1830 ? Sl 0:00 /usr/bin/gnome-screensaver --no-daemon
1851 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/notify-osd
1853 ? Sl 0:00 /usr/lib/dconf/dconf-service
1857 ? Sl 0:00 /usr/lib/evolution/evolution-source-registry
1858 ? Rl 1:00 compiz
1871 ? Sl 0:00 nautilus -n
1880 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper
1892 ? Sl 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
1903 ? Sl 0:00 nm-applet
1940 ? Sl 0:00 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
1943 ? Sl 0:00 /usr/lib/evolution/evolution-calendar-factory
1948 ? S 0:00 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2
1965 ? Sl 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
1978 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
1983 ? Sl 0:00 /usr/lib/gvfs/gvfs-mtp-volume-monitor
1994 ? Sl 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.7 /org/gtk/gvfs/exec_spaw/0
2003 ? Sl 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.7 /org/gtk/gvfs/exec_spaw/1
2007 ? Ss 0:00 /bin/sh -c /usr/bin/gtk-window-decorator
2008 ? Sl 0:00 /usr/bin/gtk-window-decorator
2012 ? Sl 0:00 telepathy-indicator
2026 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/unity-scope-home/unity-scope-home
2029 ? Sl 0:00 zeitgeist-datahub
2038 ? Sl 0:00 /usr/bin/zeitgeist-daemon
2045 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/zeitgeist-fts
2053 ? S 0:00 /bin/cat
2068 ? Sl 0:00 /usr/bin/unity-scope-loader applications/applications.scope applications/scopes.scope commands.scope
2070 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/unity-lens-files/unity-files-daemon
2095 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/unity-lens-music/unity-music-daemon
2154 ? Sl 0:00 /usr/bin/python3 /usr/share/unity-scopes/flickr/unity_flickr_daemon.py
2156 ? Sl 0:00 /usr/bin/python3 /usr/share/unity-scopes/picasa/unity_picasa_daemon.py
2157 ? Sl 0:00 /usr/bin/python3 /usr/share/unity-scopes/facebook/unity_facebook_daemon.py
2159 ? Sl 0:10 gnome-terminal
2166 pts/1 Ss 0:00 bash
2271 ? Sl 0:00 /usr/lib/gvfs/gvfsd-http --spawner :1.7 /org/gtk/gvfs/exec_spaw/2
5504 ? Sl 0:00 update-notifier
5545 pts/0 Ss+ 0:00 bash
5634 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/deja-dup/deja-dup-monitor
6058 pts/1 S+ 0:00 bash
6059 pts/1 S+ 0:00 bash

@sunshowers
Contributor

That seems like you aren't doing any filtering based on the kind of descriptor -- I'm particularly interested in processes that are holding inotify descriptors open. Basically the PIDs you get when you run sudo ls -l /proc/*/fd/* 2>/dev/null | grep anon_inode:inotify.
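
(Or, the same information gathered from Python -- a small sketch that walks /proc and reports, per process, how many inotify descriptors it holds; run it as root to see everything:)

# Sketch: list processes holding inotify descriptors by scanning /proc.
# Equivalent to the shell pipeline above.
import os

counts = {}
for pid in filter(str.isdigit, os.listdir('/proc')):
    fd_dir = '/proc/%s/fd' % pid
    try:
        for fd in os.listdir(fd_dir):
            try:
                if 'anon_inode:inotify' in os.readlink(os.path.join(fd_dir, fd)):
                    counts[pid] = counts.get(pid, 0) + 1
            except OSError:
                pass
    except OSError:
        continue

for pid, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    try:
        with open('/proc/%s/comm' % pid) as f:
            name = f.read().strip()
    except OSError:
        name = '?'
    print(pid, name, n)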

@dturner-tw
Contributor Author

Sorry about that -- this is the list if I add the grep anon_inode:inotify at the appropriate place in the previous command.

1577 ? Ss 0:00 init --user
1671 ? Ss 0:00 dbus-daemon --fork --session --address=unix:abstract=/tmp/dbus-cRInMU4p2A
1693 ? S 0:00 upstart-file-bridge --daemon --user
1697 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/bamf/bamfdaemon
1699 ? Ssl 0:01 /usr/bin/ibus-daemon --daemonize --xim
1720 ? Ssl 0:00 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
1727 ? S 0:00 /bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
1731 ? Ssl 0:00 gnome-session --session=ubuntu
1750 ? Sl 0:00 /usr/lib/ibus/ibus-dconf
1754 ? Sl 0:00 /usr/lib/ibus/ibus-ui-gtk3
1768 ? Sl 0:00 /usr/lib/ibus/ibus-x11 --kill-daemon
1877 ? S<l 0:00 /usr/bin/pulseaudio --start --log-target=syslog
1892 ? Sl 0:00 /usr/lib/ibus/ibus-engine-simple
1936 ? Sl 0:00 /usr/bin/gnome-screensaver --no-daemon
1965 ? Sl 0:00 /usr/lib/evolution/evolution-source-registry
1970 ? Rl 0:15 compiz
1980 ? Sl 0:00 nautilus -n
2050 ? Sl 0:00 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
2077 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor
2110 ? Sl 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.5 /org/gtk/gvfs/exec_spaw/0
2123 ? Ss 0:00 /bin/sh -c /usr/bin/gtk-window-decorator
2124 ? Sl 0:00 /usr/bin/gtk-window-decorator
2141 ? Sl 0:00 /usr/bin/unity-scope-loader applications/applications.scope applications/scopes.scope commands.scope
2143 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/unity-lens-files/unity-files-daemon
2160 ? Sl 0:00 zeitgeist-datahub
2185 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/unity-lens-music/unity-music-daemon
2214 ? Sl 0:01 gnome-terminal

@dturner-tw
Contributor Author

While running other simultaneous tests with watchman, I noticed the tests suddenly freezing up. I'm not quite sure of the precise sequence of events (because the testing was automated), but the tests were doing (watch, since query) one or more times, then deleting the watched directory. Now watchman is in a funny state: there appear to be 8 watchman processes running (usually there is only one). watchman watch-list hangs attempting to read from the socket, as does watchman watch. However, providing bad syntax (e.g. "watchman since /tmp/foo") doesn't hang, which suggests the problem isn't the socket itself but whatever's going on inside watchman.

And it's definitely a kernel bug -- I found this in my dmesg:

[152513.914195] watchman[4963]: segfault at 7ff04ddb09d0 ip 00007ff05b831f60 sp 00007ff04c5acce8 error 4 in libpthread-2.15.so[7ff05b825000+18000]
[152516.577861] watchman[6138]: segfault at 7f1962e099d0 ip 00007f1970489f60 sp 00007f1962406ce8 error 4 in libpthread-2.15.so[7f197047d000+18000]
[153010.703990] BUG: unable to handle kernel NULL pointer dereference at (null)
[153010.704036] IP: < (null)>
[153010.704060] PGD 1b1b4e067 PUD 1cc1f1067 PMD 0
[153010.704084] Oops: 0010 [#1] SMP
[153010.704103] Modules linked in: btrfs raid6_pq zlib_deflate xor ufs qnx4 hfsplus hfs minix ntfs msdos jfs xfs reiserfs usb_storage cdc_acm joydev pci_stub vboxpci(OF) vboxnetadp(OF) vboxnetflt(OF) vboxdrv(OF) bnep rfcomm bluetooth parport_pc ppdev uvcvideo videobuf2_core binfmt_misc videodev snd_hda_codec_hdmi snd_hda_codec_conexant videobuf2_vmalloc videobuf2_memops snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi arc4 snd_rawmidi iwldvm mac80211 snd_seq_midi_event snd_seq psmouse thinkpad_acpi snd_timer snd_seq_device iwlwifi nvram serio_raw snd tpm_tis cfg80211 soundcore snd_page_alloc mac_hid mei_me mei lpc_ich lp ext2 parport dm_crypt i915 drm_kms_helper e1000e wmi drm ptp pps_core ahci libahci sdhci_pci sdhci i2c_algo_bit video
[153010.704453] CPU: 1 PID: 3586 Comm: watchman Tainted: GF W O 3.11.0-17-generic #31~precise1-Ubuntu
[153010.704493] Hardware name: LENOVO 4177Q5U/4177Q5U, BIOS 83ET76WW (1.46 ) 07/05/2013
[153010.704529] task: ffff8801b6ae0000 ti: ffff88009ed30000 task.ti: ffff88009ed30000
[153010.704564] RIP: 0010:[<0000000000000000>] < (null)>
[153010.704600] RSP: 0018:ffff88009ed31dc0 EFLAGS: 00010246
[153010.704624] RAX: 00000000b98ab901 RBX: ffff88015a9b5228 RCX: 00000000000188d0
[153010.704655] RDX: 000000000000b98a RSI: ffff880100b95c00 RDI: ffff88015a9b5228
[153010.704686] RBP: ffff88009ed31dd8 R08: 0000000000000001 R09: ffffea0000d85640
[153010.704718] R10: ffffffff811f9628 R11: 0000000000000000 R12: ffff88015a9b5228
[153010.704750] R13: ffff880100b95ca0 R14: 00000000ffffffff R15: ffff880100b95c00
[153010.704783] FS: 00007f5b19087700(0000) GS:ffff88021e240000(0000) knlGS:0000000000000000
[153010.704819] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[153010.704846] CR2: 0000000000000000 CR3: 00000001dd9d1000 CR4: 00000000000427e0
[153010.704877] Stack:
[153010.704888] ffffffff811f7120 ffff88015a9b5228 ffff8801e460c7f0 ffff88009ed31e28
[153010.704926] ffffffff811f7847 ffff88009ed31e18 ffff880100b95c70 ffff88009ed31f50
[153010.704965] ffff880100b95c00 0000000000000010 ffff8802120433c0 ffff8802120433c0
[153010.705005] Call Trace:
[153010.705024] [] ? fsnotify_put_mark+0x30/0x40
[153010.705054] [] fsnotify_clear_marks_by_group_flags+0x87/0xb0
[153010.705088] [] fsnotify_clear_marks_by_group+0x13/0x20
[153010.705119] [] fsnotify_destroy_group+0x16/0x40
[153010.705150] [] inotify_release+0x26/0x50
[153010.705177] [] __fput+0xba/0x240
[153010.705201] [] ____fput+0xe/0x10
[153010.705226] [] task_work_run+0xc8/0xf0
[153010.706464] [] do_notify_resume+0xac/0xc0
[153010.707730] [] int_signal+0x12/0x17
[153010.708933] Code: Bad RIP value.
[153010.710089] RIP < (null)>
[153010.711251] RSP
[153010.712319] CR2: 0000000000000000
[153010.718669] ---[ end trace 17ed2927fe522cd1 ]---

@dturner-tw
Contributor Author

Killing all of the watchman processes appears to restore order to the universe, mostly, but watchman is now rather fragile -- I get a fair number of "synchronization failed: Connection timed out" errors when doing queries, which I normally don't; killing watchman makes them go away, but only for a little while.

@wez
Contributor

wez commented Mar 7, 2014

I don't think it is especially productive to try to sanity-check watchman behavior on a system with a broken inotify implementation :-/ Watchman relies on the filesystem notification layer of the system on which it runs; if that isn't working, then watchman is going to have a hard time telling you what you want to know.

If you can't get things running on a working kernel, then my suggestion would be to try running your tests on a different filesystem (maybe this is a filesystem-specific kernel bug?).

Regarding the blocking/stuck behavior you mentioned, it would be useful to see a gstack of the watchman server process and compare it against gstacks of the other watchman processes you saw (assuming those are acting in client mode; it's possible that they are just threads of the server process showing up in whatever tool you're using to inspect the system).
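
(For example, something along these lines would grab a stack from each watchman pid for comparison -- a sketch assuming gstack, which ships with gdb on most distros, is installed:)

# Sketch: dump a stack trace for every running watchman process via gstack.
# Run as the same user as watchman (or root) so gstack can attach.
import subprocess

pids = subprocess.check_output(['pgrep', 'watchman']).decode().split()
for pid in pids:
    print('=== watchman pid %s ===' % pid)
    print(subprocess.check_output(['gstack', pid]).decode())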

@wez
Contributor

wez commented Mar 31, 2014

I'm going to close this out because it really seems to be a kernel problem and not something we can solve in watchman. Sorry!

@wez wez closed this as completed Mar 31, 2014
@dturner-tw
Contributor Author

FWIW, this seems to be fixed on 3.14-rc5.


facebook-github-bot pushed a commit that referenced this issue Jul 16, 2020
Summary:
Pull Request resolved: facebook/sapling#27

Pull Request resolved: facebookexperimental/rust-shed#9

Original diffs: D22417488 (f4de30f), D22528869 (d90ddb5)

Reviewed By: markbt

Differential Revision: D22571972

fbshipit-source-id: c6f013565680a757b642dd79e647207fce3351ec
facebook-github-bot pushed a commit that referenced this issue Oct 7, 2022
Summary:
We have seen deadlock running `terminationHandler` -> `hasSubscribers` in 2 threads.
It's unclear which other thread is holding the lock.

To make things easier to debug next time, let's change terminationHandler (and
also main.cpp) to bypass the logging lock and write to stderr directly.
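
(Illustrative only, and not the actual C++ change in SignalHandler.cpp / main.cpp: the idea is that a termination/crash handler should write to fd 2 with a plain write() rather than going through a logger that may need a lock. A Python analogue of that pattern:)

# Python analogue of the idea: in a termination handler, bypass any
# lock-taking logging machinery and write straight to stderr.
import os
import sys

def termination_handler(message):
    # os.write on fd 2 is a thin wrapper around write(2) and does not take
    # a logging framework's locks.
    os.write(2, (message + '\n').encode())

sys.excepthook = lambda tp, val, tb: termination_handler(
    'fatal: %s: %s' % (tp.__name__, val))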

Related threads (all threads in P536343453):

  Thread 11 (LWP 3275661):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7f1c149f8928, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002eca798 in watchman::Log::log<char const (&) [39], char const*, char const (&) [3]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9ba7 in watchman::log<char const (&) [39], char const*, char const (&) [3]> (level=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:159
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000001c38c8b in __clang_call_terminate ()
  #22 0x0000000003284c9e in folly::detail::terminate_with_<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:93
  #23 0x0000000003281bae in folly::terminate_with<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:123
  #24 folly::SingletonVault::fireShutdownTimer (this=<optimized out>) at fbcode/folly/Singleton.cpp:499
  #25 0x0000000003281ad9 in folly::(anonymous namespace)::fireShutdownSignalHelper (sigval=...) at fbcode/folly/Singleton.cpp:454
  #26 0x00007f1c3b42b939 in timer_sigev_thread (arg=<optimized out>) at ../sysdeps/unix/sysv/linux/timer_routines.c:55
  #27 0x00007f1c3b41fc0f in start_thread (arg=<optimized out>) at pthread_create.c:434
  #28 0x00007f1c3b4b21dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

  ...

  Thread 1 (LWP 3201992):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7ffd2d5be968, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002ecac20 in watchman::Log::log<char const (&) [59]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9b24 in watchman::log<char const (&) [59]> (level=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:165
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000002d8cde1 in std::thread::~thread (this=0x7f1c3ac2ef90) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:152
  #22 0x00007f1c3b3cc8f8 in __run_exit_handlers (status=1, listp=0x7f1c3b598658 <__exit_funcs>, run_list_atexit=<optimized out>, run_dtors=<optimized out>) at exit.c:113
  #23 0x00007f1c3b3cca0a in __GI_exit (status=<optimized out>) at exit.c:143
  #24 0x00007f1c3b3b165e in __libc_start_call_main (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78) at ../sysdeps/nptl/libc_start_call_main.h:74
  #25 0x00007f1c3b3b1718 in __libc_start_main_impl (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd2d5bec68) at ../csu/libc-start.c:409
  #26 0x0000000002d0e181 in _start () at ../sysdeps/x86_64/start.S:116

Reviewed By: xavierd

Differential Revision: D40166374

fbshipit-source-id: 7017e20234e5e0a9532eb61a63ac49ac0020d443