backport: bitcoin#19160, #21663, #21669, #21732, #21738, #21750, #21775, #21812 #6143
Conversation
This pull request has conflicts, please rebase.
b738155 ci: Fix macOS brew install command (Hennadii Stepanov) Pull request description: A solution for https://bintray.com shutdown. Details: Homebrew/discussions#691. ACKs for top commit: jamesob: ACK bitcoin@b738155 Tree-SHA512: 4570c297421ae66043348acbe2f1e50b000e700406b38bc813c80714d4f109c6cadecff571ecbd8c2f0094ebdc959486fc55022ba6545bd592683a421e9c0655
…ons by bumping to clang-12 fadea0b Revert "test: Add tsan supp for leveldb::DBImpl::DeleteObsoleteFiles" (MarcoFalke) fadbd99 test: Remove spurious double lock tsan suppressions by bumping to clang-12 (MarcoFalke) Pull request description: The double lock warnings appeared in bitcoin#19041, but they didn't make any sense. Also, our sync module would detect double locks, if there were any. Bumping to clang-12 allows us to remove the spurious suppressions needed to run the tests, so do that. ACKs for top commit: practicalswift: cr ACK fadea0b assuming CI passes and more specifically that newer Clang agrees that these TSan suppressions are no longer needed. Tree-SHA512: c411221a4b74d0af6ca8d686639b4f40b41c15906ccbb6647e8d569d6ab088264faafe075e1ac9523d5c0024b85f15a597bb3eedc7f07d4f5816245f75cfc08b
This code is not supposed to ever be added in Dash, but it is removed in bitcoin#21009, which is DNM. Let's just do it here.
They run the multiprocess build for ALL PRs in Travis since that commit:
```diff
- if: type != pull_request OR commit_message =~ /depends:|multiprocess:/ # Skip on non-depends, non-multiprocess PRs
```
Looks good, a few notes:
84934bf multiprocess: Add echoipc RPC method and test (Russell Yanofsky) 7d76cf6 multiprocess: Add comments and documentation (Russell Yanofsky) ddf7ecc multiprocess: Add bitcoin-node process spawning support (Russell Yanofsky) 10afdf0 multiprocess: Add Ipc interface implementation (Russell Yanofsky) 745c9ce multiprocess: Add Ipc and Init interface definitions (Russell Yanofsky) 5d62d7f Update libmultiprocess library (Russell Yanofsky) Pull request description: This PR is part of the [process separation project](https://github.com/bitcoin/bitcoin/projects/10). --- This PR adds basic process spawning and IPC method call support to `bitcoin-node` executables built with `--enable-multiprocess`[*]. These changes are used in bitcoin#10102 to let node, gui, and wallet functionality run in different processes, and extended in bitcoin#19460 and bitcoin#19461 after that to allow gui and wallet processes to be started and stopped independently and connect to the node over a socket. These changes can also be used to implement new functionality outside the `bitcoin-node` process like external indexes or pluggable transports (bitcoin#18988). The `Ipc::spawnProcess` and `Ipc::serveProcess` methods added here are entry points for spawning a child process and serving a parent process, and being able to make bidirectional, multithreaded method calls between the processes. A simple example of this is implemented in commit "Add echoipc RPC method and test." Changes in this PR aside from the echo test were originally part of bitcoin#10102, but have been split and moved here for easier review, and so they can be used for other applications like external plugins. Additional notes about this PR can be found at https://bitcoincore.reviews/19160 [*] Note: the `--enable-multiprocess` feature is still experimental, and not enabled by default, and not yet supported on windows. More information can be found in [doc/multiprocess.md](https://github.com/bitcoin/bitcoin/blob/master/doc/multiprocess.md) ACKs for top commit: fjahr: re-ACK 84934bf ariard: ACK 84934bf. Changes since last ACK fixes the silent merge conflict about `EnsureAnyNodeContext()`. Rebuilt and checked again debug command `echoipc`. Tree-SHA512: 52a948b5e18a26d7d7a09b83003eaae9b1ed2981978c36c959fe9a55abf70ae6a627c4ff913a3428be17400a3dace30c58b5057fa75c319662c3be98f19810c6
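For readers new to the multiprocess work, the spawn/serve flow described above can be sketched as below. This is purely illustrative: the real `interfaces::Ipc` / libmultiprocess API has different signatures, and every name in the sketch (`Echo`, the `Ipc` stand-in, `EchoViaChild`) is a hypothetical placeholder, not the actual Bitcoin Core interface.

```cpp
// Purely illustrative shape of the spawn/serve flow; treat every name as a
// hypothetical placeholder rather than the actual interfaces::Ipc API.
#include <memory>
#include <string>

struct Echo {                       // capability exposed by the child process
    virtual ~Echo() = default;
    virtual std::string echo(const std::string& s) = 0;
};

struct Ipc {                        // hypothetical stand-in for interfaces::Ipc
    virtual ~Ipc() = default;
    // Parent side: start the given binary and return a proxy whose method
    // calls are forwarded to the child over a socket pair.
    virtual std::unique_ptr<Echo> spawnProcess(const std::string& exe) = 0;
    // Child side: detect IPC startup arguments, serve incoming calls on the
    // inherited descriptor, and return only when the parent disconnects.
    virtual bool serveProcess(int argc, char* argv[], int& exit_status) = 0;
};

// Parent-process usage, roughly what the echoipc RPC test exercises:
std::string EchoViaChild(Ipc& ipc, const std::string& msg)
{
    auto proxy = ipc.spawnProcess("bitcoin-node"); // spawn child executable
    return proxy->echo(msg);                       // bidirectional IPC round trip
}
```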
…sion fa00bb2 test: Add missing shift-base:nanobench.h suppression (MarcoFalke) 0000456 ci: Use clang-12 for asan task (MarcoFalke) Pull request description: ACKs for top commit: fanquake: ACK fa00bb2 Tree-SHA512: fe7cd1ad9f3e73c09f7f84dfb0f276d0cda603c4d591b9338a0914bf1276b0247fd2faee7052f5962c3ae3280e7fa8b72f5b773b84c2a8882a89ed1f8c08256c
9096b13 net: remove unnecessary check of CNode::cs_vSend (Vasil Dimov) Pull request description: It is not possible to have a node in `CConnman::vNodesDisconnected` and its reference count to be incremented - all `CNode::AddRef()` are done either before the node is added to `CConnman::vNodes` or while holding `CConnman::cs_vNodes` and the object being in `CConnman::vNodes`. So, the object being in `CConnman::vNodesDisconnected` and its reference count being zero means that it is not and will not start to be used by other threads. So, the lock of `CNode::cs_vSend` in `CConnman::DisconnectNodes()` will always succeed and is not necessary. Indeed all locks of `CNode::cs_vSend` are done either when the reference count is >0 or under the protection of `CConnman::cs_vNodes` and the node being in `CConnman::vNodes`. ACKs for top commit: MarcoFalke: review ACK 9096b13 🏧 jnewbery: utACK 9096b13 Tree-SHA512: 910899cdcdc8934642eb0c40fcece8c3b01b7e20a0b023966b9d6972db6a885cb3a9a04e9562bae14d5833967e45e2ecb3687b94d495060c3da4b1f2afb0ac8f
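A minimal, hypothetical sketch of the invariant this argument relies on (not the actual `CConnman`/`CNode` code; all names are stand-ins): references may only be taken while the list mutex is held and the node is still in the connected list, so a node sitting in the disconnected list with a zero reference count cannot gain new users and can be reaped without taking its per-node locks.

```cpp
// Simplified sketch of the refcount invariant described above; the real
// CConnman/CNode code differs. AddRef()-style increments only happen while
// the node is still in `nodes` and `cs_nodes` is held, so refcount == 0 for a
// node in the disconnected list means no other thread can be using it.
#include <algorithm>
#include <memory>
#include <mutex>
#include <vector>

struct Node {
    int refcount{0};
    // ... send buffers normally guarded by a per-node send mutex ...
};

std::mutex cs_nodes;                                    // guards both vectors below
std::vector<std::shared_ptr<Node>> nodes;               // "connected" set
std::vector<std::shared_ptr<Node>> nodes_disconnected;  // awaiting deletion

// References may only be taken under cs_nodes while the node is in `nodes`.
std::shared_ptr<Node> AcquireNode(size_t i) {
    std::lock_guard<std::mutex> lock(cs_nodes);
    if (i >= nodes.size()) return nullptr;
    nodes[i]->refcount++;  // AddRef() equivalent
    return nodes[i];
}

void DisconnectAndReap() {
    std::lock_guard<std::mutex> lock(cs_nodes);
    // Move everything to the disconnected list...
    for (auto& n : nodes) nodes_disconnected.push_back(n);
    nodes.clear();
    // ...and drop nodes whose refcount already fell to zero. No per-node lock
    // is needed: new references require the node to still be in `nodes`.
    nodes_disconnected.erase(
        std::remove_if(nodes_disconnected.begin(), nodes_disconnected.end(),
                       [](const std::shared_ptr<Node>& n) { return n->refcount == 0; }),
        nodes_disconnected.end());
}
```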
fac96d0 p2p: Limit m_block_inv_mutex (MarcoFalke) Pull request description: Keeping the lock longer than needed is confusing to reviewers and thread analysis. For example, keeping the lock while appending tx-invs, which requires the mempool lock, will tell thread analysis tools an incorrect lock order of `(1) m_block_inv_mutex, (2) pool.cs`. ACKs for top commit: Crypt-iQ: crACK fac96d0 jnewbery: utACK fac96d0 theStack: Code-Review ACK fac96d0 Tree-SHA512: fcfac0f1f8b16df7522513abf716b2eed3d2fc9153f231c8cb61f451e342f29c984a5c872deca6bab3e601e5d651874cc229146c9370e46811b4520747a21f2b
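As a hedged illustration of the "limit the lock scope" idea (the names below are stand-ins, not the actual net_processing members): grab the queued block invs under the inv mutex, release it, and only then touch the mempool, so no `inv mutex -> mempool mutex` ordering is ever recorded by lock-order analysis.

```cpp
// Sketch only: hold the inv mutex just long enough to take the queued block
// invs, then release it before any mempool access.
#include <mutex>
#include <vector>

std::mutex block_inv_mutex;
std::mutex mempool_mutex;
std::vector<int> block_invs_to_send;  // guarded by block_inv_mutex

void SendInvs()
{
    std::vector<int> block_invs;
    {
        // Narrow scope: only copy out the queued block invs here.
        std::lock_guard<std::mutex> lock(block_inv_mutex);
        block_invs.swap(block_invs_to_send);
    }  // released here, before any mempool access

    {
        // Tx invs need the mempool lock; because the inv mutex is no longer
        // held, no (block_inv_mutex -> mempool_mutex) ordering is recorded.
        std::lock_guard<std::mutex> lock(mempool_mutex);
        // ... build tx invs from the mempool ...
    }
    // ... push both sets of invs to the peer ...
}
```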
fa44f51 ci: Clarify that previous_releases task is using DEBUG (MarcoFalke) fad0f21 ci: Use clang in multiprocess task to avoid OOM (MarcoFalke) faeabef ci: Enable D_GLIBCXX_DEBUG for multiprocess task (MarcoFalke) Pull request description: Enable `-D_GLIBCXX_DEBUG` via the depends `DEBUG` flag. Also `--enable-debug` to get debug symbols in traces. ACKs for top commit: hebasto: ACK fa44f51, I have reviewed the code and it looks OK, I agree it can be merged, and CI is green. Tree-SHA512: ab2a216bb44ee462f9dd181ec9025962502bd4201a1118ff52b6a193398e7ea3ca465a45a5eb341e308758fc3ef34ea3521f8a1f85ed64478ef3c1f6c1b8b141
net_processing should not lock mempool directly, otherwise it causes wrong-order-lock: Assertion failed: detected inconsistent lock order for 'cs' in txmempool.cpp:1179 (in thread 'msghand'), details in debug log. Posix Signal: Aborted 0#: (0x60322D700254) stl_vector.h:115 - std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data::_M_copy_data(std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data const&) 1#: (0x60322D700254) stl_vector.h:127 - std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data::_M_swap_data(std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data&) 2#: (0x60322D700254) stl_vector.h:1959 - std::vector<unsigned long, std::allocator<unsigned long> >::_M_move_assign(std::vector<unsigned long, std::allocator<unsigned long> >&&, std::integral_constant<bool, true>) 3#: (0x60322D700254) stl_vector.h:768 - std::vector<unsigned long, std::allocator<unsigned long> >::operator=(std::vector<unsigned long, std::allocator<unsigned long> >&&) 4#: (0x60322D700254) stacktraces.cpp:777 - HandlePosixSignal 5#: (0x7FF820442990) libc_sigaction.c - ??? 6#: (0x7FF820499A1B) pthread_kill.c:44 - __pthread_kill_implementation 7#: (0x7FF820499A1B) pthread_kill.c:78 - __pthread_kill_internal 8#: (0x7FF820499A1B) pthread_kill.c:89 - __GI___pthread_kill 9#: (0x7FF8204428E6) raise.c:27 - __GI_raise 10#: (0x7FF8204268B7) abort.c:81 - __GI_abort 11#: (0x60322D705D56) basic_string.h:390 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_check_length(unsigned long, unsigned long, char const*) const 12#: (0x60322D705D56) basic_string.h:1461 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::append(char const*) 13#: (0x60322D705D56) basic_string.h:1365 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::operator+=(char const*) 14#: (0x60322D705D56) sync.cpp:114 - potential_deadlock_detected 15#: (0x60322D70C7DE) sync.cpp:188 - push_lock<std::recursive_mutex> 16#: (0x60322D70C7DE) sync.cpp:212 - void EnterCritical<std::recursive_mutex>(char const*, char const*, int, std::recursive_mutex*, bool) 17#: (0x60322CFAA98C) unique_lock.h:150 - std::unique_lock<std::recursive_mutex>::try_lock() 18#: (0x60322CFAA98C) sync.h:162 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::Enter(char const*, char const*, int) 19#: (0x60322D243A99) hasher.h:22 - SaltedTxidHasher::operator()(uint256 const&) const 20#: (0x60322D243A99) hashed_index.hpp:1605 - boost::multi_index::detail::hashed_index_iterator<boost::multi_index::detail::hashed_index_node<boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::index_node_base<CTxMemPoolEntry, std::allocator<CTxMemPoolEntry> > > > > >, boost::multi_index::detail::bucket_array<std::allocator<CTxMemPoolEntry> >, boost::multi_index::detail::hashed_unique_tag, boost::multi_index::detail::hashed_index_global_iterator_tag> boost::multi_index::detail::hashed_index<mempoolentry_txid, SaltedTxidHasher, std::equal_to<uint256>, boost::multi_index::detail::nth_layer<1, CTxMemPoolEntry, boost::multi_index::indexed_by<boost::multi_index::hashed_unique<mempoolentry_txid, SaltedTxidHasher, 
mpl_::na, mpl_::na>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<descendant_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByDescendantScore>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<entry_time, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByEntryTime>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<ancestor_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByAncestorFee>, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, std::allocator<CTxMemPoolEntry> >, boost::mpl::vector0<mpl_::na>, boost::multi_index::detail::hashed_unique_tag>::find<uint256, SaltedTxidHasher, std::equal_to<uint256> >(uint256 const&, SaltedTxidHasher const&, std::equal_to<uint256> const&, mpl_::bool_<false>) const 21#: (0x60322D243A99) hashed_index.hpp:541 - boost::multi_index::detail::hashed_index_iterator<boost::multi_index::detail::hashed_index_node<boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::index_node_base<CTxMemPoolEntry, std::allocator<CTxMemPoolEntry> > > > > >, boost::multi_index::detail::bucket_array<std::allocator<CTxMemPoolEntry> >, boost::multi_index::detail::hashed_unique_tag, boost::multi_index::detail::hashed_index_global_iterator_tag> boost::multi_index::detail::hashed_index<mempoolentry_txid, SaltedTxidHasher, std::equal_to<uint256>, boost::multi_index::detail::nth_layer<1, CTxMemPoolEntry, boost::multi_index::indexed_by<boost::multi_index::hashed_unique<mempoolentry_txid, SaltedTxidHasher, mpl_::na, mpl_::na>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<descendant_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByDescendantScore>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<entry_time, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByEntryTime>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<ancestor_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, 
CompareTxMemPoolEntryByAncestorFee>, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, std::allocator<CTxMemPoolEntry> >, boost::mpl::vector0<mpl_::na>, boost::multi_index::detail::hashed_unique_tag>::find<uint256, SaltedTxidHasher, std::equal_to<uint256> >(uint256 const&, SaltedTxidHasher const&, std::equal_to<uint256> const&) const 22#: (0x60322D243A99) hashed_index.hpp:531 - boost::multi_index::detail::hashed_index_iterator<boost::multi_index::detail::hashed_index_node<boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::ordered_index_node<boost::multi_index::detail::null_augment_policy, boost::multi_index::detail::index_node_base<CTxMemPoolEntry, std::allocator<CTxMemPoolEntry> > > > > >, boost::multi_index::detail::bucket_array<std::allocator<CTxMemPoolEntry> >, boost::multi_index::detail::hashed_unique_tag, boost::multi_index::detail::hashed_index_global_iterator_tag> boost::multi_index::detail::hashed_index<mempoolentry_txid, SaltedTxidHasher, std::equal_to<uint256>, boost::multi_index::detail::nth_layer<1, CTxMemPoolEntry, boost::multi_index::indexed_by<boost::multi_index::hashed_unique<mempoolentry_txid, SaltedTxidHasher, mpl_::na, mpl_::na>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<descendant_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByDescendantScore>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<entry_time, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByEntryTime>, boost::multi_index::ordered_non_unique<boost::multi_index::tag<ancestor_score, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, boost::multi_index::identity<CTxMemPoolEntry>, CompareTxMemPoolEntryByAncestorFee>, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, std::allocator<CTxMemPoolEntry> >, boost::mpl::vector0<mpl_::na>, boost::multi_index::detail::hashed_unique_tag>::find<uint256>(uint256 const&) const 23#: (0x60322D243A99) txmempool.cpp:1180 - CTxMemPool::CompareDepthAndScore(uint256 const&, uint256 const&) 24#: (0x60322CFFE799) stl_heap.h:140 - __adjust_heap<__gnu_cxx::__normal_iterator<std::_Rb_tree_const_iterator<uint256>*, std::vector<std::_Rb_tree_const_iterator<uint256> > >, long int, std::_Rb_tree_const_iterator<uint256>, __gnu_cxx::__ops::_Iter_comp_iter<(anonymous namespace)::CompareInvMempoolOrder> > 25#: (0x60322D0241D7) stl_heap.h:358 - __make_heap<__gnu_cxx::__normal_iterator<std::_Rb_tree_const_iterator<uint256>*, std::vector<std::_Rb_tree_const_iterator<uint256> > >, __gnu_cxx::__ops::_Iter_comp_iter<(anonymous namespace)::CompareInvMempoolOrder> > 26#: (0x60322D0241D7) stl_heap.h:413 - 
make_heap<__gnu_cxx::__normal_iterator<std::_Rb_tree_const_iterator<uint256>*, std::vector<std::_Rb_tree_const_iterator<uint256> > >, (anonymous namespace)::CompareInvMempoolOrder> 27#: (0x60322D0241D7) net_processing.cpp:5728 - SendMessages 28#: (0x60322CFD00C3) sync.h:199 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::~UniqueLock() 29#: (0x60322CFD00C3) net.cpp:3181 - CConnman::ThreadMessageHandler() 30#: (0x60322D75C3D5) basic_string.h:639 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string<std::allocator<char> >(char const*, std::allocator<char> const&) 31#: (0x60322D75C3D5) thread.cpp:19 - util::TraceThread(char const*, std::function<void ()>) 32#: (0x60322CFC2C7C) std_function.h:243 - std::_Function_base::~_Function_base() 33#: (0x60322CFC2C7C) std_function.h:334 - std::function<void ()>::~function() 34#: (0x60322CFC2C7C) invoke.h:61 - __invoke_impl<void, void (*)(char const*, std::function<void()>), char const*, CConnman::Start(CDeterministicMNManager&, CMasternodeMetaMan&, CMasternodeSync&, CScheduler&, const Options&)::<lambda()> > 35#: (0x60322CFC2C7C) invoke.h:96 - __invoke<void (*)(char const*, std::function<void()>), char const*, CConnman::Start(CDeterministicMNManager&, CMasternodeMetaMan&, CMasternodeSync&, CScheduler&, const Options&)::<lambda()> > 36#: (0x60322CFC2C7C) std_thread.h:292 - _M_invoke<0, 1, 2> 37#: (0x60322CFC2C7C) std_thread.h:299 - operator() 38#: (0x60322CFC2C7C) std_thread.h:244 - _M_run 39#: (0x7FF8208E6333) <unknown-file> - ???
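The abort above comes from the debug-build lock-order checker (`potential_deadlock_detected` in sync.cpp): the message-handler thread acquires the inventory mutex and then the mempool's `cs`, while another code path acquires them in the opposite order. A minimal, self-contained example of the pattern such a checker (or ThreadSanitizer) flags, with hypothetical mutex names standing in for the real ones:

```cpp
// Minimal illustration of an inconsistent lock order; the mutex names are
// hypothetical stand-ins for m_tx_inventory_mutex and CTxMemPool::cs. A
// lock-order checker or TSan reports the opposite acquisition orders in the
// two threads as a potential deadlock even if no run actually deadlocks.
#include <mutex>
#include <thread>

std::mutex inv_mutex;      // stands in for m_tx_inventory_mutex
std::mutex mempool_mutex;  // stands in for the mempool's cs

void message_handler() {
    std::lock_guard<std::mutex> a(inv_mutex);      // (1) inv
    std::lock_guard<std::mutex> b(mempool_mutex);  // (2) mempool -> order: inv, mempool
}

void rpc_thread() {
    std::lock_guard<std::mutex> a(mempool_mutex);  // (1) mempool
    std::lock_guard<std::mutex> b(inv_mutex);      // (2) inv     -> order: mempool, inv
}

int main() {
    std::thread t1(message_handler);
    std::thread t2(rpc_thread);
    t1.join();
    t2.join();
}
```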
It fixes potentiall deadlock: Assertion failed: detected inconsistent lock order for 'peer.m_tx_relay->m_tx_inventory_mutex' in net_processing.cpp:971 (in thread 'httpworker.0'), details in debug log. Posix Signal: Aborted 0#: (0x5BE0A9B78F04) stl_vector.h:115 - std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data::_M_copy_data(std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data const&) 1#: (0x5BE0A9B78F04) stl_vector.h:127 - std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data::_M_swap_data(std::_Vector_base<unsigned long, std::allocator<unsigned long> >::_Vector_impl_data&) 2#: (0x5BE0A9B78F04) stl_vector.h:1959 - std::vector<unsigned long, std::allocator<unsigned long> >::_M_move_assign(std::vector<unsigned long, std::allocator<unsigned long> >&&, std::integral_constant<bool, true>) 3#: (0x5BE0A9B78F04) stl_vector.h:768 - std::vector<unsigned long, std::allocator<unsigned long> >::operator=(std::vector<unsigned long, std::allocator<unsigned long> >&&) 4#: (0x5BE0A9B78F04) stacktraces.cpp:777 - HandlePosixSignal 5#: (0x733859C42990) libc_sigaction.c - ??? 6#: (0x733859C99A1B) pthread_kill.c:44 - __pthread_kill_implementation 7#: (0x733859C99A1B) pthread_kill.c:78 - __pthread_kill_internal 8#: (0x733859C99A1B) pthread_kill.c:89 - __GI___pthread_kill 9#: (0x733859C428E6) raise.c:27 - __GI_raise 10#: (0x733859C268B7) abort.c:81 - __GI_abort 11#: (0x5BE0A9B7EA06) basic_string.h:390 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_check_length(unsigned long, unsigned long, char const*) const 12#: (0x5BE0A9B7EA06) basic_string.h:1461 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::append(char const*) 13#: (0x5BE0A9B7EA06) basic_string.h:1365 - std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::operator+=(char const*) 14#: (0x5BE0A9B7EA06) sync.cpp:114 - potential_deadlock_detected 15#: (0x5BE0A9B8548E) sync.cpp:188 - push_lock<std::recursive_mutex> 16#: (0x5BE0A9B8548E) sync.cpp:212 - void EnterCritical<std::recursive_mutex>(char const*, char const*, int, std::recursive_mutex*, bool) 17#: (0x5BE0A935C582) unique_lock.h:150 - std::unique_lock<std::recursive_mutex>::try_lock() 18#: (0x5BE0A935C582) sync.h:162 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::Enter(char const*, char const*, int) 19#: (0x5BE0A935C582) sync.h:183 - UniqueLock<AnnotatedMixin<std::recursive_mutex>, std::unique_lock<std::recursive_mutex> >::UniqueLock(AnnotatedMixin<std::recursive_mutex>&, char const*, char const*, int, bool) 20#: (0x5BE0A9487A92) net_processing.cpp:972 - PushInv 21#: (0x5BE0A94896E5) shared_ptr_base.h:1070 - std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count() 22#: (0x5BE0A94896E5) shared_ptr_base.h:1524 - ~__shared_ptr 23#: (0x5BE0A94896E5) shared_ptr.h:175 - ~shared_ptr 24#: (0x5BE0A94896E5) net_processing.cpp:2276 - operator() 25#: (0x5BE0A94896E5) net.h:1051 - ForEachNode<CConnman::CFullyConnectedOnly, (anonymous namespace)::PeerManagerImpl::RelayInv(CInv&, int)::<lambda(CNode*)>&> 26#: (0x5BE0A94896E5) net.h:1058 - ForEachNode<(anonymous namespace)::PeerManagerImpl::RelayInv(CInv&, int)::<lambda(CNode*)> > 27#: (0x5BE0A94896E5) net_processing.cpp:2269 - RelayInv 28#: (0x5BE0A98B7C03) blockprocessor.cpp:683 - llmq::CQuorumBlockProcessor::AddMineableCommitment(llmq::CFinalCommitment const&) 29#: (0x5BE0A98BE2E1) blockprocessor.cpp:338 - 
llmq::CQuorumBlockProcessor::UndoBlock(CBlock const&, gsl::not_null<CBlockIndex const*>) 30#: (0x5BE0A9809EA9) specialtxman.cpp:264 - CSpecialTxProcessor::UndoSpecialTxsInBlock(CBlock const&, CBlockIndex const*, std::optional<MNListUpdates>&) 31#: (0x5BE0A96FFB84) validation.cpp:1693 - CChainState::DisconnectBlock(CBlock const&, CBlockIndex const*, CCoinsViewCache&) 32#: (0x5BE0A9701481) validation.cpp:2726 - CChainState::DisconnectTip(BlockValidationState&, DisconnectedBlockTransactions*)
utACK f4cb0fb
utACK f4cb0fb
… with Mutex, avoid data race, partial bitcoin#22868 87d775d partial bitcoin#22868: Call load handlers without cs_wallet locked (Kittywhiskers Van Gogh) c602ca1 coinjoin: protect `m_wallet_manager_map` with `cs_wallet_manager_map` (Kittywhiskers Van Gogh) Pull request description: ## Additional Information This pull request aims to deal with regressions ([build](https://gitlab.com/dashpay/dash/-/jobs/7550859495)) spotted in `develop` after the merger of [dash#6143](#6143), namely, a failure to build the multiprocess variant and two data races, one involving `ChainstateManager::m_best_header` and another involving `CoinJoinWalletManager::m_wallet_manager_map`. * Fixes for the build failure and the first data race (b136742 and 9e7c685) have been spun off into [dash#6199](#6199) and merged * The second data race is avoided by protecting `m_wallet_manager_map` with a new `RecursiveMutex`, `cs_wallet_manager_map` (the contents of this PR). ~~A version of these changes is available using a regular `Mutex` but proves far more cumbersome ([source](b89457d)) when taking practicalities into account ([comment](#6192 (comment)))~~ A `Mutex` version is now available courtesy of UdjinM6 (see [patch](ecf5c1b)), thanks! ## Checklist: - [x] I have performed a self-review of my own code - [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)** - [x] I have added or updated relevant unit/integration/functional/e2e tests - [x] I have made corresponding changes to the documentation **(note: N/A)** - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_ ACKs for top commit: UdjinM6: light-ACK 87d775d Tree-SHA512: 52545a1547357a45fb6bf4d3948fc75a5e88f1d86e9809b1060007cd74dd383249691d8d9777f647c7534d458fe126ae818fa2b47f0e5b0cee78a469af1ed0c9
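The fix boils down to routing every access to the manager map through one dedicated mutex. A generic sketch of that pattern follows, using `std::mutex` and placeholder names; the actual change uses the project's `RecursiveMutex`/`Mutex` types and thread-safety annotations rather than exactly what is shown here.

```cpp
// Generic sketch of guarding a manager map with its own mutex; the real
// CoinJoinWalletManager uses the project's Mutex/RecursiveMutex types and
// GUARDED_BY annotations, so treat the names below as placeholders.
#include <map>
#include <memory>
#include <mutex>
#include <string>

class WalletManager { /* ... per-wallet CoinJoin state ... */ };

class CoinJoinWalletManagerSketch {
public:
    void Add(const std::string& name) {
        std::lock_guard<std::mutex> lock(cs_wallet_manager_map);
        m_wallet_manager_map.emplace(name, std::make_unique<WalletManager>());
    }
    void Remove(const std::string& name) {
        std::lock_guard<std::mutex> lock(cs_wallet_manager_map);
        m_wallet_manager_map.erase(name);
    }
    template <typename Fn>
    void ForEach(Fn&& fn) {
        std::lock_guard<std::mutex> lock(cs_wallet_manager_map);
        for (auto& [name, mgr] : m_wallet_manager_map) fn(name, *mgr);
    }

private:
    mutable std::mutex cs_wallet_manager_map;  // every access above takes this
    std::map<std::string, std::unique_ptr<WalletManager>> m_wallet_manager_map;
};
```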
… builds 27d9763 fix: add `linux64_multiprocess` `BUILD_TARGET` to matrix, mend C(XX) (Kittywhiskers Van Gogh) 26cc5a1 ci: use underscore to separate variant name from target triple (Kittywhiskers Van Gogh) d0131a5 trivial: sort `BUILD_TARGET` on GitHub and in `matrix.sh` alphabetically (Kittywhiskers Van Gogh) 4f1b5c1 merge bitcoin#28954: Reduce use of bash -c (Kittywhiskers Van Gogh) a49162f merge bitcoin#27314: Fix handling of `CXX=clang++` when building `qt` package (Kittywhiskers Van Gogh) Pull request description: ## Additional Information * [bitcoin#27314](bitcoin#27314) has been backported in this PR as [bitcoin#25838](bitcoin#25838) (backported in [dash#6384](#6384)) broke Clang depends builds. * [bitcoin#28954](bitcoin#28954) has been backported to fix a problem associated with multiprocess runs ([build](https://gitlab.com/dashpay/dash/-/jobs/8396677312#L2921)). * Support for multiprocess builds were enabled _proper_ in [dash#6143](#6143) but unfortunately, the configuration params for multiprocess builds were not processed by CI as the build variant was not added to `matrix.sh` ([source](https://github.com/dashpay/dash/blob/6a51ab271dd5b1b839d754337abbf58a99cbdd21/ci/dash/matrix.sh)). This is evident by comparing two variants with Boost::Process enablement (`--with-boost-process`), `linux64_fuzz` ([source](https://github.com/dashpay/dash/blob/6a51ab271dd5b1b839d754337abbf58a99cbdd21/ci/test/00_setup_env_native_fuzz.sh#L19)) and `linux64_multiprocess` ([source](https://github.com/dashpay/dash/blob/6a51ab271dd5b1b839d754337abbf58a99cbdd21/ci/test/00_setup_env_native_multiprocess.sh#L13)). Looking at a `develop` (6a51ab2) build, the fuzz build has it enabled ([source](https://gitlab.com/dashpay/dash/-/jobs/8394892905#L737)) while the multiprocess build doesn't ([source](https://gitlab.com/dashpay/dash/-/jobs/8394892909#L1524)) despite both scripts having the enablement argument. ## Breaking Changes None expected. ## Checklist: - [x] I have performed a self-review of my own code - [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)** - [x] I have added or updated relevant unit/integration/functional/e2e tests **(note: N/A)** - [x] I have made corresponding changes to the documentation **(note: N/A)** - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_ ACKs for top commit: PastaPastaPasta: utACK 27d9763 UdjinM6: utACK 27d9763 Tree-SHA512: 3e2fb72d4211875a162d3aecb994c5bd43b2f6d9fea0804d7e00a38a034672730f9351ceb9256ace38e32f7ef81527c8a034a870e5099a277dfd06f9fa54b480
Issue being fixed or feature implemented
Just regular backports from v22
What was done?
See commits for backports.
Also, two bugs which became visible after backporting bitcoin#21775 are fixed - both are related to possible deadlocks in net_processing.
How Has This Been Tested?
Ran unit and functional tests. Enabled multiprocess builds on CI.
Breaking Changes
N/A
Checklist: