[libc++] Fix bug in atomic_ref's calculation of lock_free-ness #99570
Conversation
@llvm/pr-subscribers-libcxx

Author: Louis Dionne (ldionne)

Changes

The builtin __atomic_always_lock_free takes into account the type of the pointer provided as the second argument. Because we were passing void*, rather than T*, the calculation failed. This meant that atomic_ref<T>::is_always_lock_free was only true for char & bool. This bug exists elsewhere in the atomic library (when using GCC, we fail to pass a pointer at all, and we fail to correctly align the atomic like _Atomic would).

This bug was not initially caught because we don't ever actually expect a given value for `is_always_lock_free`. This problem is common throughout atomic, where the tests have been written to assert that the value under test IS the value under test, which leads to the admission of bugs like this.

Further work is needed to clean up:

(A) Our detection of has-64-bit-atomics, which uses std::atomic to determine if std::atomic is supported... (the type `LargeType` may be 64 bits in size, but its required alignment is only 1 byte). This configuration test was never intended to provide that information.

(B) The use of __atomic_is_always_lock_free in the GCC atomic implementation, where we lie about whether a type is always lock free when the alignment of std::atomic<T> is much smaller than required. For example, struct Counter { int x; int y; };, which _Atomic Counter aligns to 8 bytes, but our std::atomic<Counter> under GCC only aligns to 4, yet still reports that the type is always lock free.

(C) std::atomic_ref<T>::required_alignment should often be larger than the natural alignment of the type when sizeof(T) > alignof(T) and sizeof(T) is 2, 4, 8, or 16 (see the Counter example). In failing to do so we make many types (again, see Counter) non-lock-free even when there are atomic instructions on the host that support types of that size.

(D) We need to actually test against hard-coded values throughout our atomic tests to avoid these sorts of bugs in the future. This probably means auditing the entire atomic test suite.

This change attempts to start sorting out the testing difficulties by using the __GCC_ATOMIC_(CHAR|SHORT|INT|LONG|LLONG|POINTER)_IS_LOCK_FREE predefined macros to establish an expected value for `is_always_lock_free` and `is_lock_free` for the respective types, as well as types with matching sizes and compatible alignment values (where compatible alignment means alignof(T) >= alignof(char|short|int|long|long long) for the matching sized type).

Using these compiler pre-defines we can actually validate that certain types, like char and int, are actually always lock free, like they are on every platform in the wild(*).

(*) At least for every platform we care about.

Fixing (B) requires an ABI break where we bump the alignment of the type std::atomic<T> to match that of _Atomic T (as we would under Clang).

Fixing (C) also requires an ABI break, but atomic_ref is new enough that we should consider it ASAP. (Fixing (C) is arguably more of a QoI detail, but it's a big one, since we don't want the runtime alignment of memory to determine the locking behavior of the atomic.)

Patch is 24.35 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/99570.diff 5 Files Affected:
diff --git a/libcxx/include/__atomic/atomic_ref.h b/libcxx/include/__atomic/atomic_ref.h
index 156f1961151c1..2849b82e1a3dd 100644
--- a/libcxx/include/__atomic/atomic_ref.h
+++ b/libcxx/include/__atomic/atomic_ref.h
@@ -42,6 +42,19 @@ _LIBCPP_BEGIN_NAMESPACE_STD
#if _LIBCPP_STD_VER >= 20
+// These types are required to make __atomic_is_always_lock_free work across GCC and Clang.
+// The purpose of this trick is to make sure that we provide an object with the correct alignment
+// to __atomic_is_always_lock_free, since that answer depends on the alignment.
+template <size_t _Alignment>
+struct __alignment_checker_type {
+ alignas(_Alignment) char __data;
+};
+
+template <size_t _Alignment>
+struct __get_aligner_instance {
+ static constexpr __alignment_checker_type<_Alignment> __instance{};
+};
+
template <class _Tp>
struct __atomic_ref_base {
protected:
@@ -105,7 +118,7 @@ struct __atomic_ref_base {
// that the pointer is going to be aligned properly at runtime because that is a (checked) precondition
// of atomic_ref's constructor.
static constexpr bool is_always_lock_free =
- __atomic_always_lock_free(sizeof(_Tp), reinterpret_cast<void*>(-required_alignment));
+ __atomic_always_lock_free(sizeof(_Tp), &__get_aligner_instance<required_alignment>::__instance);
_LIBCPP_HIDE_FROM_ABI bool is_lock_free() const noexcept { return __atomic_is_lock_free(sizeof(_Tp), __ptr_); }
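A standalone sketch of the same trick applied by the hunk above (illustrative only, not the shipped header; the helper names below are invented for the example): the builtin's answer depends on the alignment it can assume for the pointer it is given, so we hand it a pointer to a suitably aligned dummy object rather than a void* conjured from an integer.

#include <cstddef>

// A dummy object whose static type carries exactly the alignment we want
// __atomic_always_lock_free to assume.
template <std::size_t Align>
struct aligned_dummy {
  alignas(Align) char data;
};

template <std::size_t Align>
inline constexpr aligned_dummy<Align> aligned_instance{};

// Passing &aligned_instance<alignof(T)> lets the builtin see the intended
// alignment; a void* cast from an integer tells it nothing useful.
template <class T>
inline constexpr bool always_lock_free_guess =
    __atomic_always_lock_free(sizeof(T), &aligned_instance<alignof(T)>);

static_assert(always_lock_free_guess<int>, "int is lock-free on mainstream targets");

Under this scheme a 4-byte-aligned int correctly reports as always lock-free, whereas the old void* trick made every type other than char and bool report false.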
diff --git a/libcxx/test/std/atomics/atomics.lockfree/is_always_lock_free.cpp b/libcxx/test/std/atomics/atomics.lockfree/is_always_lock_free.cpp
new file mode 100644
index 0000000000000..2dc7f5c765419
--- /dev/null
+++ b/libcxx/test/std/atomics/atomics.lockfree/is_always_lock_free.cpp
@@ -0,0 +1,165 @@
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// UNSUPPORTED: c++03, c++11, c++14
+
+// <atomic>
+//
+// template <class T>
+// class atomic;
+//
+// static constexpr bool is_always_lock_free;
+
+#include <atomic>
+#include <cassert>
+#include <cstddef>
+
+#include "test_macros.h"
+#include "atomic_helpers.h"
+
+template <typename T>
+void check_always_lock_free(std::atomic<T> const& a) {
+ using InfoT = LockFreeStatusInfo<T>;
+
+ constexpr std::same_as<const bool> decltype(auto) is_always_lock_free = std::atomic<T>::is_always_lock_free;
+
+ // If we know the status of T for sure, validate the exact result of the function.
+ if constexpr (InfoT::status_known) {
+ constexpr LockFreeStatus known_status = InfoT::value;
+ if constexpr (known_status == LockFreeStatus::always) {
+ static_assert(is_always_lock_free, "is_always_lock_free is inconsistent with known lock-free status");
+ assert(a.is_lock_free() && "is_lock_free() is inconsistent with known lock-free status");
+ } else if constexpr (known_status == LockFreeStatus::never) {
+ static_assert(!is_always_lock_free, "is_always_lock_free is inconsistent with known lock-free status");
+ assert(!a.is_lock_free() && "is_lock_free() is inconsistent with known lock-free status");
+ } else {
+ assert(a.is_lock_free() || !a.is_lock_free()); // This is kinda dumb, but we might as well call the function once.
+ }
+ }
+
+ // In all cases, also sanity-check it based on the implication always-lock-free => lock-free.
+ if (is_always_lock_free) {
+ std::same_as<bool> decltype(auto) is_lock_free = a.is_lock_free();
+ assert(is_lock_free);
+ }
+ ASSERT_NOEXCEPT(a.is_lock_free());
+}
+
+#define CHECK_ALWAYS_LOCK_FREE(T) \
+ do { \
+ typedef T type; \
+ type obj{}; \
+ std::atomic<type> a(obj); \
+ check_always_lock_free(a); \
+ } while (0)
+
+void test() {
+ char c = 'x';
+ check_always_lock_free(std::atomic<char>(c));
+
+ int i = 0;
+ check_always_lock_free(std::atomic<int>(i));
+
+ float f = 0.f;
+ check_always_lock_free(std::atomic<float>(f));
+
+ int* p = &i;
+ check_always_lock_free(std::atomic<int*>(p));
+
+ CHECK_ALWAYS_LOCK_FREE(bool);
+ CHECK_ALWAYS_LOCK_FREE(char);
+ CHECK_ALWAYS_LOCK_FREE(signed char);
+ CHECK_ALWAYS_LOCK_FREE(unsigned char);
+#if TEST_STD_VER > 17 && defined(__cpp_char8_t)
+ CHECK_ALWAYS_LOCK_FREE(char8_t);
+#endif
+ CHECK_ALWAYS_LOCK_FREE(char16_t);
+ CHECK_ALWAYS_LOCK_FREE(char32_t);
+ CHECK_ALWAYS_LOCK_FREE(wchar_t);
+ CHECK_ALWAYS_LOCK_FREE(short);
+ CHECK_ALWAYS_LOCK_FREE(unsigned short);
+ CHECK_ALWAYS_LOCK_FREE(int);
+ CHECK_ALWAYS_LOCK_FREE(unsigned int);
+ CHECK_ALWAYS_LOCK_FREE(long);
+ CHECK_ALWAYS_LOCK_FREE(unsigned long);
+ CHECK_ALWAYS_LOCK_FREE(long long);
+ CHECK_ALWAYS_LOCK_FREE(unsigned long long);
+ CHECK_ALWAYS_LOCK_FREE(std::nullptr_t);
+ CHECK_ALWAYS_LOCK_FREE(void*);
+ CHECK_ALWAYS_LOCK_FREE(float);
+ CHECK_ALWAYS_LOCK_FREE(double);
+ CHECK_ALWAYS_LOCK_FREE(long double);
+#if __has_attribute(vector_size) && defined(_LIBCPP_VERSION)
+ CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(1 * sizeof(int)))));
+ CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(2 * sizeof(int)))));
+ CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(4 * sizeof(int)))));
+ CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(16 * sizeof(int)))));
+ CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(32 * sizeof(int)))));
+ CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(1 * sizeof(float)))));
+ CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(2 * sizeof(float)))));
+ CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(4 * sizeof(float)))));
+ CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(16 * sizeof(float)))));
+ CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(32 * sizeof(float)))));
+ CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(1 * sizeof(double)))));
+ CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(2 * sizeof(double)))));
+ CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(4 * sizeof(double)))));
+ CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(16 * sizeof(double)))));
+ CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(32 * sizeof(double)))));
+#endif // __has_attribute(vector_size) && defined(_LIBCPP_VERSION)
+ CHECK_ALWAYS_LOCK_FREE(struct Empty{});
+ CHECK_ALWAYS_LOCK_FREE(struct OneInt { int i; });
+ CHECK_ALWAYS_LOCK_FREE(struct IntArr2 { int i[2]; });
+ CHECK_ALWAYS_LOCK_FREE(struct FloatArr3 { float i[3]; });
+ CHECK_ALWAYS_LOCK_FREE(struct LLIArr2 { long long int i[2]; });
+ CHECK_ALWAYS_LOCK_FREE(struct LLIArr4 { long long int i[4]; });
+ CHECK_ALWAYS_LOCK_FREE(struct LLIArr8 { long long int i[8]; });
+ CHECK_ALWAYS_LOCK_FREE(struct LLIArr16 { long long int i[16]; });
+ CHECK_ALWAYS_LOCK_FREE(struct Padding {
+ char c; /* padding */
+ long long int i;
+ });
+ CHECK_ALWAYS_LOCK_FREE(union IntFloat {
+ int i;
+ float f;
+ });
+ CHECK_ALWAYS_LOCK_FREE(enum class CharEnumClass : char{foo});
+
+ // C macro and static constexpr must be consistent.
+ enum class CharEnumClass : char { foo };
+ static_assert(std::atomic<bool>::is_always_lock_free == (2 == ATOMIC_BOOL_LOCK_FREE), "");
+ static_assert(std::atomic<char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
+ static_assert(std::atomic<CharEnumClass>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
+ static_assert(std::atomic<signed char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
+ static_assert(std::atomic<unsigned char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
+#if TEST_STD_VER > 17 && defined(__cpp_char8_t)
+ static_assert(std::atomic<char8_t>::is_always_lock_free == (2 == ATOMIC_CHAR8_T_LOCK_FREE), "");
+#endif
+ static_assert(std::atomic<char16_t>::is_always_lock_free == (2 == ATOMIC_CHAR16_T_LOCK_FREE), "");
+ static_assert(std::atomic<char32_t>::is_always_lock_free == (2 == ATOMIC_CHAR32_T_LOCK_FREE), "");
+ static_assert(std::atomic<wchar_t>::is_always_lock_free == (2 == ATOMIC_WCHAR_T_LOCK_FREE), "");
+ static_assert(std::atomic<short>::is_always_lock_free == (2 == ATOMIC_SHORT_LOCK_FREE), "");
+ static_assert(std::atomic<unsigned short>::is_always_lock_free == (2 == ATOMIC_SHORT_LOCK_FREE), "");
+ static_assert(std::atomic<int>::is_always_lock_free == (2 == ATOMIC_INT_LOCK_FREE), "");
+ static_assert(std::atomic<unsigned int>::is_always_lock_free == (2 == ATOMIC_INT_LOCK_FREE), "");
+ static_assert(std::atomic<long>::is_always_lock_free == (2 == ATOMIC_LONG_LOCK_FREE), "");
+ static_assert(std::atomic<unsigned long>::is_always_lock_free == (2 == ATOMIC_LONG_LOCK_FREE), "");
+ static_assert(std::atomic<long long>::is_always_lock_free == (2 == ATOMIC_LLONG_LOCK_FREE), "");
+ static_assert(std::atomic<unsigned long long>::is_always_lock_free == (2 == ATOMIC_LLONG_LOCK_FREE), "");
+ static_assert(std::atomic<void*>::is_always_lock_free == (2 == ATOMIC_POINTER_LOCK_FREE), "");
+ static_assert(std::atomic<std::nullptr_t>::is_always_lock_free == (2 == ATOMIC_POINTER_LOCK_FREE), "");
+
+#if TEST_STD_VER >= 20
+ static_assert(std::atomic_signed_lock_free::is_always_lock_free, "");
+ static_assert(std::atomic_unsigned_lock_free::is_always_lock_free, "");
+#endif
+}
+
+int main(int, char**) {
+ test();
+ return 0;
+}
diff --git a/libcxx/test/std/atomics/atomics.lockfree/isalwayslockfree.pass.cpp b/libcxx/test/std/atomics/atomics.lockfree/isalwayslockfree.pass.cpp
deleted file mode 100644
index 6d6e6477bc251..0000000000000
--- a/libcxx/test/std/atomics/atomics.lockfree/isalwayslockfree.pass.cpp
+++ /dev/null
@@ -1,120 +0,0 @@
-//===----------------------------------------------------------------------===//
-//
-// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-// See https://llvm.org/LICENSE.txt for license information.
-// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
-//
-//===----------------------------------------------------------------------===//
-//
-// UNSUPPORTED: c++03, c++11, c++14
-
-// <atomic>
-
-// static constexpr bool is_always_lock_free;
-
-#include <atomic>
-#include <cassert>
-#include <cstddef>
-
-#include "test_macros.h"
-
-template <typename T>
-void checkAlwaysLockFree() {
- if (std::atomic<T>::is_always_lock_free) {
- assert(std::atomic<T>().is_lock_free());
- }
-}
-
-void run()
-{
-// structs and unions can't be defined in the template invocation.
-// Work around this with a typedef.
-#define CHECK_ALWAYS_LOCK_FREE(T) \
- do { \
- typedef T type; \
- checkAlwaysLockFree<type>(); \
- } while (0)
-
- CHECK_ALWAYS_LOCK_FREE(bool);
- CHECK_ALWAYS_LOCK_FREE(char);
- CHECK_ALWAYS_LOCK_FREE(signed char);
- CHECK_ALWAYS_LOCK_FREE(unsigned char);
-#if TEST_STD_VER > 17 && defined(__cpp_char8_t)
- CHECK_ALWAYS_LOCK_FREE(char8_t);
-#endif
- CHECK_ALWAYS_LOCK_FREE(char16_t);
- CHECK_ALWAYS_LOCK_FREE(char32_t);
- CHECK_ALWAYS_LOCK_FREE(wchar_t);
- CHECK_ALWAYS_LOCK_FREE(short);
- CHECK_ALWAYS_LOCK_FREE(unsigned short);
- CHECK_ALWAYS_LOCK_FREE(int);
- CHECK_ALWAYS_LOCK_FREE(unsigned int);
- CHECK_ALWAYS_LOCK_FREE(long);
- CHECK_ALWAYS_LOCK_FREE(unsigned long);
- CHECK_ALWAYS_LOCK_FREE(long long);
- CHECK_ALWAYS_LOCK_FREE(unsigned long long);
- CHECK_ALWAYS_LOCK_FREE(std::nullptr_t);
- CHECK_ALWAYS_LOCK_FREE(void*);
- CHECK_ALWAYS_LOCK_FREE(float);
- CHECK_ALWAYS_LOCK_FREE(double);
- CHECK_ALWAYS_LOCK_FREE(long double);
-#if __has_attribute(vector_size) && defined(_LIBCPP_VERSION)
- CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(1 * sizeof(int)))));
- CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(2 * sizeof(int)))));
- CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(4 * sizeof(int)))));
- CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(16 * sizeof(int)))));
- CHECK_ALWAYS_LOCK_FREE(int __attribute__((vector_size(32 * sizeof(int)))));
- CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(1 * sizeof(float)))));
- CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(2 * sizeof(float)))));
- CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(4 * sizeof(float)))));
- CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(16 * sizeof(float)))));
- CHECK_ALWAYS_LOCK_FREE(float __attribute__((vector_size(32 * sizeof(float)))));
- CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(1 * sizeof(double)))));
- CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(2 * sizeof(double)))));
- CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(4 * sizeof(double)))));
- CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(16 * sizeof(double)))));
- CHECK_ALWAYS_LOCK_FREE(double __attribute__((vector_size(32 * sizeof(double)))));
-#endif // __has_attribute(vector_size) && defined(_LIBCPP_VERSION)
- CHECK_ALWAYS_LOCK_FREE(struct Empty {});
- CHECK_ALWAYS_LOCK_FREE(struct OneInt { int i; });
- CHECK_ALWAYS_LOCK_FREE(struct IntArr2 { int i[2]; });
- CHECK_ALWAYS_LOCK_FREE(struct FloatArr3 { float i[3]; });
- CHECK_ALWAYS_LOCK_FREE(struct LLIArr2 { long long int i[2]; });
- CHECK_ALWAYS_LOCK_FREE(struct LLIArr4 { long long int i[4]; });
- CHECK_ALWAYS_LOCK_FREE(struct LLIArr8 { long long int i[8]; });
- CHECK_ALWAYS_LOCK_FREE(struct LLIArr16 { long long int i[16]; });
- CHECK_ALWAYS_LOCK_FREE(struct Padding { char c; /* padding */ long long int i; });
- CHECK_ALWAYS_LOCK_FREE(union IntFloat { int i; float f; });
- CHECK_ALWAYS_LOCK_FREE(enum class CharEnumClass : char { foo });
-
- // C macro and static constexpr must be consistent.
- enum class CharEnumClass : char { foo };
- static_assert(std::atomic<bool>::is_always_lock_free == (2 == ATOMIC_BOOL_LOCK_FREE), "");
- static_assert(std::atomic<char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
- static_assert(std::atomic<CharEnumClass>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
- static_assert(std::atomic<signed char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
- static_assert(std::atomic<unsigned char>::is_always_lock_free == (2 == ATOMIC_CHAR_LOCK_FREE), "");
-#if TEST_STD_VER > 17 && defined(__cpp_char8_t)
- static_assert(std::atomic<char8_t>::is_always_lock_free == (2 == ATOMIC_CHAR8_T_LOCK_FREE), "");
-#endif
- static_assert(std::atomic<char16_t>::is_always_lock_free == (2 == ATOMIC_CHAR16_T_LOCK_FREE), "");
- static_assert(std::atomic<char32_t>::is_always_lock_free == (2 == ATOMIC_CHAR32_T_LOCK_FREE), "");
- static_assert(std::atomic<wchar_t>::is_always_lock_free == (2 == ATOMIC_WCHAR_T_LOCK_FREE), "");
- static_assert(std::atomic<short>::is_always_lock_free == (2 == ATOMIC_SHORT_LOCK_FREE), "");
- static_assert(std::atomic<unsigned short>::is_always_lock_free == (2 == ATOMIC_SHORT_LOCK_FREE), "");
- static_assert(std::atomic<int>::is_always_lock_free == (2 == ATOMIC_INT_LOCK_FREE), "");
- static_assert(std::atomic<unsigned int>::is_always_lock_free == (2 == ATOMIC_INT_LOCK_FREE), "");
- static_assert(std::atomic<long>::is_always_lock_free == (2 == ATOMIC_LONG_LOCK_FREE), "");
- static_assert(std::atomic<unsigned long>::is_always_lock_free == (2 == ATOMIC_LONG_LOCK_FREE), "");
- static_assert(std::atomic<long long>::is_always_lock_free == (2 == ATOMIC_LLONG_LOCK_FREE), "");
- static_assert(std::atomic<unsigned long long>::is_always_lock_free == (2 == ATOMIC_LLONG_LOCK_FREE), "");
- static_assert(std::atomic<void*>::is_always_lock_free == (2 == ATOMIC_POINTER_LOCK_FREE), "");
- static_assert(std::atomic<std::nullptr_t>::is_always_lock_free == (2 == ATOMIC_POINTER_LOCK_FREE), "");
-
-#if TEST_STD_VER >= 20
- static_assert(std::atomic_signed_lock_free::is_always_lock_free, "");
- static_assert(std::atomic_unsigned_lock_free::is_always_lock_free, "");
-#endif
-}
-
-int main(int, char**) { run(); return 0; }
diff --git a/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp b/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
index 94f65e3b4b669..acdbf63a24d85 100644
--- a/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
+++ b/libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp
@@ -9,7 +9,10 @@
// UNSUPPORTED: c++03, c++11, c++14, c++17
// <atomic>
-
+//
+// template <class T>
+// class atomic_ref;
+//
// static constexpr bool is_always_lock_free;
// bool is_lock_free() const noexcept;
@@ -18,10 +21,29 @@
#include <concepts>
#include "test_macros.h"
+#include "atomic_helpers.h"
template <typename T>
-void check_always_lock_free(std::atomic_ref<T> const a) {
- std::same_as<const bool> decltype(auto) is_always_lock_free = std::atomic_ref<T>::is_always_lock_free;
+void check_always_lock_free(std::atomic_ref<T> const& a) {
+ using InfoT = LockFreeStatusInfo<T>;
+
+ constexpr std::same_as<const bool> decltype(auto) is_always_lock_free = std::atomic_ref<T>::is_always_lock_free;
+
+ // If we know the status of T for sure, validate the exact result of the function.
+ if constexpr (InfoT::status_known) {
+ constexpr LockFreeStatus known_status = InfoT::value;
+ if constexpr (known_status == LockFreeStatus::always) {
+ static_assert(is_always_lock_free, "is_always_lock_free is inconsistent with known lock-free status");
+ assert(a.is_lock_free() && "is_lock_free() is inconsistent with known lock-free status");
+ } else if constexpr (known_status == LockFreeStatus::never) {
+ static_assert(!is_always_lock_free, "is_always_lock_free is inconsistent with known lock-free status");
+ assert(!a.is_lock_free() && "is_lock_free() is inconsistent with known lock-free status");
+ } else {
+ assert(a.is_lock_free() || !a.is_lock_free()); // This is kinda dumb, but we might as well call the function once.
+ }
+ }
+
+ // In all cases, also sanity-check it based on the implication always-lock-free => lock-free.
if (is_always_lock_free) {
std::same_as<bool> decltype(auto) is_lock_free = a.is_lock_free();
assert(is_lock_free);
@@ -33,10 +55,14 @@ void check_always_lock_free(std::atomic_ref<T> const a) {
do { \
typedef T type; \
type obj{}; \
- check_always_lock_free(std::atomic_ref<type>(obj)); \
+ std::atomic_ref<type> a(obj); \
+ check_always_lock_free(a); \
} while (0)
void test() {
+ char c = 'x';
+ check_always_lock_free(std::atomic_ref<char>(c));
+
int i = 0;
check_always_lock_free(std::atomic_ref<int>(i));
diff --git a/libcxx/test/support/atomic_helpers.h b/libcxx/test/support/atomic_helpers.h
index 0266a0961067b..e5d07bab9f616 100644
--- a/libcxx/test/support/atomic_helpers.h
+++ b/libcxx/test/support/atomic_helpers.h
@@ -11,9 +11,102 @@
#include <cassert>
#include <cstdint>
+#include <cstddef>
+#include <type_traits>
#include "test_macros.h"
+#if defined(TEST_COMPILER_CLANG)
+# define TEST_ATOMIC_CHAR_LOCK_FREE __CLANG_ATOMIC_CHAR_LOCK_FREE
+# def...
[truncated]
@dalg24 We should also consider changing the required alignment of `atomic_ref` accordingly. Note: I said consider, not actually change. IDK if that's actually a good idea / better QOI, but we should think about it and consider the tradeoffs, which I haven't done so far.
I had not put much thought into it when I implemented it.
Related: fixing …
@rprichard I am seeing a failure on Android x86 and I am unable to explain it: https://buildkite.com/llvm-project/libcxx-ci/builds/36496#0190c7ba-3ef7-4ebf-9368-95561243603a Do you think you could take a look? It should be extremely easy to reproduce by pulling this patch; it's a compile-time failure in the new test added by this patch.
I agree that we should definitely increase the required alignment of small types to allow them to be used lock-free.
These are nearly the same thing. You cannot legitimately align an object to less than its natural alignment in the first place. The other difference is whether to require 16-byte alignment for a 16-byte object on a 32-bit platform. I don't think that matters much, so I'd just go with libstdc++'s choice.
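A minimal sketch of the alignment policy being discussed (not libc++'s implementation; required_alignment_guess and Counter are invented names): bump the required alignment up to sizeof(T) for the sizes hardware atomics typically support, and keep the natural alignment otherwise.

#include <cstddef>

template <class T>
constexpr std::size_t required_alignment_guess() {
  constexpr std::size_t size = sizeof(T);
  constexpr bool hardware_atomic_size =
      size == 1 || size == 2 || size == 4 || size == 8 || size == 16;
  // Bump up to the object size for sizes hardware atomics handle; otherwise
  // stick with the type's natural alignment.
  return hardware_atomic_size && size > alignof(T) ? size : alignof(T);
}

struct Counter { int x; int y; };  // 8 bytes, alignof == 4 on common ABIs

static_assert(required_alignment_guess<Counter>() == 8, "bumped to the full object size");
static_assert(required_alignment_guess<char[3]>() == 1, "odd sizes keep their natural alignment");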
Re the android failure -- this isn't an Android-specific issue, it happens for any x86-32 platform. The issue is that ATOMIC_LLONG_LOCK_FREE is 2 there (the hardware has 8-byte atomics via cmpxchg8b, and `_Atomic long long` is 8-byte aligned). So, then, the test thinks that any 8-byte but 4-byte-aligned value is lock-free. But it's not. The macro is (misleadingly, considering the name) not promising that operations on arbitrarily-aligned 8-byte objects are lock-free -- only operations on objects aligned the way `_Atomic` would align them.
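A small illustration of the pitfall described above (the Holder struct is invented for the example; the numbers assume a 32-bit x86 target, where alignof(long long) == 4 even though lock-free 8-byte atomics need 8-byte alignment):

#include <atomic>
#include <cstddef>
#include <cstdio>

struct Holder {
  char pad;        // on an ILP32 x86 ABI the next member lands at offset 4...
  long long value; // ...so this 8-byte object is only 4-byte aligned
};

int main() {
  // On such a target this can print 2, 4, and 4: the macro says "always lock
  // free", yet a merely 4-byte-aligned long long cannot be handled lock-free.
  std::printf("ATOMIC_LLONG_LOCK_FREE = %d\n", ATOMIC_LLONG_LOCK_FREE);
  std::printf("alignof(long long)     = %zu\n", alignof(long long));
  std::printf("offsetof(Holder,value) = %zu\n", offsetof(Holder, value));
  return 0;
}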
Thanks, this is super useful. So IIUC the problem is basically in our test suite in https://github.com/llvm/llvm-project/pull/99570/files#diff-0c1e385ae099192c3c58151d22e7bf27e60b32e257d5cd24bd179f424c7bc7b7R61 when we do:

enum class LockFreeStatus : int { unknown = -1, never = 0, sometimes = 1, always = 2 };

#define COMPARE_TYPES(T1, T2) (sizeof(T1) == sizeof(T2) && TEST_ALIGNOF(T1) >= TEST_ALIGNOF(T2))

template <class T>
struct LockFreeStatusInfo {
  static const LockFreeStatus value = LockFreeStatus(
      COMPARE_TYPES(T, char)
          ? TEST_ATOMIC_CHAR_LOCK_FREE
          : (COMPARE_TYPES(T, short)
                 ? TEST_ATOMIC_SHORT_LOCK_FREE
                 : (COMPARE_TYPES(T, int)
                        ? TEST_ATOMIC_INT_LOCK_FREE
                        : (COMPARE_TYPES(T, long)
                               ? TEST_ATOMIC_LONG_LOCK_FREE
                               : (COMPARE_TYPES(T, long long)
                                      ? TEST_ATOMIC_LLONG_LOCK_FREE
                                      : (COMPARE_TYPES(T, void*) ? TEST_ATOMIC_POINTER_LOCK_FREE : -1))))));

  static const bool status_known = LockFreeStatusInfo::value != LockFreeStatus::unknown;
};

For `long long` on 32-bit x86, COMPARE_TYPES(long long, long long) trivially succeeds, so we then conclude that the type has LockFreeStatus::always (TEST_ATOMIC_LLONG_LOCK_FREE is 2 there). However, that is incorrect, since we would need 8-byte alignment for the operations to actually be lock-free, and a plain long long is only 4-byte aligned on that target. Is my understanding correct? If so, how the hell are we supposed to use this :-).
Yes, you got that right. The issue is that the ATOMIC_*_LOCK_FREE macros describe an object with the alignment `_Atomic` would give the type, not the ABI-minimum alignment of the plain type. After we make the change to upgrade required-alignment of atomic_ref, then this test should start working, because atomic_ref<long long> will then require 8-byte alignment, which matches what the macro is actually promising.
This passed on Android x86 now. Merging and cherry-picking. Thanks everyone.
/cherry-pick cc1dfb3
…99570) The builtin __atomic_always_lock_free takes into account the type of the pointer provided as the second argument. Because we were passing void*, rather than T*, the calculation failed. This meant that atomic_ref<T>::is_always_lock_free was only true for char & bool. This bug exists elsewhere in the atomic library (when using GCC, we fail to pass a pointer at all, and we fail to correctly align the atomic like _Atomic would).

This change also attempts to start sorting out testing difficulties with this function that caused the bug to exist by using the __GCC_ATOMIC_(CHAR|SHORT|INT|LONG|LLONG|POINTER)_IS_LOCK_FREE predefined macros to establish an expected value for `is_always_lock_free` and `is_lock_free` for the respective types, as well as types with matching sizes and compatible alignment values. Using these compiler pre-defines we can actually validate that certain types, like char and int, are actually always lock free like they are on every platform in the wild.

Note that this patch was actually authored by Eric Fiselier but I picked up the patch and GitHub won't let me set Eric as the primary author.

Co-authored-by: Eric Fiselier <eric@efcs.ca>
(cherry picked from commit cc1dfb3)
/pull-request #100792
…9654) This patch increases the alignment requirement for std::atomic_ref such that we can guarantee lockfree operations more often. Specifically, we require types that are 1, 2, 4, 8, or 16 bytes in size to be aligned to at least their size to be used with std::atomic_ref. This is the case for most types; however, a notable exception is `long long` on x86, which is 8 bytes in length but has an alignment of 4.

As a result of this patch, one has to be more careful about the alignment of objects used with std::atomic_ref. Failure to provide a properly-aligned object to std::atomic_ref is a precondition violation and is technically UB. On the flipside, this allows us to provide an atomic_ref that is actually lockfree more often, which is an important QOI property.

More information in the discussion at #99570 (comment).

Co-authored-by: Louis Dionne <ldionne.2@gmail.com>
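A short usage sketch of the stricter precondition described above (hedged: counter and bump are illustrative names, and std::atomic_ref requires C++20). The referenced object must satisfy atomic_ref's required_alignment, which may exceed the type's natural alignment (e.g. long long on x86-32).

#include <atomic>

// Over-align the object to whatever atomic_ref demands for this type.
alignas(std::atomic_ref<long long>::required_alignment) long long counter = 0;

void bump() {
  std::atomic_ref<long long> ref(counter);  // alignment precondition satisfied
  ref.fetch_add(1, std::memory_order_relaxed);
}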
Followup to #99570.

* `TEST_COMPILER_MSVC` must be tested for `defined`ness, as it is everywhere else.
  + Definition: https://github.com/llvm/llvm-project/blob/52a7116f5c6ada234f47f7794aaf501a3692b997/libcxx/test/support/test_macros.h#L71-L72
  + Example usage: https://github.com/llvm/llvm-project/blob/52a7116f5c6ada234f47f7794aaf501a3692b997/libcxx/test/std/utilities/function.objects/func.not_fn/not_fn.pass.cpp#L248
  + Fixes: `llvm-project\libcxx\test\support\atomic_helpers.h(33): fatal error C1017: invalid integer constant expression`
* Fix bogus return type: `msvc_is_lock_free_macro_value()` returns `2` or `0`, so it needs to return `int`.
  + Fixes: `llvm-project\libcxx\test\support\atomic_helpers.h(41): warning C4305: 'return': truncation from 'int' to 'bool'`
* Clarity improvement: also add parens when mixing bitwise with arithmetic operators.
Since this got backported, it's worth noting that this file is apparently not named correctly (missing .pass.cpp), and thus the test doesn't run, see #105966
Thanks for the heads up, I'll fix that.