[libc++] Implement C++20 atomic_ref #76647
Conversation
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified. If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository. If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR. If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.
@llvm/pr-subscribers-libcxx

Author: Damien L-G (dalg24)

Changes: Implement the `atomic_ref` class template by reusing `atomic_base_impl`. Based on the work from https://reviews.llvm.org/D72240.

Patch is 99.23 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/76647.diff

40 Files Affected:
diff --git a/libcxx/include/CMakeLists.txt b/libcxx/include/CMakeLists.txt
index 0fe3ab44d2466e..5d7e4ca98b1f75 100644
--- a/libcxx/include/CMakeLists.txt
+++ b/libcxx/include/CMakeLists.txt
@@ -236,6 +236,7 @@ set(files
__atomic/atomic_flag.h
__atomic/atomic_init.h
__atomic/atomic_lock_free.h
+ __atomic/atomic_ref.h
__atomic/atomic_sync.h
__atomic/check_memory_order.h
__atomic/contention_t.h
diff --git a/libcxx/include/__atomic/atomic_ref.h b/libcxx/include/__atomic/atomic_ref.h
new file mode 100644
index 00000000000000..6f2467cad5a24e
--- /dev/null
+++ b/libcxx/include/__atomic/atomic_ref.h
@@ -0,0 +1,286 @@
+// -*- C++ -*-
+//===----------------------------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+// Kokkos v. 4.0
+// Copyright (2022) National Technology & Engineering
+// Solutions of Sandia, LLC (NTESS).
+//
+// Under the terms of Contract DE-NA0003525 with NTESS,
+// the U.S. Government retains certain rights in this software.
+//
+//===---------------------------------------------------------------------===//
+
+#ifndef _LIBCPP___ATOMIC_ATOMIC_REF_H
+#define _LIBCPP___ATOMIC_ATOMIC_REF_H
+
+#include <__assert>
+#include <__atomic/check_memory_order.h>
+#include <__atomic/cxx_atomic_impl.h>
+#include <__atomic/is_always_lock_free.h>
+#include <__config>
+#include <__memory/addressof.h>
+#include <__type_traits/is_floating_point.h>
+#include <__type_traits/is_function.h>
+#include <__type_traits/is_nothrow_constructible.h>
+#include <__type_traits/is_same.h>
+#include <cinttypes>
+#include <concepts>
+#include <cstddef>
+#include <limits>
+
+#if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER)
+# pragma GCC system_header
+#endif
+
+_LIBCPP_PUSH_MACROS
+#include <__undef_macros>
+
+_LIBCPP_BEGIN_NAMESPACE_STD
+
+#if _LIBCPP_STD_VER >= 20
+
+template <class _Tp, bool = is_integral_v<_Tp> && !is_same_v<_Tp, bool>, bool = is_floating_point_v<_Tp>>
+struct __atomic_ref_base {
+ mutable __cxx_atomic_impl<_Tp&> __a_;
+
+ using value_type = _Tp;
+
+ static constexpr size_t required_alignment = alignof(_Tp);
+
+ static constexpr bool is_always_lock_free = __libcpp_is_always_lock_free<_Tp>::__value;
+
+ _LIBCPP_HIDE_FROM_ABI
+ bool is_lock_free() const noexcept { return __cxx_atomic_is_lock_free(sizeof(_Tp)); }
+
+ _LIBCPP_HIDE_FROM_ABI
+ void store(_Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept
+ _LIBCPP_CHECK_STORE_MEMORY_ORDER(__order) {
+ _LIBCPP_ASSERT_UNCATEGORIZED(
+ __order == memory_order::relaxed || __order == memory_order::release || __order == memory_order::seq_cst,
+ "memory order argument to atomic store operation is invalid");
+ __cxx_atomic_store(&__a_, __desired, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator=(_Tp __desired) const noexcept {
+ store(__desired);
+ return __desired;
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp load(memory_order __order = memory_order::seq_cst) const noexcept _LIBCPP_CHECK_LOAD_MEMORY_ORDER(__order) {
+ _LIBCPP_ASSERT_UNCATEGORIZED(
+ __order == memory_order::relaxed || __order == memory_order::consume || __order == memory_order::acquire ||
+ __order == memory_order::seq_cst,
+ "memory order argument to atomic load operation is invalid");
+ return __cxx_atomic_load(&__a_, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ operator _Tp() const noexcept { return load(); }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp exchange(_Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+ return __cxx_atomic_exchange(&__a_, __desired, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ bool
+ compare_exchange_weak(_Tp& __expected, _Tp __desired, memory_order __success, memory_order __failure) const noexcept
+ _LIBCPP_CHECK_EXCHANGE_MEMORY_ORDER(__success, __failure) {
+ _LIBCPP_ASSERT_UNCATEGORIZED(
+ __failure == memory_order::relaxed || __failure == memory_order::consume ||
+ __failure == memory_order::acquire || __failure == memory_order::seq_cst,
+ "failure memory order argument to weak atomic compare-and-exchange operation is invalid");
+ return __cxx_atomic_compare_exchange_weak(&__a_, &__expected, __desired, __success, __failure);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ bool
+ compare_exchange_strong(_Tp& __expected, _Tp __desired, memory_order __success, memory_order __failure) const noexcept
+ _LIBCPP_CHECK_EXCHANGE_MEMORY_ORDER(__success, __failure) {
+ _LIBCPP_ASSERT_UNCATEGORIZED(
+ __failure == memory_order::relaxed || __failure == memory_order::consume ||
+ __failure == memory_order::acquire || __failure == memory_order::seq_cst,
+ "failure memory order argument to strong atomic compare-and-exchange operation is invalid");
+ return __cxx_atomic_compare_exchange_strong(&__a_, &__expected, __desired, __success, __failure);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ bool
+ compare_exchange_weak(_Tp& __expected, _Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+ return __cxx_atomic_compare_exchange_weak(&__a_, &__expected, __desired, __order, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ bool
+ compare_exchange_strong(_Tp& __expected, _Tp __desired, memory_order __order = memory_order::seq_cst) const noexcept {
+ return __cxx_atomic_compare_exchange_strong(&__a_, &__expected, __desired, __order, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ void wait(_Tp __old, memory_order __order = memory_order::seq_cst) const noexcept
+ _LIBCPP_CHECK_WAIT_MEMORY_ORDER(__order) {
+ _LIBCPP_ASSERT_UNCATEGORIZED(
+ __order == memory_order::relaxed || __order == memory_order::consume || __order == memory_order::acquire ||
+ __order == memory_order::seq_cst,
+ "memory order argument to atomic wait operation is invalid");
+ __cxx_atomic_wait(addressof(__a_), __old, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ void notify_one() const noexcept { __cxx_atomic_notify_one(addressof(__a_)); }
+ _LIBCPP_HIDE_FROM_ABI
+ void notify_all() const noexcept { __cxx_atomic_notify_all(addressof(__a_)); }
+
+ _LIBCPP_HIDE_FROM_ABI
+ __atomic_ref_base(_Tp& __obj) : __a_(__obj) {}
+};
+
+template <class _Tp>
+struct __atomic_ref_base<_Tp, /*_IsIntegral=*/true, /*_IsFloatingPoint=*/false>
+ : public __atomic_ref_base<_Tp, false, false> {
+ using __base = __atomic_ref_base<_Tp, false, false>;
+
+ using difference_type = __base::value_type;
+
+ _LIBCPP_HIDE_FROM_ABI
+ __atomic_ref_base(_Tp& __obj) : __base(__obj) {}
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_add(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_sub(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_and(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_and(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_or(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_or(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_xor(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_xor(&this->__a_, __arg, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator++(int) const noexcept { return fetch_add(_Tp(1)); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator--(int) const noexcept { return fetch_sub(_Tp(1)); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator++() const noexcept { return fetch_add(_Tp(1)) + _Tp(1); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator--() const noexcept { return fetch_sub(_Tp(1)) - _Tp(1); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator+=(_Tp __arg) const noexcept { return fetch_add(__arg) + __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator-=(_Tp __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator&=(_Tp __arg) const noexcept { return fetch_and(__arg) & __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator|=(_Tp __arg) const noexcept { return fetch_or(__arg) | __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator^=(_Tp __arg) const noexcept { return fetch_xor(__arg) ^ __arg; }
+};
+
+template <class _Tp>
+struct __atomic_ref_base<_Tp, /*_IsIntegral=*/false, /*_IsFloatingPoint=*/true>
+ : public __atomic_ref_base<_Tp, false, false> {
+ using __base = __atomic_ref_base<_Tp, false, false>;
+
+ using difference_type = __base::value_type;
+
+ _LIBCPP_HIDE_FROM_ABI
+ __atomic_ref_base(_Tp& __obj) : __base(__obj) {}
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_add(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp fetch_sub(_Tp __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator+=(_Tp __arg) const noexcept { return fetch_add(__arg) + __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator-=(_Tp __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+};
+
+template <class _Tp>
+struct atomic_ref : public __atomic_ref_base<_Tp> {
+ static_assert(is_trivially_copyable<_Tp>::value, "std::atomic_ref<T> requires that 'T' be a trivially copyable type");
+
+ using __base = __atomic_ref_base<_Tp>;
+
+ _LIBCPP_HIDE_FROM_ABI
+ explicit atomic_ref(_Tp& __obj) : __base(__obj) {
+ _LIBCPP_ASSERT_UNCATEGORIZED((uintptr_t)addressof(__obj) % __base::required_alignment == 0,
+ "atomic_ref ctor: referenced object must be aligned to required_alignment");
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ atomic_ref(const atomic_ref&) noexcept = default;
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp operator=(_Tp __desired) const noexcept { return __base::operator=(__desired); }
+
+ atomic_ref& operator=(const atomic_ref&) = delete;
+};
+
+template <class _Tp>
+struct atomic_ref<_Tp*> : public __atomic_ref_base<_Tp*> {
+ using __base = __atomic_ref_base<_Tp*>;
+
+ using difference_type = ptrdiff_t;
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* fetch_add(ptrdiff_t __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_add(&this->__a_, __arg, __order);
+ }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* fetch_sub(ptrdiff_t __arg, memory_order __order = memory_order_seq_cst) const noexcept {
+ return __cxx_atomic_fetch_sub(&this->__a_, __arg, __order);
+ }
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator++(int) const noexcept { return fetch_add(1); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator--(int) const noexcept { return fetch_sub(1); }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator++() const noexcept { return fetch_add(1) + 1; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator--() const noexcept { return fetch_sub(1) - 1; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator+=(ptrdiff_t __arg) const noexcept { return fetch_add(__arg) + __arg; }
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator-=(ptrdiff_t __arg) const noexcept { return fetch_sub(__arg) - __arg; }
+
+ _LIBCPP_HIDE_FROM_ABI
+ explicit atomic_ref(_Tp*& __ptr) : __base(__ptr) {}
+
+ _LIBCPP_HIDE_FROM_ABI
+ _Tp* operator=(_Tp* __desired) const noexcept { return __base::operator=(__desired); }
+
+ atomic_ref& operator=(const atomic_ref&) = delete;
+};
+
+#endif // _LIBCPP_STD_VER >= 20
+
+_LIBCPP_END_NAMESPACE_STD
+
+_LIBCPP_POP_MACROS
+
+#endif // _LIBCPP__ATOMIC_ATOMIC_REF_H
diff --git a/libcxx/include/__atomic/check_memory_order.h b/libcxx/include/__atomic/check_memory_order.h
index 3012aec0521b38..536f764a619026 100644
--- a/libcxx/include/__atomic/check_memory_order.h
+++ b/libcxx/include/__atomic/check_memory_order.h
@@ -27,4 +27,8 @@
_LIBCPP_DIAGNOSE_WARNING(__f == memory_order_release || __f == memory_order_acq_rel, \
"memory order argument to atomic operation is invalid")
+#define _LIBCPP_CHECK_WAIT_MEMORY_ORDER(__m) \
+ _LIBCPP_DIAGNOSE_WARNING(__m == memory_order_release || __m == memory_order_acq_rel, \
+ "memory order argument to atomic operation is invalid")
+
#endif // _LIBCPP___ATOMIC_CHECK_MEMORY_ORDER_H
diff --git a/libcxx/include/__atomic/cxx_atomic_impl.h b/libcxx/include/__atomic/cxx_atomic_impl.h
index 1a0b808a0cb1c4..56cd703c258944 100644
--- a/libcxx/include/__atomic/cxx_atomic_impl.h
+++ b/libcxx/include/__atomic/cxx_atomic_impl.h
@@ -15,8 +15,12 @@
#include <__memory/addressof.h>
#include <__type_traits/conditional.h>
#include <__type_traits/is_assignable.h>
+#include <__type_traits/is_pointer.h>
+#include <__type_traits/is_reference.h>
#include <__type_traits/is_trivially_copyable.h>
+#include <__type_traits/is_volatile.h>
#include <__type_traits/remove_const.h>
+#include <__type_traits/remove_reference.h>
#include <cstddef>
#include <cstring>
@@ -58,9 +62,27 @@ struct __cxx_atomic_base_impl {
}
# endif // _LIBCPP_CXX03_LANG
_LIBCPP_CONSTEXPR explicit __cxx_atomic_base_impl(_Tp value) _NOEXCEPT : __a_value(value) {}
+ using __contained_t = _Tp;
_Tp __a_value;
};
+template <typename _Tp, template <typename> class _TemplateTp>
+struct __is_instantiation_of : false_type {};
+
+template <typename _Tp, template <typename> class _TemplateTp>
+struct __is_instantiation_of<_TemplateTp<_Tp>, _TemplateTp> : true_type {};
+
+template <typename _Tp,
+ typename = typename enable_if<__is_instantiation_of<typename remove_volatile<typename _Tp::__base>::type,
+ __cxx_atomic_base_impl>::value,
+ bool>::type>
+struct __cxx_atomic_base_impl_traits {
+ static constexpr bool __is_value_volatile = is_volatile<_Tp>::value;
+ static constexpr bool __is_value_ref = is_reference<typename _Tp::__contained_t>::value;
+ using __underlying_t = typename remove_volatile<typename remove_reference<typename _Tp::__contained_t>::type>::type;
+ static constexpr bool __is_value_pointer = is_pointer<__underlying_t>::value;
+};
+
_LIBCPP_HIDE_FROM_ABI inline _LIBCPP_CONSTEXPR int __to_gcc_order(memory_order __order) {
// Avoid switch statement to make this a constexpr.
return __order == memory_order_relaxed
@@ -87,13 +109,15 @@ _LIBCPP_HIDE_FROM_ABI inline _LIBCPP_CONSTEXPR int __to_gcc_failure_order(memory
: (__order == memory_order_acq_rel ? __ATOMIC_ACQUIRE : __ATOMIC_CONSUME))));
}
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_init(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __val) {
+template <typename _Tp, typename enable_if<__cxx_atomic_base_impl_traits<_Tp>::__is_value_volatile, bool>::type = 0>
+_LIBCPP_HIDE_FROM_ABI void
+__cxx_atomic_init(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val) {
__cxx_atomic_assign_volatile(__a->__a_value, __val);
}
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_init(__cxx_atomic_base_impl<_Tp>* __a, _Tp __val) {
+template <typename _Tp, typename enable_if<!__cxx_atomic_base_impl_traits<_Tp>::__is_value_volatile, bool>::type = 0>
+_LIBCPP_HIDE_FROM_ABI void
+__cxx_atomic_init(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val) {
__a->__a_value = __val;
}
@@ -107,77 +131,46 @@ _LIBCPP_HIDE_FROM_ABI inline void __cxx_atomic_signal_fence(memory_order __order
template <typename _Tp>
_LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_store(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __val, memory_order __order) {
+__cxx_atomic_store(_Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __val, memory_order __order) {
__atomic_store(std::addressof(__a->__a_value), std::addressof(__val), __to_gcc_order(__order));
}
template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_store(__cxx_atomic_base_impl<_Tp>* __a, _Tp __val, memory_order __order) {
- __atomic_store(std::addressof(__a->__a_value), std::addressof(__val), __to_gcc_order(__order));
+_LIBCPP_HIDE_FROM_ABI typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t
+__cxx_atomic_load(const _Tp* __a, memory_order __order) {
+ using _Ret = typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t;
+ alignas(alignof(_Ret)) unsigned char __mem[sizeof(_Ret)];
+ __atomic_load(
+ std::addressof(__a->__a_value), std::addressof(*reinterpret_cast<_Ret*>(__mem)), __to_gcc_order(__order));
+ return *reinterpret_cast<_Ret*>(__mem);
}
template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_load(const volatile __cxx_atomic_base_impl<_Tp>* __a, memory_order __order) {
- _Tp __ret;
- __atomic_load(std::addressof(__a->__a_value), std::addressof(__ret), __to_gcc_order(__order));
- return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_load_inplace(const volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp* __dst, memory_order __order) {
+_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_load_inplace(
+ const _Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t* __dst, memory_order __order) {
__atomic_load(std::addressof(__a->__a_value), __dst, __to_gcc_order(__order));
}
template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI void
-__cxx_atomic_load_inplace(const __cxx_atomic_base_impl<_Tp>* __a, _Tp* __dst, memory_order __order) {
- __atomic_load(std::addressof(__a->__a_value), __dst, __to_gcc_order(__order));
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_load(const __cxx_atomic_base_impl<_Tp>* __a, memory_order __order) {
- _Tp __ret;
- __atomic_load(std::addressof(__a->__a_value), std::addressof(__ret), __to_gcc_order(__order));
- return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp
-__cxx_atomic_exchange(volatile __cxx_atomic_base_impl<_Tp>* __a, _Tp __value, memory_order __order) {
- _Tp __ret;
+_LIBCPP_HIDE_FROM_ABI typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __cxx_atomic_exchange(
+ _Tp* __a, typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __value, memory_order __order) {
+ using _Ret = typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t;
+ alignas(alignof(_Ret)) unsigned char __mem[sizeof(_Ret)];
__atomic_exchange(
- std::addressof(__a->__a_value), std::addressof(__value), std::addressof(__ret), __to_gcc_order(__order));
- return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI _Tp __cxx_atomic_exchange(__cxx_atomic_base_impl<_Tp>* __a, _Tp __value, memory_order __order) {
- _Tp __ret;
- __atomic_exchange(
- std::addressof(__a->__a_value), std::addressof(__value), std::addressof(__ret), __to_gcc_order(__order));
- return __ret;
-}
-
-template <typename _Tp>
-_LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_compare_exchange_strong(
- volatile __cxx_atomic_base_impl<_Tp>* __a,
- _Tp* __expected,
- _Tp __value,
- memory_order __success,
- memory_order __failure) {
- return __atomic_compare_exchange(
std::addressof(__a->__a_value),
- __expected,
std::addressof(__value),
- false,
- __to_gcc_order(__success),
- __to_gcc_failure_order(__failure));
+ std::addressof(*reinterpret_cast<_Ret*>(__mem)),
+ __to_gcc_order(__order));
+ return *reinterpret_cast<_Ret*>(__mem);
}
template <typename _Tp>
_LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_compare_exchange_strong(
- __cxx_atomic_base_impl<_Tp>* __a, _Tp* __expected, _Tp __value, memory_order __success, memory_order __failure) {
+ _Tp* __a,
+ typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t* __expected,
+ typename __cxx_atomic_base_impl_traits<_Tp>::__underlying_t __value,
+ memory_order __success,
+ memory_order __failure) {
return __atomic_compare_...
[truncated]
✅ With the latest revision this PR passed the C/C++ code formatter.
It's important to note that this PR deletes the codepath in cxx_atomic_impl.h which uses Clang's C11 `_Atomic` support. That's a wonderful simplification, and I would love to see it done. BUT, it changes the ABI of `std::atomic` when built with Clang.

Unfortunately, the current situation is a total mess.

Mess 1: _Atomic ABI

Clang's ABI for `_Atomic(T)` is complex: it attempts to round up both size and alignment of T, but only up to a per-platform limit, to ensure that a lock-free atomic can be used if possible. That is: given a 3-byte struct, Clang pads its `_Atomic` version to 4 bytes with 4-byte alignment. Which means C11 atomics in Clang are ABI-incompatible with GCC: GCC never adds padding, but instead simply increases alignment for objects of size 1, 2, 4, 8, or 16 to match their size. This is quite an unfortunate situation, since GCC and Clang are usually ABI-compatible for C code.

So: the ABI for C11 `_Atomic` differs between Clang and GCC. The ABI for libc++'s std::atomic also changes depending on which compiler you build with (since that changes which underlying implementation strategy is used). It matches the C11 `_Atomic` ABI when built with Clang, but when built with GCC it uses an ABI incompatible with both. GCC's libstdc++ does have a consistent ABI between Clang and GCC, because it always uses GCC's rule, even when built with Clang. But that means it doesn't match C11 `_Atomic` on Clang, only on GCC.

Making everything worse is that, as of C++23's addition of <stdatomic.h>, an expectation of ABI-compatibility between C++ `std::atomic` and C `_Atomic` is now baked into the standard.

This is something that I think we ought to fix -- but it would require an ABI break. A long time ago, I attempted to bring this up as something the psABI documents should define, but didn't manage to push hard enough to get it actually done. My expectation is that if the documents did specify it, they would specify what GCC does. Clang's option is, in principle, nice from a QoI standpoint (it allows more types to be lock-free), but in practice is simply unnecessary complexity.

Mess 2: libc++ atomic configuration options

OK, so, that's a huge minefield. But for atomic_ref, we can actually ignore all of the above complexity, since we cannot change the layout of the underlying type. Thus: we can simply implement atomic_ref directly on top of the `__atomic_*` builtins.

Well, except that libc++ has a third implementation strategy that std::atomic can use. Namely, 2145054 added a custom locked impl under `_LIBCPP_ATOMIC_ONLY_USE_BUILTINS`. IMO, that is effectively superfluous and incompatible with the goal of C and C++ equivalency, and I don't really understand the rationale for adding it in the first place. If someone wants to use locked atomics in a freestanding implementation, they should simply provide the requisite libatomic libcalls for their environment, end of story. So, I think we should remove this third option. Or, at least, ignore it when implementing atomic_ref, and just use the `__atomic_*` builtins.
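For concreteness, a sketch of the size/alignment divergence described above; the struct name is illustrative and the numbers assume a typical x86-64 target:

```cpp
// Illustrative only -- compile as C11 with both compilers to compare.
struct Small3 { char c[3]; };  // sizeof == 3, alignof == 1

// Clang rounds _Atomic(T) up to the next power-of-two size (within a
// per-platform limit), padding the payload itself:
//   sizeof(_Atomic(struct Small3)) == 4, _Alignof(_Atomic(struct Small3)) == 4
// GCC never adds padding; it only raises alignment when sizeof(T) is already
// 1, 2, 4, 8, or 16, so here nothing changes:
//   sizeof(_Atomic(struct Small3)) == 3, _Alignof(_Atomic(struct Small3)) == 1
```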
I have not yet reviewed the code. Just to bring up some issues we encountered previously:

- There are some weird downstream setups. IIRC, there is Fuchsia, which is Linux but without gcc installed (thus it cannot link -latomic).
- There is also Windows clang-cl, which lacks some builtins.
- On Mac, do we have the gcc builtins at all? At least, on my old laptop all the code paths go through the c11 branch.
- I'd also like to mention that I am working on several atomic-related things, including making types with padding bytes between members and/or after the tail work with atomics. (By the way, does your atomic_ref compare_exchange_strong work for those types? A sketch of the problem follows this comment.) I don't think we are allowed to touch the padding bits, since atomic_ref does not own the object.
- Also, the current c11_compare_exchange is broken for types whose size is not a power of two. I am working on fixing that as well.
- And I don't think breaking the existing std::atomic ABI is feasible: even though we could ABI-tag std::atomic, there is no way we could tag user-defined classes that contain std::atomic and appear at an ABI boundary.
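To make the padding concern concrete, a hypothetical sketch (type and function names are illustrative, not from this patch):

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical type with internal padding.
struct Padded {
  std::uint8_t  a;  // 1 byte, then 3 bytes of padding on typical ABIs
  std::uint32_t b;
};

void bump(Padded& shared) {
  std::atomic_ref<Padded> ref(shared);  // C++20
  Padded expected = ref.load();
  Padded desired  = expected;
  desired.b += 1;
  // An implementation that compares full object representations (memcmp-style)
  // can fail here even when all named members match, because the padding bytes
  // of 'expected' and of the stored object may differ -- and atomic_ref cannot
  // simply zero the padding of an object it does not own.
  while (!ref.compare_exchange_strong(expected, desired)) {
    desired = expected;
    desired.b += 1;
  }
}
```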
That doesn't affect whether you can use the `__atomic_*` builtins.

Fortunately, it doesn't lack these.

Yep, they exist on all Clang.

This PR indeed does not implement that part of the spec yet. That'll need to wait until the padding-clearing machinery is in place.

Recall that long-term it's not only std::atomic whose ABI we need to change, but also C11 `_Atomic`. However, this PR no longer touches it, so that's moot for the purposes of this PR.
Just to note some related in-progress work I have: I think your patch is orthogonal to all these bugs, so we can try to land this one first.
Just cross-referencing for posterity, there was once an implementation here https://reviews.llvm.org/D72240 that eventually got dropped. There's nothing actionable to do here, I'm just dropping the link for cross-referencing purposes.
I would support that too. https://reviews.llvm.org/D56913 happened a long time ago, and if it were to be done again, I might push back against that approach as being too ad hoc. I'm not even certain we use this anywhere, and I'd argue that freestanding does not equate to there being no builtins. So +1 from me for removing it, at least at first glance.
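To illustrate the libcall point from the discussion above: for a type too large to be lock-free, the `__atomic_*` builtins lower to out-of-line libatomic calls, which an environment without libatomic must supply itself. A sketch, with an illustrative struct:

```cpp
// A 64-byte struct has no lock-free atomic instruction sequence on common
// targets, so the generic __atomic builtins lower to library calls.
struct Big { char bytes[64]; };

Big load_big(Big& src) {
  Big out;
  // Lowered to a call like: __atomic_load(64, &src, &out, __ATOMIC_SEQ_CST),
  // an entry point normally provided by libatomic (or by the environment
  // itself on a freestanding target).
  __atomic_load(&src, &out, __ATOMIC_SEQ_CST);
  return out;
}
```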
Thanks a lot for the patch! The review doesn't go over the implementation in detail since I think we need to consider the overall state of atomic_wait primitives and GCC vs Clang atomics before we commit to an implementation strategy here.
Inline review comments (since marked outdated/resolved) were left on:
- libcxx/test/std/atomics/atomics.ref/required_alignment.pass.cpp
- libcxx/test/std/atomics/atomics.ref/is_always_lock_free.pass.cpp (two threads)
so it can be reused to implement atomic_ref. Salvaged/adapted from https://reviews.llvm.org/D72240 Co-Authored-By: Benjamin Trapani
…bility-synchronization_library-missing for wait/notify
…couple places to see if it fixes the Armv7-M picolibc builds
simdutf's `atomic_binary_to_base64` requires `std::atomic_ref`. `std::atomic_ref` is only available in LLVM 19, which is only available in Xcode 16.3, which is only available on macOS 15. Refs: llvm/llvm-project#76647 Refs: v8/v8@e3cddbe
Implement the `atomic_ref` class template by reusing `atomic_base_impl`. Based on the work from https://reviews.llvm.org/D72240.
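To show what the new feature enables, a minimal usage sketch (my example, not from the PR's test suite):

```cpp
#include <atomic>
#include <thread>
#include <vector>

int main() {
  int counter = 0;  // a plain, non-atomic object...

  std::vector<std::thread> threads;
  for (int t = 0; t < 4; ++t) {
    threads.emplace_back([&counter] {
      // ...operated on atomically through a C++20 atomic_ref view.
      std::atomic_ref<int> ref(counter);
      for (int i = 0; i < 1000; ++i)
        ref.fetch_add(1, std::memory_order_relaxed);
    });
  }
  for (auto& th : threads)
    th.join();

  // All concurrent accesses went through atomic_ref, so there is no data
  // race and the final value is deterministic.
  return counter == 4000 ? 0 : 1;
}
```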