Data-parallel types (SIMD) (since C++26)
The library provides data-parallel types and operations on these types: portable types for explicitly stating data parallelism and for structuring data so that it can be processed by data-parallel execution resources where available, such as SIMD registers and instructions, or execution units driven by a common instruction decoder.
The set of vectorizable types comprises:
- all standard integer and character types;
- the floating-point types float and double;
- the extended floating-point types std::float16_t, std::float32_t, and std::float64_t, if defined; and
- std::complex<T>, where T is a vectorizable floating-point type.
A data-parallel type consists of one or more elements of an underlying vectorizable type, called the element type. The number of elements, called the width, is constant for each data-parallel type.
The term data-parallel type refers to all enabled specializations of the class templates basic_simd and basic_simd_mask.
An object of data-parallel type behaves analogously to an object of type T. But while T stores and manipulates a single value, a data-parallel type with element type T stores and manipulates multiple values.
Every operation on a data-parallel object acts element-wise (except for horizontal operations, such as reductions, which are clearly marked as such), applying to each element of the object or to corresponding elements of two objects. Each such application is unsequenced with respect to the others. This simple rule expresses data parallelism and is used by the compiler to generate SIMD instructions and/or independent execution streams.
All operations (except non-constexpr math function overloads) on data-parallel objects are constexpr: it is possible to create and use data-parallel objects in the evaluation of a constant expression.
Alias templates simd and simd_mask are defined to allow users to specify a particular width. The default width is determined by the implementation at compile time.
Defined in namespace std::datapar
Main classes
Load and store flags
Load and store operations
Casts
- splits a single data-parallel object into multiple ones (function template)
- concatenates multiple data-parallel objects into a single one (function template)
Algorithms
Reductions
Traits
Math functions
All functions in <cmath> and <complex> are overloaded for basic_simd.
Bit manipulation functions
All bit manipulation functions in <bit> are overloaded for basic_simd.
Implementation details
The data-parallel types basic_simd and basic_simd_mask are associated with ABI tags. These tags are types that specify the size and binary representation of data-parallel objects. The design intends the size and binary representation to vary based on target architecture and compiler flags. The ABI tag, together with the element type, determines the width.
The ABI tag remains independent of machine instruction set selection. The chosen machine instruction set limits the usable ABI tag types. The ABI tags enable users to safely pass objects of data-parallel type across translation unit boundaries.
Exposition-only entities
| /*simd-size-type*/ (1) | (exposition only*) |
| /*integer-from*/<Bytes> (2) | (exposition only*) |
| /*simd-size-v*/<T, Abi> (3) | (exposition only*) |
| /*mask-element-size*/<T> (4) | (exposition only*) |
| /*constexpr-wrapper-like*/ (5) | (exposition only*) |
| /*deduced-simd-t*/<T> (6) | (exposition only*) |
| /*make-compatible-simd-t*/<V, T> (7) | (exposition only*) |
1) /*simd-size-type*/ is an alias for a signed integer type. The implementation is free to choose any signed integer type.
2) /*integer-from*/<Bytes> is an alias for a signed integer type T such that sizeof(T) equals Bytes.
3) /*simd-size-v*/<T, Abi> denotes the width of the enabled specialization basic_simd<T, Abi>, or 0 otherwise.
4) If T denotes std::datapar::basic_simd_mask<Bytes, Abi>, /*mask-element-size*/<T> equals Bytes.
5) The concept /*constexpr-wrapper-like*/ is defined as:
template< class T >
concept /*constexpr-wrapper-like*/ =
    std::convertible_to<T, decltype(T::value)> &&
    std::equality_comparable_with<T, decltype(T::value)> &&
    std::bool_constant<T() == T::value>::value &&
    std::bool_constant<static_cast<decltype(T::value)>(T()) == T::value>::value;
6) Let x be an lvalue of type const T. /*deduced-simd-t*/<T> is an alias equivalent to:
decltype(x + x), if the type of x + x is an enabled specialization of basic_simd; otherwise void.
7) Let x be an lvalue of type const T. /*make-compatible-simd-t*/<V, T> is an alias equivalent to:
/*deduced-simd-t*/<T>, if that type is not void; otherwise std::datapar::simd<decltype(x + x), V::size()>.
| Math functions requirements |
| /*simd-floating-point*/ (8) | (exposition only*) |
| /*math-floating-point*/ (9) | (exposition only*) |
| /*math-common-simd-t*/ (10) | (exposition only*) |
| /*reduction-binary-operation*/ (11) | (exposition only*) |
8) The concept /*simd-floating-point*/ is defined as:
template< class V >
concept /*simd-floating-point*/ =
    std::same_as<V, std::datapar::basic_simd<typename V::value_type, typename V::abi_type>> &&
    std::is_default_constructible_v<V> &&
    std::floating_point<typename V::value_type>;
9) The concept /*math-floating-point*/ is defined as:
template< class... Ts >
concept /*math-floating-point*/ =
    (/*simd-floating-point*/</*deduced-simd-t*/<Ts>> || ...);
10) Let T0 denote Ts...[0], T1 denote Ts...[1], and TRest denote a pack such that T0, T1, TRest... is equivalent to Ts.... Then, /*math-common-simd-t*/<Ts...> is an alias equivalent to:
- /*deduced-simd-t*/<T0>, if sizeof...(Ts) == 1 is true; otherwise
- std::common_type_t</*deduced-simd-t*/<T0>, /*deduced-simd-t*/<T1>>, if sizeof...(Ts) == 2 is true and /*math-floating-point*/<T0> && /*math-floating-point*/<T1> is true; otherwise
- std::common_type_t</*deduced-simd-t*/<T0>, T1>, if sizeof...(Ts) == 2 is true and /*math-floating-point*/<T0> is true; otherwise
- std::common_type_t<T0, /*deduced-simd-t*/<T1>>, if sizeof...(Ts) == 2 is true; otherwise
- std::common_type_t</*math-common-simd-t*/<T0, T1>, TRest...>, if /*math-common-simd-t*/<T0, T1> is a valid type; otherwise
- std::common_type_t</*math-common-simd-t*/<TRest...>, T0, T1>.
11) The concept /*reduction-binary-operation*/ is defined as:
template< class BinaryOp, class T >
concept /*reduction-binary-operation*/ =
    requires (const BinaryOp binary_op, const std::datapar::simd<T, 1> v) {
        { binary_op(v, v) } -> std::same_as<std::datapar::simd<T, 1>>;
    };
/*reduction-binary-operation*/<BinaryOp, T> is modeled only if:
- BinaryOp is a binary element-wise operation that is commutative, and
- an object of type BinaryOp is invocable with two arguments of type std::datapar::basic_simd<T, Abi>, for an unspecified ABI tag Abi, and returns a std::datapar::basic_simd<T, Abi>.
| SIMD ABI tags |
| /*native-abi*/<T> (12) | (exposition only*) |
| /*deduce-abi-t*/<T, N> (13) | (exposition only*) |
12) /*native-abi*/<T> is an implementation-defined alias for an ABI tag. This is the primary ABI tag to use for efficient explicit vectorization. As a result, basic_simd<T, /*native-abi*/<T>> is an enabled specialization.
13) /*deduce-abi-t*/<T, N> is an alias that names an ABI tag type such that:
- /*simd-size-v*/<T, /*deduce-abi-t*/<T, N>> equals N,
- std::datapar::basic_simd<T, /*deduce-abi-t*/<T, N>> is an enabled specialization, and
- std::datapar::basic_simd_mask<sizeof(T), /*deduce-abi-t*/</*integer-from*/<sizeof(T)>, N>> is an enabled specialization.
It is defined only if T is a vectorizable type, and N > 0 && N <= M is true, where M is an implementation-defined maximum that is at least 64 and can differ depending on T.
| Load and store flags |
| (14) | (exposition only*) |
| (15) | (exposition only*) |
| (16) | (exposition only*) |
14-16) These tag types are used as a template argument of std::datapar::flags. See load and store flags for their corresponding uses.
Notes
| Feature-test macro | Value | Std | Feature |
|---|---|---|---|
| __cpp_lib_simd | 202411L | (C++26) | Data-parallel types and operations |
Example
#include <iostream>
#include <simd>
#include <string_view>

namespace dp = std::datapar;

void println(std::string_view name, auto const& a)
{
    std::cout << name << ": ";
    for (std::size_t i{}; i != a.size(); ++i)
        std::cout << a[i] << ' ';
    std::cout << '\n';
}

template<class A>
constexpr dp::basic_simd<int, A> my_abs(dp::basic_simd<int, A> x)
{
    return dp::select(x < 0, -x, x);
}

int main()
{
    constexpr dp::simd<int> a = 1;
    println("a", a);

    constexpr dp::simd<int> b([](int i) { return i - 2; });
    println("b", b);

    constexpr auto c = a + b;
    println("c", c);

    constexpr auto d = my_abs(c);
    println("d", d);

    constexpr auto e = d * d;
    println("e", e);

    constexpr auto inner_product = dp::reduce(e);
    std::cout << "inner product: " << inner_product << '\n';

    constexpr dp::simd<double, 16> x([](int i) { return i; });
    println("x", x);

    // overloaded math functions are defined in <simd>
    println("cos²(x) + sin²(x)", std::pow(std::cos(x), 2) + std::pow(std::sin(x), 2));
}
Output:
a: 1 1 1 1
b: -2 -1 0 1
c: -1 0 1 2
d: 1 0 1 2
e: 1 0 1 4
inner product: 6
x: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
cos²(x) + sin²(x): 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
See also
| valarray | numeric arrays, array masks and array slices (class template) |