Avoid implicit conversions for bitwise operators by mnijhuis-tos · Pull Request #2708 · xtensor-stack/xtensor

The tricky part of this expression is that one of the > is a greater-than sign:

The first argument to std::conditional_t is sizeof(std::decay_t<T1>) > sizeof(std::decay_t<T2>). Because of the greater-than sign, I'm wrapping it in extra parentheses. I could probably replace it with std::greater, but I'm afraid the expression would mainly get longer without becoming more readable.

The other arguments to std::conditional_t are std::decay_t<T1> and std::decay_t<T2>, so the conditional selects the larger of the two types. A (bitwise) operator on two chars will now yield a char instead of an int, and combining a short and a long yields a long.

Combining two equally-sized unsigned and signed types currently yields the second type; e.g., combining uint16_t and int16_t yields an int16_t return value. For bitwise operations, this shouldn't be a problem. Note that I can easily make it return the first type instead by using >= rather than > when comparing the sizes.

For other operations, like addition or subtraction, I can imagine we have to follow the C++ rules more closely. I found the following results when using decltype with g++ 11.3:

  • decltype(int8_t() + int8_t()) -> int

  • decltype(uint8_t() + uint8_t()) -> int

  • decltype(int8_t() + uint8_t()) -> int

  • decltype(uint8_t() + int8_t()) -> int

  • (u)int16_t: Same result as (u)int8_t: Always int.

  • decltype(int32_t() + int32_t()) -> int32_t

  • decltype(uint32_t() + uint32_t()) -> uint32_t

  • decltype(int32_t() + uint32_t()) -> uint32_t

  • decltype(uint32_t() + int32_t()) -> uint32_t

  • decltype(int32_t() + int64_t()) -> int64_t

  • decltype(uint32_t() + uint64_t()) -> uint64_t

  • decltype(int32_t() + uint64_t()) -> uint64_t

  • decltype(uint32_t() + int64_t()) -> int64_t

  • decltype(int32_t() + float()) -> float

  • decltype(uint32_t() + float()) -> float

  • decltype(float() + uint32_t()) -> float

  • decltype(float() + int32_t()) -> float

  • decltype(int64_t() + float()) -> float (even though sizeof(float) (4 bytes) is smaller than sizeof(int64_t)!)

  • decltype(uint64_t() + float()) -> float

  • decltype(float() + uint64_t()) -> float

  • decltype(float() + int64_t()) -> float

Perhaps the following strategy will work for all binary operations:

  • If the input types are equal, use that as the return type.
  • If both types are floating point types, return the largest type.
  • If only one of the types is a floating point type, return the floating point type.
  • If the type sizes differ, return the largest type (which can be signed).
  • If the type sizes are equal and they only differ in signedness, return the unsigned type.

Code similar to simd_return_type_impl in XSimd could implement this return type strategy.