[CherryPick] Fix to_tensor bug for bfloat16 list(#76000) by zhangbo9674 · Pull Request #76242 · PaddlePaddle/Paddle
```python
import paddle
import numpy as np

# Case 1: bfloat16 — produces a wrong value
a = [paddle.to_tensor(2, dtype=paddle.bfloat16)]
print(a)
b = paddle.to_tensor(a)
print(b)

# Case 2: float16 — produces a wrong dtype
a = [paddle.to_tensor(2, dtype=paddle.float16)]
print(a)
b = paddle.to_tensor(a)
print(b)
```
```
[Tensor(shape=[], dtype=bfloat16, place=Place(gpu:0), stop_gradient=True,
       2.)]
Tensor(shape=[1], dtype=bfloat16, place=Place(gpu:0), stop_gradient=True,
       [0.00000000])  # wrong value
[Tensor(shape=[], dtype=float16, place=Place(gpu:0), stop_gradient=True,
       2.)]
Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,  # wrong dtype
       [2.])
```
```python
# Case 3: mixed dtypes in one list
a = [paddle.to_tensor(2, dtype=paddle.float16), paddle.to_tensor(2, dtype=paddle.float32)]
print(a)
b = paddle.to_tensor(a)
print(b)
```
In the cases above, the list of Tensors must first be converted to a float32 numpy array before constructing `core.eager.Tensor`; at that point the numpy array needs to hold the value 2.0, not 16384.
The bfloat16 bug is therefore rooted in Paddle's incorrect handling of the conversion between its bfloat16 type and numpy.
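The value 16384 mentioned above is the raw bit pattern of bfloat16 2.0: bfloat16 keeps the top 16 bits of a float32, so reinterpreting the storage as an integer instead of converting the value yields 16384. A minimal sketch, using only the standard library:

```python
import struct

# float32 2.0 has bit pattern 0x40000000; bfloat16 keeps the top
# 16 bits, giving 0x4000 == 16384. Reading the bfloat16 storage as
# a raw integer (instead of converting the value) produces 16384.
f32_bits = struct.unpack("<I", struct.pack("<f", 2.0))[0]
bf16_bits = f32_bits >> 16
print(bf16_bits)  # 16384
```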
```python
if not dtype:
    dtype = data.dtype
```
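The inference rule above (fall back to the input's dtype when the caller passes none) can be sketched with a hypothetical `to_array` helper; the name and numpy-only body are illustrative, not Paddle's actual implementation:

```python
import numpy as np

def to_array(data, dtype=None):
    # Hypothetical helper mirroring the snippet above: when the caller
    # does not pass a dtype, inherit it from the input data instead of
    # silently upcasting to float32.
    arr = np.asarray(data)
    if not dtype:
        dtype = arr.dtype
    return arr.astype(dtype)

a = np.array([2.0], dtype=np.float16)
print(to_array(a).dtype)  # float16 is preserved, not upcast to float32
```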