addresses the case when shape of upsample tensor contains ITensor by apbose · Pull Request #3841 · pytorch/TensorRT
Comment on lines +63 to +70
```python
# promote remaining ints to TRT consts before concat
for i, t in enumerate(trt_tensors):
    if isinstance(t, int):
        const = ctx.net.add_constant((1,), np.array([t], dtype=np.int32))
        set_layer_name(const, target, f"{name}_static_{i}_const")
        trt_tensors[i] = const.get_output(0)

concat = ctx.net.add_concatenation(trt_tensors)
```
If trt_tensors has a mix of scalar integers and ITensors of dtype int64, would this work (since you're explicitly casting the scalar integers to int32)?
In the case of shape tensors, the ints will always be int32, so this should work.
Coming to the cat case: the concat tensors will be either torch.Tensor or TRTTensor; they cannot be int. So I think the above should cover all the cases. Can you think of any other case?
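As an aside, the dtype-mix concern can be illustrated with a small NumPy stand-in (hypothetical, not the actual converter code): NumPy silently promotes mixed integer dtypes on concatenation, whereas per the discussion above TensorRT's add_concatenation errors out when input dtypes differ, so the mix must be resolved explicitly beforehand.

```python
import numpy as np

# Stand-ins for the two kinds of inputs discussed above:
# a scalar int promoted to an int32 constant, next to an
# int64 "shape tensor" (simulated here with a numpy array).
static_dim = np.array([4], dtype=np.int32)
dynamic_dim = np.array([7], dtype=np.int64)

# numpy promotes silently on concatenate; TensorRT would
# instead require the inputs to already share a dtype.
out = np.concatenate([static_dim, dynamic_dim])
print(out.dtype)  # int64 under numpy's promotion rules
```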
So my thought is: how are we ensuring that all trt_tensors have the same datatype before concatenating here? Otherwise it will error out.
This check could either be an assertion or an explicit type promotion of the tensors within trt_tensors.
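The explicit-promotion option could be sketched as below. This is a hypothetical helper using NumPy arrays as stand-ins for the mixed inputs; in the real converter the casts would be TRT cast layers rather than `astype`, and the helper name is made up for illustration.

```python
import numpy as np

def promote_to_common_dtype(tensors):
    """Cast every input to the common (widest) dtype before concatenation.

    Hypothetical sketch of the suggested promotion step: compute the
    common dtype across all inputs, then cast only the ones that differ.
    """
    common = np.result_type(*(t.dtype for t in tensors))
    return [t if t.dtype == common else t.astype(common) for t in tensors]

# Mixed int32 / int64 inputs, as in the review question above.
mixed = [np.array([4], dtype=np.int32), np.array([7], dtype=np.int64)]
promoted = promote_to_common_dtype(mixed)
assert all(t.dtype == np.int64 for t in promoted)
```

An assertion-only variant would instead just check `len({t.dtype for t in tensors}) == 1` and raise, trading silent widening for an early, explicit failure.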