upgrade modelopt by lanluo-nvidia · Pull Request #3160 · pytorch/TensorRT
Besides the CI failure, I think torchtrt won't need the ONNX submodule of modelopt. So instead of `nvidia-modelopt[all]` you can use `nvidia-modelopt[deploy,hf,torch]` to save some installation space and time. Also, don't forget to update the version check here:
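As a minimal sketch, the suggested install would look like this (the extras names `deploy`, `hf`, and `torch` are taken from the comment above; verify them against modelopt's own packaging metadata before relying on them):

```shell
# Install nvidia-modelopt with only the extras torch-tensorrt needs,
# skipping the onnx dependencies that the [all] extra would pull in.
pip install "nvidia-modelopt[deploy,hf,torch]"
```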
TensorRT/tests/py/dynamo/models/test_models_export.py, lines 254 to 255 in 5719928:

```python
or Version(metadata.version("nvidia-modelopt")) < Version("0.16.1"),
"modelopt 0.16.1 or later is required Int8 quantization is supported in modelopt since 0.16.1 or later",
```
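For context, the guard quoted above follows a common pytest skip pattern. A self-contained sketch of that check, assuming the same imports the test file appears to use (`importlib.metadata` and `packaging.version.Version`; the helper name `modelopt_supports_int8` is hypothetical):

```python
from importlib import metadata

from packaging.version import Version


def modelopt_supports_int8(min_version: str = "0.16.1") -> bool:
    """Return True if an installed nvidia-modelopt meets the minimum version.

    INT8 quantization support landed in modelopt 0.16.1, so tests that
    exercise it should be skipped on older (or missing) installs.
    """
    try:
        installed = Version(metadata.version("nvidia-modelopt"))
    except metadata.PackageNotFoundError:
        # Package not installed at all: the feature is unavailable.
        return False
    return installed >= Version(min_version)
```

In the test file this would be wired up as `@pytest.mark.skipif(not modelopt_supports_int8(), reason=...)`, mirroring the condition on lines 254 to 255.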