chore(deps): bump transformers from 4.48.0 to 4.52.1 in /tests/modules by dependabot[bot] · Pull Request #3670 · pytorch/TensorRT
Patch release v4.51.3
A mix of bugs was fixed in this patch; as a rare exception, we diverge from semantic versioning to merge GLM-4 in this patch release.
Patch Release 4.51.2
This is another round of bug fixes, but these are much more minor and model outputs were largely unaffected!
- Fix Llama4 offset (#37414) by @Cyrilvallez
- Attention Quantization with FBGemm & TP (#37384) by @MekkCyber
- use rms_norm_eps for the L2Norm for Llama4 (#37418) by @danielhanchen
- mark llama4 as not supported with fa2 (#37416) by @winglian
Patch release v4.51.1
Since the release of Llama 4, we have fixed a few issues that we are now releasing in patch v4.51.1
- Fixing flex attention for torch=2.6.0 (#37285)
- more fixes for post-training llama4 (#37329)
- Remove HQQ from caching allocator warmup (#37347)
- fix derived berts _init_weights (#37341)
- Fix init empty weights without accelerate (#37337)
- Fix deepspeed with quantization (#37324)
- fix llama4 training (#37319)
- fix flex attn when optional args aren't passed (#37327)
- Multiple llama4 fixes (#37353)
Thanks to everyone for your patience.
v4.51.0: Llama 4, Phi4-Multimodal, DeepSeek-v3, Qwen3
New Model Additions
Llama 4
Llama 4, developed by Meta, introduces a new auto-regressive Mixture-of-Experts (MoE) architecture. This generation includes two models:
- The highly capable Llama 4 Maverick, with 17B active parameters out of ~400B total, using 128 experts.
- The efficient Llama 4 Scout also has 17B active parameters out of ~109B total, using just 16 experts.
Both models leverage early fusion for native multimodality, enabling them to process text and image inputs. Maverick and Scout are both trained on up to 40 trillion tokens on data encompassing 200 languages (with specific fine-tuning support for 12 languages including Arabic, Spanish, German, and Hindi).
For deployment, Llama 4 Scout is designed for accessibility, fitting on a single server-grade GPU via on-the-fly 4-bit or 8-bit quantization, while Maverick is available in BF16 and FP8 formats. These models are released under the custom Llama 4 Community License Agreement, available on the model repositories.
Getting started with Llama 4 using transformers is straightforward. Make sure you have transformers v4.51.0 or later installed:
pip install -U transformers[hf_xet]
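Once installed, a first generation call can be sketched as below. This is a minimal, hedged sketch rather than an official quickstart: the model id `meta-llama/Llama-4-Scout-17B-16E-Instruct` and the chat-message format follow the model card on the Hugging Face Hub, and access to the gated repository plus sufficient GPU memory are assumed.

```python
# Minimal sketch of text-only generation with Llama 4 Scout.
# Assumptions: access to the gated meta-llama repository on the Hugging Face
# Hub and enough GPU memory; the model id mirrors the published model card.
MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

def generate_reply(prompt: str, max_new_tokens: int = 64) -> str:
    """Load Llama 4 Scout and return the decoded reply for a text prompt."""
    # Imported lazily so the function can be defined without pulling in the
    # heavyweight dependencies up front.
    import torch
    from transformers import AutoProcessor, Llama4ForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = Llama4ForConditionalGeneration.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype=torch.bfloat16
    )
    messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}]
    input_ids = processor.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the newly generated reply is decoded.
    return processor.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Because the models are natively multimodal, image inputs can be passed in the same `messages` structure; the text-only path above is the simplest starting point.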
... (truncated)
