# Dream-VLX: Dream-VL and Dream-VLA, a diffusion VLM and a diffusion VLA

Building on the success of Dream 7B, we introduce Dream-VL and Dream-VLA, open vision-language (VL) and vision-language-action (VLA) models that fully unlock discrete diffusion's advantages for multimodal tasks: long-horizon planning, bidirectional reasoning, and parallel action generation.
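To illustrate the parallel-generation property mentioned above: unlike an autoregressive model, which emits one token per step left to right, a masked discrete diffusion model starts from a fully masked sequence and predicts all masked positions in parallel at each denoising step, each prediction conditioning on bidirectional context. The toy sketch below is not the Dream-VLA implementation; the `predict` function stands in for a hypothetical denoiser network.

```python
MASK = "<mask>"

def denoise_step(seq, predict):
    # Fill every masked position in parallel; each call sees the whole
    # sequence (bidirectional context), not just a left-to-right prefix.
    return [predict(i, seq) if tok == MASK else tok
            for i, tok in enumerate(seq)]

def diffusion_decode(length, predict, steps=1):
    # Start from an all-mask sequence and iteratively denoise it.
    seq = [MASK] * length
    for _ in range(steps):
        seq = denoise_step(seq, predict)
    return seq

# Hypothetical stand-in denoiser: maps each position to an action id.
actions = diffusion_decode(4, lambda i, seq: f"action_{i}")
```

With a single denoising step, all four action tokens are produced at once; a real model would typically take several steps, remasking low-confidence positions between them.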

## Repository Structure

```
Dream-VLX/
├── vl/          # Dream-VL training and evaluation
├── vla/         # Dream-VLA training and evaluation
└── README.md    # This file
```
## Citation

```bibtex
@article{ye2025dreamvla,
  title={Dream-VL \& Dream-VLA: Open Vision-Language and Vision-Language-Action Models with Diffusion Language Model Backbone},
  author={Ye, Jiacheng and Gong, Shansan and Gao, Jiahui and Fan, Junming and Wu, Shuang and Bi, Wei and Bai, Haoli and Shang, Lifeng and Kong, Lingpeng},
  journal={arXiv preprint arXiv:2512.22615},
  year={2025}
}
```