NeurIPS 2025

Scaling Diffusion Transformers Efficiently via $\mu$P

Conference Paper · Main Conference Track · Artificial Intelligence · Machine Learning

Abstract

Diffusion Transformers have emerged as the foundation for vision generative models, but their scalability is limited by the high cost of hyperparameter (HP) tuning at large scales. Recently, Maximal Update Parametrization ($\mu$P) was proposed for vanilla Transformers, enabling stable HP transfer from small to large language models and dramatically reducing tuning costs. However, it remains unclear whether the $\mu$P of vanilla Transformers extends to diffusion Transformers, which differ in both architecture and training objective. In this work, we generalize $\mu$P to diffusion Transformers and validate its effectiveness through large-scale experiments. First, we rigorously prove that the $\mu$P of mainstream diffusion Transformers, including DiT, U-ViT, PixArt-$\alpha$, and MMDiT, aligns with that of the vanilla Transformer, enabling the direct application of existing $\mu$P methodologies. Leveraging this result, we systematically demonstrate that DiT-$\mu$P enjoys robust HP transferability. Notably, DiT-XL-2-$\mu$P with a transferred learning rate achieves 2.9$\times$ faster convergence than the original DiT-XL-2. Finally, we validate the effectiveness of $\mu$P on text-to-image generation by scaling PixArt-$\alpha$ from 0.04B to 0.61B parameters and MMDiT from 0.18B to 18B. In both cases, models under $\mu$P outperform their respective baselines while requiring only a small tuning cost: 5.5% of one training run for PixArt-$\alpha$ and 3% of the cost of tuning by human experts for MMDiT-18B. These results establish $\mu$P as a principled and efficient framework for scaling diffusion Transformers.
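To make the HP-transfer idea concrete, below is a minimal sketch of $\mu$P-style per-layer learning-rate scaling under an Adam-type optimizer. It follows the standard $\mu$P recipe of Yang et al. rather than this paper's code; the function name `mup_param_groups`, the `base_lr`/`width_mult` parameters, and the matrix-vs-vector heuristic are illustrative assumptions, and the sketch covers only the learning-rate rule, omitting $\mu$P's init-variance and output-multiplier rules.

```python
# Sketch of muP-style per-layer LR scaling for AdamW (assumed recipe,
# not the paper's implementation): matrix-like hidden weights get their
# learning rate scaled by 1/width_mult, while vector-like parameters
# (biases, norms, embeddings) keep the base learning rate.
import torch
import torch.nn as nn

def mup_param_groups(model: nn.Module, base_lr: float, width_mult: float):
    """Split parameters into muP groups for an Adam-style optimizer."""
    hidden, vector = [], []
    for name, p in model.named_parameters():
        # Heuristic: 2-D weights are "matrix-like"; embeddings are an
        # exception and are treated as vector-like under muP.
        if p.ndim >= 2 and "embed" not in name:
            hidden.append(p)
        else:
            vector.append(p)
    return [
        {"params": hidden, "lr": base_lr / width_mult},
        {"params": vector, "lr": base_lr},
    ]

# Usage: tune base_lr on a small proxy model (width_mult = 1), then reuse
# the same base_lr when scaling up, e.g. width_mult = 4 for a 4x wider DiT.
model = nn.Sequential(nn.Linear(256, 1024), nn.GELU(), nn.Linear(1024, 256))
optimizer = torch.optim.AdamW(mup_param_groups(model, base_lr=3e-4, width_mult=4.0))
```

This is the mechanism behind the abstract's "transferred learning rate": under $\mu$P, the optimal base HPs found on a small model remain near-optimal as width grows, so the expensive sweep is run once at small scale.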

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: Annual Conference on Neural Information Processing Systems
Archive span: 1987-2025
Indexed papers: 30776
Paper ID: 77188982265292721