NeurIPS 2025
OmniGen-AR: AutoRegressive Any-to-Image Generation
Abstract
Autoregressive (AR) models have demonstrated strong potential in visual generation, offering competitive performance with simple architectures and optimization objectives. However, existing methods are typically limited to single-modality conditions, e.g., text or category labels, restricting their applicability in real-world scenarios that demand image synthesis from diverse forms of control. In this work, we present OmniGen-AR, the first unified autoregressive framework for Any-to-Image generation. By discretizing various visual conditions through a shared visual tokenizer and text prompts with a text tokenizer, OmniGen-AR supports a broad spectrum of conditional inputs within a single model, including text (text-to-image generation), spatial signals (segmentation-to-image and depth-to-image), and visual context (image editing, frame prediction, and text-to-video generation). To mitigate the risk of information leakage from condition tokens to content tokens, we introduce Disentangled Causal Attention (DCA), which separates the full-sequence causal mask into condition causal attention and content causal attention. It serves as a training-time regularizer without affecting standard next-token prediction during inference. With this design, OmniGen-AR achieves new state-of-the-art results across a range of benchmarks, e.g., 0.63 on GenEval and 80.02 on VBench, demonstrating its effectiveness in flexible and high-fidelity visual generation.
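The abstract describes DCA only at a high level: the full-sequence causal mask is split into a condition causal block and a content causal block. The sketch below is one plausible interpretation of that split, not the paper's actual implementation; the function name `disentangled_causal_mask` and the exact cross-block attention policy (content tokens see all condition tokens, condition tokens never see content tokens) are assumptions made for illustration.

```python
import numpy as np

def disentangled_causal_mask(n_cond: int, n_cont: int) -> np.ndarray:
    """Boolean attention mask (True = query may attend to key).

    Hypothetical reading of DCA from the abstract alone:
      - condition tokens attend causally among themselves only;
      - content tokens attend to all condition tokens plus
        causally to earlier content tokens.
    """
    n = n_cond + n_cont
    mask = np.zeros((n, n), dtype=bool)
    # Condition block: causal attention restricted to condition tokens.
    mask[:n_cond, :n_cond] = np.tril(np.ones((n_cond, n_cond), dtype=bool))
    # Content block: full visibility of the conditioning signal...
    mask[n_cond:, :n_cond] = True
    # ...and causal attention within the content tokens themselves.
    mask[n_cond:, n_cond:] = np.tril(np.ones((n_cont, n_cont), dtype=bool))
    return mask

# For comparison, the standard full-sequence causal mask the paper
# says DCA decomposes:
full_causal = np.tril(np.ones((7, 7), dtype=bool))
dca = disentangled_causal_mask(n_cond=3, n_cont=4)
```

Under this reading, DCA is strictly more restrictive than the full causal mask only in the condition rows (conditions can no longer peek at content positions), which matches the abstract's framing of DCA as a training-time regularizer against condition-to-content leakage while leaving content-side next-token prediction untouched.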
Context
- Venue
- Annual Conference on Neural Information Processing Systems