Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes
Whilst diffusion probabilistic models can generate high-quality image
content, key limitations remain in terms of both generating high-resolution
imagery and the associated high computational requirements. Recent
Vector-Quantized image models have overcome the limitation of image resolution,
but are prohibitively slow and unidirectional as they generate tokens via
element-wise autoregressive sampling from the prior. By contrast, in this paper
we propose a novel discrete diffusion probabilistic model prior which enables
parallel prediction of Vector-Quantized tokens by using an unconstrained
Transformer architecture as the backbone. During training, tokens are randomly
masked in an order-agnostic manner and the Transformer learns to predict the
original tokens. This parallelism of Vector-Quantized token prediction in turn
facilitates unconditional generation of globally consistent high-resolution and
diverse imagery at a fraction of the computational expense. In this manner, we
can generate images at resolutions exceeding those of the original training set
samples whilst additionally providing per-image likelihood estimates (in a
departure from generative adversarial approaches). Our approach achieves
state-of-the-art results in terms of Density (LSUN Bedroom: 1.51; LSUN
Churches: 1.12; FFHQ: 1.20) and Coverage (LSUN Bedroom: 0.83; LSUN Churches:
0.73; FFHQ: 0.80), and performs competitively on FID (LSUN Bedroom: 3.64; LSUN
Churches: 4.07; FFHQ: 6.11) whilst offering advantages in terms of both
computation and reduced training set requirements.
Authors
Sam Bond-Taylor, Peter Hessey, Hiroshi Sasaki, Toby P. Breckon, Chris G. Willcocks
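
The training procedure summarised in the abstract (randomly masking Vector-Quantized tokens in an order-agnostic manner and having an unconstrained, bidirectional Transformer predict the original tokens) can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style illustration under stated assumptions, not the authors' implementation: the `denoiser` network, the `mask_id` absorbing token, and the 1/t loss weighting are assumptions, and the exact variational objective used in the paper may differ.

```python
# Minimal sketch of one training step for a discrete absorbing diffusion prior
# over VQ tokens. Assumptions: a pre-trained VQ encoder has already produced
# integer token grids, and `denoiser` is a bidirectional Transformer mapping
# token sequences to per-position logits over the codebook.
import torch
import torch.nn.functional as F

def absorbing_diffusion_loss(denoiser, tokens, mask_id, num_timesteps):
    """tokens: LongTensor (batch, seq_len) of VQ codebook indices.
    mask_id: index of the special absorbing [MASK] token."""
    b, n = tokens.shape
    # Sample a timestep per image; t / T gives the fraction of tokens absorbed.
    t = torch.randint(1, num_timesteps + 1, (b,), device=tokens.device)
    mask_prob = t.float() / num_timesteps

    # Randomly absorb (mask) tokens in an order-agnostic manner.
    mask = torch.rand(b, n, device=tokens.device) < mask_prob.unsqueeze(1)
    noised = torch.where(mask, torch.full_like(tokens, mask_id), tokens)

    # The unconstrained Transformer predicts the original tokens at every
    # masked position in parallel.
    logits = denoiser(noised)                          # (b, n, codebook_size)
    loss = F.cross_entropy(logits[mask], tokens[mask], reduction="none")

    # Reweight each masked position by 1/t (an assumed simplification of the
    # discrete diffusion variational bound used in the paper).
    weights = (1.0 / t.float()).repeat_interleave(mask.sum(dim=1))
    return (weights * loss).sum() / (b * n)
```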
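The parallel, non-autoregressive sampling that the abstract contrasts with element-wise autoregressive decoding can likewise be sketched as an iterative unmasking loop: start from a fully absorbed sequence and reveal many positions per step. This is an assumed illustration only; the unmasking schedule, temperature handling, and helper names here are hypothetical rather than taken from the paper.

```python
# Minimal sketch of parallel sampling from the absorbing diffusion prior.
# Assumptions as above; the resulting token grid would be decoded to an image
# with the VQ decoder.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_tokens(denoiser, seq_len, mask_id, num_steps,
                  batch=1, device="cpu", temperature=1.0):
    # Start with every position absorbed into the [MASK] token.
    tokens = torch.full((batch, seq_len), mask_id, dtype=torch.long,
                        device=device)
    for step in range(num_steps):
        still_masked = tokens == mask_id
        logits = denoiser(tokens) / temperature
        probs = F.softmax(logits, dim=-1)
        # Predict all positions in parallel, then commit only a subset.
        sampled = torch.distributions.Categorical(probs=probs).sample()

        remaining_steps = num_steps - step
        for i in range(batch):
            idx = still_masked[i].nonzero(as_tuple=True)[0]
            # Reveal roughly an equal share of the remaining masked positions
            # per step, in a random (order-agnostic) order.
            k = max(1, idx.numel() // remaining_steps) if idx.numel() else 0
            if k:
                chosen = idx[torch.randperm(idx.numel(), device=device)[:k]]
                tokens[i, chosen] = sampled[i, chosen]
    return tokens
```

Because every step predicts all masked positions at once, the number of network evaluations is set by `num_steps` rather than by the sequence length, which is the source of the speed-up over element-wise autoregressive sampling described in the abstract.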