Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset.
These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training.
We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data.
With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
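As a rough illustration of the single-stream formulation (not the paper's actual implementation), the sketch below concatenates text tokens and discrete image tokens into one sequence and trains a decoder-only transformer with a next-token objective. All module names, vocabulary sizes, and sequence lengths here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextImageTransformer(nn.Module):
    """Minimal sketch: one autoregressive transformer over a joint
    text+image token stream. Sizes are placeholders, not the paper's."""
    def __init__(self, text_vocab=16384, image_vocab=8192,
                 text_len=256, image_len=1024,
                 d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        # Image token ids are offset past the text vocabulary so both
        # modalities share one embedding table and one output softmax.
        self.embed = nn.Embedding(text_vocab + image_vocab, d_model)
        self.pos = nn.Embedding(text_len + image_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, text_vocab + image_vocab)

    def forward(self, tokens):
        # tokens: (batch, seq_len) ids from the concatenated stream
        b, t = tokens.shape
        x = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        # Causal mask: each position attends only to earlier positions.
        mask = torch.triu(
            torch.full((t, t), float('-inf'), device=tokens.device), diagonal=1)
        return self.head(self.blocks(x, mask=mask))

# Training step: next-token prediction over the single text+image stream.
model = TextImageTransformer()
text = torch.randint(0, 16384, (2, 256))                  # text token ids
image = torch.randint(16384, 16384 + 8192, (2, 1024))     # offset image token ids
stream = torch.cat([text, image], dim=1)
logits = model(stream[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                       stream[:, 1:].reshape(-1))
```

In this view there is no separate text encoder or image decoder network during sequence modeling: the transformer simply treats image tokens (assumed here to come from a pretrained discrete codebook) as additional vocabulary entries appended after the caption tokens.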
Authors
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever