Score-based generative models (SGMs) have recently demonstrated impressive
results in terms of both sample quality and distribution coverage. However,
they are usually applied directly in data space and often require thousands of
network evaluations for sampling. Here, we propose the Latent Score-based
Generative Model (LSGM), a novel approach that trains SGMs in a latent space,
relying on the variational autoencoder framework. Moving from data to latent
space allows us to train more expressive generative models, apply SGMs to
non-continuous data, and learn smoother SGMs in a smaller space, resulting in
fewer network evaluations and faster sampling. To enable training LSGMs
end-to-end in a scalable and stable manner, we (i) introduce a new
score-matching objective suitable to the LSGM setting, (ii) propose a novel
parameterization of the score function that allows the SGM to focus on the mismatch
between the target distribution and a simple Normal distribution, and (iii)
analytically derive multiple techniques for variance reduction of the training
objective. LSGM obtains a state-of-the-art FID score of 2.10 on CIFAR-10,
outperforming all existing generative models on this dataset. On
CelebA-HQ-256, LSGM is on a par with previous SGMs in sample quality while
outperforming them in sampling time by two orders of magnitude. In modeling
binary images, LSGM achieves state-of-the-art likelihood on the binarized
OMNIGLOT dataset.
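
The abstract only names the techniques; as a rough illustration of point (ii), the sketch below writes the latent score as the analytic standard-Normal score plus a learned residual, so the network only has to model the mismatch from N(0, I). This is a hedged simplification, not the authors' exact objective or architecture: the plain denoising score-matching loss, the cosine-style noise schedule, and the names residual_net and encoder are illustrative assumptions.

    import math
    import torch
    import torch.nn as nn

    class MixedScore(nn.Module):
        """Score = analytic standard-Normal score + learned residual correction."""
        def __init__(self, residual_net: nn.Module):
            super().__init__()
            self.residual_net = residual_net  # assumed network over (z_t, t)

        def forward(self, z_t, t):
            normal_score = -z_t  # grad log N(z_t; 0, I) = -z_t
            return normal_score + self.residual_net(z_t, t)

    def latent_dsm_loss(score_model, encoder, x):
        """Simplified denoising score matching on VAE latents (not the paper's exact objective)."""
        z0 = encoder(x)  # latent code from the VAE encoder
        # sample diffusion times away from 0 to keep sigma strictly positive
        t = torch.rand(z0.shape[0], device=z0.device) * 0.999 + 1e-3
        alpha = torch.cos(0.5 * math.pi * t)  # assumed schedule, for illustration only
        sigma = torch.sin(0.5 * math.pi * t)
        while alpha.dim() < z0.dim():
            alpha, sigma = alpha.unsqueeze(-1), sigma.unsqueeze(-1)
        eps = torch.randn_like(z0)
        z_t = alpha * z0 + sigma * eps        # diffuse the latent toward N(0, I)
        score = score_model(z_t, t)
        target = -eps / sigma                 # conditional score of q(z_t | z0)
        return ((score - target) ** 2).mean()

In this simplified view, the residual network is only responsible for the part of the score not already explained by the standard Normal prior, which is the intuition behind the "mismatch" wording in point (ii); the actual end-to-end LSGM training objective and variance-reduction techniques are developed in the paper itself.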