This work offers a novel theoretical perspective on why, despite numerous
attempts, adversarial approaches to generative modeling (e.g., GANs) have not
been as popular for certain generation tasks, particularly sequential tasks
such as Natural Language Generation, as they have been for others, such as
Computer Vision. In particular, on sequential data such as text,
maximum-likelihood approaches are used far more widely than GANs. We show
that, while it may seem that maximizing likelihood is inherently different
from minimizing distinguishability, this distinction is largely artificial and
holds only for limited models. We argue that minimizing KL-divergence (i.e.,
maximizing likelihood) is a more efficient approach to minimizing the same
distinguishability criteria that adversarial models seek to optimize.
Reductions show that minimizing distinguishability can be seen as simply
boosting likelihood for certain families of models, including n-gram models
and neural networks with a softmax output layer. To achieve a full
polynomial-time reduction, we consider a novel next-token distinguishability
model.
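The likelihood-distinguishability connection summarized above rests on two
standard identities, sketched here only as background and in illustrative
notation (data distribution $p_{\mathrm{data}}$, model $p_\theta$,
discriminator $D$); they are not the paper's specific reductions. Maximizing
likelihood is equivalent to minimizing a KL divergence:
\[
\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log p_\theta(x)\bigr]
  \;=\; -\,\mathrm{KL}\bigl(p_{\mathrm{data}} \,\|\, p_\theta\bigr)
        \;-\; H\bigl(p_{\mathrm{data}}\bigr),
\]
and since the data entropy $H(p_{\mathrm{data}})$ does not depend on $\theta$,
\[
\arg\max_\theta \, \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log p_\theta(x)\bigr]
  \;=\; \arg\min_\theta \, \mathrm{KL}\bigl(p_{\mathrm{data}} \,\|\, p_\theta\bigr).
\]
On the adversarial side, the standard GAN analysis (Goodfellow et al., 2014)
shows that for a fixed model $p_\theta$ the optimal discriminator attains
\[
\max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{x \sim p_\theta}\bigl[\log\bigl(1 - D(x)\bigr)\bigr]
  \;=\; 2\,\mathrm{JSD}\bigl(p_{\mathrm{data}} \,\|\, p_\theta\bigr) \;-\; \log 4,
\]
so at the optimal discriminator the generator's adversarial objective is
itself a divergence-based distinguishability measure.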
Authors
David Alvarez-Melis, Vikas Garg, Adam Tauman Kalai