Image Completion via Inference in Deep Generative Models
We consider image completion from the perspective of amortized inference in
an image generative model. We leverage recent state-of-the-art variational
auto-encoder architectures that have been shown to produce photo-realistic
natural images at non-trivial resolutions. Through amortized inference in such
a model, we can train neural artifacts that produce diverse, realistic image
completions even when the vast majority of an image is missing. We demonstrate
superior sample quality and diversity compared to prior art on the CIFAR-10 and
FFHQ-256 datasets. We conclude by describing and demonstrating an application
that requires an in-painting model with the capabilities ours exhibits: the use
of Bayesian optimal experimental design to select the most informative sequence
of small field-of-view x-rays for chest pathology detection.
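To make the setup concrete, below is a minimal, illustrative sketch of amortized inference for image completion: a small conditional VAE whose inference network sees only the observed pixels and the mask, and whose decoder proposes full-image completions. The CompletionVAE class, layer sizes, flattened 32x32 inputs, and uniform-random mask are assumptions for illustration only, not the architecture used in this work (which builds on a much larger state-of-the-art VAE).

```python
# Minimal sketch (not the paper's architecture): a conditional VAE that
# amortizes posterior inference over image completions. All sizes, names,
# and the mask distribution are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompletionVAE(nn.Module):
    def __init__(self, image_dim=3 * 32 * 32, latent_dim=64, hidden=512):
        super().__init__()
        # Inference network q(z | observed pixels, mask): amortizes inference
        # so that a single forward pass conditions on any pattern of missingness.
        self.encoder = nn.Sequential(
            nn.Linear(2 * image_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )
        # Generative network p(x | z): proposes completions of the full image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, image_dim),
        )

    def forward(self, x, mask):
        x_obs = x * mask                                   # zero out missing pixels
        h = self.encoder(torch.cat([x_obs, mask], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        x_hat = self.decoder(z)
        # Reconstruct the full image; the KL term keeps the amortized posterior
        # close to the prior so sampling z yields diverse completions.
        recon = F.mse_loss(x_hat, x, reduction="mean")
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl, x_hat


if __name__ == "__main__":
    model = CompletionVAE()
    x = torch.rand(8, 3 * 32 * 32)                # toy batch of flattened 32x32 images
    mask = (torch.rand_like(x) < 0.25).float()    # observe roughly 25% of pixels
    loss, completion = model(x, mask)
    loss.backward()
    print(loss.item(), completion.shape)
```

Sampling several values of z for a fixed observed region is what yields the diverse completions referred to above; the training loss shown here is a standard ELBO stand-in rather than the objective used in the paper.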