Geometry-Free View Synthesis: Transformers and no 3D Priors
Is a geometric model required to synthesize novel views from a single image?
Being bound to local convolutions, CNNs need explicit 3D biases to model
geometric transformations. In contrast, we demonstrate that a transformer-based
model can synthesize entirely novel views without any hand-engineered 3D
biases. This is achieved by (i) a global attention mechanism for implicitly
learning long-range 3D correspondences between source and target views, and
(ii) a probabilistic formulation necessary to capture the ambiguity inherent in
predicting novel views from a single image, thereby overcoming the limitations
of previous approaches that are restricted to relatively small viewpoint
changes. We evaluate various ways to integrate 3D priors into a transformer
architecture. However, our experiments show that no such geometric priors are
required and that the transformer is capable of implicitly learning 3D
relationships between images. Furthermore, this approach outperforms the state
of the art in terms of visual quality while covering the full distribution of
possible realizations. Code is available at this https URL.
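
To make the two ingredients above concrete, the following is a minimal PyTorch sketch, not the authors' released implementation. It assumes source and target views have already been encoded into discrete tokens (e.g. by a learned codebook), conditions a decoder-only transformer on the source tokens plus an embedding of the relative camera transform, and autoregressively outputs a categorical distribution over target-view tokens. The global attention lets every target position attend to all source positions (the implicitly learned correspondences), and the per-token categorical distributions provide the probabilistic formulation. The class name, the dimensions, and the cam_dim=12 pose encoding are illustrative assumptions.

import torch
import torch.nn as nn

class GeometryFreeSynthesizer(nn.Module):
    """Sketch: decoder-only transformer for view synthesis without hand-engineered 3D biases."""
    def __init__(self, vocab_size=1024, dim=512, depth=6, heads=8,
                 num_source_tokens=256, num_target_tokens=256, cam_dim=12):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Parameter(
            torch.zeros(1, num_source_tokens + 1 + num_target_tokens, dim))
        self.cam_proj = nn.Linear(cam_dim, dim)   # embeds the relative camera transform
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, vocab_size)
        self.num_cond = num_source_tokens + 1     # source tokens plus one camera token

    def forward(self, src_tokens, cam, tgt_tokens):
        # src_tokens: (B, S) discrete codes of the source view
        # cam:        (B, cam_dim) flattened relative camera pose (e.g. rotation and translation)
        # tgt_tokens: (B, T) discrete codes of the target view (teacher forcing during training)
        x = torch.cat([self.token_emb(src_tokens),
                       self.cam_proj(cam).unsqueeze(1),
                       self.token_emb(tgt_tokens)], dim=1)
        x = x + self.pos_emb[:, :x.size(1)]
        # Global attention with a causal restriction: every position attends to the full
        # conditioning (source view + camera), target positions also see earlier targets only.
        L = x.size(1)
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), 1)
        mask[:, :self.num_cond] = False
        x = self.blocks(x, mask=mask)
        # Categorical distribution over the next target token -> probabilistic formulation.
        logits = self.head(x[:, self.num_cond - 1:-1])
        return logits  # (B, T, vocab_size); train with cross-entropy against tgt_tokens

# Illustrative usage with random tokens:
model = GeometryFreeSynthesizer()
logits = model(torch.randint(0, 1024, (2, 256)), torch.randn(2, 12),
               torch.randint(0, 1024, (2, 256)))  # -> (2, 256, 1024)

At sampling time one would draw target tokens one at a time from these distributions and decode them back to an image, which is what allows the model to cover the full distribution of plausible novel views rather than producing a single blurred average.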