DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model
Recent 3D generative models have achieved remarkable performance in
synthesizing high-resolution photorealistic images with view consistency and
detailed 3D shapes, but training them for diverse domains is challenging since
it requires massive numbers of training images along with their camera distribution
information.
Text-guided domain adaptation methods have shown impressive performance in
converting a 2D generative model trained on one domain into models for other
domains with different styles by leveraging CLIP (Contrastive
Language-Image Pre-training), rather than collecting massive datasets for those
domains. However, one drawback of these methods is that the sample diversity of the
original generative model is not well preserved in the domain-adapted
generative models due to the deterministic nature of the CLIP text encoder.
Text-guided domain adaptation will be even more challenging for 3D generative
models not only because of catastrophic diversity loss, but also because of
inferior text-image correspondence and poor image quality. Here we propose
DATID-3D, a domain adaptation method tailored for 3D generative models using
text-to-image diffusion models that can synthesize diverse images per text
prompt without collecting additional images and camera information for the
target domain. Unlike 3D extensions of prior text-guided domain adaptation
methods, our novel pipeline fine-tunes a state-of-the-art 3D generator trained
on the source domain to synthesize high-resolution, multi-view-consistent
images in text-guided target domains without additional data, outperforming
existing text-guided domain adaptation methods in diversity and text-image
correspondence. Furthermore, we propose and demonstrate diverse 3D image
manipulations, such as one-shot instance-selected adaptation and single-view
manipulated 3D reconstruction, to fully exploit the diversity afforded by text prompts.
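As a rough illustration of the idea described above (a minimal sketch, not the authors' released code), the snippet below shows how a text-to-image diffusion model, here Stable Diffusion via the diffusers img2img pipeline, could translate pose-annotated renders from a pretrained source-domain 3D generator into a text-specified target domain, producing a pseudo dataset for fine-tuning. The helpers `load_eg3d_generator`, `sample_camera_pose`, `synthesize_pil`, and `finetune_generator` are hypothetical placeholders standing in for a source 3D GAN (e.g., EG3D) and its training loop.

```python
# Minimal sketch (assumptions noted above): translate multi-view renders from a
# pretrained source-domain 3D generator into a text-specified target domain with
# an image-to-image diffusion pipeline, keep the camera labels, then fine-tune
# the 3D generator on the resulting pose-annotated pseudo dataset.
import torch
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image diffusion model used as a one-to-many image translator.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

generator_3d = load_eg3d_generator("ffhq512.pkl")  # hypothetical: pretrained EG3D-like generator
prompt = "a portrait in the style of a 3D animation character"

pseudo_dataset = []
for _ in range(1000):
    z = torch.randn(1, 512, device=device)          # latent code of the source generator
    cam = sample_camera_pose()                      # hypothetical: pose from the source camera prior
    src_img = synthesize_pil(generator_3d, z, cam)  # hypothetical: PIL render at that pose

    # Perturb the render with noise and denoise it toward the text prompt; one
    # render can yield many different target-domain samples, preserving diversity.
    tgt_img = pipe(prompt=prompt, image=src_img,
                   strength=0.7, guidance_scale=7.5).images[0]
    pseudo_dataset.append((tgt_img, cam))           # camera label is reused as-is

# Adversarially fine-tune the 3D generator on the pose-annotated pseudo dataset
# so that it synthesizes view-consistent images in the target domain.
finetune_generator(generator_3d, pseudo_dataset)    # hypothetical training loop
```

Because the camera pose of each render is carried over unchanged to its diffusion-translated counterpart, no additional images or camera distribution information need to be collected for the target domain.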