Text-to-image diffusion models for GAN domain adaptation
Diffusion Guided Domain Adaptation of Image Generators
We show that classifier-free guidance can be leveraged as a critic, enabling generators to distill knowledge from large-scale text-to-image diffusion models and efficiently shift into new domains indicated by text prompts, without access to ground-truth samples from the target domains. The proposed method is the first attempt at incorporating large-scale pre-trained diffusion models and distillation sampling for text-driven image generator domain adaptation, and it achieves quality previously beyond reach. We demonstrate the effectiveness and controllability of our method through extensive experiments, and extend our work to 3D-aware style-based generators and DreamBooth guidance.
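To make the core idea concrete, below is a minimal sketch of a score-distillation-style loss in which a frozen diffusion model with classifier-free guidance acts as the critic for a generator being adapted. This is an illustrative reconstruction, not the paper's released code: `unet`, `generator`, `alphas_cumprod`, and the embedding tensors are placeholder names, and details such as the timestep range and guidance scale are assumptions.

```python
import torch
import torch.nn.functional as F


def cfg_distillation_loss(unet, alphas_cumprod, latents,
                          text_emb, uncond_emb, guidance_scale=7.5):
    """Distillation loss using a frozen diffusion U-Net as critic.

    unet(x_t, t, emb) is assumed to predict the noise eps added at
    timestep t; alphas_cumprod holds the cumulative noise-schedule
    products. Gradients flow only into `latents` (the generator output),
    never into the U-Net.
    """
    bsz = latents.shape[0]
    # Sample a random mid-range timestep per example (range is an assumption).
    t = torch.randint(20, 980, (bsz,), device=latents.device)
    noise = torch.randn_like(latents)
    alpha_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = alpha_t.sqrt() * latents + (1.0 - alpha_t).sqrt() * noise

    with torch.no_grad():
        eps_uncond = unet(noisy, t, uncond_emb)
        eps_text = unet(noisy, t, text_emb)
        # Classifier-free guidance: push the noise prediction toward
        # the text prompt describing the target domain.
        eps = eps_uncond + guidance_scale * (eps_text - eps_uncond)

    # Score-distillation gradient: (eps - noise) is applied to the
    # generator output; the detached target makes d(loss)/d(latents)
    # equal to that gradient without backpropagating through the U-Net.
    grad = eps - noise
    target = (latents - grad).detach()
    return 0.5 * F.mse_loss(latents, target, reduction="sum") / bsz
```

In a training loop, one would sample generator outputs (e.g. from a style-based generator), encode them into the diffusion model's latent space if needed, compute this loss against the target-domain prompt, and update only the generator's parameters, keeping the diffusion model frozen throughout.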