SINE: SINgle Image Editing with Text-to-Image Diffusion Models
Recent works on diffusion models have demonstrated a strong capability for
conditional image generation, e.g., text-guided image synthesis. Such success
has inspired many efforts to use large-scale pre-trained diffusion models to
tackle a challenging problem: real image editing. Existing works in this area
learn a unique textual token corresponding to several images containing the
same object. However, under many circumstances only one image is available,
such as the painting Girl with a Pearl Earring. Fine-tuning a pre-trained
diffusion model on a single image with existing methods causes severe
overfitting, and information leakage from the pre-trained model prevents the
edited result from keeping the content of the given image while creating the
new features described by the language guidance.
This work aims to address the problem of single-image editing. We propose a
novel model-based guidance built upon classifier-free guidance, so that the
knowledge from the model fine-tuned on a single image can be distilled into
the pre-trained diffusion model, enabling content creation even with only one
given image. Additionally, we propose a patch-based fine-tuning strategy that
effectively helps the model generate images of arbitrary resolution (see the
sketches below). We provide extensive
experiments to validate the design choices of our approach and show promising
editing capabilities, including style change, content addition, and object
manipulation. The code is available for research purposes at
this https URL.
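
A minimal sketch of how the model-based guidance could be combined with
classifier-free guidance during sampling, written in PyTorch style. The
function below interpolates the noise prediction of the model fine-tuned on
the single source image with that of the frozen pre-trained model before
applying the usual classifier-free extrapolation; the module names, the
diffusers-style call signature, and the weights s and v are illustrative
assumptions rather than the paper's exact formulation.

import torch

@torch.no_grad()
def model_based_guidance(x_t, t, finetuned_unet, pretrained_unet,
                         src_emb, edit_emb, null_emb, s=7.5, v=0.7):
    # Unconditional prediction from the frozen pre-trained model.
    eps_uncond = pretrained_unet(x_t, t, encoder_hidden_states=null_emb).sample
    # The single-image fine-tuned model preserves the content of the source image.
    eps_single = finetuned_unet(x_t, t, encoder_hidden_states=src_emb).sample
    # The pre-trained model supplies the new features described by the edit prompt.
    eps_edit = pretrained_unet(x_t, t, encoder_hidden_states=edit_emb).sample
    # Distill both into one conditional prediction (v is an assumed mixing weight),
    # then apply standard classifier-free guidance with scale s.
    eps_cond = v * eps_single + (1.0 - v) * eps_edit
    return eps_uncond + s * (eps_cond - eps_uncond)

The returned noise estimate would replace the usual conditional prediction
inside a standard sampler (e.g., DDIM), leaving the rest of the sampling loop
unchanged.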
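
One plausible reading of the patch-based fine-tuning is that the model is
fine-tuned on random crops of the single image rather than only the full
frame, so it is not tied to one training resolution. The helper below sketches
such a crop sampler; the crop-size bounds, the target resolution, and the
surrounding training loop are assumptions for illustration, not the paper's
exact procedure.

import random
import torchvision.transforms.functional as TF

def sample_patch(image, min_size=256, max_size=512, target=512):
    # image: a CHW tensor holding the single training image.
    _, h, w = image.shape
    size = random.randint(min_size, min(max_size, h, w))
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    patch = TF.crop(image, top, left, size, size)
    # Resize the crop to the resolution expected by the diffusion model.
    return TF.resize(patch, [target, target], antialias=True)

Each fine-tuning step would then apply the standard denoising objective to
such a patch; at inference time the model can be sampled at a resolution other
than the one it was trained on.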