Image Deblurring with Domain Generalizable Diffusion Models
Diffusion Probabilistic Models (DPMs) have recently been employed for image deblurring. DPMs are trained via a stochastic denoising process that maps Gaussian noise to a high-quality image, conditioned on the blurry input through concatenation.
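As a rough illustration of this conditioning scheme (not the authors' code), the sketch below concatenates the noisy sample with the blurry input channel-wise before each denoising step; `ConditionalDenoiser` and the `unet` backbone are hypothetical placeholders for a standard diffusion UNet.

```python
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, unet: nn.Module):
        super().__init__()
        self.unet = unet  # backbone expecting 6 input channels (noisy + blurry)

    def forward(self, x_t, blurry, t):
        # x_t: noisy sample at timestep t, shape (B, 3, H, W)
        # blurry: conditioning image, shape (B, 3, H, W)
        inp = torch.cat([x_t, blurry], dim=1)  # (B, 6, H, W)
        return self.unet(inp, t)  # predicts the noise residual
```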
Despite the high quality of their generated samples, image-conditioned Diffusion Probabilistic Models (icDPMs) rely on synthetic paired training data (in-domain), and their robustness to unseen real-world images (out-of-domain) remains unclear. In this work, we investigate the generalization ability of icDPMs in deblurring and propose a simple yet effective guidance that significantly alleviates artifacts and improves out-of-distribution performance. Specifically, we first extract a multiscale domain-generalizable representation from the input image that discards domain-specific information while preserving the underlying image structure.
The representation is then added to the feature maps of the conditional diffusion model as extra guidance that helps improve generalization.
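The abstract does not give implementation details, but this additive guidance could be realized along the lines of the following hypothetical PyTorch sketch, which projects the representation to each scale's channel width and adds it to the corresponding UNet feature map; all module names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleGuidance(nn.Module):
    def __init__(self, rep_dim, feat_dims):
        super().__init__()
        # one 1x1 projection per UNet scale to match channel widths
        self.proj = nn.ModuleList(
            nn.Conv2d(rep_dim, d, kernel_size=1) for d in feat_dims
        )

    def forward(self, rep, feats):
        # rep: domain-generalizable representation, (B, rep_dim, H, W)
        # feats: list of UNet feature maps at decreasing resolutions
        guided = []
        for proj, f in zip(self.proj, feats):
            # resize the representation to the feature map's resolution
            r = F.interpolate(rep, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            guided.append(f + proj(r))  # additive guidance
        return guided
```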
For benchmarking, we focus on out-of-distribution performance by applying a model trained on a single dataset to three diverse external test sets. The effectiveness of the proposed formulation is demonstrated by consistent improvements over the standard icDPM, as well as state-of-the-art perceptual quality and competitive distortion metrics relative to existing methods.