ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement
Prior works on improving speech quality with visual input typically study
each type of auditory distortion separately (e.g., separation, inpainting,
video-to-speech) and present tailored algorithms. This paper proposes to unify
these subjects and study Generalized Speech Enhancement, where the goal is not
to reconstruct the exact reference clean signal, but to improve
certain aspects of speech. In particular, this paper concerns intelligibility,
quality, and video synchronization. We cast the problem as audio-visual speech
resynthesis, which is composed of two steps: pseudo audio-visual speech
recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and
P-TTS are connected by discrete units derived from a self-supervised speech
model. Moreover, we utilize a self-supervised audio-visual speech model to
initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first
high-quality model for in-the-wild video-to-speech synthesis and achieves
superior performance on all LRS3 audio-visual enhancement tasks with a single
model. To demonstrate its applicability in the real world, ReVISE is also
evaluated on EasyCom, an audio-visual benchmark collected under challenging
acoustic conditions with only 1.6 hours of training data. On EasyCom, too, ReVISE
greatly suppresses noise and improves quality. Project page:
this https URL
Authors: Wei-Ning Hsu, Tal Remez, Bowen Shi, Jacob Donley, Yossi Adi
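The following is a minimal, illustrative sketch of the two-stage resynthesis pipeline the abstract describes: a P-AVSR stage predicting discrete speech units from fused audio-visual features, and a P-TTS stage that resynthesizes a waveform from those units. All names, dimensions, and architectures here (PseudoAVSR, PseudoTTS, the Transformer encoder standing in for an AV-HuBERT-style initialization, the transposed-convolution stack standing in for a unit vocoder) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the P-AVSR -> discrete units -> P-TTS pipeline.
# All module choices are illustrative assumptions, not the released ReVISE code.
import torch
import torch.nn as nn


class PseudoAVSR(nn.Module):
    """P-AVSR: maps fused audio-visual features to per-frame unit logits."""

    def __init__(self, feat_dim=768, num_units=1000):
        super().__init__()
        # Stand-in for a self-supervised AV encoder used for initialization;
        # here a plain Transformer encoder (hypothetical configuration).
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.unit_head = nn.Linear(feat_dim, num_units)

    def forward(self, av_features):            # (B, T, feat_dim)
        hidden = self.encoder(av_features)
        return self.unit_head(hidden)           # (B, T, num_units)


class PseudoTTS(nn.Module):
    """P-TTS: resynthesizes a waveform from predicted discrete units."""

    def __init__(self, num_units=1000, emb_dim=128, upsample=320):
        super().__init__()
        self.embed = nn.Embedding(num_units, emb_dim)
        # Stand-in for a unit-based vocoder (assumed architecture).
        self.vocoder = nn.Sequential(
            nn.ConvTranspose1d(emb_dim, 64, kernel_size=upsample,
                               stride=upsample),
            nn.GELU(),
            nn.Conv1d(64, 1, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, units):                   # (B, T) integer unit ids
        emb = self.embed(units).transpose(1, 2)  # (B, emb_dim, T)
        return self.vocoder(emb).squeeze(1)      # (B, T * upsample) waveform


# Usage: predict units from dummy fused AV features, then vocode them.
p_avsr, p_tts = PseudoAVSR(), PseudoTTS()
av_feats = torch.randn(2, 50, 768)               # batch of 2, 50 frames
units = p_avsr(av_feats).argmax(dim=-1)          # greedy per-frame units
waveform = p_tts(units)
print(waveform.shape)                            # torch.Size([2, 16000])
```

The key design point the sketch mirrors is the discrete-unit bottleneck: the two stages communicate only through unit ids, so the enhancement objective becomes unit prediction rather than exact waveform reconstruction.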