Accent and Speaker Disentanglement in Many-to-many Voice Conversion
Zhichao Wang, Wenshuo Ge, Xiong Wang, Shan Yang, Wendong Gan, Haitao Chen, Hai Li, Lei Xie, Xiulin Li
This paper proposes a joint voice and accent conversion
approach, which can convert an arbitrary source speaker's voice to that of a
target speaker with a non-native accent. This problem is challenging because
each target speaker only has training data in a native accent, so we need to
disentangle the accent and speaker information during conversion model
training and recombine them at the conversion stage. In our
recognition-synthesis conversion framework, we solve this problem with two
proposed tricks. First, we use accent-dependent speech recognizers to obtain
bottleneck (BN) features for speakers with different accents. This aims to
wipe out factors other than the linguistic information from the BN features
used for conversion model training.
Second, we propose to use adversarial training to better disentangle the
speaker and accent information in our encoder-decoder based conversion model.
Specifically, we attach an auxiliary speaker classifier to the encoder,
trained with an adversarial loss, to remove speaker information from the
encoder output. Experiments show that our approach is superior to the
baseline: the proposed tricks are quite effective in improving accentedness,
while audio quality and speaker similarity are well maintained.
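
To make the second trick concrete, below is a minimal PyTorch sketch of an auxiliary speaker classifier trained adversarially against the encoder, assuming the adversarial loss is realized with a gradient reversal layer (a common choice; the abstract does not specify the exact mechanism). All class names, dimensions, and loss weights are illustrative, not the authors' implementation.

```python
# Minimal sketch: adversarial speaker disentanglement via gradient reversal.
# The GRL passes features through unchanged in the forward pass and negates
# gradients in the backward pass, so the classifier learns to predict the
# speaker while the encoder learns to hide speaker cues.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity forward; scaled, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SpeakerClassifier(nn.Module):
    """Auxiliary classifier on encoder output (hypothetical architecture)."""

    def __init__(self, hidden_dim: int, num_speakers: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_speakers),
        )

    def forward(self, encoder_out: torch.Tensor) -> torch.Tensor:
        # Reversed gradients push the encoder toward speaker-independent
        # representations while the classifier itself improves.
        reversed_feat = GradReverse.apply(encoder_out, self.lambd)
        return self.net(reversed_feat)


# Usage sketch with dummy tensors: encoder_out is (batch, time, hidden_dim).
encoder_out = torch.randn(4, 100, 256, requires_grad=True)
speaker_ids = torch.randint(0, 10, (4,))

classifier = SpeakerClassifier(hidden_dim=256, num_speakers=10)
logits = classifier(encoder_out.mean(dim=1))  # pool over time per utterance
adv_loss = nn.functional.cross_entropy(logits, speaker_ids)
# total_loss = reconstruction_loss + adv_weight * adv_loss  (weight is hypothetical)
adv_loss.backward()  # gradients reaching the encoder are reversed by the GRL
```

In this setup the adversarial speaker loss complements the accent-dependent BN features: the recognizers strip non-linguistic factors at the input, and the reversed gradients strip residual speaker identity from the encoder output.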