WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
· 2022
· Open Access
· DOI: https://doi.org/10.1109/jstsp.2022.3188113
· OA: W3209984917
Self-supervised learning (SSL) has achieved great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As the speech signal contains multi-faceted information, including speaker identity, paralinguistics, and spoken content, learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising during pre-training. In this way, WavLM not only keeps the speech content modeling capability through masked speech prediction, but also improves its potential on non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias in the Transformer structure to better capture the sequence ordering of the input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark and brings significant improvements on the representative benchmarks of various speech processing tasks. The code and pre-trained models are available at https://aka.ms/wavlm.
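To make the joint masked-prediction-and-denoising objective concrete, below is a minimal PyTorch sketch of how a noisy or overlapped pre-training input might be simulated: a random segment of a clean utterance is mixed with noise or with another utterance at a random SNR, while the prediction targets stay tied to the original clean speech. The function name, parameters, and mixing ranges here are illustrative assumptions, not the exact WavLM recipe.

```python
import torch

def simulate_noisy_input(clean: torch.Tensor,
                         interferer: torch.Tensor,
                         max_mix_ratio: float = 0.5,
                         snr_db_range: tuple = (-5.0, 5.0)) -> torch.Tensor:
    """Illustrative sketch of denoising-style input construction.

    Overlays `interferer` (noise or a second utterance) onto a random
    segment of `clean`. Assumes both are 1-D waveforms and `interferer`
    is at least as long as the mixed segment. Ratios and SNR range are
    assumptions for this sketch, not WavLM's published settings.
    """
    noisy = clean.clone()
    # Pick a random segment covering at most `max_mix_ratio` of the utterance.
    mix_len = int(torch.randint(1, int(len(clean) * max_mix_ratio) + 1, (1,)))
    start = int(torch.randint(0, len(clean) - mix_len + 1, (1,)))
    segment = interferer[:mix_len]
    # Scale the interfering segment to a random SNR within `snr_db_range`.
    snr_db = torch.empty(1).uniform_(*snr_db_range)
    clean_power = clean[start:start + mix_len].pow(2).mean().clamp_min(1e-8)
    seg_power = segment.pow(2).mean().clamp_min(1e-8)
    scale = torch.sqrt(clean_power / (seg_power * 10 ** (snr_db / 10)))
    noisy[start:start + mix_len] = noisy[start:start + mix_len] + scale * segment
    return noisy
```

In the actual pre-training setup described in the abstract, the masked speech prediction loss would still be computed against labels derived from the clean utterance, which is what pushes the model to denoise while modeling content.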