
Applying Wav2vec2.0 to Speech Recognition in Various Low-resource Languages



Several domains have corresponding widely used feature extractors, such as ResNet, BERT, and GPT-x. These models are usually pre-trained on large amounts of unlabeled data by self-supervision and can be applied effectively to downstream tasks. In the speech domain, wav2vec2.0 has begun to show its powerful representation ability and the feasibility of ultra-low-resource speech recognition on the Librispeech corpus, which belongs to the audiobook domain. However, wav2vec2.0 has not been examined on real spoken scenarios or on languages other than English. To verify its universality across languages, we apply pre-trained models to low-resource speech recognition tasks in various spoken languages. We achieve relative improvements of more than 20% in six languages compared with previous work; among these languages, English achieves a gain of 52.4%. Moreover, coarse-grained modeling units, such as subwords or characters, achieve better results than fine-grained modeling units, such as phones or letters.
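The pipeline the abstract describes, decoding speech with a pre-trained wav2vec2.0 model, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the Hugging Face `transformers` library and the public `facebook/wav2vec2-base-960h` checkpoint (an English model fine-tuned on Librispeech with character targets), with simple greedy CTC decoding.

```python
# Minimal sketch (assumed setup, not the paper's code): transcribe raw
# audio with a pre-trained wav2vec2.0 CTC model via Hugging Face
# `transformers` and greedy decoding.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Public English checkpoint fine-tuned on Librispeech; the paper instead
# fine-tunes pre-trained models on various low-resource languages.
MODEL_ID = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def transcribe(waveform):
    """waveform: 1-D float array of raw 16 kHz mono audio."""
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits  # (batch, frames, vocab)
    # Greedy CTC decoding: argmax per frame; batch_decode collapses
    # repeated tokens and removes CTC blank symbols.
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```

The vocabulary of the CTC output head is where the abstract's modeling-unit comparison enters: this checkpoint predicts characters, and swapping in a coarser subword vocabulary (or a finer phone/letter one) changes the granularity of the fine-tuning targets.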

