Shefali Garg
Improving Speech Recognition for African American English With Audio Classification
Automatic speech recognition (ASR) systems have been shown to have large quality disparities between the language varieties they are intended or expected to recognize. One way to mitigate this is to train or fine-tune models with more repr…
UserLibri: A Dataset for ASR Personalization Using Only Text
Personalization of speech models on mobile devices (on-device personalization) is an active area of research, but more often than not, mobile devices have more text-only data than paired audio-text data. We explore training a personalized …
Large-scale ASR Domain Adaptation using Self- and Semi-supervised Learning
Self- and semi-supervised learning methods have been actively investigated to reduce the need for labeled training data or to enhance model performance. However, these approaches mostly focus on in-domain performance for public datasets. In this study, we…
Incremental Layer-wise Self-Supervised Learning for Efficient Speech Domain Adaptation On Device
Streaming end-to-end speech recognition models have been widely applied to mobile devices and show significant improvement in efficiency. These models are typically trained on the server using transcribed speech data. However, the server d…
Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, pre-trained models from large related da…