
CRDNN-BiLSTM Knowledge Distillation Model Towards Enhancing the Automatic Speech Recognition

Published: 06 March 2024

Abstract

Numerous automatic speech recognition (ASR) models have been developed in recent years, but most of them are large, take a long time to train, and are difficult to deploy on devices. Knowledge distillation has been used to reduce the size of learning models while maintaining their performance across a range of applications. Accordingly, this paper proposes knowledge distillation for an ASR model to make the training process simpler and faster than in existing models. The knowledge gained from training a teacher acoustic model is transferred to a student acoustic model to improve the student's performance, allowing ASR models to be trained effectively with far less effort. Graphical results show that the framework trains efficiently on the audio input, and the experimental results indicate that the proposed model employing knowledge distillation is effective for speech recognition, achieving a Word Error Rate of 1.21% on the LibriSpeech dev-clean set and 2.23% on the LibriSpeech test-clean set.
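
The central mechanism described in the abstract, transferring the knowledge of a trained teacher acoustic model to a smaller student acoustic model, can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the authors' implementation: the student is trained on a weighted combination of the usual hard-label objective (for example the CTC loss named in the author tags) and a soft loss that matches the teacher's temperature-softened output distribution. The names `teacher_model`, `student_model`, `ctc_loss_fn`, `alpha`, and `temperature` are assumed placeholders.

```python
# A minimal sketch of teacher-student knowledge distillation for an
# acoustic model, assuming PyTorch. `teacher_model`, `student_model`,
# `ctc_loss_fn`, `alpha`, and `temperature` are illustrative placeholders,
# not names taken from the paper.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)


def training_step(student_model, teacher_model, features, targets,
                  ctc_loss_fn, alpha=0.5, temperature=2.0):
    """Combine the supervised (hard-label) loss with the distillation (soft) loss."""
    with torch.no_grad():                 # the trained teacher stays frozen
        teacher_logits = teacher_model(features)
    student_logits = student_model(features)
    hard_loss = ctc_loss_fn(student_logits, targets)    # e.g. a CTC objective
    soft_loss = distillation_loss(student_logits, teacher_logits, temperature)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In this formulation, tuning `alpha` and `temperature` trades off how strongly the student imitates the teacher's output distribution versus the ground-truth transcripts.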



Published In

SN Computer Science, Volume 5, Issue 3
March 2024
750 pages

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 06 March 2024
Accepted: 04 January 2024
Received: 22 December 2022

Author Tags

  1. Knowledge distillation
  2. Connectionist temporal classification
  3. Automatic speech recognition
  4. Acoustic model
  5. Teacher model
  6. Student model

Qualifiers

  • Research-article

Funding Sources

  • DST-ICPS
