Abstract
One of the most frequently used adaptation techniques is feature Maximum Likelihood Linear Regression (fMLLR). Compared with other adaptation methods, it requires significantly fewer free parameters to be estimated, which makes it well suited for situations with small amounts of adaptation data. However, fMLLR still fails when the adaptation data set is extremely small. Such situations can be handled by a proper initialization of the fMLLR estimation that adds a-priori information. In this paper a novel approach to fMLLR initialization is proposed that involves statistics from speakers acoustically close to the speaker to be adapted. The proposed initialization suitably substitutes the missing adaptation data with similar data from the training database, the fMLLR estimation becomes well-conditioned, and the accuracy of the recognition system increases even with extremely small adaptation data sets.
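To indicate where such a-priori information could enter, the sketch below uses the standard row-wise fMLLR sufficient statistics; the weight \tau and the set S of selected similar speakers are illustrative assumptions, not the exact formulation from the paper. With the transform W = [A b] applied as \hat{\mathbf{o}}_t = \mathbf{W}\boldsymbol{\xi}_t, \boldsymbol{\xi}_t = [\mathbf{o}_t^{\top}\,1]^{\top}, the i-th row of W is estimated from the accumulators

\mathbf{k}^{(i)} = \sum_{m}\sum_{t}\gamma_m(t)\,\frac{\mu_{m,i}}{\sigma_{m,i}^{2}}\,\boldsymbol{\xi}_t^{\top},
\qquad
\mathbf{G}^{(i)} = \sum_{m}\sum_{t}\gamma_m(t)\,\frac{1}{\sigma_{m,i}^{2}}\,\boldsymbol{\xi}_t\boldsymbol{\xi}_t^{\top},

where \gamma_m(t) is the occupation probability of Gaussian m at time t and \mu_{m,i}, \sigma_{m,i}^{2} are the i-th components of its mean and variance. An initialization with sufficient statistics from a set S of acoustically close training speakers can then be written, for example, as

\tilde{\mathbf{k}}^{(i)} = \mathbf{k}^{(i)} + \tau\sum_{s\in S}\mathbf{k}_s^{(i)},
\qquad
\tilde{\mathbf{G}}^{(i)} = \mathbf{G}^{(i)} + \tau\sum_{s\in S}\mathbf{G}_s^{(i)},

so that \tilde{\mathbf{G}}^{(i)} stays well-conditioned even when the target speaker contributes only a few frames.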
This research was supported by the Ministry of Education of the Czech Republic, project No. MŠMT LC536, by the Grant Agency of the Czech Republic, project No. GAČR 102/08/0707, and by the grant of the University of West Bohemia, project No. SGS-2010-054.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Zajíc, Z., Machlica, L., Müller, L. (2011). Initialization of fMLLR with Sufficient Statistics from Similar Speakers. In: Habernal, I., Matoušek, V. (eds.) Text, Speech and Dialogue. TSD 2011. Lecture Notes in Computer Science, vol. 6836. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23538-2_24
Print ISBN: 978-3-642-23537-5
Online ISBN: 978-3-642-23538-2