Abstract
This paper proposes a new method for building dynamic speech decoding graphs for state-based spoken human-robot interaction (HRI). Current robotic speech recognition systems rely on either a finite state grammar (FSG), a statistical N-gram model, or a combination of the two using multi-pass decoding. The proposed method merges FSG and N-gram into a single decoding graph by converting the FSG rules into a weighted finite state acceptor (WFSA) and composing it with a large N-gram-based weighted finite state transducer (WFST). The result is a tiny decoding graph that can be used in single-pass decoding. The method is applied in our speech recognition system (RoboASR) for controlling service robots with limited resources. The proposed approach has three advantages. First, it exploits the strengths of both FSG and N-gram decoders by composing them into a single tiny decoding graph. Second, it is robust: the resulting tiny decoding graph is highly accurate because it is tailored to the current HRI state. Third, it has a fast response time in comparison with current state-of-the-art speech recognition systems. The system has a large vocabulary of 64K words with more than 69K entries. Experimental results show that the average response time is 0.05% of the utterance length and the average ratio between true and false positives is 89% when tested on 15 interaction scenarios using live speech.
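The core operation the abstract describes — restricting a large weighted model to the paths a grammar allows by automaton composition — can be illustrated with a small sketch. This is not the authors' implementation; the toy grammar, the automaton encoding, and all function names below are illustrative assumptions. Weights are treated as negative log probabilities in the tropical semiring (added along a path, lower is better), the usual convention in WFST-based decoding.

```python
# Hedged sketch: composing a small finite-state-grammar acceptor with an
# N-gram-style weighted acceptor, keeping only grammar-legal paths and
# summing weights (tropical semiring). Toy-scale only; real decoders use
# optimized on-the-fly composition over much larger machines.

def compose(a, b):
    """Compose weighted automata a and b.

    Each automaton is a dict with:
      'start':  start state,
      'finals': {state: final_weight},
      'arcs':   {state: [(label, next_state, weight), ...]}.
    A path survives composition only if both machines accept its labels;
    its weight is the sum of the two machines' path weights.
    """
    arcs, finals = {}, {}
    start = (a['start'], b['start'])
    stack, seen = [start], {start}
    while stack:
        p, q = stack.pop()
        out = []
        for la, pn, wa in a['arcs'].get(p, []):
            for lb, qn, wb in b['arcs'].get(q, []):
                if la == lb:                      # labels must match
                    nxt = (pn, qn)
                    out.append((la, nxt, wa + wb))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
        arcs[(p, q)] = out
        if p in a['finals'] and q in b['finals']:
            finals[(p, q)] = a['finals'][p] + b['finals'][q]
    return {'start': start, 'finals': finals, 'arcs': arcs}

def best_path(m):
    """Cheapest accepting label sequence via exhaustive search (toy sizes)."""
    best = (float('inf'), None)
    stack = [(m['start'], 0.0, [])]
    while stack:
        s, w, labs = stack.pop()
        if s in m['finals'] and w + m['finals'][s] < best[0]:
            best = (w + m['finals'][s], labs)
        for lab, nxt, aw in m['arcs'].get(s, []):
            if len(labs) < 10:                    # cycle guard
                stack.append((nxt, w + aw, labs + [lab]))
    return best

# Toy FSG for one interaction state: accepts "go (left|right)".
fsg = {'start': 0, 'finals': {2: 0.0},
       'arcs': {0: [('go', 1, 0.0)],
                1: [('left', 2, 0.0), ('right', 2, 0.0)]}}

# Toy unigram-style model over a larger vocabulary; weights mimic
# -log probabilities, looping in a single state.
ngram = {'start': 0, 'finals': {0: 0.0},
         'arcs': {0: [('go', 0, 0.5), ('left', 0, 1.2),
                      ('right', 0, 0.9), ('stop', 0, 0.7)]}}

# Only FSG-legal word sequences survive, weighted by the N-gram model:
tiny = compose(fsg, ngram)
```

Out-of-grammar words such as `stop` are pruned from the composed graph even though the N-gram model scores them, which is the sense in which the resulting decoding graph is both tiny and tailored to the current HRI state.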
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Abdelhamid, A.A., Abdulla, W.H., MacDonald, B.A. (2012). RoboASR: A Dynamic Speech Recognition System for Service Robots. In: Ge, S.S., Khatib, O., Cabibihan, J.-J., Simmons, R., Williams, M.-A. (eds) Social Robotics. ICSR 2012. Lecture Notes in Computer Science, vol. 7621. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34103-8_49
Print ISBN: 978-3-642-34102-1
Online ISBN: 978-3-642-34103-8