Abstract
Slot filling and intent detection are two important tasks in a spoken language understanding (SLU) system, and learning the two tasks jointly has become a trend in SLU. However, many existing models only join the two tasks by sharing parameters at a surface level, without bi-directional interaction between slot filling and intent detection. In this paper, we design a dual interaction model based on a gate mechanism. First, we utilize a Dilated Convolutional Neural Network (DCNN) block with self-attention to better capture the semantics of an utterance. Second, we adopt gate mechanisms to exchange interaction information between intent and slot; the gates control the passing rate and make full use of the semantic relevance between slot filling and intent detection. Finally, experimental results show that our model significantly improves slot filling F1 and intent detection accuracy on the ATIS and SNIPS datasets and outperforms prior methods.
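To make the two components named above concrete, the following is a minimal PyTorch sketch of a dilated convolutional block with self-attention and a bi-directional gated interaction between slot and intent representations. All module names, hidden sizes, and wiring here (DilatedSelfAttnBlock, GatedInteraction, the residual gating) are illustrative assumptions for exposition, not the authors' exact architecture.

import torch
import torch.nn as nn

class DilatedSelfAttnBlock(nn.Module):
    """Dilated 1-D convolutions followed by multi-head self-attention (a sketch)."""
    def __init__(self, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=d, padding=d)
            for d in (1, 2, 4)  # growing dilation widens the receptive field
        ])
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, H)
        h = x.transpose(1, 2)                 # (B, H, T) layout for Conv1d
        for conv in self.convs:
            h = torch.relu(conv(h))           # padding=dilation keeps length T
        h = h.transpose(1, 2)                 # back to (B, T, H)
        out, _ = self.attn(h, h, h)           # token-wise self-attention
        return out

class GatedInteraction(nn.Module):
    """Bi-directional gates: each task controls how much of the other flows in."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.slot_gate = nn.Linear(2 * hidden, hidden)
        self.intent_gate = nn.Linear(2 * hidden, hidden)

    def forward(self, slot_h, intent_h):
        # slot_h: (B, T, H) token features; intent_h: (B, H) utterance feature.
        # Broadcast the intent over time so each token sees the utterance intent.
        intent_b = intent_h.unsqueeze(1).expand_as(slot_h)
        g_s = torch.sigmoid(self.slot_gate(torch.cat([slot_h, intent_b], -1)))
        slot_out = slot_h + g_s * intent_b    # gate controls the passing rate
        g_i = torch.sigmoid(self.intent_gate(
            torch.cat([intent_h, slot_h.mean(1)], -1)))
        intent_out = intent_h + g_i * slot_h.mean(1)
        return slot_out, intent_out

# Toy usage: a batch of 2 utterances, 10 tokens each, hidden size 128.
enc = DilatedSelfAttnBlock()
inter = GatedInteraction()
tokens = torch.randn(2, 10, 128)
h = enc(tokens)
slot_h, intent_h = inter(h, h.mean(dim=1))

The key design point the sketch illustrates is that the interaction runs in both directions: the sigmoid gates decide, per feature, how much intent information enters each token's slot representation and how much aggregated slot information enters the intent representation, rather than merely sharing encoder parameters.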
Acknowledgments
This work was supported by the National Natural Science Foundation of China under grant nos. 61702305, 11971270, and 61903089, the China Postdoctoral Science Foundation under grant no. 2017M622234, and the Science and Technology Support Plan for the Youth Innovation Team of Shandong Higher Schools under grant no. 2019KJN2014.
Cite this article
Sun, C., Lv, L., Liu, T. et al. A joint model based on interactive gate mechanism for spoken language understanding. Appl Intell 52, 6057–6064 (2022). https://doi.org/10.1007/s10489-021-02544-7