Abstract
Question generation (QG) aims to generate natural and grammatical questions that can be answered by a specific answer for a given context. Previous sequence-to-sequence models suffer from the problem that asking high-quality questions requires commonsense knowledge as background, which in most cases cannot be learned directly from the training data, resulting in unsatisfactory questions deprived of knowledge. In this paper, we propose a multi-task learning framework that introduces commonsense knowledge into the question generation process. We first retrieve relevant commonsense knowledge triples from mature databases and select the triples that carry the conversion information from the source context to the question. Based on these informative knowledge triples, we design two auxiliary tasks to incorporate commonsense knowledge into the main QG model: Concept Relation Classification and Tail Concept Generation. Experimental results on SQuAD show that our proposed methods noticeably improve QG performance on both automatic and human evaluation metrics, demonstrating that incorporating external commonsense knowledge through multi-task learning can help the model generate human-like, high-quality questions.
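The abstract only names the two auxiliary tasks; as a minimal sketch of how such a multi-task objective could be combined with the main QG loss, consider the PyTorch-style snippet below. All names and loss weights (`MultiTaskQGLoss`, `lambda_rel`, `lambda_tail`) are our own illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

# Hypothetical multi-task objective: the abstract does not give the exact
# loss weighting or model interfaces, so everything here is an
# illustrative sketch of the general technique, not the paper's code.
class MultiTaskQGLoss(nn.Module):
    def __init__(self, lambda_rel: float = 0.5, lambda_tail: float = 0.5):
        super().__init__()
        self.lambda_rel = lambda_rel    # weight for Concept Relation Classification
        self.lambda_tail = lambda_tail  # weight for Tail Concept Generation
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, qg_logits, qg_targets, rel_logits, rel_targets,
                tail_logits, tail_targets):
        # Main task: sequence-to-sequence question generation.
        # qg_logits: (batch, seq_len, vocab); qg_targets: (batch, seq_len)
        loss_qg = self.ce(qg_logits.flatten(0, 1), qg_targets.flatten())
        # Auxiliary task 1: classify the relation of a knowledge triple.
        loss_rel = self.ce(rel_logits, rel_targets)
        # Auxiliary task 2: generate the tail concept of the triple.
        loss_tail = self.ce(tail_logits.flatten(0, 1), tail_targets.flatten())
        return loss_qg + self.lambda_rel * loss_rel + self.lambda_tail * loss_tail
```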
Notes
- 1.
- 2. We set K to 3*m in our experiments, where m represents the number of content words in each paragraph.
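As a concrete reading of this note, the sketch below derives the retrieval budget K from a paragraph. Treating "content words" as non-stopword alphabetic tokens (via NLTK) is our assumption; the note does not specify the counting heuristic.

```python
# Requires the NLTK 'punkt' and 'stopwords' data packages to be downloaded.
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

STOPWORDS = set(stopwords.words("english"))

def retrieval_budget(paragraph: str) -> int:
    """Compute K = 3 * m, where m is the number of content words.

    Counting content words as non-stopword alphabetic tokens is our own
    assumption for illustration; the paper does not define the heuristic.
    """
    tokens = word_tokenize(paragraph)
    m = sum(1 for t in tokens if t.isalpha() and t.lower() not in STOPWORDS)
    return 3 * m
```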
Acknowledgments
This work is supported by the National Natural Science Foundation of China (62076008, 61773026) and the Key Project of the Natural Science Foundation of China (61936012).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Jia, X., Wang, H., Yin, D., Wu, Y. (2021). Enhancing Question Generation with Commonsense Knowledge. In: Li, S., et al. (eds.) Chinese Computational Linguistics. CCL 2021. Lecture Notes in Computer Science, vol. 12869. Springer, Cham. https://doi.org/10.1007/978-3-030-84186-7_10
DOI: https://doi.org/10.1007/978-3-030-84186-7_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-84185-0
Online ISBN: 978-3-030-84186-7
eBook Packages: Computer Science, Computer Science (R0)