
Enhancing Question Generation with Commonsense Knowledge

  • Conference paper
  • Published in: Chinese Computational Linguistics (CCL 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12869)


Abstract

Question generation (QG) aims to generate natural and grammatical questions that can be answered by a specific answer, given a context. Previous sequence-to-sequence models suffer from the problem that asking high-quality questions requires commonsense knowledge as background, which in most cases cannot be learned directly from the training data, resulting in unsatisfactory questions deprived of knowledge. In this paper, we propose a multi-task learning framework to introduce commonsense knowledge into the question generation process. We first retrieve relevant commonsense knowledge triples from mature databases and select the triples that carry the conversion information from source context to question. Based on these informative knowledge triples, we design two auxiliary tasks to incorporate commonsense knowledge into the main QG model: Concept Relation Classification and Tail Concept Generation. Experimental results on SQuAD show that our proposed methods noticeably improve QG performance on both automatic and human evaluation metrics, demonstrating that incorporating external commonsense knowledge with multi-task learning helps the model generate human-like, high-quality questions.
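
As a rough illustration of the multi-task framework described above, here is a minimal sketch of how the main QG loss could be combined with the two auxiliary losses. The simple weighted-sum formulation and all names (`MultiTaskQGLoss`, `lambda_crc`, `lambda_tcg`) are illustrative assumptions, not the paper's actual architecture or weighting scheme.

```python
# Minimal sketch of a multi-task QG objective, assuming a simple weighted sum
# of the main loss and the two auxiliary losses. Names (MultiTaskQGLoss,
# lambda_crc, lambda_tcg) are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class MultiTaskQGLoss(nn.Module):
    """Combine the main QG loss with the two auxiliary tasks:
    Concept Relation Classification (CRC) and Tail Concept Generation (TCG)."""

    def __init__(self, lambda_crc: float = 0.5, lambda_tcg: float = 0.5):
        super().__init__()
        self.lambda_crc = lambda_crc  # weight of the CRC auxiliary loss
        self.lambda_tcg = lambda_tcg  # weight of the TCG auxiliary loss

    def forward(self, loss_qg: torch.Tensor,
                loss_crc: torch.Tensor,
                loss_tcg: torch.Tensor) -> torch.Tensor:
        # Gradients from all three task heads flow into the shared encoder.
        return loss_qg + self.lambda_crc * loss_crc + self.lambda_tcg * loss_tcg


# Usage with dummy per-task losses:
criterion = MultiTaskQGLoss(lambda_crc=0.3, lambda_tcg=0.3)
total = criterion(torch.tensor(2.1), torch.tensor(0.8), torch.tensor(1.2))
print(total)  # tensor(2.7000) = 2.1 + 0.3*0.8 + 0.3*1.2
```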


Notes

  1. http://stanfordnlp.github.io/CoreNLP/.

  2. We set K to 3*m in our experiments, where m represents the number of content words in each paragraph (see the sketch after these notes).
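
To make Note 2 concrete, below is a minimal sketch of the K = 3*m retrieval budget. The whitespace tokenizer, toy stopword list, and helper names (`count_content_words`, `retrieval_budget`) are illustrative assumptions; the paper's actual content-word identification (presumably via CoreNLP, see Note 1) differs.

```python
# Toy sketch of the K = 3*m retrieval budget from Note 2. The stopword list
# and whitespace tokenization are simplifying assumptions, not the paper's
# actual preprocessing.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "and", "by"}


def count_content_words(paragraph: str) -> int:
    """Count tokens that are not stopwords (a stand-in for real
    POS-based content-word detection)."""
    return sum(1 for w in paragraph.lower().split() if w not in STOPWORDS)


def retrieval_budget(paragraph: str) -> int:
    """K = 3 * m, where m is the number of content words."""
    return 3 * count_content_words(paragraph)


text = "Commonsense knowledge helps the model ask better questions"
print(retrieval_budget(text))  # m = 7 content words -> K = 21
```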


Acknowledgments

This work is supported by the National Natural Science Foundation of China (62076008, 61773026) and the Key Project of the Natural Science Foundation of China (61936012).

Author information


Corresponding author

Correspondence to Yunfang Wu.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jia, X., Wang, H., Yin, D., Wu, Y. (2021). Enhancing Question Generation with Commonsense Knowledge. In: Li, S., et al. Chinese Computational Linguistics. CCL 2021. Lecture Notes in Computer Science, vol. 12869. Springer, Cham. https://doi.org/10.1007/978-3-030-84186-7_10


  • DOI: https://doi.org/10.1007/978-3-030-84186-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-84185-0

  • Online ISBN: 978-3-030-84186-7

  • eBook Packages: Computer Science, Computer Science (R0)
