
Alignment-Based Graph Network for Judicial Examination Task

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12817)

Abstract

The judicial examination task is to select the correct options for a given question. It is a challenging task and is useful for legal assistant systems. We argue that by leveraging the latent semantic information between a question and its options, we can enhance the model's ability to understand the task's content, and we therefore propose an Alignment-Based Graph Network (ABGN). Given a question-option pair, ABGN first constructs the semantic relation between the question and the option with an alignment network, and then uses a gated graph attention network to exploit the global information across all question-option pairs. The experimental results show that our model achieves competitive performance with standard methods on the judicial examination task. In addition, multi-dimensional analyses demonstrate the effectiveness of individual components of the full model.
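The two stages described above, aligning each option against the question and then passing gated attention messages across all question-option pairs, can be illustrated with a minimal sketch. This is not the authors' implementation: the dot-product alignment, mean pooling, sigmoid gate, and all array shapes here are illustrative assumptions standing in for the paper's learned networks.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align(question, option):
    """Soft-align option tokens to question tokens (stand-in for the alignment network).

    question: (Tq, d) token vectors; option: (To, d) token vectors.
    Returns a single pooled vector of size 2d for the question-option pair.
    """
    scores = option @ question.T            # (To, Tq): each option token vs. each question token
    attn = softmax(scores, axis=-1)         # attention of option tokens over the question
    aligned = attn @ question               # (To, d): question context per option token
    # Concatenate original and aligned views, then mean-pool over option tokens.
    return np.concatenate([option, aligned], axis=-1).mean(axis=0)

def gated_graph_attention(pair_vecs):
    """One message-passing step on a fully connected graph of question-option pairs,
    with a scalar sigmoid gate blending each node with its neighbourhood summary
    (a stand-in for the paper's gated graph attention network)."""
    scores = pair_vecs @ pair_vecs.T        # (N, N) pairwise similarities
    attn = softmax(scores, axis=-1)
    messages = attn @ pair_vecs             # (N, 2d) global information per pair
    gate = 1 / (1 + np.exp(-(pair_vecs * messages).sum(-1, keepdims=True)))
    return gate * messages + (1 - gate) * pair_vecs

# Toy inputs: random embeddings for one question and four candidate options.
rng = np.random.default_rng(0)
d, Tq = 8, 5
question = rng.normal(size=(Tq, d))
options = [rng.normal(size=(rng.integers(3, 7), d)) for _ in range(4)]

pairs = np.stack([align(question, o) for o in options])   # (4, 2d) pair representations
updated = gated_graph_attention(pairs)                    # (4, 2d) after global exchange
print(updated.shape)  # (4, 16)
```

Each updated pair vector would then be scored (e.g. by a linear layer) to decide whether its option is correct; the point of the graph step is that each option's representation now reflects all other options for the same question.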


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61762016), and a research fund of Guangxi Key Lab of Multi-Source Information Mining & Security (No. 19-A-01-01).

Author information


Corresponding author

Correspondence to Xudong Luo.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, J., Luo, X. (2021). Alignment-Based Graph Network for Judicial Examination Task. In: Qiu, H., Zhang, C., Fei, Z., Qiu, M., Kung, S.Y. (eds) Knowledge Science, Engineering and Management. KSEM 2021. Lecture Notes in Computer Science, vol 12817. Springer, Cham. https://doi.org/10.1007/978-3-030-82153-1_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82152-4

  • Online ISBN: 978-3-030-82153-1

  • eBook Packages: Computer Science (R0)
