
Refining ChatGPT for Document-Level Relation Extraction: A Multi-dimensional Prompting Approach

  • Conference paper
  • First Online:
Advanced Intelligent Computing Technology and Applications (ICIC 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14877)

Abstract

This work explores the efficacy of large language models (LLMs) such as ChatGPT and GPT-4 in document-level relation extraction (DocRE). We begin by assessing the zero-shot capabilities of leading LLMs on DocRE, followed by an in-depth exploration of ChatGPT’s performance under fine-tuning. We introduce Multi-Dimensional-Prompting, a prompting framework inspired by existing symbolic and arithmetic reasoning techniques for LLMs. Our methodology includes: (1) a task decomposition strategy that breaks DocRE down into the sequential sub-tasks of entity pair extraction and relation classification; (2) a process decomposition strategy that refines the DocRE logic, enhancing prompts for more efficient processing; and (3) a relation-type decomposition strategy that groups the predefined relation types into categories, each of which can be processed by a specialized model before the predictions are merged into a comprehensive final result. Our methods improve performance on the benchmark datasets DocRED and Re-DocRED, with our fine-tuned ChatGPT outperforming current state-of-the-art methods.
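
As a rough illustration of the decomposition ideas described in the abstract (and not the authors' actual prompts, relation grouping, or fine-tuned models, none of which are reproduced on this page), the sketch below assumes a generic chat-completion callable chat(prompt) -> str and an invented RELATION_CATEGORIES grouping. It first prompts for candidate entity pairs, then classifies each pair against each category's restricted relation set, mirroring the task decomposition and relation-type decomposition strategies.

    from typing import Callable, Dict, List, Tuple

    # Hypothetical grouping of relation types into categories; the paper's
    # actual grouping of the DocRED relation inventory is not reproduced here.
    RELATION_CATEGORIES: Dict[str, List[str]] = {
        "personal": ["place of birth", "country of citizenship"],
        "organisational": ["member of", "founded by"],
    }

    def extract_entity_pairs(chat: Callable[[str], str], document: str) -> List[Tuple[str, str]]:
        """Sub-task 1: ask the model for candidate (head, tail) entity pairs."""
        prompt = (
            "List the entity pairs in the document that may hold a relation, "
            "one per line in the form 'head | tail'.\n\nDocument:\n" + document
        )
        pairs = []
        for line in chat(prompt).splitlines():
            if "|" in line:
                head, tail = line.split("|", 1)
                pairs.append((head.strip(), tail.strip()))
        return pairs

    def classify_relation(chat: Callable[[str], str], document: str,
                          head: str, tail: str, relation_types: List[str]) -> str:
        """Sub-task 2: classify one pair against a restricted set of relation types."""
        prompt = (
            f"Document:\n{document}\n\nHead entity: {head}\nTail entity: {tail}\n"
            f"Answer with exactly one relation from {relation_types}, or 'none'."
        )
        return chat(prompt).strip()

    def doc_re(chat: Callable[[str], str], document: str) -> List[Tuple[str, str, str]]:
        """Run the decomposed pipeline and merge the per-category predictions."""
        triples = []
        for head, tail in extract_entity_pairs(chat, document):
            # In the paper each relation-type category could be routed to a
            # specialised fine-tuned model; one generic callable stands in here.
            for relation_types in RELATION_CATEGORIES.values():
                relation = classify_relation(chat, document, head, tail, relation_types)
                if relation.lower() != "none":
                    triples.append((head, relation, tail))
        return triples

Plugging an actual ChatGPT or GPT-4 client in as chat, together with the paper's process-decomposed prompts, would recover the full three-strategy setup; the stub above only illustrates the control flow.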

References

  1. Wei, Z., Su, J., Wang, Y., Tian, Y., Chang, Y.: A novel cascade binary tagging framework for relational triple extraction. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1476–1488 (2020)

  2. Wang, Y., Yu, B., Zhang, Y., Liu, T., Zhu, H., Sun, L.: TPLinker: single-stage joint extraction of entities and relations through token pair linking. In: Proceedings of the 28th International Conference on Computational Linguistics, pp. 1572–1582 (2020)

  3. Zhong, Z., Chen, D.: A frustratingly easy approach for entity and relation extraction. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 50–61 (2021)

  4. Zhou, W., Huang, K., Ma, T., Huang, J.: Document-level relation extraction with adaptive thresholding and localized context pooling. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 14612–14620 (2021)

  5. Zhang, N., Chen, X., et al.: Document-level relation extraction as semantic segmentation. In: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), pp. 3999–4006 (2021)

  6. Ma, Y., Wang, A., et al.: DREEAM: guiding attention with evidence for improving document-level relation extraction. In: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pp. 1963–1975 (2023)

  7. OpenAI: Introducing ChatGPT (2022). https://openai.com/blog/chatgpt

  8. Achiam, J., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)

  9. Thoppilan, R., et al.: LaMDA: language models for dialog applications. arXiv preprint arXiv:2201.08239 (2022)

  10. Tan, Y., Min, D., Li, Y., Li, W., Hu, N., Chen, Y., Qi, G.: Can ChatGPT replace traditional KBQA models? An in-depth analysis of the question answering performance of the GPT LLM family. In: International Semantic Web Conference, pp. 348–367. Springer (2023)

  11. Xie, T., et al.: Empirical study of zero-shot NER with ChatGPT. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 7935–7956 (2023)

  12. Peng, H., et al.: When does in-context learning fall short and why? A study on specification-heavy tasks. arXiv preprint arXiv:2311.08993 (2023)

  13. Han, R., et al.: Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors. arXiv preprint arXiv:2305.14450 (2023)

  14. Zhou, D., et al.: Least-to-most prompting enables complex reasoning in large language models. In: The Eleventh International Conference on Learning Representations (2022)

  15. Wei, J., et al.: Chain-of-thought prompting elicits reasoning in large language models. Adv. Neural Inf. Process. Syst. 35, 24824–24837 (2022)

  16. Yao, Y., Ye, D., et al.: DocRED: a large-scale document-level relation extraction dataset. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 764–777 (2019)

  17. Tan, Q., et al.: Revisiting DocRED - addressing the false negative problem in relation extraction. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 8472–8487 (2022)

  18. Li, J., Jia, Z., Zheng, Z.: Semi-automatic data enhancement for document-level relation extraction with distant supervision from large language models. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5495–5505 (2023)

  19. Zeng, S., Xu, R., Chang, B., Li, L.: Double graph based reasoning for document-level relation extraction. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1630–1640 (2020)

  20. Zhang, Y., Feng, B., Gao, H., Zhang, P., Deng, W., Zhang, J.: Dual-enhancement model of entity pronouns and evidence sentence for document-level relation extraction. In: International Conference on Neural Information Processing, pp. 338–349. Springer (2023). https://doi.org/10.1007/978-981-99-8148-9_27

  21. Zhang, L., Min, Z., Su, J., Yu, P., Wang, A., Chen, Y.: Exploring effective inter-encoder semantic interaction for document-level relation extraction. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 5278–5286 (2023)

Author information

Corresponding author

Correspondence to Xiangfeng Luo.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhu, W., Wang, X., Chen, X., Luo, X. (2024). Refining ChatGPT for Document-Level Relation Extraction: A Multi-dimensional Prompting Approach. In: Huang, DS., Si, Z., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14877. Springer, Singapore. https://doi.org/10.1007/978-981-97-5669-8_16

  • DOI: https://doi.org/10.1007/978-981-97-5669-8_16

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5668-1

  • Online ISBN: 978-981-97-5669-8

  • eBook Packages: Computer Science, Computer Science (R0)
