
MultiMatch: Low-Resource Generalized Entity Matching Using Task-Conditioned Hyperadapters in Multitask Learning

  • Conference paper
  • In: Big Data Analytics and Knowledge Discovery (DaWaK 2024)

Abstract

Generalized Entity Matching (GEM) is a variant of entity matching that identifies whether entity descriptions from diverse data sources with heterogeneous data formats refer to the same real-world entity. State-of-the-art single-task fine-tuning approaches struggle in scenarios with entity distribution shifts, particularly in low-resource settings, and often require substantial, computationally expensive fine-tuning when applied to the GEM problem. This paper addresses these challenges by deploying task-conditioned adapters for low-resource GEM. We present MultiMatch, which shares knowledge across related tasks while improving the efficiency and accuracy of models used for GEM. Furthermore, we propose a loss composition strategy that leverages the heteroscedastic uncertainty of individual tasks to adjust each task's loss term before computing the overall loss; empirically, this weighting has a regularizing effect on the model's variance. Lastly, we analyze the carbon impact of fine-tuning different systems. Results are promising: our approach generalizes over eight GEM benchmarking tasks while reducing CO₂ emissions by 85.0%.
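The abstract names two mechanisms that are worth making concrete: a task-conditioned hyperadapter, in which a small hypernetwork maps a learned task embedding to the weights of a bottleneck adapter (so all tasks share the hypernetwork while each receives task-specific adapter parameters), and heteroscedastic uncertainty weighting, in which each task's loss is scaled by a learned per-task uncertainty before summation. The PyTorch sketch below illustrates both ideas under common formulations (the loss follows the widely used multi-task uncertainty weighting of Kendall, Gal, and Cipolla: total = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ, with sᵢ = log σᵢ²); all class names, dimensions, and wiring are hypothetical and are not taken from the MultiMatch implementation.

    import torch
    import torch.nn as nn

    class HyperAdapter(nn.Module):
        """Hypothetical task-conditioned adapter: a hypernetwork turns a
        learned task embedding into the two projection matrices of a
        residual bottleneck adapter."""
        def __init__(self, hidden=768, bottleneck=64, task_dim=32, num_tasks=8):
            super().__init__()
            self.task_emb = nn.Embedding(num_tasks, task_dim)
            # One shared hypernetwork emits all adapter weights for a task.
            self.hyper = nn.Linear(task_dim, 2 * hidden * bottleneck)
            self.hidden, self.bottleneck = hidden, bottleneck

        def forward(self, x, task_id):
            w = self.hyper(self.task_emb(task_id))            # (2*h*b,)
            down, up = w.split(self.hidden * self.bottleneck)
            down = down.view(self.hidden, self.bottleneck)
            up = up.view(self.bottleneck, self.hidden)
            return x + torch.relu(x @ down) @ up              # residual adapter

    class UncertaintyWeightedLoss(nn.Module):
        """Heteroscedastic task weighting: total = sum_i exp(-s_i)*L_i + s_i,
        where s_i = log(sigma_i^2) is a learned per-task log-variance."""
        def __init__(self, num_tasks=8):
            super().__init__()
            self.log_var = nn.Parameter(torch.zeros(num_tasks))

        def forward(self, task_losses):
            total = torch.zeros(())
            for i, loss in enumerate(task_losses):
                total = total + torch.exp(-self.log_var[i]) * loss + self.log_var[i]
            return total

    # Usage: hidden states of shape (batch, seq_len, hidden), one task id.
    x = torch.randn(2, 16, 768)
    y = HyperAdapter()(x, torch.tensor(3))
    loss = UncertaintyWeightedLoss(num_tasks=2)([y.pow(2).mean(), y.abs().mean()])

The 85.0% emissions figure implies the authors measured training energy directly; one widely used tool for this is the carbontracker library, whose documented epoch-level API is sketched below. The paper's exact measurement setup is an assumption here.

    from carbontracker.tracker import CarbonTracker

    max_epochs = 10
    tracker = CarbonTracker(epochs=max_epochs)  # estimates energy and CO2 use
    for epoch in range(max_epochs):
        tracker.epoch_start()
        # ... one fine-tuning epoch goes here (placeholder) ...
        tracker.epoch_end()
    tracker.stop()  # finalizes the energy/CO2 report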


Notes

  1. https://github.com/boscoj2008/MultiMatch.

  2. The sequence length is set to 512 tokens for all models (see the tokenization sketch below).
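For context, the sketch below shows how a pair of heterogeneous entity descriptions might be serialized and truncated to that 512-token limit, assuming a Hugging Face tokenizer. The [COL]/[VAL] serialization markers follow a common entity-matching convention and are an assumption here, not a confirmed detail of MultiMatch.

    from transformers import AutoTokenizer

    # Hypothetical flattening of a record into "[COL] key [VAL] value" text.
    def serialize(record: dict) -> str:
        return " ".join(f"[COL] {k} [VAL] {v}" for k, v in record.items())

    left = serialize({"title": "iPhone 12 64GB", "brand": "Apple"})
    right = serialize({"name": "Apple iPhone 12 (64 GB)"})

    tok = AutoTokenizer.from_pretrained("roberta-base")  # backbone is an assumption
    # All models use a maximum sequence length of 512 tokens; longer
    # pairs are truncated to fit.
    enc = tok(left, right, max_length=512, truncation=True, padding="max_length")
    print(len(enc["input_ids"]))  # 512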


Acknowledgement

This paper is based on results obtained from the “Research and Development Project of the Enhanced Infrastructures for Post-5G Information and Communication Systems” (JPNP20017), commissioned by the New Energy and Industrial Technology Development Organization (NEDO) (JPNP20006), JST CREST Grant Number JPMJCR22M2, and JSPS KAKENHI Grant Number JP23K24949.

Author information

Corresponding author

Correspondence to John Bosco Mugeni.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mugeni, J.B., Lynden, S., Amagasa, T., Matono, A. (2024). MultiMatch: Low-Resource Generalized Entity Matching Using Task-Conditioned Hyperadapters in Multitask Learning. In: Wrembel, R., Chiusano, S., Kotsis, G., Tjoa, A.M., Khalil, I. (eds) Big Data Analytics and Knowledge Discovery. DaWaK 2024. Lecture Notes in Computer Science, vol 14912. Springer, Cham. https://doi.org/10.1007/978-3-031-68323-7_4

  • DOI: https://doi.org/10.1007/978-3-031-68323-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-68322-0

  • Online ISBN: 978-3-031-68323-7

  • eBook Packages: Computer Science, Computer Science (R0)
