Abstract
Recent studies in neural machine translation (NMT) for SPARQL query generation have shown rapidly rising performance, and state-of-the-art models now achieve almost perfect query generation on simple datasets. This progress raises the question of whether these models can generalize to unseen question-query structures and entities. In this work, we propose copy-enhanced pre-trained models with question annotation and test the ability of several models to handle unknown question-query structures and URIs. To do so, we split two popular datasets so that their test sets contain unknown URIs or unknown question-query structures. Our results show that the copy mechanism effectively allows non-pre-trained models to deal with unknown URIs, and that it also improves the results of some pre-trained models. When exposed to unknown question-query structures on a simple dataset, pre-trained models significantly outperform non-pre-trained models, but both families suffer a considerable drop in performance on a harder dataset. Nevertheless, the copy mechanism significantly boosts the results of non-pre-trained models in all settings, and of the pre-trained BART model in all settings except the template split of the LC-QuAD 2.0 dataset.
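To illustrate the idea behind a copy mechanism, the following is a minimal sketch, in the pointer-generator style of See et al. (cited below), of how a decoder step can mix a vocabulary distribution with a copy distribution over source tokens (e.g., annotated URIs in the question). All names, shapes, and values here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def copy_enhanced_distribution(p_vocab, attention, source_ids, p_gen, vocab_size):
    """Mix a generation distribution with a copy distribution:
        P_final = p_gen * P_vocab + (1 - p_gen) * P_copy,
    where P_copy scatters the decoder's attention weights onto the
    vocabulary ids of the source tokens. Hypothetical sketch only.

    p_vocab:    (vocab_size,) softmax over the output vocabulary
    attention:  (src_len,) attention weights over source positions
    source_ids: (src_len,) vocabulary id of each source token
    p_gen:      scalar in [0, 1], probability of generating vs. copying
    """
    p_copy = np.zeros(vocab_size)
    # Accumulate attention mass per source token id (a token may occur
    # at several source positions, so use unbuffered in-place addition).
    np.add.at(p_copy, source_ids, attention)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

In this sketch, a token that dominates the source attention (e.g., a URI annotated in the question) receives probability mass at decoding time even if the generator alone would rarely produce it, which is the intuition behind why copying helps with unknown URIs.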
References
Banerjee, D., Nair, P.A., Kaur, J.N., Usbeck, R., Biemann, C.: Modern baselines for SPARQL semantic parsing. In: Amigó, E., Castells, P., Gonzalo, J., Carterette, B., Shane Culpepper, J., Kazai, G. (eds.) SIGIR 2022: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022, pp. 2260–2265. ACM (2022)
Baroni, M.: Linguistic generalization and compositionality in modern artificial neural networks. Philos. Trans. Royal Soc. B: Biol. Sci. 375(1791), 20190307 (2019)
Vollmers, D., et al.: Knowledge graph question answering using graph-pattern isomorphism. In: Further with Knowledge Graphs: Proceedings of the 17th International Conference on Semantic Systems, 6–9 September 2021, Amsterdam, The Netherlands, vol. 53, p. 103. IOS Press (2021)
Chen, Y., Li, H., Qi, G., Wu, T., Wang, T.: Outlining and filling: hierarchical query graph generation for answering complex questions over knowledge graphs. arXiv preprint arXiv:2111.00732 (2021)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics (June 2019)
Dubey, M., Banerjee, D., Abdelkawi, A., Lehmann, J.: LC-QuAD 2.0: a large dataset for complex question answering over Wikidata and DBpedia. In: Ghidini, C., et al. (eds.) ISWC 2019. LNCS, vol. 11779, pp. 69–78. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30796-7_5
Gehring, J., Auli, M., Grangier, D., Yarats, D., Dauphin, Y.N.: Convolutional sequence to sequence learning. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017. Proceedings of Machine Learning Research, vol. 70, pp. 1243–1252. PMLR (2017)
Gu, J., Lu, Z., Li, H., Li, V.O.K.: Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, 7–12 August 2016, Berlin, Germany, Volume 1: Long Papers, pp. 1631–1640. The Association for Computer Linguistics (2016)
Gu, Y., et al.: Three levels of generalization for question answering on knowledge bases. In: Proceedings of the Web Conference 2021, WWW 2021, pp. 3477–3488. Association for Computing Machinery, New York (2021)
Hirigoyen, R., Zouaq, A., Reyd, S.: A copy mechanism for handling knowledge base elements in SPARQL neural machine translation. In: Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pp. 226–236, Online only. Association for Computational Linguistics (November 2022)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Jiang, L., Usbeck, R.: Knowledge graph question answering datasets and their generalizability: Are they enough for future research? In: Amigó, E., Castells, P., Gonzalo, J., Carterette, B., Shane Culpepper, J., Kazai, G. (eds.) SIGIR 2022: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022, pp. 3209–3218. ACM (2022)
Keysers, D., et al.: Measuring compositional generalization: a comprehensive method on realistic data. arXiv preprint arXiv:1912.09713 (2020)
Lake, B., Baroni, M.: Generalization without systematicity: on the compositional skills of sequence-to-sequence recurrent networks. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning, 10–15 Jul. Proceedings of Machine Learning Research, vol. 80, pp. 2873–2882. PMLR (2018)
Lewis, M., et al.: BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Jurafsky, D., Chai, J., Schluter, N., Tetreault, J.R. (eds.) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, 5–10 July 2020, pp. 7871–7880. Association for Computational Linguistics (2020)
Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 6–12 July 2002, Philadelphia, PA, USA, pp. 311–318. ACL (2002)
Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 140:1–140:67 (2020)
See, A., Liu, P.J., Manning, C.D.: Get to the point: summarization with pointer-generator networks. In: Barzilay, R., Kan, M.-Y. (eds.) Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, 30 July - 4 August, Volume 1: Long Papers, pp. 1073–1083. Association for Computational Linguistics (2017)
Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 8–13 December 2014, Montreal, Quebec, Canada, pp. 3104–3112. Curran Associates Inc (2014)
Trivedi, P., Maheshwari, G., Dubey, M., Lehmann, J.: LC-QuAD: a corpus for complex question answering over knowledge graphs. In: d’Amato, C., et al. (eds.) ISWC 2017. LNCS, vol. 10588, pp. 210–218. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68204-4_22
Vaswani, A., et al.: Attention is all you need. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4–9 December 2017, Long Beach, CA, USA, pp. 5998–6008. Curran Associates Inc. (2017)
Yin, X., Gromann, D., Rudolph, S.: Neural machine translating from natural language to SPARQL. Future Gener. Comput. Syst. 117, 510–519 (2021)
Acknowledgements
This research was funded by the NSERC Discovery Grant Program. The authors acknowledge support from Compute Canada for providing computational resources. We would like to thank Karou Diallo for setting up the DBpedia SPARQL endpoint used in this paper.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Reyd, S., Zouaq, A. (2023). Assessing the Generalization Capabilities of Neural Machine Translation Models for SPARQL Query Generation. In: Payne, T.R., et al. (eds.) The Semantic Web – ISWC 2023. Lecture Notes in Computer Science, vol. 14265. Springer, Cham. https://doi.org/10.1007/978-3-031-47240-4_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47239-8
Online ISBN: 978-3-031-47240-4
eBook Packages: Computer Science (R0)