Abstract
Knowledge graphs (KGs) are a valuable tool and resource for describing entities and their relationships in many natural language processing tasks. In particular, the limited semantics of entities and relations in plain text constrains the efficiency and accuracy of knowledge representation. With the growth of knowledge base resources, many researchers have begun to study knowledge graph construction techniques based on knowledge base embedding. The basic idea is to treat knowledge graph construction as a recursive process: by exploiting knowledge base resources together with the semantic representation of text features, new features can be derived that improve learning performance and the completeness of the knowledge graph. In this paper, we give a general overview of research on knowledge graph construction based on knowledge embedding, covering knowledge representation, knowledge embedding models, and related topics. We then summarize the challenges facing knowledge graphs and future development trends.
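As a concrete illustration of the knowledge base embedding idea surveyed here, the following minimal Python sketch shows a translation-based (TransE-style) scoring function for triples. The entities, relation, embedding dimensionality, and random initialization are illustrative assumptions for exposition only, not the implementation described in the paper.

```python
# Minimal sketch of translation-based knowledge graph embedding (TransE-style).
# Entities and relations are mapped to vectors; a triple (h, r, t) is scored by
# how closely h + r approximates t. All names and sizes below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimensionality (assumed for illustration)

entities = ["Beijing", "China", "Paris", "France"]
relations = ["capital_of"]

# Randomly initialized embeddings; in practice these would be learned by
# minimizing a margin-based ranking loss over observed vs. corrupted triples.
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def score(head, relation, tail):
    """Lower is better: distance between (head + relation) and tail."""
    return np.linalg.norm(ent_emb[head] + rel_emb[relation] - ent_emb[tail])

# Link prediction: rank candidate tails for ("Beijing", "capital_of", ?).
candidates = sorted(entities, key=lambda t: score("Beijing", "capital_of", t))
print(candidates)  # after training, "China" would be expected to rank first
```

With trained embeddings, ranking candidate tails by this score is how such models complete missing links in a knowledge graph.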
Acknowledgement
This work was jointly supported by the National Natural Science Foundation of China (No. F020807), the Fund Program of the Ministry of Education of China for “Integration of Cloud Computing and Big Data, Innovation of Science and Education” (No. 2017B00030), the Basic Scientific Research Operating Expenses of Central Universities (No. ZDYF2017006), the Science and Technology Department Collaborative Innovation Program of Shaanxi Province (No. 2015XT-21), and the Shaanxi Soft Science Key Program (No. 2013KRZ10). We would like to thank these sponsors for their support.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Caifang, T., Yuan, R., Hualei, Y., Jiamin, C. (2018). Research Progress of Knowledge Graph Based on Knowledge Base Embedding. In: Zhou, Q., Miao, Q., Wang, H., Xie, W., Wang, Y., Lu, Z. (eds) Data Science. ICPCSEE 2018. Communications in Computer and Information Science, vol 902. Springer, Singapore. https://doi.org/10.1007/978-981-13-2206-8_16
DOI: https://doi.org/10.1007/978-981-13-2206-8_16
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-2205-1
Online ISBN: 978-981-13-2206-8