
Knowledge Transfer in Neural Language Models

  • Conference paper
  • First Online:
Artificial Intelligence XXXIV (SGAI 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10630)


Abstract

The complexity and depth of Information Extraction become increasingly apparent as time goes on. Heuristic, stochastic and, more recently, neural models have proved challenging to scale into and out of various domains. In this paper we discuss the limitations of current approaches and explore whether transferring human knowledge into a neural language model can improve performance in a deep learning setting. We approach this by constructing gazetteers from existing public resources. We demonstrate that by leveraging existing knowledge we can increase performance and train such networks faster. We argue the case for further research into leveraging pre-existing domain knowledge and engineering resources to train neural models.
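The authors' implementation is linked in the notes below. As a rough illustration only, and not the released code, the kind of gazetteer described in the abstract can be reduced to per-token binary membership features that a neural tagger could consume alongside word embeddings. All names and data in this sketch are hypothetical.

# Minimal sketch (assumption, not the authors' method): turn gazetteer entries
# built from public resources into per-token binary features.
from typing import Dict, List, Set

def build_gazetteer(entries: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Normalise gazetteer entries (e.g. name lists scraped from public resources)."""
    return {label: {name.lower() for name in names} for label, names in entries.items()}

def gazetteer_features(tokens: List[str], gazetteer: Dict[str, Set[str]]) -> List[List[int]]:
    """One binary feature per gazetteer label, per token (1 if the token is listed)."""
    labels = sorted(gazetteer)
    return [[int(tok.lower() in gazetteer[label]) for label in labels] for tok in tokens]

if __name__ == "__main__":
    # Hypothetical gazetteer; real ones would come from public resources.
    gaz = build_gazetteer({
        "LOC": {"Belfast", "London"},
        "PER": {"Ada", "Turing"},
    })
    sentence = ["Ada", "flew", "to", "Belfast", "."]
    for tok, feats in zip(sentence, gazetteer_features(sentence, gaz)):
        # e.g. "Belfast [1, 0]" -> features to concatenate with word embeddings
        print(tok, feats)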


Notes

  1. Implementation: https://github.com/zhiweiuu/SGAITagger.

  2. This work is partially supported by the EPSRC (Grant REF: EP/P031668/1).


Author information

Corresponding author

Correspondence to Peter John Hampton.


Copyright information

© 2017 Springer International Publishing AG

About this paper


Cite this paper

Hampton, P.J., Wang, H., Lin, Z. (2017). Knowledge Transfer in Neural Language Models. In: Bramer, M., Petridis, M. (eds) Artificial Intelligence XXXIV. SGAI 2017. Lecture Notes in Computer Science, vol 10630. Springer, Cham. https://doi.org/10.1007/978-3-319-71078-5_12

  • DOI: https://doi.org/10.1007/978-3-319-71078-5_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-71077-8

  • Online ISBN: 978-3-319-71078-5

  • eBook Packages: Computer Science, Computer Science (R0)
