Empirical evaluation of multi-task learning in deep neural networks for natural language processing

  • Original Article

Neural Computing and Applications

Abstract

Multi-task learning (MTL) aims to boost the performance of each individual task by leveraging useful information contained in multiple related tasks, and it has shown great success in natural language processing (NLP). A number of MTL architectures and learning mechanisms have been proposed for various NLP tasks, including linguistic hierarchies, orthogonality constraints, adversarial learning, gating mechanisms, and label embeddings. However, these architectures and learning mechanisms have not been systematically explored or compared in depth. In this paper, we conduct a thorough examination of five typical MTL methods with deep learning architectures on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods for NLP, and thereby to devise new hybrid architectures that combine their strengths. Following the empirical evaluation, we offer our insights and conclusions regarding the MTL methods considered.
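To make the shared-parameter idea concrete, below is a minimal sketch of hard parameter sharing, the simplest and most common MTL architecture in NLP: one BiLSTM sentence encoder whose weights are updated by every task, plus a private classification head per task. The sketch is written in PyTorch under our own assumptions; the class name, dimensions, and the two toy tasks ("sentiment" and "nli") are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task (illustrative sketch)."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, task_num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Shared BiLSTM encoder: receives gradients from every task's loss.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Task-private output heads.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_cls)
            for task, n_cls in task_num_classes.items()
        })

    def forward(self, token_ids, task):
        embedded = self.embedding(token_ids)     # (batch, seq_len, embed_dim)
        outputs, _ = self.encoder(embedded)      # (batch, seq_len, 2 * hidden_dim)
        sentence_repr = outputs.mean(dim=1)      # simple mean pooling over tokens
        return self.heads[task](sentence_repr)   # logits for the requested task

# Toy training step that alternates mini-batches between two hypothetical tasks.
model = SharedEncoderMTL(vocab_size=10_000, embed_dim=100, hidden_dim=128,
                         task_num_classes={"sentiment": 2, "nli": 3})
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for task, n_cls in [("sentiment", 2), ("nli", 3)]:
    token_ids = torch.randint(1, 10_000, (4, 20))   # fake batch: 4 sentences, 20 tokens
    labels = torch.randint(0, n_cls, (4,))
    loss = criterion(model(token_ids, task), labels)
    optimizer.zero_grad()
    loss.backward()      # updates the shared encoder and the active task's head
    optimizer.step()
```

In the broader MTL literature, the mechanisms named above typically modify this skeleton rather than replace it: orthogonality constraints penalize overlap between shared and task-private feature spaces, adversarial learning trains the shared encoder so that a task discriminator cannot tell which task produced a representation, and gating mechanisms learn how much of the shared representation each task should consume.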



Author information

Corresponding author

Correspondence to Min Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Li, J., Liu, X., Yin, W. et al. Empirical evaluation of multi-task learning in deep neural networks for natural language processing. Neural Comput & Applic 33, 4417–4428 (2021). https://doi.org/10.1007/s00521-020-05268-w


  • DOI: https://doi.org/10.1007/s00521-020-05268-w

