Semantic dependency network for lyrics generation from melody

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Melody-conditioned lyrics generation aims to create novel lyrics from a melody by learning the relationship between lyrics and melodies, which is an attractive topic in the music field. However, two serious issues degrade the quality of generated lyrics: the lack of inter-dependency modeling between melody attributes, and text degeneration. To address these issues, this paper proposes a new model, the semantic dependency network, with two key components: (i) an N-gram CNN block compresses the information in each individual melody attribute and extracts the inter-dependency among multiple melody attributes; (ii) on the lyrics side, unlikelihood training is exploited to mitigate syllable mismatching and missing logic, preserving intra-syllable integrity and logical coherence by learning semantic dependency. Extensive evaluation experiments on a large-scale dataset demonstrate that our model generates higher-quality and more harmonic lyrics from melodies compared with state-of-the-art methods.
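The unlikelihood training mentioned above follows the formulation of Welleck et al. [18]: a standard likelihood (MLE) term on the ground-truth token is combined with a penalty that pushes down the probability of negative candidates, such as recently repeated tokens. As a rough illustration only (not the authors' implementation), a minimal NumPy sketch with hypothetical function and argument names:

```python
import numpy as np

def unlikelihood_loss(probs, target, negatives, alpha=1.0):
    """Token-level unlikelihood loss in the style of Welleck et al. [18].

    probs:     1-D array of next-token probabilities (sums to 1)
    target:    index of the ground-truth token (likelihood term)
    negatives: indices of negative candidates to penalize,
               e.g. tokens the model has already generated
    alpha:     weight on the unlikelihood term
    """
    eps = 1e-12
    # MLE term: raise the probability of the true token.
    nll = -np.log(probs[target] + eps)
    # Unlikelihood term: lower the probability of negative candidates,
    # via -log(1 - p(c)) summed over candidates c.
    ul = -np.log(np.clip(1.0 - probs[np.asarray(negatives, dtype=int)],
                         eps, 1.0)).sum()
    return nll + alpha * ul

probs = np.array([0.7, 0.2, 0.1])
# Penalizing token 1 adds -log(1 - 0.2) on top of -log(0.7).
loss = unlikelihood_loss(probs, target=0, negatives=[1])
```

With an empty candidate set the loss reduces to ordinary cross-entropy, so the penalty only activates where degeneration (e.g. repetition) is possible.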


(Figures 1–7 appear in the full article.)


Data availability statement

The datasets generated during and/or analyzed during the current study are available in the [20] repository, https://github.com/yy1lab/Lyrics-Conditioned-Neural-Melody-Generation.

References

  1. Chen Y, Lerch A (2020) Melody-conditioned lyrics generation with SeqGANs, In: IEEE international symposium on multimedia (ISM), pp 189–196

  2. Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding, In: 2019 conference of the North American Chapter of the association for computational linguistics: human language technologies (NAACL-HLT), pp 4171–4186

  3. Fan A, Lewis M, Dauphin YN (2018) Hierarchical neural story generation, In: 56th annual meeting of the association for computational linguistics (ACL), pp 889–898

  4. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville AC, Bengio Y (2014) Generative adversarial networks. CoRR arXiv:1406.2661

  5. Holtzman A, Buys J, Du L, Forbes M, Choi Y (2020) The curious case of neural text degeneration, In: 8th international conference on learning representations (ICLR)

  6. Huang Y-F, You K-C (2021) Automated generation of Chinese lyrics based on melody emotions. IEEE Access 9:98060–98071


  7. Lagutin E, Gavrilov D, Kalaidin P (2021) Implicit unlikelihood training: Improving neural text generation with reinforcement learning, In: 16th conference of the European chapter of the association for computational linguistics (ACL), pp 1432–1441

  8. Li M, Roller S, Kulikov I, Welleck S, Boureau Y-L, Cho K, Weston J (2020) Don’t say that! making inconsistent dialogue unlikely with unlikelihood training, In: 58th annual meeting of the association for computational linguistics (ACL), pp 4715–4728

  9. Lin C-Y (2004) ROUGE: a package for automatic evaluation of summaries, In: Text summarization branches out, pp 74–81

  10. Malmi E, Takala P, Toivonen H, Raiko T, Gionis A (2016) DopeLearning: a computational approach to rap lyrics generation, In: 22nd ACM international conference on knowledge discovery and data mining (SIGKDD), pp 195–204

  11. Nie W, Narodytska N, Patel A (2019) RelGAN: relational generative adversarial networks for text generation, In: 7th international conference on learning representations (ICLR)

  12. Oliveira HG (2021) Tra-la-lyrics 2.0: automatic generation of song lyrics on a semantic domain. J Artif General Intell 6(1):87–110


  13. Rodrigues MA, Oliveira A, Moreira A, Possi M (2022) Lyrics generation supported by pre-trained models, In: Thirty-fifth international florida artificial intelligence research society conference (FLAIRS)

  14. Srivastava A, Duan W, Shah RR, Wu J, Tang S, Li W, Yu Y (2022) Melody generation from lyrics using three branch conditional LSTM-GAN, In: 28th international conference on multimedia modeling (MMM), pp 569–581

  15. Sutton RS, McAllester D, Singh S, Mansour Y (1999) Policy gradient methods for reinforcement learning with function approximation. Adv Neural Inf Process Syst 12

  16. Takahashi R, Nose T, Chiba Y, Ito A (2020) Successive Japanese lyrics generation based on encoder-decoder model, In: 9th IEEE global conference on consumer electronics (GCCE), pp 126–127

  17. Watanabe K, Matsubayashi Y, Inui K, Goto M (2014) Modeling structural topic transitions for automatic lyrics generation, In: 28th Pacific Asia conference on language, information and computation (PACLIC), pp 422–431

  18. Welleck S, Kulikov I, Roller S, Dinan E, Cho K, Weston J (2020) Neural text generation with unlikelihood training, In: 8th international conference on learning representations (ICLR)

  19. Yu J, Zhang W, Wang J, Yu Y (2017) SeqGAN: sequence generative adversarial nets with policy gradient, In: The thirty-first AAAI conference on artificial intelligence (AAAI), pp 2852–2858

  20. Yu Y, Srivastava A, Canales S (2021) Conditional LSTM-GAN for melody generation from lyrics. ACM Trans Multimed Comput Commun Appl (TOMCCAP) 17(1):1–20


  21. Zhang L, Zhang R, Mao X, Chang Y (2022) Qiuniu: a Chinese lyrics generation system with passage-level input, In: 60th annual meeting of the association for computational linguistics (ACL), pp 76–82

  22. Zhang Y, Gan Z, Carin L (2016) Generating text via adversarial training, In: NIPS workshop on adversarial training 21:21–32

Download references

Acknowledgements

This work was supported by JST through the establishment of university fellowships toward the creation of science technology innovation, Grant Number JPMJFS2136.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yi Yu.

Ethics declarations

Conflicts of interest

All authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Duan, W., Yu, Y. & Oyama, K. Semantic dependency network for lyrics generation from melody. Neural Comput & Applic 36, 4059–4069 (2024). https://doi.org/10.1007/s00521-023-09282-6

