Abstract
This paper presents a novel, syllable-structured Chinese lyrics generation model that takes a piece of original melody as input. Most previously reported lyrics generation models fail to capture the relationship between lyrics and melody. In this work, we propose to interpret lyrics-melody alignments as syllable-structure information and use a multi-channel sequence-to-sequence model that considers both phrasal structure and semantics. Two RNN encoders are applied: one encodes the syllable structure, while the other performs semantic encoding from contextual sentences or input keywords. Moreover, a large Chinese lyrics corpus is leveraged for model training. Automatic and human evaluations demonstrate the effectiveness of the proposed lyrics generation model. To the best of our knowledge, there are few previous reports on lyrics generation that consider both musical and linguistic perspectives.
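To make the two-channel encoding concrete, below is a minimal PyTorch sketch of the idea described above: one RNN encoder for the syllable-structure channel, a second for the semantic/context channel, and a decoder conditioned on their fused states. The module names, dimensions, and concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-channel (syllable structure + semantics) seq2seq model.
# All names, sizes, and the concatenation-based fusion are assumptions for
# illustration; they are not taken from the paper's actual architecture.
import torch
import torch.nn as nn


class TwoChannelSeq2Seq(nn.Module):
    def __init__(self, syl_vocab, sem_vocab, out_vocab, emb=128, hid=256):
        super().__init__()
        self.syl_emb = nn.Embedding(syl_vocab, emb)
        self.sem_emb = nn.Embedding(sem_vocab, emb)
        self.out_emb = nn.Embedding(out_vocab, emb)
        self.syl_enc = nn.GRU(emb, hid, batch_first=True)   # syllable-structure channel
        self.sem_enc = nn.GRU(emb, hid, batch_first=True)   # semantic/context channel
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.bridge = nn.Linear(2 * hid, hid)                # fuse the two channels
        self.out = nn.Linear(hid, out_vocab)

    def forward(self, syl_ids, sem_ids, tgt_ids):
        _, h_syl = self.syl_enc(self.syl_emb(syl_ids))       # final state: (1, B, hid)
        _, h_sem = self.sem_enc(self.sem_emb(sem_ids))       # final state: (1, B, hid)
        h0 = torch.tanh(self.bridge(torch.cat([h_syl, h_sem], dim=-1)))
        dec_out, _ = self.decoder(self.out_emb(tgt_ids), h0) # teacher forcing
        return self.out(dec_out)                             # logits: (B, T, out_vocab)


if __name__ == "__main__":
    model = TwoChannelSeq2Seq(syl_vocab=16, sem_vocab=5000, out_vocab=5000)
    syl = torch.randint(0, 16, (2, 7))      # e.g. per-note syllable positions/counts
    sem = torch.randint(0, 5000, (2, 12))   # contextual sentence or keywords
    tgt = torch.randint(0, 5000, (2, 7))    # target lyric line (shifted for training)
    print(model(syl, sem, tgt).shape)        # torch.Size([2, 7, 5000])
```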
Acknowledgement
This work was supported by Ping An Technology (Shenzhen) Co., Ltd., China.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Lu, X., Wang, J., Zhuang, B., Wang, S., Xiao, J. (2019). A Syllable-Structured, Contextually-Based Conditionally Generation of Chinese Lyrics. In: Nayak, A., Sharma, A. (eds) PRICAI 2019: Trends in Artificial Intelligence. PRICAI 2019. Lecture Notes in Computer Science(), vol 11672. Springer, Cham. https://doi.org/10.1007/978-3-030-29894-4_20
DOI: https://doi.org/10.1007/978-3-030-29894-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-29893-7
Online ISBN: 978-3-030-29894-4
eBook Packages: Computer Science (R0)