Abstract
Automatic music composition could dramatically decrease music production costs, lower the barrier for non-professionals to compose, and improve the efficiency of music creation. In this paper, we propose an intelligent music composition neural network that automatically generates music in a specific style. The advantage of our model is its innovative structure: an actor long short-term memory (LSTM) generates the music sequence, and a reward-based procedure then adjusts the sequence probabilities, serving as feedback to improve the quality of the composed music. Music-theoretic rules are introduced to constrain the style of the generated music. We also conduct a subjective evaluation in our experiments to demonstrate the superiority of our model over state-of-the-art works.
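To make the actor-with-reward idea concrete, below is a minimal sketch (not the authors' implementation) of how an actor LSTM might sample a note sequence and have its probabilities adjusted by a music-theory reward through a policy-gradient update. All names here (ActorLSTM, theory_reward, the leap-penalty rule, and PyTorch as the framework) are illustrative assumptions, not details taken from the paper.

```python
# Sketch: actor LSTM + reward-based feedback, assuming PyTorch.
# The reward rule below is a hypothetical stand-in for the paper's
# music-theoretic style constraints.
import torch
import torch.nn as nn

NOTE_VOCAB = 128  # e.g., the MIDI pitch range

class ActorLSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(NOTE_VOCAB, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NOTE_VOCAB)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.head(h), state  # logits over the next note

def theory_reward(seq):
    # Hypothetical rule: reward stepwise motion by counting the
    # fraction of melodic intervals within a perfect fifth.
    leaps = (seq[:, 1:] - seq[:, :-1]).abs().float()
    return (leaps <= 7).float().mean(dim=1)

actor = ActorLSTM()
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

# One REINFORCE-style update: sample a sequence, score it with the
# theory reward, and scale its log-likelihood by that reward so
# high-reward sequences become more probable under the actor.
tokens = torch.randint(0, NOTE_VOCAB, (8, 1))  # batch of seed notes
log_probs, state = [], None
for _ in range(32):  # generate 32 steps autoregressively
    logits, state = actor(tokens[:, -1:], state)
    dist = torch.distributions.Categorical(logits=logits[:, -1])
    nxt = dist.sample()
    log_probs.append(dist.log_prob(nxt))
    tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)

reward = theory_reward(tokens)
loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

The leap-penalty reward is only a placeholder; in the model described above it would be replaced by the paper's actual music-theoretic style constraints.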
Acknowledgements
We acknowledge the help of Chao Zhang and Xing Wang of Chenda Music Co., Ltd, Beijing.
Additional information
This work presents the results of research projects funded by the National Natural Science Foundation of China (Grant Nos. 61631016, 61901421 and 11571325) and the National Key R&D Program of China (Grant No. 2018YFB1403903), and was supported by the Fundamental Research Funds for the Central Universities (Grant Nos. 2019E002, CUC19ZD003 and CUC200B017).
Cite this article
Jin, C., Tie, Y., Bai, Y. et al. A Style-Specific Music Composition Neural Network. Neural Process Lett 52, 1893–1912 (2020). https://doi.org/10.1007/s11063-020-10241-8