Abstract
Due to advances in technology and research over the past few decades, music has become increasingly available to the public, but with such a vast selection it becomes challenging to choose which songs to listen to. Research on music recommendation systems (MRS) identifies three main methods for recommending songs: context-based, content-based, and collaborative filtering. A hybrid combination of the three methods has the potential to improve music recommendation; however, it has not been fully explored. In this paper, a hybrid music recommendation system is proposed that uses emotion as the context and musical data as the content. To achieve this, the outputs of a convolutional neural network (CNN) and a weight extraction method are combined: the CNN extracts user emotion from a favorite playlist, while the weight extraction method extracts audio features from the songs and their metadata. The combined emotion and audio-feature representation is then passed to a collaborative filtering method that selects the best songs for recommendation. For performance evaluation, the proposed system is compared with a content similarity music recommendation system (CSMRS) as well as other personalized music recommendation systems.
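As a rough illustration of the hybrid pipeline described in the abstract, the sketch below shows one way the pieces could fit together in Python. This is a minimal sketch, not the authors' implementation: the pre-trained emotion CNN (emotion_cnn), the per-song audio-feature vectors, the feature weights, and all function names are hypothetical placeholders, and cosine similarity stands in for the collaborative filtering step.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def user_profile(emotion_cnn, playlist, feature_weights):
    """Build a hybrid user vector from a favorite playlist (hypothetical interfaces).

    playlist["spectrograms"]: per-song spectrograms fed to the emotion CNN.
    playlist["features"]:     per-song audio-feature vectors (e.g. tempo, energy).
    feature_weights:          weights produced by the weight extraction step.
    """
    emotion = np.mean([emotion_cnn(spec) for spec in playlist["spectrograms"]], axis=0)
    audio = np.mean(playlist["features"], axis=0) * feature_weights
    return np.concatenate([emotion, audio])

def recommend(profile, candidates, top_k=10):
    """Rank candidate songs, each with a precomputed hybrid vector, by similarity."""
    scored = [(song_id, cosine(profile, vec)) for song_id, vec in candidates.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]
```

In practice the similarity/ranking stage would draw on other users' listening histories (collaborative filtering) rather than pure vector similarity; the sketch only indicates where the emotion and audio-feature signals are fused before that stage.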
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Omowonuola, V., Wilkerson, B., Kher, S. (2023). Hybrid Context-Content Based Music Recommendation System. In: Arai, K. (eds) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1. FTC 2022. Lecture Notes in Networks and Systems, vol 559. Springer, Cham. https://doi.org/10.1007/978-3-031-18461-1_8
DOI: https://doi.org/10.1007/978-3-031-18461-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18460-4
Online ISBN: 978-3-031-18461-1
eBook Packages: Intelligent Technologies and Robotics (R0)