Abstract
Image aesthetic assessment is an active research topic, yet the aesthetic assessment of artistic images remains comparatively under-explored, mainly because large-scale artwork datasets have been lacking. The recently proposed BAID dataset fills this gap and makes it possible to study aesthetic assessment methods tailored to artworks; such research contributes to the computational study of art and can also support real-world applications, such as assisting the judging of art examinations. In this paper, we propose TSC-Net (Theme-Style-Color guided Artistic Image Aesthetics Assessment Network), which extracts theme, style, and color information from an image and fuses it with general aesthetic features to assess artistic images. Experiments on the BAID dataset show that the proposed method outperforms existing approaches.
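To make the fusion idea concrete, below is a minimal PyTorch sketch of a multi-branch network in the spirit of the abstract: separate encoders for theme, style, color, and general aesthetic information, whose features are concatenated and regressed to a single score. The backbone choice (ResNet-18), the histogram-based color branch, the feature dimensions, and the names (BranchEncoder, ColorHistogramEncoder, TSCNetSketch) are illustrative assumptions, not the authors' architecture.

# Minimal sketch of a multi-branch "theme / style / color + general aesthetics"
# fusion network, loosely following the abstract's description of TSC-Net.
# All design choices here are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class BranchEncoder(nn.Module):
    """Hypothetical CNN encoder producing a fixed-length feature vector."""
    def __init__(self, out_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights optional
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(512, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.backbone(x))


class ColorHistogramEncoder(nn.Module):
    """Illustrative color branch: per-channel histogram followed by an MLP."""
    def __init__(self, bins: int = 32, out_dim: int = 256):
        super().__init__()
        self.bins = bins
        self.mlp = nn.Sequential(nn.Linear(3 * bins, out_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        hists = []
        for ch in range(c):
            # per-image, per-channel histogram; x is assumed to be in [0, 1]
            h = torch.stack([
                torch.histc(x[i, ch], bins=self.bins, min=0.0, max=1.0)
                for i in range(b)
            ])
            hists.append(h / h.sum(dim=1, keepdim=True).clamp_min(1e-6))
        return self.mlp(torch.cat(hists, dim=1))


class TSCNetSketch(nn.Module):
    """Fuses theme, style, color, and general aesthetic features into one score."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.theme = BranchEncoder(feat_dim)
        self.style = BranchEncoder(feat_dim)
        self.general = BranchEncoder(feat_dim)
        self.color = ColorHistogramEncoder(out_dim=feat_dim)
        self.head = nn.Sequential(
            nn.Linear(4 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Sigmoid(),       # aesthetic score in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.theme(x), self.style(x), self.color(x), self.general(x)], dim=1
        )
        return self.head(feats).squeeze(1)


if __name__ == "__main__":
    model = TSCNetSketch()
    scores = model(torch.rand(2, 3, 224, 224))     # two dummy RGB images
    print(scores.shape)                             # torch.Size([2])

In practice the theme, style, and general branches would be pretrained on different auxiliary tasks (e.g. scene classification, style recognition, generic aesthetics) so that the fused features carry complementary information, but that training setup is likewise an assumption of this sketch.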
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, Y., Cao, W., Sheng, N., Shi, H., Guo, C., Ke, Y. (2024). TSC-Net: Theme-Style-Color Guided Artistic Image Aesthetics Assessment Network. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14495. Springer, Cham. https://doi.org/10.1007/978-3-031-50069-5_17
DOI: https://doi.org/10.1007/978-3-031-50069-5_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50068-8
Online ISBN: 978-3-031-50069-5
eBook Packages: Computer Science (R0)