Abstract
The transfer of artistic styles onto images has become prevalent in both industry and academia. Neural style transfer (NST) is a technique for transferring the style of one image onto another. The study and analysis of NST methods are essential for obtaining realistic, stylized images efficiently. This study explores different approaches to style transfer and highlights the key challenges involved. Specific research gaps in the field of NST are identified, and an exhaustive analysis of existing NST methods is presented. Qualitative and quantitative comparisons of the most renowned NST methods are conducted; the COCO dataset is used to compute PSNR and SSIM values for comparing their results. The computation times of the different NST methods are also discussed. Finally, the study concludes with the future scope of research in the field of NST.
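As a rough illustration of the quantitative comparison described above, the sketch below computes PSNR and SSIM for a stylized image against its content reference. It is a minimal sketch assuming the scikit-image implementations of both metrics and hypothetical file names; the paper does not specify which implementation or preprocessing was used.

    # Minimal sketch (assumed scikit-image metrics; file names are hypothetical).
    from skimage.io import imread
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    content = imread("coco_content.jpg")      # reference image, e.g. from the COCO dataset
    stylized = imread("stylized_output.jpg")  # output of an NST method, same resolution

    # data_range=255 assumes 8-bit images; channel_axis=-1 needs scikit-image >= 0.19
    psnr = peak_signal_noise_ratio(content, stylized, data_range=255)
    ssim = structural_similarity(content, stylized, channel_axis=-1, data_range=255)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")

Higher PSNR and SSIM values indicate that the stylized output better preserves the structure of the content image; averaging such scores over a set of COCO images yields the kind of quantitative comparison reported in the study.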
Cite this paper
Bagwari, S., Choudhary, K., Raikwar, S., Rana, P.S., Mighlani, S. (2023). A Review: The Study and Analysis of Neural Style Transfer in Image. In: Tistarelli, M., Dubey, S.R., Singh, S.K., Jiang, X. (eds.) Computer Vision and Machine Intelligence. Lecture Notes in Networks and Systems, vol. 586. Springer, Singapore. https://doi.org/10.1007/978-981-19-7867-8_17