Abstract
Sketch-to-portrait conversion is an emerging research area that aims to transform rough facial line sketches into detailed, realistic portrait images. This paper presents a comprehensive study of how loss functions and data augmentation techniques affect the results obtained with the U-Net256 network architecture. Specifically, we examine the effects of Mean Squared Error (MSE) loss, L1 loss, Generative Adversarial Network (GAN) loss, and model size (parameter count) on the quality of the generated portraits. Experimental results demonstrate that the choice of loss function significantly influences both the perceptual quality and the accuracy of the converted portraits: while MSE and L1 losses help capture the overall facial structure, GAN loss excels at generating fine-grained detail. Moreover, we observe a trade-off between parameter count and image quality, with larger models producing more intricate outputs at the cost of increased computational complexity. These findings shed light on the sketch-to-portrait conversion task and contribute to the advancement of conversion systems that push the boundaries of realism and detail in generated portraits. Our final model achieves a Fréchet Inception Distance (FID) of 0.2184, ranking fourth on the CGI-PSG2023 leaderboard as of September 21st. The dataset is available on the CGI-PSG2023 webpage, and all code is open source at https://github.com/KKK-Liu/Portrait.git.
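Since the abstract contrasts MSE/L1 losses (overall structure) with GAN loss (fine detail), the sketch below shows how such terms are typically combined in a pix2pix-style generator objective. It is a minimal illustration, assuming PyTorch: the toy discriminator, tensor shapes, and `lambda_l1` weight are assumptions for exposition, not the paper's exact configuration.

```python
# Minimal sketch of a combined GAN + L1 generator objective (pix2pix-style).
# The stand-in discriminator and lambda_l1 value are illustrative assumptions.
import torch
import torch.nn as nn

adversarial_loss = nn.BCEWithLogitsLoss()   # GAN term: fool the discriminator
reconstruction_loss = nn.L1Loss()           # structural term; nn.MSELoss() for MSE
lambda_l1 = 100.0                           # relative weight (pix2pix's default)

def generator_loss(discriminator, sketch, fake_portrait, real_portrait):
    """GAN loss (fine-grained detail) + weighted L1 loss (overall structure)."""
    # Conditional discriminator sees the sketch concatenated with the output.
    pred_fake = discriminator(torch.cat([sketch, fake_portrait], dim=1))
    loss_gan = adversarial_loss(pred_fake, torch.ones_like(pred_fake))
    loss_l1 = reconstruction_loss(fake_portrait, real_portrait)
    return loss_gan + lambda_l1 * loss_l1

# Toy usage with a one-layer stand-in discriminator and random 256x256 images.
disc = nn.Conv2d(6, 1, kernel_size=4, stride=2, padding=1)  # hypothetical
sketch = torch.randn(1, 3, 256, 256)
fake = torch.randn(1, 3, 256, 256, requires_grad=True)
real = torch.randn(1, 3, 256, 256)
generator_loss(disc, sketch, fake, real).backward()
```

Raising `lambda_l1` biases training toward structural fidelity, while lowering it lets the adversarial term dominate and favors sharper detail, which matches the trade-off the abstract describes between the loss terms.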
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, K., Wu, Q., Xie, M. (2024). Large GAN Is All You Need. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14495. Springer, Cham. https://doi.org/10.1007/978-3-031-50069-5_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50068-8
Online ISBN: 978-3-031-50069-5