Restoration of ancient temple murals using cGAN and PConv networks

Published: 01 December 2022

Abstract

India is a fountainhead of several art forms, such as paintings, inscriptions, sculptures, pottery, and textile arts. Mural paintings are one of these forms and are usually found on the walls of temples and caves. A redeeming feature of temple murals is that the architectural elements of the given space are incorporated directly into the paintings. The majority of these paintings have faded with time, and many bear cracks and dirt stains. As they deteriorate further, understanding the nuances of the paintings becomes difficult. These paintings need to be restored by qualified craftsmen, who are difficult to find today. A powerful image restoration method is therefore needed to meet the requirements of mural paintings. This work proposes an efficient inpainting technique for the reconstruction of ancient temple murals that ignores these multiple random irregularities during reconstruction. The proposed method uses a cGAN both to identify degraded sections and to generate masks automatically. A mask is a black and white image in which the white pixels mark regions that require editing while the black pixels do not. The algorithm uses these masks as a hyper-parameter to decide which patch should be filled in next. The deteriorated murals are then reconstructed using a sliding window-based deep convolutional network in which the convolution is masked and renormalized so that it is conditioned only on valid pixels. As part of the forward pass, the proposed work automatically produces an updated mask for the following layer. The performance of the combined cGAN-Deepconv inpainting technique has been compared with six state-of-the-art inpainting methods. The experimental reconstruction results confirm that the sliding-window Deepconv inpainting is more adaptable and better suited for mural restoration. Further, the proposed method achieved the best reconstruction results, with the best values of several performance parameters, i.e. Peak Signal to Noise Ratio, Mean Squared Error, and Structural Similarity Index.
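The masked and renormalized convolution described above is the partial-convolution (PConv) operation named in the title: each output value is computed only from valid pixels, rescaled by the fraction of valid pixels in the window, and an updated mask is passed to the next layer. The sketch below is a minimal, illustrative PyTorch version of one such layer; it is not the authors' implementation, and the framework, layer sizes, and exact renormalization details are assumptions made here for clarity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PartialConv2d(nn.Module):
        # Illustrative partial-convolution layer (not the paper's code):
        # convolve only valid pixels, renormalize by the number of valid
        # pixels per window, and return an updated mask for the next layer.
        def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                                  stride=stride, padding=padding, bias=True)
            # Fixed all-ones kernel used only to count valid pixels per window.
            self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
            self.window = float(kernel_size * kernel_size)
            self.stride, self.padding = stride, padding

        def forward(self, x, mask):
            # x: (N, C, H, W) features; mask: (N, 1, H, W), 1 = valid pixel, 0 = hole.
            with torch.no_grad():
                valid = F.conv2d(mask, self.ones,
                                 stride=self.stride, padding=self.padding)
            out = self.conv(x * mask)                   # use valid pixels only
            bias = self.conv.bias.view(1, -1, 1, 1)
            scale = self.window / valid.clamp(min=1.0)  # renormalize per window
            out = (out - bias) * scale + bias
            hole = valid == 0
            out = out.masked_fill(hole, 0.0)            # fully-masked windows output 0
            return out, (~hole).float()                 # updated mask for the next layer

    # Example: a hypothetical 64x64 RGB patch with a random binary damage mask.
    x = torch.randn(1, 3, 64, 64)
    m = (torch.rand(1, 1, 64, 64) > 0.3).float()
    y, m_next = PartialConv2d(3, 32, kernel_size=3, padding=1)(x, m)

Stacking such layers, with the holes in the mask shrinking at each stage, corresponds to what the abstract describes as producing an updated mask during the forward pass.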


Highlights

This article introduces an efficient technique for identifying and restoring damaged areas in ancient temple murals.
The work uses a partial convolution-based inpainting technique and achieves better performance than existing methods.
The performance of the proposed system is measured by comparing it with existing inpainting techniques (see the metric sketch below).
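The comparison above is quantified in the paper using Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), and the Structural Similarity Index (SSIM). As a generic illustration of how these three metrics can be computed, here is a minimal sketch using scikit-image; it is not the authors' evaluation code, and the image shapes, value range, and random test data are assumptions made for the example.

    import numpy as np
    from skimage.metrics import (mean_squared_error,
                                 peak_signal_noise_ratio,
                                 structural_similarity)

    def restoration_scores(reference, restored):
        # reference, restored: float images in [0, 1], shape (H, W, 3).
        mse = mean_squared_error(reference, restored)
        psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
        # channel_axis=-1 treats the last axis as colour channels (scikit-image >= 0.19).
        ssim = structural_similarity(reference, restored,
                                     data_range=1.0, channel_axis=-1)
        return mse, psnr, ssim

    # Example with random data standing in for a mural and its restoration.
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256, 3))
    restored = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0.0, 1.0)
    print(restoration_scores(reference, restored))

Lower MSE and higher PSNR and SSIM indicate a reconstruction that is closer to the reference image.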


Cited By

  • (2024) Image Restoration Technology of Tang Dynasty Tomb Murals Using Adversarial Edge Learning. Journal on Computing and Cultural Heritage 17(3), 1–11. https://doi.org/10.1145/3674984. Online publication date: 15-Jul-2024.
  • (2024) Virtual cleaning of sooty murals in ancient temples using twice colour attenuation prior. Computers and Graphics 120(C). https://doi.org/10.1016/j.cag.2024.103924. Online publication date: 18-Nov-2024.
  • (2023) Identifying influences between artists based on artwork faces and geographic proximity. Computers and Graphics 114(C), 116–125. https://doi.org/10.1016/j.cag.2023.05.028. Online publication date: 1-Aug-2023.



          Published In

Computers and Graphics, Volume 109, Issue C
December 2022, 127 pages

          Publisher

Pergamon Press, Inc., United States


          Author Tags

          1. Image inpainting
          2. cGAN
          3. Partial convolution
          4. Mask generation

          Qualifiers

          • Research-article


