
The context effect for blind image quality assessment

Published: 07 February 2023

Abstract

Image quality assessment (IQA) is a visuo-cognitive process and an essential stage in human interaction with the environment. Studies of the context effect (Brown and Daniel, 1987) show that the judgments made by the human visual system (HVS) depend on the contrast between the distorted image and the background environment. However, existing IQA methods evaluate quality from the distorted image alone and ignore the impact of the environment on human perception. In this paper, we propose a novel blind image quality assessment (BIQA) method based on the context effect. First, we use a graphical model to describe how the context effect influences human perception of image quality. Based on the established graph, we construct the context relation between the distorted image and the background environment using MatchNet (Han et al., 2015). Context features are then extracted from the constructed relation, while quality-related features are extracted pixel-wise from the distorted image by a fine-tuned neural network. Finally, these features are concatenated to quantify image quality degradations and regressed to quality scores. In addition, the proposed method is adaptable to various deep neural networks. Experimental results show that the proposed method not only achieves state-of-the-art performance on synthetically distorted images, but also yields a substantial improvement on authentically distorted images.
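The fusion step the abstract describes, with context features from the image/background relation concatenated with pixel-wise quality features and regressed to a score, can be sketched as follows. This is a minimal illustration only: the feature extractors below are hypothetical placeholders (simple image statistics standing in for the MatchNet-based relation and the fine-tuned CNN), not the authors' implementation.

```python
import numpy as np

def extract_context_features(distorted, background, dim=16):
    # Placeholder for the context relation between the distorted image
    # and its background environment: summarize their contrast with a
    # few statistics, then project to a fixed-size feature vector.
    diff = distorted.astype(np.float64) - background.astype(np.float64)
    stats = np.array([diff.mean(), diff.std(), np.abs(diff).mean()])
    proj = np.random.default_rng(0).standard_normal((stats.size, dim))
    return stats @ proj

def extract_quality_features(distorted, dim=16):
    # Placeholder for the fine-tuned network's pixel-wise quality features.
    img = distorted.astype(np.float64)
    stats = np.array([img.mean(), img.std(), np.percentile(img, 95)])
    proj = np.random.default_rng(1).standard_normal((stats.size, dim))
    return stats @ proj

def predict_quality(distorted, background, weights, bias=0.0):
    # Concatenate both feature groups and regress to a scalar score,
    # mirroring the fusion step described in the abstract.
    feats = np.concatenate([
        extract_context_features(distorted, background),
        extract_quality_features(distorted),
    ])
    return float(feats @ weights + bias)
```

In the paper the regression head would be trained jointly with the feature extractors; here `weights` and `bias` are simply the parameters of a linear regressor over the concatenated 32-dimensional feature vector.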

References

[1]
T.C. Brown, T.C. Daniel, Context effects in perceived environmental quality assessment: scene selection and landscape quality ratings, Journal of Environmental Psychology 7 (1987) 233–250.
[2]
X. Han, T. Leung, Y. Jia, R. Sukthankar, A.C. Berg, MatchNet: Unifying feature and metric learning for patch-based matching, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3279–3286.
[3]
Z. Wang, A.C. Bovik, L. Lu, Why is image quality assessment so difficult?, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2002, pp. 3313–3316.
[4]
S. Wang, K. Jin, H. Lu, C. Cheng, J. Ye, D. Qian, Human visual system-based fundus image quality assessment of portable fundus camera photographs, IEEE Transactions on Medical Imaging 35 (2015) 1046–1055.
[5]
B. Yan, B. Bare, W. Tan, Naturalness-aware deep no-reference image quality assessment, IEEE Transactions on Multimedia 21 (2019) 2603–2615.
[6]
K.-Y. Lin, G. Wang, Hallucinated-IQA: No-reference image quality assessment via adversarial learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 732–741.
[7]
D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Freeman, San Francisco, 1982.
[8]
A. Newell, Unified Theories of Cognition, Harvard University Press, 1994.
[9]
A.K. Moorthy, A.C. Bovik, Blind image quality assessment: From natural scene statistics to perceptual quality, IEEE Transactions on Image Processing 20 (2011) 3350–3364.
[10]
J. Xu, P. Ye, Q. Li, H. Du, Y. Liu, D. Doermann, Blind image quality assessment based on high order statistics aggregation, IEEE Transactions on Image Processing 25 (2016) 4444–4457.
[11]
Y. Liu, Q. Li, Y. Yuan, Q. Du, Q. Wang, ABNet: Adaptive balanced network for multiscale object detection in remote sensing imagery, IEEE Transactions on Geoscience and Remote Sensing 60 (2022) 1–14.
[12]
X. Liu, J. van de Weijer, A.D. Bagdanov, RankIQA: Learning from rankings for no-reference image quality assessment, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1040–1049.
[13]
L. Kang, P. Ye, Y. Li, D. Doermann, Convolutional neural networks for no-reference image quality assessment, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1733–1740.
[14]
H. Talebi, P. Milanfar, NIMA: Neural image assessment, IEEE Transactions on Image Processing 27 (2018) 3998–4011.
[15]
S. Su, Q. Yan, Y. Zhu, C. Zhang, X. Ge, J. Sun, Y. Zhang, Blindly assess image quality in the wild guided by a self-adaptive hyper network, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 3664–3673.
[16]
W. Zhang, K. Ma, G. Zhai, X. Yang, Uncertainty-aware blind image quality assessment in the laboratory and wild, IEEE Transactions on Image Processing 30 (2021) 3474–3486.
[17]
K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, Computational and Biological Learning Society (2015) 1–14.
[18]
S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, W. Samek, Deep neural networks for no-reference and full-reference image quality assessment, IEEE Transactions on Image Processing 27 (2018) 206–219.
[19]
H. Zeng, L. Zhang, A.C. Bovik, Blind image quality assessment with a probabilistic quality representation, in: Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), 2018, pp. 609–613.
[20]
S. Bianco, L. Celona, P. Napoletano, R. Schettini, On the use of deep learning for blind image quality assessment, Signal, Image and Video Processing 12 (2018) 355–362.
[21]
K. Ma, W. Liu, T. Liu, Z. Wang, D. Tao, dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs, IEEE Transactions on Image Processing 26 (2017) 3951–3964.
[22]
K. Ma, W. Liu, K. Zhang, Z. Duanmu, Z. Wang, W. Zuo, End-to-end blind image quality assessment using deep neural networks, IEEE Transactions on Image Processing 27 (2018) 1202–1213.
[23]
J. Kim, S. Lee, Fully deep blind image quality predictor, IEEE Journal of Selected Topics in Signal Processing 11 (2017) 206–220.
[24]
H. Zhu, L. Li, J. Wu, W. Dong, G. Shi, MetaIQA: Deep meta-learning for no-reference image quality assessment, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14143–14152.
[25]
Z. Pan, F. Yuan, J. Lei, Y. Fang, X. Shao, S. Kwong, VCRNet: Visual compensation restoration network for no-reference image quality assessment, IEEE Transactions on Image Processing (2022).
[26]
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Proceedings of the Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[27]
H. Ren, D. Chen, Y. Wang, RAN4IQA: Restorative adversarial nets for no-reference image quality assessment, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[28]
K.-Y. Lin, G. Wang, Hallucinated-IQA: No-reference image quality assessment via adversarial learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 732–741.
[29]
J. Kim, A.-D. Nguyen, S. Lee, Deep CNN-based blind image quality predictor, IEEE Transactions on Neural Networks and Learning Systems (2018) 1–14.
[30]
O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234–241.
[31]
L. Zhang, L. Zhang, X. Mou, D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Transactions on Image Processing 20 (2011) 2378–2386.
[32]
D. Pan, P. Shi, M. Hou, Z. Ying, S. Fu, Y. Zhang, Blind predicting similar quality map for image quality assessment, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6373–6382.
[33]
Z. Wang, K. Ma, Active fine-tuning from gMAD examples improves blind image quality assessment, IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).
[34]
M. Cheon, S.-J. Yoon, B. Kang, J. Lee, Perceptual image quality assessment with transformers, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 433–442.
[35]
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: A large-scale hierarchical image database, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[36]
J. Pearl, Graphical models for probabilistic and causal reasoning, in: Quantified Representation of Uncertainty and Imprecision, Springer, 1998, pp. 367–389.
[37]
A.E. Gelfand, A.F. Smith, Sampling-based approaches to calculating marginal densities, Journal of the American Statistical Association 85 (1990) 398–409.
[38]
H.R. Sheikh, M.F. Sabir, A.C. Bovik, A statistical evaluation of recent full reference image quality assessment algorithms, IEEE Transactions on Image Processing 15 (2006) 3440–3451.
[39]
E.C. Larson, D.M. Chandler, Most apparent distortion: full-reference image quality assessment and the role of strategy, Journal of Electronic Imaging 19 (2010).
[40]
N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al., Image database TID2013: Peculiarities, results and perspectives, Signal Processing: Image Communication 30 (2015) 57–77.
[41]
D. Ghadiyaram, A.C. Bovik, Massive online crowdsourced study of subjective and objective picture quality, IEEE Transactions on Image Processing 25 (2016) 372–387.
[42]
Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing 13 (2004) 600–612.
[43]
L. Zhang, L. Zhang, A.C. Bovik, A feature-enriched completely blind image quality evaluator, IEEE Transactions on Image Processing 24 (2015) 2579–2591.
[44]
S.A. Golestaneh, S. Dadsetan, K.M. Kitani, No-reference image quality assessment via transformers, relative ranking, and self-consistency, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 1220–1230.
[45]
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[46]
C. Szegedy, S. Ioffe, V. Vanhoucke, A.A. Alemi, Inception-v4, Inception-ResNet and the impact of residual connections on learning, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2017, pp. 4278–4284.
[47]
S. Xie, R. Girshick, P. Dollár, Z. Tu, K. He, Aggregated residual transformations for deep neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1492–1500.
[48]
Z. Wang, A.C. Bovik, Modern Image Quality Assessment, Synthesis Lectures on Image, Video, and Multimedia Processing 2 (2006) 1–156.


      Published In

Neurocomputing, Volume 521, Issue C
      Feb 2023
      222 pages

      Publisher

      Elsevier Science Publishers B. V.

      Netherlands


      Author Tags

      1. Blind image quality assessment
      2. Context effect
      3. Probability graph

      Qualifiers

      • Research-article
