Just Noticeable Difference Model for Images with Color Sensitivity
Figure 1. The framework of the proposed CSJND model.
Figure 2. An example of JND generation and a contaminated image guided by JND noise: (a) the original image; (b) the response map for contrast masking of the Y component; (c) the response map for pattern masking of the Y component; (d) the saliency prediction map; (e) the JND map of the Y component; and (f) the JND-contaminated image, with PSNR = 27.00 dB.
Figure 3. The Prewitt kernels in the vertical and horizontal directions.
Figure 4. Comparison of contaminated images from JND models based on different proposed factors. All contaminated images carry the same level of noise, with PSNR = 28.25 dB. (a) The original image. (b) The basic model $JND_{\theta}^{B}$, VMAF = 80.10. (c) The model $JND_{\theta}^{S}$, which adds saliency modulation to the basic model, VMAF = 84.42. (d) The model $JND_{\theta}^{C}$, which adds color sensitivity modulation to the basic model, VMAF = 88.04. (e) The proposed model $CSJND_{\theta}$, VMAF = 94.75.
Figure 5. An example comparison of contaminated images from different JND models. All contaminated images carry the same level of noise, with PSNR = 28.91 dB. (a) The original image; (b) Wu2013, VMAF = 82.65; (c) Wu2017, VMAF = 83.41; (d) Chen2019, VMAF = 87.44; (e) Jiang2022, VMAF = 90.34; and (f) the proposed CSJND model, VMAF = 94.99.
Figure 6. The set of test images, I1 through I12.
Abstract
1. Introduction
- The masking effects, including contrast masking, pattern masking, and edge protection, are adaptively modulated by visual saliency, which yields a more comprehensive measure of the masking effect.
- Because the HVS is not equally sensitive to the different color components, we apply color sensitivity modulation that reflects the visual sensitivity of human eyes to the Y, Cb, and Cr components. This is the first JND model that accounts for color sensitivity.
- The proposed CSJND model was used to guide the injection of noise into images (a sketch of this protocol follows this list). Our experimental results demonstrate that the CSJND model can tolerate more noise than existing models while maintaining better perceptual quality.
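For orientation, here is a minimal Python sketch of the common JND-guided noise-injection protocol used to compare models at a fixed PSNR. The helper names, the uniform random ±1 sign per pixel, and the clipping to the 8-bit range are our assumptions, not the authors' exact procedure:

```python
import numpy as np

def inject_jnd_noise(image, jnd_map, rng=None):
    """Perturb each pixel by its JND threshold with a random sign,
    keeping the distortion at the model's predicted visibility limit."""
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=image.shape)
    noisy = image.astype(np.float64) + signs * jnd_map
    return np.clip(noisy, 0.0, 255.0).astype(np.uint8)

def psnr(ref, test):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

Under this protocol, models are compared at the same PSNR: a better JND model places the same noise energy in less visible locations, which is what the SSIM/VMAF comparisons in Section 4 quantify.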
2. Related Works
3. The Proposed CSJND Model
3.1. Luminance Adaptation Effect
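The luminance adaptation term in pixel-domain JND models is commonly taken from the classical piecewise formulation of Chou and Li (reference 12 below); we reproduce that standard form here for orientation, noting that the paper's exact variant may differ:

$$
LA(x) =
\begin{cases}
17\left(1 - \sqrt{\dfrac{B(x)}{127}}\right) + 3, & B(x) \le 127,\\[4pt]
\dfrac{3}{128}\left(B(x) - 127\right) + 3, & B(x) > 127,
\end{cases}
$$

where $B(x)$ is the mean background luminance around pixel $x$. The threshold is largest in dark regions, smallest near mid-gray, and rises slowly toward bright regions.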
3.2. Visual Masking Effect with Saliency Modulation
3.2.1. Contrast Masking
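Contrast masking is typically estimated from local luminance gradients, and Figure 3 shows the Prewitt kernels used for that computation. A minimal NumPy sketch, in which the kernel normalization and border handling are our assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Prewitt kernels in the horizontal and vertical directions (Figure 3).
PREWITT_H = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float64)
PREWITT_V = PREWITT_H.T

def gradient_magnitude(luma):
    """Per-pixel gradient magnitude of the Y channel."""
    gh = convolve(luma.astype(np.float64), PREWITT_H, mode="nearest")
    gv = convolve(luma.astype(np.float64), PREWITT_V, mode="nearest")
    return np.hypot(gh, gv)
```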
3.2.2. Pattern Masking
3.2.3. Edge Protection
3.2.4. Saliency Modulation
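As a purely hypothetical illustration of saliency modulation (the paper's modulation function and constants are not reproduced here), the masking estimate can be attenuated where the saliency map predicts the gaze will linger, since distortions in attended regions are scrutinized more closely:

```python
def saliency_modulation(masking, saliency, k=0.5):
    """Attenuate the masking estimate in salient regions.

    `saliency` is assumed normalized to [0, 1] (e.g., an SDSP map,
    reference 40 below); `k` is a placeholder strength constant.
    """
    return masking * (1.0 - k * saliency)
```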
3.3. Color Sensitivity Modulation
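In the same hedged spirit, color sensitivity modulation can be pictured as per-channel scaling of the JND maps. The weights below are placeholders chosen only to illustrate the direction of the effect (the HVS tolerates larger errors in the chroma channels than in luma); they are not the paper's fitted values:

```python
# Placeholder per-channel weights: chroma thresholds are scaled up
# because the HVS is less sensitive to Cb/Cr errors than to Y errors.
CS_WEIGHTS = {"Y": 1.0, "Cb": 1.6, "Cr": 1.4}

def color_sensitivity_modulation(jnd_maps, weights=CS_WEIGHTS):
    """Scale each channel's JND map by its color-sensitivity weight."""
    return {ch: weights[ch] * jnd for ch, jnd in jnd_maps.items()}
```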
4. Experimental Results and Analysis
4.1. Analysis of the Proposed Factors
4.2. Comparison of JND Models
4.2.1. Performance Comparison of JND-Guided Noise Injection
4.2.2. Performance Comparison of Maximum Tolerable Noise Level
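This experiment reports the PSNR of images injected with as much JND-scaled noise as remains unnoticeable, so a lower PSNR means the model hides more noise. A hypothetical sweep, reusing `inject_jnd_noise` and `psnr` from the sketch in the Introduction and with `is_noticeable` standing in for the subjective visibility judgment:

```python
def max_tolerable_noise_psnr(image, jnd_map, is_noticeable,
                             lam=0.1, step=0.1, max_lam=5.0):
    """Raise the JND scaling factor until the noise is judged visible;
    return the PSNR (dB) of the last unnoticeable image."""
    last_psnr = None
    while lam <= max_lam:
        noisy = inject_jnd_noise(image, lam * jnd_map)
        if is_noticeable(image, noisy):
            break
        last_psnr = psnr(image, noisy)
        lam += step
    return last_psnr
```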
4.3. Comparison of Subjective Quality
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Shang, X.; Li, G.; Zhao, X.; Zuo, Y. Low complexity inter coding scheme for Versatile Video Coding (VVC). J. Vis. Commun. Image Represent. 2023, 90, 103683.
- Wu, J.; Shi, G.; Lin, W. Survey of visual just noticeable difference estimation. Front. Comput. Sci. 2019, 13, 4–15.
- Lin, W.; Ghinea, G. Progress and Opportunities in Modelling Just-Noticeable Difference (JND) for Multimedia. IEEE Trans. Multimed. 2021, 24, 3706–3721.
- Wan, W.; Zhou, K.; Zhang, K.; Zhan, Y.; Li, J. JND-guided perceptually color image watermarking in spatial domain. IEEE Access 2020, 8, 164504–164520.
- Zhang, Y.; Wang, Z.; Zhan, Y.; Meng, L.; Sun, J.; Wan, W. JND-aware robust image watermarking with tri-directional inter-block correlation. Int. J. Intell. Syst. 2021, 36, 7053–7079.
- Wan, W.; Li, W.; Liu, W.; Diao, Z.; Zhan, Y. QuatJND: A Robust Quaternion JND Model for Color Image Watermarking. Entropy 2022, 24, 1051.
- Ki, S.; Do, J.; Kim, M. Learning-based JND-directed HDR video preprocessing for perceptually lossless compression with HEVC. IEEE Access 2020, 8, 228605–228618.
- Nami, S.; Pakdaman, F.; Hashemi, M.R.; Shirmohammadi, S. BL-JUNIPER: A CNN-Assisted Framework for Perceptual Video Coding Leveraging Block-Level JND. IEEE Trans. Multimed. 2022, 1–16.
- Dai, T.; Gu, K.; Niu, L.; Zhang, Y.B.; Lu, W.; Xia, S.T. Referenceless quality metric of multiply-distorted images based on structural degradation. Neurocomputing 2018, 290, 185–195.
- Seo, S.; Ki, S.; Kim, M. A novel just-noticeable-difference-based saliency-channel attention residual network for full-reference image quality predictions. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2602–2616.
- Sendjasni, A.; Larabi, M.C.; Cheikh, F.A. Perceptually-weighted CNN for 360-degree image quality assessment using visual scan-path and JND. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 1439–1443.
- Chou, C.H.; Li, Y.C. A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile. IEEE Trans. Circuits Syst. Video Technol. 1995, 5, 467–476.
- Yang, X.; Ling, W.; Lu, Z.; Ong, E.P.; Yao, S. Just noticeable distortion model and its applications in video coding. Signal Process. Image Commun. 2005, 20, 662–680.
- Liu, A.; Lin, W.; Paul, M.; Deng, C.; Zhang, F. Just noticeable difference for images with decomposition model for separating edge and textured regions. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 1648–1652.
- Chen, Z.; Guillemot, C. Perceptually-friendly H.264/AVC video coding based on foveated just-noticeable-distortion model. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 806–819.
- Wu, J.; Lin, W.; Shi, G.; Wang, X.; Li, F. Pattern masking estimation in image with structural uncertainty. IEEE Trans. Image Process. 2013, 22, 4892–4904.
- Wu, J.; Lin, W.; Shi, G.; Li, L.; Fang, Y. Orientation selectivity based visual pattern for reduced-reference image quality assessment. Inf. Sci. 2016, 351, 18–29.
- Eckert, M.P.; Bradley, A.P. Perceptual quality metrics applied to still image compression. Signal Process. 1998, 70, 177–200.
- Borji, A.; Itti, L. State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 185–207.
- Zhang, X.; Lin, W.; Xue, P. Improved estimation for just-noticeable visual distortion. Signal Process. 2005, 85, 795–808.
- Wei, Z.; Ngan, K.N. Spatio-temporal just noticeable distortion profile for grey scale image/video in DCT domain. IEEE Trans. Circuits Syst. Video Technol. 2009, 19, 337–346.
- Wu, J.; Shi, G.; Lin, W.; Liu, A.; Qi, F. Just noticeable difference estimation for images with free-energy principle. IEEE Trans. Multimed. 2013, 15, 1705–1710.
- Wang, S.; Ma, L.; Fang, Y.; Lin, W.; Ma, S.; Gao, W. Just noticeable difference estimation for screen content images. IEEE Trans. Image Process. 2016, 25, 3838–3851.
- Wu, J.; Li, L.; Dong, W.; Shi, G.; Lin, W.; Kuo, C.C.J. Enhanced just noticeable difference model for images with pattern complexity. IEEE Trans. Image Process. 2017, 26, 2682–2693.
- Chen, Z.; Wu, W. Asymmetric foveated just-noticeable-difference model for images with visual field inhomogeneities. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 4064–4074.
- Wang, C.; Wang, Y.; Lian, J. A Superpixel-Wise Just Noticeable Distortion Model. IEEE Access 2020, 8, 204816–204824.
- Jiang, Q.; Liu, Z.; Wang, S.; Shao, F.; Lin, W. Towards Top-Down Just Noticeable Difference Estimation of Natural Images. IEEE Trans. Image Process. 2022, 31, 3697–3712.
- Ahumada, A.J., Jr.; Peterson, H.A. Luminance-model-based DCT quantization for color image compression. In Proceedings of the Human Vision, Visual Processing, and Digital Display III, San Jose, CA, USA, 10–13 February 1992; Volume 1666, pp. 365–374.
- Watson, A.B. DCTune: A technique for visual optimization of DCT quantization matrices for individual images. In Proceedings of the SID International Symposium Digest of Technical Papers, Society for Information Display, Playa del Rey, CA, USA, 26 January 1993; Volume 24, p. 946.
- Zhang, X.; Lin, W.; Xue, P. Just-noticeable difference estimation with pixels in images. J. Vis. Commun. Image Represent. 2008, 19, 30–41.
- Wang, H.; Wang, L.; Hu, X.; Tu, Q.; Men, A. Perceptual video coding based on saliency and just noticeable distortion for H.265/HEVC. In Proceedings of the 2014 International Symposium on Wireless Personal Multimedia Communications (WPMC), Sydney, Australia, 7–10 September 2014; pp. 106–111.
- Wan, W.; Wu, J.; Xie, X.; Shi, G. A novel just noticeable difference model via orientation regularity in DCT domain. IEEE Access 2017, 5, 22953–22964.
- Wang, H.; Yu, L.; Wang, S.; Xia, G.; Yin, H. A novel foveated-JND profile based on an adaptive foveated weighting model. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; pp. 1–4.
- Ki, S.; Bae, S.H.; Kim, M.; Ko, H. Learning-based just-noticeable-quantization-distortion modeling for perceptual video coding. IEEE Trans. Image Process. 2018, 27, 3178–3193.
- Liu, H.; Zhang, Y.; Zhang, H.; Fan, C.; Kwong, S.; Kuo, C.C.J.; Fan, X. Deep learning-based picture-wise just noticeable distortion prediction model for image compression. IEEE Trans. Image Process. 2019, 29, 641–656.
- Shen, X.; Ni, Z.; Yang, W.; Zhang, X.; Wang, S.; Kwong, S. Just noticeable distortion profile inference: A patch-level structural visibility learning approach. IEEE Trans. Image Process. 2020, 30, 26–38.
- Shang, X.; Wang, G.; Zhao, X.; Zuo, Y.; Liang, J.; Bajić, I.V. Weighting quantization matrices for HEVC/H.265-coded RGB videos. IEEE Access 2019, 7, 36019–36032.
- Shang, X.; Liang, J.; Wang, G.; Zhao, H.; Wu, C.; Lin, C. Color-sensitivity-based combined PSNR for objective video quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 1239–1250.
- Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
- Zhang, L.; Gu, Z.; Li, H. SDSP: A novel saliency detection method by combining simple priors. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 171–175.
- Li, Z.; Aaron, A.; Katsavounidis, I.; Moorthy, A.; Manohara, M. Toward a Practical Perceptual Video Quality Metric. Netflix Technology Blog, 6 June 2016.
- Li, Z.; Bampis, C.; Novak, J.; Aaron, A.; Swanson, K.; Moorthy, A.; Cock, J. VMAF: The Journey Continues. Netflix Technology Blog, 25 October 2018.
- Sheikh, H.R. Image and Video Quality Assessment Research at LIVE; The University of Texas: Austin, TX, USA, 2003. Available online: https://sipi.usc.edu/database/database.php (accessed on 10 January 2022).
- Franzen, R. Kodak Lossless True Color Image Suite. Available online: http://www.r0k.us/graphics/kodak/ (accessed on 1 February 2022).
- Methodology for the Subjective Assessment of the Quality of Television Pictures; Document ITU-R BT.500-11; International Telecommunication Union: Geneva, Switzerland, 2002.
SSIM and VMAF of contaminated images generated by each JND model at the same injected-noise level (PSNR, in dB) per image:

| Image | PSNR | Wu2013 SSIM | Wu2013 VMAF | Wu2017 SSIM | Wu2017 VMAF | Chen2019 SSIM | Chen2019 VMAF | Jiang2022 SSIM | Jiang2022 VMAF | Proposed SSIM | Proposed VMAF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| I1 | 28.23 | 0.80 | 87.05 | 0.81 | 89.31 | 0.84 | 92.37 | 0.85 | 94.17 | 0.88 | 96.01 |
| I2 | 24.52 | 0.71 | 78.16 | 0.72 | 80.76 | 0.75 | 83.48 | 0.77 | 85.56 | 0.82 | 90.98 |
| I3 | 26.47 | 0.80 | 88.24 | 0.80 | 88.69 | 0.82 | 90.71 | 0.83 | 92.48 | 0.87 | 95.85 |
| I4 | 25.95 | 0.74 | 83.33 | 0.75 | 85.04 | 0.78 | 87.34 | 0.82 | 90.02 | 0.85 | 94.50 |
| I5 | 26.18 | 0.78 | 84.12 | 0.79 | 85.31 | 0.80 | 86.75 | 0.81 | 89.91 | 0.85 | 94.08 |
| I6 | 26.91 | 0.79 | 86.95 | 0.81 | 87.76 | 0.82 | 88.54 | 0.83 | 91.36 | 0.84 | 95.74 |
| I7 | 27.49 | 0.80 | 85.87 | 0.82 | 86.99 | 0.84 | 90.47 | 0.85 | 92.47 | 0.86 | 94.15 |
| I8 | 24.91 | 0.73 | 83.42 | 0.79 | 84.64 | 0.81 | 88.40 | 0.82 | 89.40 | 0.84 | 93.03 |
| I9 | 26.37 | 0.72 | 82.86 | 0.73 | 83.72 | 0.73 | 84.01 | 0.75 | 86.01 | 0.78 | 89.73 |
| I10 | 24.56 | 0.71 | 80.83 | 0.75 | 85.76 | 0.76 | 88.58 | 0.79 | 90.37 | 0.82 | 94.59 |
| I11 | 26.38 | 0.79 | 86.43 | 0.80 | 87.49 | 0.82 | 89.45 | 0.83 | 92.80 | 0.85 | 94.43 |
| I12 | 25.16 | 0.73 | 82.82 | 0.75 | 83.88 | 0.77 | 86.79 | 0.79 | 88.94 | 0.84 | 93.91 |
| Average | 26.09 | 0.76 | 84.17 | 0.78 | 85.78 | 0.80 | 88.07 | 0.81 | 90.29 | 0.84 | 93.92 |
PSNR (dB) of images carrying the maximum tolerable noise for each JND model; a lower PSNR indicates that more noise is tolerated:

| Image | Wu2013 | Wu2017 | Chen2019 | Jiang2022 | Proposed |
|---|---|---|---|---|---|
| I1 | 35.72 | 35.28 | 34.45 | 32.56 | 32.25 |
| I2 | 36.86 | 34.68 | 35.27 | 33.25 | 31.92 |
| I3 | 35.42 | 36.82 | 35.58 | 34.84 | 32.82 |
| I4 | 37.24 | 35.65 | 35.27 | 33.48 | 32.45 |
| I5 | 35.54 | 34.78 | 34.82 | 33.65 | 32.28 |
| I6 | 33.64 | 34.48 | 33.86 | 32.94 | 32.35 |
| I7 | 36.53 | 35.85 | 33.46 | 34.52 | 32.93 |
| I8 | 35.58 | 34.69 | 35.87 | 35.72 | 33.68 |
| I9 | 33.64 | 33.93 | 33.41 | 33.85 | 31.38 |
| I10 | 35.92 | 34.34 | 34.48 | 33.57 | 32.01 |
| I11 | 34.82 | 35.14 | 34.78 | 34.42 | 32.57 |
| I12 | 36.62 | 35.48 | 34.56 | 34.25 | 31.94 |
| Average | 35.63 | 35.09 | 34.65 | 33.92 | 32.38 |
Scoring scale for the subjective comparison:

| Description | Same Quality | Slightly Better | Better | Much Better |
|---|---|---|---|---|
| Score | 0 | 1 | 2 | 3 |
Mean and standard deviation of subjective preference scores (0–3 scale above) for the proposed model over each competing model:

| Image | Proposed vs. Wu2013 Mean | Proposed vs. Wu2013 Std | Proposed vs. Wu2017 Mean | Proposed vs. Wu2017 Std | Proposed vs. Chen2019 Mean | Proposed vs. Chen2019 Std | Proposed vs. Jiang2022 Mean | Proposed vs. Jiang2022 Std |
|---|---|---|---|---|---|---|---|---|
| I1 | 0.667 | 0.577 | 0.934 | 0.455 | 0.778 | 0.574 | 0.834 | 0.574 |
| I2 | 0.762 | 0.539 | 0.952 | 0.540 | 0.836 | 0.650 | 0.752 | 0.458 |
| I3 | 1.619 | 0.669 | 1.532 | 0.565 | 1.389 | 0.600 | 1.235 | 0.745 |
| I4 | 0.905 | 0.700 | 0.946 | 0.432 | 0.862 | 0.512 | 0.746 | 0.375 |
| I5 | 1.143 | 1.062 | 0.864 | 0.742 | 1.124 | 0.648 | 0.962 | 0.784 |
| I6 | 1.190 | 0.750 | 1.183 | 0.790 | 0.854 | 0.820 | 0.943 | 0.824 |
| I7 | 0.714 | 0.463 | 0.644 | 0.452 | 0.684 | 0.620 | 0.648 | 0.848 |
| I8 | 0.857 | 1.153 | 0.843 | 0.604 | 0.793 | 0.704 | 0.745 | 0.762 |
| I9 | 0.381 | 0.590 | 0.416 | 0.580 | 0.177 | 0.850 | 0.211 | 0.650 |
| I10 | 1.333 | 0.658 | 1.348 | 0.694 | 1.368 | 0.569 | 1.248 | 0.480 |
| I11 | 1.429 | 0.676 | 0.968 | 0.480 | 0.983 | 0.834 | 0.954 | 0.568 |
| I12 | 1.190 | 0.602 | 0.793 | 0.675 | 0.867 | 0.704 | 0.853 | 0.565 |
| Average | 1.016 | 0.703 | 0.952 | 0.584 | 0.893 | 0.674 | 0.844 | 0.636 |
Citation
Zhang, Z.; Shang, X.; Li, G.; Wang, G. Just Noticeable Difference Model for Images with Color Sensitivity. Sensors 2023, 23, 2634. https://doi.org/10.3390/s23052634