Poisson Shot Noise Removal by an Oracular Non-Local Algorithm


Abstract

In this paper, we address the problem of denoising images obtained under low-light conditions for the Poisson shot noise model. Under such conditions, the variance stabilization transform (VST) is no longer applicable, so that the state-of-the-art algorithms designed for additive white Gaussian noise cannot be applied. We first introduce an oracular non-local algorithm and prove that it converges at the optimal rate under a Hölder regularity assumption on the underlying image, when the search window size is suitably chosen. We also prove that the convergence remains valid when the oracle function is estimated within a prescribed error range. We then define a realizable filter by a statistical estimation of the similarity function which determines the oracle weights. The convergence of the realizable filter is justified by proving that the estimator of the similarity function lies in the prescribed error range with high probability. Experiments show that under low-light conditions the proposed filter is competitive with recent state-of-the-art algorithms.
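
For illustration, the oracular filter studied below admits a very short implementation. The following Python sketch (not the code used in the experiments; the window radius delta_px and bandwidth H are illustrative stand-ins for the parameters \(\varDelta \) and \(H(x_0)\) defined in the paper) computes \(u^*(x_0)=\sum _x w^*(x_0,x)v(x)\) with exponential weights driven by the oracle similarity \(\rho (x_0,x)=|u(x)-u(x_0)|\), which depends on the unknown clean image u; the realizable filter replaces \(\rho \) by the statistical estimate discussed in the appendix.

import numpy as np

def oracle_filter(v, u, delta_px=10, H=2.0):
    # Oracle non-local estimate: u*(x0) = sum_x w*(x0,x) v(x), where
    # w*(x0,x) is proportional to exp(-rho^2(x0,x)/H^2) over the search
    # window ||x - x0||_inf <= delta_px, and rho(x0,x) = |u(x) - u(x0)|
    # uses the true image u (hence "oracle").
    out = np.empty(v.shape, dtype=float)
    n1, n2 = v.shape
    for i in range(n1):
        for j in range(n2):
            i0, i1 = max(i - delta_px, 0), min(i + delta_px + 1, n1)
            j0, j1 = max(j - delta_px, 0), min(j + delta_px + 1, n2)
            w = np.exp(-((u[i0:i1, j0:j1] - u[i, j]) ** 2) / H ** 2)
            out[i, j] = np.sum(w * v[i0:i1, j0:j1]) / np.sum(w)
    return out

# usage sketch: u = clean test image (float array), v = rng.poisson(u)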



Notes

  1. http://sipi.usc.edu/database/.

  2. https://fermi.gsfc.nasa.gov/ssc.

References

  1. Anscombe, F.J.: The transformation of Poisson, binomial and negative-binomial data. Biometrika 35(3/4), 246–254 (1948)

  2. Azzari, L., Foi, A.: Variance stabilization for noisy + estimate combination in iterative Poisson denoising. IEEE Signal Process. Lett. 23(8), 1086–1090 (2016)

  3. Azzari, L., Foi, A.: Variance stabilization in Poisson image deblurring. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 728–731. IEEE (2017)

  4. Bria, A., Marrocco, C., Borges, L.R., Molinara, M., Marchesi, A., Mordang, J.-J., Karssemeijer, N., Tortorella, F.: Improving the automated detection of calcifications using adaptive variance stabilization. IEEE Trans. Med. Imaging (2018)

  5. Buades, A., Coll, B., Morel, J.-M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)

  6. Chouzenoux, E., Jezierska, A., Pesquet, J.-C., Talbot, H.: A convex approach for image restoration with exact Poisson–Gaussian likelihood. SIAM J. Imaging Sci. 8(4), 2662–2682 (2015)

  7. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)

  8. Danielyan, A., Foi, A., Katkovnik, V., Egiazarian, K.: Denoising of multispectral images via nonlocal groupwise spectrum-PCA. In: Conference on Colour in Graphics, Imaging, and Vision, no. 1, pp. 261–266. Society for Imaging Science and Technology (2010)

  9. Deledalle, C.-A., Tupin, F., Denis, L.: Poisson NL means: unsupervised non local means for Poisson noise. In: 2010 IEEE International Conference on Image Processing, pp. 801–804. IEEE (2010)

  10. Donoho, D.L., Johnstone, I.M.: Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425–455 (1994)

  11. Fan, J.: Local linear regression smoothers and their minimax efficiencies. Ann. Stat. 21(1), 196–216 (1993)

  12. Feng, W., Qiao, H., Chen, Y.: Poisson noise reduction with higher-order natural image prior model. SIAM J. Imaging Sci. 9(3), 1502–1524 (2016)

  13. Fisz, M.: The limiting distribution of a function of two independent random variables and its statistical application. Colloquium Mathematicum 3, 138–146 (1955)

  14. Fryzlewicz, P.: Likelihood ratio Haar variance stabilization and normalization for Poisson and other non-Gaussian noise removal. arXiv:1701.07263 (2017)

  15. Giryes, R., Elad, M.: Sparsity-based Poisson denoising with dictionary learning. IEEE Trans. Image Process. 23(12), 5057–5069 (2014)

  16. Goudail, F.: Performance comparison of pseudo-inverse and maximum-likelihood estimators of Stokes parameters in the presence of Poisson noise for spherical design-based measurement structures. Opt. Lett. 42(10), 1899–1902 (2017)

  17. Jansen, M.: Multiscale Poisson data smoothing. J. R. Stat. Soc. B 68(1), 27–48 (2006)

  18. Fan, J., Gijbels, I.: Local Polynomial Modelling and Its Applications. Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton (1996)

  19. Jin, Q., Grama, I., Kervrann, C., Liu, Q.: Nonlocal means and optimal weights for noise removal. SIAM J. Imaging Sci. 10(4), 1878–1920 (2017)

  20. Jin, Q., Grama, I., Liu, Q.: A new Poisson noise filter based on weights optimization. J. Sci. Comput. 58(3), 548–573 (2014)

  21. Jin, Q., Grama, I., Liu, Q.: Convergence theorems for the non-local means filter. Inverse Probl. Imaging 12(4), 853–881 (2018)

  22. Lebrun, M., Buades, A., Morel, J.-M.: A nonlocal Bayesian image denoising algorithm. SIAM J. Imaging Sci. 6(3), 1665–1688 (2013)

  23. Luisier, F., Vonesch, C., Blu, T., Unser, M.: Fast interscale wavelet denoising of Poisson-corrupted images. Signal Process. 90(2), 415–427 (2010)

  24. Makitalo, M., Foi, A.: A closed-form approximation of the exact unbiased inverse of the Anscombe variance-stabilizing transformation. IEEE Trans. Image Process. 20(9), 2697–2698 (2011)

  25. Makitalo, M., Foi, A.: Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Trans. Image Process. 20(1), 99–109 (2011)

  26. Mandel, J.: Use of the singular value decomposition in regression analysis. Am. Stat. 36(1), 15–24 (1982)

  27. Prucnal, P.R., Saleh, B.E.: Transformation of image-signal-dependent noise into image-signal-independent noise. Opt. Lett. 6(7), 316–318 (1981)

  28. Rond, A., Giryes, R., Elad, M.: Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016)

  29. Salmon, J., Harmany, Z., Deledalle, C.-A., Willett, R.: Poisson noise reduction with non-local PCA. J. Math. Imaging Vis. 48(2), 279–294 (2014)

  30. Srivastava, R., Srivastava, S.: Restoration of Poisson noise corrupted digital images with nonlinear PDE based filters along with the choice of regularization parameter estimation. Pattern Recognit. Lett. 34(10), 1175–1185 (2013)

  31. Sutour, C., Deledalle, C.-A., Aujol, J.-F.: Adaptive regularization of the NL-means: application to image and video denoising. IEEE Trans. Image Process. 23(8), 3506–3521 (2014)

  32. Terrell, G.R., Scott, D.W.: Variable kernel density estimation. Ann. Stat. 20(3), 1236–1265 (1992)

  33. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

  34. Zhang, B., Fadili, J., Starck, J.: Wavelets, ridgelets, and curvelets for Poisson noise removal. IEEE Trans. Image Process. 17(7), 1093–1108 (2008)

  35. Zhang, J., Hirakawa, K.: Improved denoising via Poisson mixture modeling of image sensor noise. IEEE Trans. Image Process. (2017)

  36. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)

  37. Zhang, Y., Song, P., Dai, Q.: Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson–Gaussian likelihood. Opt. Express 25(1), 168–179 (2017)


Acknowledgements

The authors are very grateful to Jean-Michel Morel for careful reading, helpful comments and suggestions. They are also grateful to the reviewers for their valuable comments and remarks. The work has been supported by the National Natural Science Foundation of China (Grants Nos. 12061052, 11731012 and 11971063), the Natural Science Fund of Inner Mongolia Autonomous Region (Grant No. 2020MS01002), the "111 project" of higher education talent training in Inner Mongolia Autonomous Region, the China Scholarship Council for a one-year visit at Ecole Normale Supérieure Paris-Saclay (No. 201806810001), the Centre Henri Lebesgue (CHL, ANR-11-LABX-0020-01) and the network information center of Inner Mongolia University. Q. Jin would also like to thank Professor Guoqing Chen for helpful suggestions.

Author information


Corresponding author

Correspondence to Quansheng Liu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs of the Main Results

1.1 Proof of Theorem 1

We first notice that by the Hölder condition (17), for each \(x_0 \in {\mathbf {I}}_0\), we have \(| u (x'_0) - u(x_0) | \le L\Vert x'_0- x_0\Vert _\infty ^\beta \le L N^{-\beta } = Ln^{-\beta /2}\), where \( x_{0}'=(x'_{0,1},x'_{0,2})= \left( \frac{[N x_{0,1}]}{N},\frac{[N x_{0,2}]}{N}\right) \in {\mathbf {I}}\). This, together with the elementary inequality \((a+b)^2 \le 2a^2 + 2b^2\), implies that

$$\begin{aligned} |u^*(x_0)- u(x_0) |^2= & {} | u^*(x'_0) - u(x_0) |^2 \\\le & {} 2 | u^*(x'_0) - u(x'_0) |^2 + 2 | u (x'_0) - u(x_0) |^2 \\\le & {} 2 | u^*(x'_0) - u(x'_0) |^2 + 2L^2 n^{- \beta }. \end{aligned}$$

Since \(n^{-\beta } < n^{-\frac{\beta }{\beta +1}}\), it suffices to prove that

$$\begin{aligned} {\mathbb {E}} | u^*(x'_0) - u(x'_0) |^2 = O (n^{-\frac{\beta }{\beta +1}}). \end{aligned}$$
(33)

In other words (since \(x'_0 \in {\mathbf {I}}\)), it suffices to prove (19) for each \(x_0 \in {\mathbf {I}}.\) So in the following, we suppose that \(x_0 \in {\mathbf {I}}.\) In this case \(x'_0\) coincides with \(x_0\).

Denoting for brevity

$$\begin{aligned} I_1= & {} \left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x)\rho (x_0,x)\right) ^{2} \nonumber \\= & {} \left( \frac{ \displaystyle \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } e^{-\frac{\rho ^2(x_0,x)}{H^2(x_0)} }\rho (x_0,x)}{\displaystyle \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta }e^{-\frac{\rho ^2(x_0,x)}{ H^2(x_0)} } }\right) ^2, \end{aligned}$$
(34)

and

$$\begin{aligned} I_2= & {} \sum _{x\in {\mathcal {N}}_{x_0,D }}\left( w^*(x_0,x)\right) ^2{u}(x) \nonumber \\\le & {} \sum _{x\in {\mathcal {N}}_{x_0,D }}\left( w^*(x_0,x)\right) ^2 \varGamma \nonumber \\\le & {} \frac{\varGamma \displaystyle \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } e^{-2\frac{\rho ^2(x_0,x)}{H^2(x_0)} }}{\left( \displaystyle \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta }e^{-\frac{ \rho ^2(x_0,x)}{H^2(x_0)} } \right) ^2}, \end{aligned}$$
(35)

then we have

$$\begin{aligned} g(w^*)\le I_1+I_2. \end{aligned}$$
(36)

By the assumption of the theorem, \(\gamma \ge c L\varDelta ^{\beta }\) with \(c>\sqrt{2}\), which implies that for \(x\in {\mathcal {N}}_{x_0,D}\) we have

$$\begin{aligned} \frac{L^2\Vert x-x_{0}\Vert _{\infty }^{2\beta }}{H^2(x_0)}\le \frac{L^2\varDelta ^{2\beta }}{\gamma ^2}\le \frac{1}{c^2}. \end{aligned}$$
(37)

Noting that \(e^{-\frac{\tau ^2}{H^2(x_0)}}\) is decreasing in \(\tau \in [0,\gamma /\sqrt{2})\), and using the elementary bound \(e^{-s}\ge 1-s\), the inequality (37) implies that

$$\begin{aligned} \displaystyle \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta }e^{-\frac{ \rho ^2(x_0,x)}{H^2(x_0)} }\ge & {} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta }e^{-\frac{L^2\Vert x-x_{0}\Vert _{\infty }^{2\beta }}{ H^2(x_0)}}\nonumber \\\ge & {} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } \left( 1-\frac{ L^2\Vert x-x_{0}\Vert _{\infty }^{2\beta }}{H^2(x_0)}\right) \nonumber \\\ge & {} D^2 (1- \frac{1}{c^2}), \end{aligned}$$
(38)

where \(D^2 = (2N\varDelta +1)^2\) is the cardinality of the search window (cf. Eq.(18)). Since \(\tau e^{-\frac{\tau ^2}{H^2(x_0)}}\) is increasing in \(\tau \in [0, \gamma /\sqrt{2})\),

$$\begin{aligned} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } e^{-\frac{\rho ^2(x_0,x)}{H^2(x_0)} }\rho (x_0,x)\le & {} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } L\Vert x-x_{0}\Vert ^{\beta }_{\infty }e^{-\frac{ L^2\Vert x-x_{0}\Vert _{\infty }^{2\beta }}{H^2(x_0)}} \nonumber \\\le & {} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } L\Vert x-x_{0}\Vert ^{\beta }_{\infty }\nonumber \\\le & {} D^2 L \varDelta ^{\beta }. \end{aligned}$$
(39)

The above three inequalities (34), (38) and (39) imply that

$$\begin{aligned} I_1 \le \left( \frac{D^2L\varDelta ^{\beta }}{D^2 (1-\frac{1}{c^2})} \right) ^2 = c' L^2\varDelta ^{2\beta }, \quad \text{ where } c'=\left( \frac{c^2}{c^2-1} \right) ^2. \end{aligned}$$
(40)

Taking into account (35), (38) and the inequality

$$\begin{aligned} \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta } e^{-2\frac{\rho ^2(x_0,x)}{H^2(x_0)} } \le \sum _{\Vert x-x_{0}\Vert _{\infty }\le \varDelta }1=D^2, \end{aligned}$$

it is easily seen (using \(D^2=(2N\varDelta +1)^2\ge 4N^2\varDelta ^2=4\varDelta ^2 n\)) that

$$\begin{aligned} I_2\le \frac{D^2 \varGamma }{(D^2 )^2} = \frac{\varGamma }{D^2} \le \frac{\varGamma }{4\varDelta ^2n}. \end{aligned}$$
(41)

Combining (36), (40) and (41), we get

$$\begin{aligned} g(w^*)\le c' L^2\varDelta ^{2\beta }+\frac{\varGamma }{4\varDelta ^2n}. \end{aligned}$$
(42)

Using the condition \( c_1 n^{-\frac{1}{2\beta +2}} \le \varDelta \le c_2 n^{-\frac{1}{2\beta +2}} \) for some constants \(c_1, c_2 >0\), we infer that

$$\begin{aligned} g(w^*)\le \left( c'c_2^{2\beta }L^2 + \frac{\varGamma }{4c_1^2} \right) n^{-\frac{\beta }{\beta +1}}. \end{aligned}$$

This ends the proof of (19).
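
The choice \(\varDelta \asymp n^{-\frac{1}{2\beta +2}}\) is precisely the one balancing the bias term \(c'L^2\varDelta ^{2\beta }\) and the variance term \(\frac{\varGamma }{4\varDelta ^2 n}\) in (42). A quick numerical illustration in Python (with arbitrary stand-in constants for \(c'\), L and \(\varGamma \)) confirms that the minimizer of the bound scales as \(n^{-\frac{1}{2\beta +2}}\) and its minimum as \(n^{-\frac{\beta }{\beta +1}}\):

import numpy as np

beta, L, Gamma, c_prime = 0.8, 1.0, 255.0, 4.0      # illustrative constants
for n in [10**4, 10**6, 10**8]:
    delta = np.logspace(-6, 0, 4000)                # candidate window radii
    bound = c_prime * L**2 * delta**(2 * beta) + Gamma / (4 * delta**2 * n)
    k = int(np.argmin(bound))
    # both ratios should stay roughly constant as n grows
    print(n,
          delta[k] / n**(-1 / (2 * beta + 2)),      # minimizer vs n^{-1/(2beta+2)}
          bound[k] / n**(-beta / (beta + 1)))       # minimum vs n^{-beta/(beta+1)}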

1.2 Proof of Theorem 2

As in the proof of Theorem 1, we can assume that \(x_0 \in {\mathbf {I}}\), so that \(x'_0= x_0\).

By our condition, \( e(x,x_0) = {{\bar{\rho }}}^2 (x_0,x) - \rho ^2 (x_0,x)\) satisfies \( |e(x,x_0) | \le \eta _n = O(n^{-\frac{\beta }{\beta +1}}),\) where \(\eta _n = \max _{{x_0},x \in {\mathbf {I}}} |e(x,x_0)|\). Using the elementary inequality \( (a-b)^2 \le |a^2 - b^2| \) for \(a,b \ge 0\), we obtain that

$$\begin{aligned}&| {{\bar{\rho }}}(x_0,x) - \rho (x_0,x) |^2 \le | {{\bar{\rho }}}(x_0,x)^2 - \rho ^2(x_0,x) | \\&\quad \le \eta _n = O(n^{-\frac{\beta }{\beta +1}}). \end{aligned}$$

Therefore,

$$\begin{aligned} {{\bar{\rho }}}(x_0,x) \le \rho (x_0,x) + \sqrt{\eta _n}. \end{aligned}$$

As \((a+b)^2 \le 2a^2 + 2b^2\), we have

$$\begin{aligned} \left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x){{\bar{\rho }}}(x_0,x)\right) ^{2}\le & {} \left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x) \rho (x_0,x) + \sqrt{ \eta _n} \right) ^{2} \\\le & {} 2\left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x)|u(x)-u(x_0)|\right) ^{2} +2\eta _n. \end{aligned}$$

Hence,

$$\begin{aligned}&{\mathbb {E}} \left( u^*(x_0)-u(x_0)\right) ^2\\&\quad \le 2\left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x)|u(x)-u(x_0)|\right) ^{2} +2\eta _n + \sum _{x\in {\mathcal {N}}_{x_0,D }}{w}^{*}(x_0,x)^{2}u(x) \\&\quad \le 2\left( \left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x)|u(x)-u(x_0)|\right) ^{2} + \sum _{x\in {\mathcal {N}}_{x_0,D }}{w}^{*}(x_0,x)^{2}u(x)\right) + 2\eta _{n}. \end{aligned}$$

From the proof of Theorem 1, we deduce that

$$\begin{aligned}&\left( \sum _{x\in {\mathcal {N}}_{x_0,D }}w^*(x_0,x)|u(x)-u(x_0)|\right) ^{2} \\&\quad + \sum _{x\in {\mathcal {N}}_{x_0,D }}{w}^{*}(x_0,x)^{2}u(x) = O\left( n^{-\frac{\beta }{\beta +1}}\right) . \end{aligned}$$

Since \( \eta _n = O(n^{-\frac{\beta }{\beta +1}}) \), we obtain

$$\begin{aligned} {\mathbb {E}} \left( u^*(x_0)-u(x_0)\right) ^2= & {} 2 O\left( n^{-\frac{\beta }{\beta +1}}\right) + 2 O\left( n^{-\frac{\beta }{\beta +1}}\right) =O\left( n^{-\frac{\beta }{\beta +1}}\right) . \end{aligned}$$

1.3 Proof of Theorem 3

We first give an expression of \(\widehat{\rho ^2} (x_0,x)\), defined by (23), which is convenient for the estimation. For brevity, let

$$\begin{aligned} \varLambda _{x_{0},x}(t)=u(x_0+t)-u(x+t) \end{aligned}$$
(43)

and

$$\begin{aligned} \zeta _{x_0,x}(t)=\varepsilon (x_0+t)-\varepsilon (x+t). \end{aligned}$$
(44)

With this notation, and using (3), we see that the function in the definition of \(\widehat{\rho ^2}(x_0,x) \) (cf. (23)) can be written as:

$$\begin{aligned}&\Vert v({\mathcal {N}}_{x,d })-v({\mathcal {N}}_{x_{0},d })\Vert ^2_{2}- {\overline{u}}(x_0)-{\overline{u}}(x) \\&\quad = \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }} \left( v(x_0+t)- v(x+t)\right) ^2 - {\overline{u}}(x_0)-{\overline{u}}(x) \\&\quad = \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }} \left( u(x_0 + t)-u(x+t)+\varepsilon (x_0+t)-\varepsilon (x+t)\right) ^2 - {\overline{u}}(x_0)-{\overline{u}}(x) \\&\quad = \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }} \left( \varLambda _{x_{0},x}(t)+\zeta _{x_0,x}(t)\right) ^2 - {\overline{u}}(x_0)-{\overline{u}}(x) \\&\quad = \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) +\frac{1}{d^2}S(x_0,x), \end{aligned}$$

where

$$\begin{aligned} S(x_0,x)= & {} \sum \limits _{t\in {\mathcal {N}}_{0,d }} \left( \zeta _{x_0,x}(t) ^{2}-u(x_0+t)\right. \nonumber \\&\quad \left. -u(x+t)+2\varLambda _{x_0,x} \left( t\right) \zeta _{x_0,x}(t) \right) . \end{aligned}$$
(45)

Therefore, by the definition of \(\widehat{\rho ^2}(x_0,x) \) (see (23)),

$$\begin{aligned} \widehat{\rho ^2}(x_0,x) =\left( \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) +\frac{1}{d^2}S(x_0,x)\right) ^+. \end{aligned}$$
(46)
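
In code form, (46) is simply the mean squared difference of the two noisy patches, debiased by \({\overline{u}}(x_0)+{\overline{u}}(x)\) and clipped at zero. A minimal Python sketch (a hypothetical helper, not the authors' implementation; u_bar stands for the pilot estimate \({\overline{u}}\) appearing in (23), e.g. a local average of v):

import numpy as np

def rho2_hat(v, u_bar, x0, x, d):
    # Estimate rho^2(x0,x) = |u(x) - u(x0)|^2 from d-by-d noisy patches
    # anchored at pixels x0 and x (top-left corners, for simplicity).
    (i0, j0), (i1, j1) = x0, x
    p0 = v[i0:i0 + d, j0:j0 + d].astype(float)
    p1 = v[i1:i1 + d, j1:j1 + d].astype(float)
    dist2 = np.mean((p0 - p1) ** 2)   # (1/d^2) sum_t (v(x0+t) - v(x+t))^2
    # subtract the noise bias, whose mean is about u(x0) + u(x), and
    # keep the positive part as in (46)
    return max(dist2 - u_bar[i0, j0] - u_bar[i1, j1], 0.0)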

We will need two lemmas for the estimation of the two sums in (46).

Lemma 1

Under the local Hölder condition (17), with \(\varDelta \) and \(\delta \) defined by (18) and (26), we have

$$\begin{aligned} \left| \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) -|u(x)-u(x_0)|^{2} \right| \le 4L^2 \varDelta ^ \beta \delta ^ \beta . \end{aligned}$$

The proof of this lemma can be found in [19, 21].

Lemma 2

There are two positive constants \(c_{1}\) and \(c_{2}\), depending only on L and \(\varGamma ,\) such that for any \(0< z\le c_{1}d,\)

$$\begin{aligned} {\mathbb {P}}\left( \left| S(x_0,x)\right| \ge zd\right) \le c_2 z^{-2} . \end{aligned}$$

Proof

Note that the variables

$$\begin{aligned} X_{t} =\zeta _{x_0,x}(t) ^{2}-u(x_0+t)-u(x+t)+2\varLambda _{x_0,x} \left( t\right) \zeta _{x_0,x}(t) ,\quad t\in {\mathcal {N}}_{0,d } \end{aligned}$$
(47)

all have mean zero: \({\mathbb {E}}X_{t} = 0\). We prove below that the variance \( {\mathbb {E}}X^2_{t}\) satisfies \( \max _{t\in {\mathcal {N}}_{0,d }} {\mathbb {E}}X^2_{t}\le b\) for some constant \(b>0.\) As v(x) follows the Poisson law with parameter u(x), it holds that

$$\begin{aligned} {\mathbb {E}}v(x)= & {} u(x), \\ {\mathbb {E}}v^2(x)= & {} u(x) + u^2(x), \\ {\mathbb {E}}v^3(x)= & {} u(x) + 3u^2(x) + u^3(x),\\ {\mathbb {E}}v^4(x)= & {} u(x) + 7u^2(x) + 6u^3(x)+u^4(x). \end{aligned}$$
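
These raw moments, and the central fourth moment \({\mathbb {E}}(v(x)-u(x))^4=u(x)+3u^2(x)\) computed from them below, are easily checked by simulation; a throwaway Python sketch with an arbitrary intensity:

import numpy as np

rng = np.random.default_rng(0)
u = 3.5                                      # arbitrary Poisson intensity
v = rng.poisson(u, 10**7).astype(float)
print(np.mean(v),        u)                           # E v
print(np.mean(v**2),     u + u**2)                    # E v^2
print(np.mean(v**3),     u + 3*u**2 + u**3)           # E v^3
print(np.mean(v**4),     u + 7*u**2 + 6*u**3 + u**4)  # E v^4
print(np.mean((v - u)**4), u + 3*u**2)                # central 4th moment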

Hence, for each \(x\in {\mathcal {N}}_{x_0,D}\) and each \(t\in {\mathcal {N}}_{0,d}\),

$$\begin{aligned} {\mathbb {E}}\varepsilon ^4 (x+t)= & {} {\mathbb {E}}(v(x+t)-u(x+t))^4\nonumber \\= & {} {\mathbb {E}}v^4(x+t)-4u(x+t){\mathbb {E}}v^3(x+t)\nonumber \\&+6u^2(x+t){\mathbb {E}}v^2(x+t)\nonumber \\&-4u^3(x+t){\mathbb {E}}v(x+t)+u^4(x+t)\nonumber \\= & {} u(x+t)+3u^2(x+t)\nonumber \\\le & {} \varGamma + 3\varGamma ^2, \end{aligned}$$
(48)

where the last inequality follows by the definition of \(\varGamma \) (see (16)). From (48) and the inequality \((a+b)^4 \le 8a^4 +8b^4\) for \(a,b \in {\mathbf {R}}\), we have

$$\begin{aligned} {\mathbb {E}}(\zeta ^{4}_{x_0,x}(t) )= & {} {\mathbb {E}}\left( \varepsilon (x_0+t)-\varepsilon (x+t)\right) ^{4}\nonumber \\\le & {} {\mathbb {E}}\left( 8\varepsilon ^{4}(x_0+t)+8\varepsilon ^{4} (x+t)\right) \nonumber \\\le & {} 16(\varGamma + 3\varGamma ^2). \end{aligned}$$
(49)

As \({\mathbb {E}}(\varepsilon (x))=0\) and \({\mathrm {Var}}(\varepsilon (x))=u(x)\), by the independence of \(\varepsilon (x_0+t)\) and \(\varepsilon (x+t),\) it follows that

$$\begin{aligned} {\mathbb {E}}(\zeta _{x_0,x}(t) ^{2})= & {} {\mathbb {E}}\left( \varepsilon (x_0+t)-\varepsilon (x+t)\right) ^{2}\nonumber \\= & {} {\mathbb {E}}\varepsilon ^2 (x+t) + {\mathbb {E}}\varepsilon ^2 (x_0+t)\nonumber \\= & {} u(x+t) + u(x_0+t)\nonumber \\\le & {} 2\varGamma . \end{aligned}$$
(50)

As the function u satisfies the local Hölder condition (17), for \(x\in {\mathcal {N}}_{x_0,D}\) we have

$$\begin{aligned} \varLambda ^2_{x_0,x} \left( t\right) = (u(x_0+t)-u(x+t))^2\le L^2\varDelta ^{2\beta }\le L^2. \end{aligned}$$
(51)

Therefore, noting that \({\mathbb {E}}\zeta ^{3}_{x_0,x}(t)=u(x_0+t)-u(x+t)=\varLambda _{x_0,x}(t)\) (the third central moment of a Poisson variable equals its mean, and \(\varepsilon (x_0+t)\), \(\varepsilon (x+t)\) are independent), and taking into account (49), (50) and (51), we obtain, uniformly in \(t\in {\mathcal {N}}_{0,d }\),

$$\begin{aligned} {\mathbb {E}}X^2_{t}= & {} {\mathbb {E}}\left( \zeta _{x_0,x}(t) ^{2}-u(x_0+t)-u(x+t)+2\varLambda _{x_0,x}(t) \zeta _{x_0,x}(t) \right) ^2 \\= & {} {\mathbb {E}}\zeta ^{4}_{x_0,x}(t) + (u(x_0+t)+u(x+t))^2 + 4\varLambda ^2_{x_0,x}(t){\mathbb {E}}\zeta _{x_0,x}^2 (t) \\&- 2(u(x_0+t)+u(x+t)){\mathbb {E}}\zeta _{x_0,x}^{2}(t) + 4\varLambda ^2_{x_0,x}(t) \\\le & {} 16(\varGamma + 3\varGamma ^2) + 4\varGamma ^2 + 4L^2\times 2\varGamma + 4L^2 \\= & {} 8(2+L^2)\varGamma + 52\varGamma ^2 + 4L^2. \end{aligned}$$

We have therefore proved that \( {\mathbb {E}}X^2_{t}\le b\), where \(b:= 8(2+L^2)\varGamma + 52\varGamma ^2 + 4L^2\).

The point in handling the sum \(S(x_0, x) =\sum \limits _{t\in {\mathcal {N}}_{0,d }}X_{t} \) is that the variables \(X_{t}, t\in {\mathcal {N}}_{0,d } \) are not necessarily independent. Remark that, for \(t\ne s\), \(\zeta _{x_0,x}(t)\) and \(\zeta _{x_0,x} \left( s\right) \) are correlated if and only if \(t-s = \pm (x_0 - x):\) indeed, it is easily checked that

$$\begin{aligned} {\mathbb {E}}(\zeta _{x_0,x}(t) \zeta _{x_0,x} \left( s\right) )=\left\{ \begin{array}{cl} -u(x_0+t), &{} \text {if } t-s= x-x_0, \\ -u(x+t), &{} \text {if } t-s= x_0-x,\\ 0, &{} \text {if } t\ne s \text { otherwise. } \end{array} \right. \end{aligned}$$
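
This covariance structure is easy to confirm by simulation: taking \(t-s=x-x_0\), the pixels \(x_0+t\) and \(x+s\) coincide, and the product has mean \(-u(x_0+t)\). A small Python check with hypothetical intensities:

import numpy as np

rng = np.random.default_rng(0)
m = 10**6
u_a, u_b, u_c = 4.0, 2.0, 7.0        # u(x0+t) = u(x+s), u(x+t), u(x0+s)
eps_a = rng.poisson(u_a, m) - u_a    # centered Poisson noise, shared pixel
eps_b = rng.poisson(u_b, m) - u_b
eps_c = rng.poisson(u_c, m) - u_c
zeta_t = eps_a - eps_b               # zeta_{x0,x}(t) = eps(x0+t) - eps(x+t)
zeta_s = eps_c - eps_a               # zeta_{x0,x}(s) = eps(x0+s) - eps(x+s)
print(np.mean(zeta_t * zeta_s), -u_a)    # empirical mean ~ -u(x0+t)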

By the definition of \(\zeta _{x_{0},x}( t )\), if \(t\ne s\) and \( t - s \ne \pm (x-x_0),\) then \(\zeta _{x_{0},x}( t )\) and \(\zeta _{x_{0},x}( s )\) are independent, so that \(X_ t \) and \(X_{ s}\) are also independent. Consequently

$$\begin{aligned} {\mathrm {Var}} (S(x_0,x) )= & {} {\mathbb {E}} (S(x_0,x)^2 ) = \sum _{ t , s \in {\mathcal {N}}_{0,d }} {\mathbb {E}} ( X_{ t } X_{ s}) \end{aligned}$$
(52)
$$\begin{aligned}= & {} \sum _{ t \in {\mathcal {N}}_{0,d }} {\mathbb {E}} ( X_{ t }^2) + \sum _{ t \in {\mathcal {N}}_{0,d }} \sum _{ s \in {\mathcal {N}}_{0,d }: s = t \pm (x-x_0) } {\mathbb {E}} ( X_{ t } X_{ s}).\nonumber \\ \end{aligned}$$
(53)

By the Cauchy–Schwarz inequality, \(|{\mathbb {E}} ( X_{ t } X_{ s})| \le b\). Hence,

$$\begin{aligned} {\mathrm {Var}} (S(x_0,x) ) \le d^2 b + 2d^2 b= 3d^2 b. \end{aligned}$$
(54)

Therefore, by Chebyshev’s inequality

$$\begin{aligned} {\mathbb {P}}\left( \left| S(x_0,x)\right| \ge zd\right) \le \frac{{\mathrm {Var}} (S(x_0,x) )}{z^2 d^2 } \le \frac{3b}{z^2}, \end{aligned}$$

which gives the claim with \(c_2 = 3b\).

\(\square \)
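
The variance bound (54) behind Lemma 2 can also be sanity-checked numerically. A Python sketch with illustrative parameters, taking u constant equal to \(\varGamma /2\) so that \(\varLambda _{x_0,x}\equiv 0\) and \(X_t=\zeta ^2_{x_0,x}(t)-2u\):

import numpy as np

rng = np.random.default_rng(0)
d, Gamma, L = 8, 10.0, 1.0
u_val = Gamma / 2                    # constant image, so Lambda = 0
shift = 3                            # horizontal offset x - x0
samples = []
for _ in range(20000):
    eps = rng.poisson(u_val, (d, d + shift)) - u_val   # centered noise
    zeta = eps[:, :d] - eps[:, shift:shift + d]        # zeta_{x0,x}(t)
    samples.append(np.sum(zeta**2 - 2 * u_val))        # S(x0,x) = sum_t X_t
b = 8 * (2 + L**2) * Gamma + 52 * Gamma**2 + 4 * L**2  # constant from the proof
print(np.var(samples), 3 * d**2 * b)     # empirical Var(S) vs the bound 3 d^2 b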

Now we turn to the proof of Theorem 3. Below \(c_1, c_2, \cdots \) stand for some constants (independent of n). By equation (26) and the assumption on \(\delta \), we have \(d \ge c_{1}n^{\frac{1}{2}-\alpha }.\)

Applying Lemma 2 with \(z=\sqrt{\frac{1}{c_3}\ln n} \le c_2 d\), we see that

$$\begin{aligned} {\mathbb {P}}\left( \frac{1}{d^2}\left| S(x_0,x)\right| \ge \frac{\sqrt{\frac{1}{c_{3}}\ln n}}{d}\right) \le \frac{c_3}{\ln n}. \end{aligned}$$
(55)

Therefore,

$$\begin{aligned} {\mathbb {P}}\left( \frac{1}{d^2} \left| S(x_0,x)\right| \ge c_4 n^{\alpha -\frac{1}{2}} \sqrt{\ln n} \right) \le \frac{c_3}{\ln n}. \end{aligned}$$
(56)

By Lemma 1 and the conditions on \(\varDelta \) and \(\delta \), we have

$$\begin{aligned}&\left| \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) - |u(x)-u(x_0)|^{2} \right| \nonumber \\&\quad \le 4L^2 \varDelta ^ \beta \delta ^ \beta \le c_5 n^{- \frac{\beta }{2\beta +2} - \alpha \beta }. \end{aligned}$$
(57)

From (46), we see that

$$\begin{aligned}&{\widehat{\rho }}^{2}(x_0,x) -|u(x)-u(x_0)|^{2} \le \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) -|u(x)-u(x_0)|^{2} \\&\quad + \frac{1}{d^2}\left| S(x_0, x)\right| \end{aligned}$$

and

$$\begin{aligned}&{\widehat{\rho }}^{2}(x_0,x) -|u(x)-u(x_0)|^{2} \ge \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) \\&\quad -|u(x)-u(x_0)|^{2} - \frac{1}{d^2}\left| S(x_0, x)\right| , \end{aligned}$$

so that

$$\begin{aligned}&\left| {\widehat{\rho }}^{2}(x_0,x) -|u(x)-u(x_0)|^{2} \right| \\&\quad \le \left| \frac{1}{d^2}\sum \limits _{t\in {\mathcal {N}}_{0,d }}\varLambda ^2 _{x_{0},x}(t) - |u(x)-u(x_0)|^{2} \right| \\&\quad + \frac{1}{d^2}\left| S(x_0, x)\right| . \end{aligned}$$

Therefore, from (57), we obtain

$$\begin{aligned} \left| {\widehat{\rho }}^{2}(x_0,x)-|u(x)-u(x_0)|^{2}\right| \le c_5 n^{- \frac{\beta }{2\beta +2} - \alpha \beta } + \frac{1}{d^2}\left| S(x_0, x)\right| . \end{aligned}$$
(58)

Combining (56) and (58), we get

$$\begin{aligned}&{\mathbb {P}}\left( \left| {\widehat{\rho }}^{2}(x_0,x)-|u(x)-u(x_0)|^{2}\right| \ge c_4 n^{\alpha -\frac{1}{2}} \sqrt{\ln n} \right. \\&\quad \left. +c_5 n^{- \frac{\beta }{2\beta +2} - \alpha \beta } \right) \le \frac{c_3}{\ln n}. \end{aligned}$$

Since the condition \(\frac{1}{2(\beta +1)^2}<\alpha <\frac{1}{2}\) implies \( \frac{\beta }{2\beta +2} + \alpha \beta> \frac{1}{2}-\alpha > 0\), the inequality (27) follows.


Cite this article

Jin, Q., Grama, I. & Liu, Q. Poisson Shot Noise Removal by an Oracular Non-Local Algorithm. J Math Imaging Vis 63, 855–874 (2021). https://doi.org/10.1007/s10851-021-01033-3
