Abstract
In this paper, we address the problem of denoising images acquired under low-light conditions, modeled by Poisson shot noise. Under such conditions the variance stabilization transform (VST) is no longer applicable, so the state-of-the-art algorithms designed for additive white Gaussian noise cannot be used. We first introduce an oracular non-local algorithm and prove that it converges at the optimal rate under a Hölder regularity assumption on the underlying image, provided the search window size is suitably chosen. We also prove that the convergence remains valid when the oracle function is estimated within a prescribed error range. We then define a realizable filter by statistically estimating the similarity function that determines the oracle weights. The convergence of the realizable filter is justified by proving that the estimator of the similarity function lies in the prescribed error range with high probability. Experiments show that under low-light conditions the proposed filter is competitive with recent state-of-the-art algorithms.
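The weights described above are of non-local means type. As a rough, minimal sketch of this non-local averaging principle (not the paper's oracle filter; the window radius `W`, patch radius `P`, and bandwidth `H` are illustrative names, and the patch distance is a plain Euclidean one):

```python
import math

def nl_means(img, W=5, P=1, H=10.0):
    """Denoise a 2-D image (list of lists of floats) by non-local averaging
    with exponential similarity weights exp(-dist^2 / H^2)."""
    n, m = len(img), len(img[0])

    def patch(i, j):
        # Clamp coordinates at the border so every pixel has a full patch.
        return [img[min(max(i + di, 0), n - 1)][min(max(j + dj, 0), m - 1)]
                for di in range(-P, P + 1) for dj in range(-P, P + 1)]

    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            p0 = patch(i, j)
            num = den = 0.0
            # Search window of radius W around (i, j).
            for k in range(max(i - W, 0), min(i + W + 1, n)):
                for l in range(max(j - W, 0), min(j + W + 1, m)):
                    d2 = sum((a - b) ** 2 for a, b in zip(p0, patch(k, l)))
                    d2 /= len(p0)
                    w = math.exp(-d2 / H ** 2)
                    num += w * img[k][l]
                    den += w
            out[i][j] = num / den
    return out
```

On a constant image all patch distances vanish, every weight equals one, and the filter reproduces the input, which is a quick sanity check of the weighting scheme.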
References
Anscombe, F.: The transformation of Poisson, binomial and negative-binomial data. Biometrika 35(3/4), 246–254 (1948)
Azzari, L., Foi, A.: Variance stabilization for noisy+estimate combination in iterative Poisson denoising. IEEE Signal Process. Lett. 23(8), 1086–1090 (2016)
Azzari, L., Foi, A.: Variance stabilization in Poisson image deblurring. In: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp 728–731. IEEE, (2017)
Bria, A., Marrocco, C., Borges, L.R., Molinara, M., Marchesi, A., Mordang, J.-J., Karssemeijer, N., Tortorella, F.: Improving the automated detection of calcifications using adaptive variance stabilization. IEEE Trans. Med. Imag. (2018)
Buades, A., Coll, B., Morel, J.-M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)
Chouzenoux, E., Jezierska, A., Pesquet, J.-C., Talbot, H.: A convex approach for image restoration with exact Poisson-Gaussian likelihood. SIAM J. Imag. Sci. 8(4), 2662–2682 (2015)
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
Danielyan, A., Foi, A., Katkovnik, V., Egiazarian, K.: Denoising of multispectral images via nonlocal groupwise spectrum-PCA. In: Conference on Colour in Graphics, Imaging, and Vision, Society for Imaging Science and Technology, no. 1, pp 261–266, (2010)
Deledalle, C.-A., Tupin, F., Denis, L.: Poisson NL means: Unsupervised non local means for Poisson noise. In: 2010 IEEE International Conference on Image Processing, pp 801–804. IEEE, (2010)
Donoho, D., Johnstone, I.: Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425–455 (1994)
Fan, J.: Local linear regression smoothers and their minimax efficiencies. Ann. Stat. 21(1), 196–216 (1993)
Feng, W., Qiao, H., Chen, Y.: Poisson noise reduction with higher-order natural image prior model. SIAM J. Imag. Sci. 9(3), 1502–1524 (2016)
Fisz, M.: The limiting distribution of a function of two independent random variables and its statistical application. In: Colloquium Mathematicae Institute of Mathematics Polish Academy of Sciences, vol 3, pp 138–146, (1955)
Fryzlewicz, P.: Likelihood ratio Haar variance stabilization and normalization for Poisson and other non-Gaussian noise removal. arXiv:1701.07263, (2017)
Giryes, R., Elad, M.: Sparsity-based Poisson denoising with dictionary learning. IEEE Trans. Image Process. 23(12), 5057–5069 (2014)
Goudail, F.: Performance comparison of pseudo-inverse and maximum-likelihood estimators of Stokes parameters in the presence of Poisson noise for spherical design-based measurement structures. Opt. Lett. 42(10), 1899–1902 (2017)
Jansen, M.: Multiscale Poisson data smoothing. J. Roy. Statist. Soc. B 68(1), 27–48 (2006)
Fan, J., Gijbels, I.: Local Polynomial Modelling and Its Applications. Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton (1996)
Jin, Q., Grama, I., Kervrann, C., Liu, Q.: Nonlocal means and optimal weights for noise removal. SIAM J. Imag. Sci. 10(4), 1878–1920 (2017)
Jin, Q., Grama, I., Liu, Q.: A new Poisson noise filter based on weights optimization. J. Sci. Comput. 58(3), 548–573 (2014)
Jin, Q., Grama, I., Liu, Q.: Convergence theorems for the non-local means filter. Inverse Probl. Imag. 12(4), 853–881 (2018)
Lebrun, M., Buades, A., Morel, J.-M.: A nonlocal Bayesian image denoising algorithm. SIAM J. Imag. Sci. 6(3), 1665–1688 (2013)
Luisier, F., Vonesch, C., Blu, T., Unser, M.: Fast interscale wavelet denoising of Poisson-corrupted images. Signal Process. 90(2), 415–427 (2010)
Makitalo, M., Foi, A.: A closed-form approximation of the exact unbiased inverse of the Anscombe variance-stabilizing transformation. IEEE Trans. Image Process. 20(9), 2697–2698 (2011)
Makitalo, M., Foi, A.: Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Trans. Image Process. 20(1), 99–109 (2011)
Mandel, J.: Use of the singular value decomposition in regression analysis. Am. Stat. 36(1), 15–24 (1982)
Prucnal, P.R., Saleh, B.E.: Transformation of image-signal-dependent noise into image-signal-independent noise. Opt. Lett. 6(7), 316–318 (1981)
Rond, A., Giryes, R., Elad, M.: Poisson inverse problems by the plug-and-play scheme. J. Vis. Commun. Image Represent. 41, 96–108 (2016)
Salmon, J., Harmany, Z., Deledalle, C.-A., Willett, R.: Poisson noise reduction with non-local PCA. J. Math. Imaging Vis. 48(2), 279–294 (2014)
Srivastava, R., Srivastava, S.: Restoration of Poisson noise corrupted digital images with nonlinear PDE based filters along with the choice of regularization parameter estimation. Pattern Recognit. Lett. 34(10), 1175–1185 (2013)
Sutour, C., Deledalle, C.-A., Aujol, J.-F.: Adaptive regularization of the NL-means: application to image and video denoising. IEEE Trans. Image Process. 23(8), 3506–3521 (2014)
Terrell, G.R., Scott, D.W.: Variable kernel density estimation. Ann. Stat. 20(3), 1236–1265 (1992)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
Zhang, B., Fadili, J., Starck, J.: Wavelets, ridgelets, and curvelets for Poisson noise removal. IEEE Trans. Image Process. 17(7), 1093–1108 (2008)
Zhang, J., Hirakawa, K.: Improved denoising via Poisson mixture modeling of image sensor noise. IEEE Trans. Image Process. PP(99), 1 (2017)
Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017)
Zhang, Y., Song, P., Dai, Q.: Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed Poisson-Gaussian likelihood. Opt. Express 25(1), 168–179 (2017)
Acknowledgements
The authors are very grateful to Jean-Michel Morel for his careful reading, helpful comments and suggestions. They are also grateful to the reviewers for their valuable comments and remarks. The work has been supported by the National Natural Science Foundation of China (Grant Nos. 12061052, 11731012 and 11971063), the Natural Science Fund of Inner Mongolia Autonomous Region (Grant No. 2020MS01002), the "111 Project" of higher education talent training in Inner Mongolia Autonomous Region, the China Scholarship Council, which funded a one-year visit at École Normale Supérieure Paris-Saclay (No. 201806810001), the Centre Henri Lebesgue (CHL, ANR-11-LABX-0020-01) and the network information center of Inner Mongolia University. Q. Jin would also like to thank Professor Guoqing Chen for helpful suggestions.
Appendix: Proofs of the Main Results
1.1 Proof of Theorem 1
We first notice that by the Hölder condition (17), for each \(x_0 \in {\mathbf {I}}_0\), we have \(| u (x'_0) - u(x_0) | \le L\Vert x'_0- x_0\Vert _\infty ^\beta \le L N^{-\beta } = Ln^{-\beta /2}\), where \( x_{0}'=(x'_{0,1},x'_{0,2})= \left( \frac{[N x_{0,1}]}{N},\frac{[N x_{0,2}]}{N}\right) \in {\mathbf {I}}\). This, together with the elementary inequality \((a+b)^2 \le 2a^2 + 2b^2\), implies that
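The display that follows from these two facts can be sketched as below (our notation \(\widehat{u}\) for the oracle estimator is an assumption, not necessarily the paper's symbol):

\[
\bigl(u(x_0)-\widehat{u}(x'_0)\bigr)^2
\le 2\bigl(u(x_0)-u(x'_0)\bigr)^2 + 2\bigl(u(x'_0)-\widehat{u}(x'_0)\bigr)^2
\le 2L^2 n^{-\beta } + 2\bigl(u(x'_0)-\widehat{u}(x'_0)\bigr)^2 .
\]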
Since \(n^{-\beta } < n^{-\frac{\beta }{\beta +1}}\), it suffices to prove that
In other words (since \(x'_0 \in {\mathbf {I}}\)), it suffices to prove (19) for each \(x_0 \in {\mathbf {I}}.\) So in the following, we suppose that \(x_0 \in {\mathbf {I}}.\) In this case \(x'_0\) coincides with \(x_0\).
Denoting for brevity
and
then we have
By the assumption of the theorem, \(\gamma \ge c L\varDelta ^{\beta }\) with \(c>\sqrt{2}\), which implies that for \(x\in {\mathcal {N}}_{x_0,D}\), we have
Noting that \(e^{-\frac{\tau ^2}{H^2(x_0)}}\) is decreasing in \(\tau \in [0,\gamma /\sqrt{2})\), and using a one-term Taylor expansion, inequality (37) implies that
where \(D^2 = (2N\varDelta +1)^2\) is the cardinality of the search window (cf. Eq.(18)). Since \(\tau e^{-\frac{\tau ^2}{H^2(x_0)}}\) is increasing in \(\tau \in [0, \gamma /\sqrt{2})\),
The above three inequalities (34), (38) and (39) imply that
Taking into account (35), (38) and the inequality
it is easily seen that
Combining (36), (40) and (41), we get
Using the condition \( c_1 n^{-\frac{1}{2\beta +2}} \le \varDelta \le c_2 n^{-\frac{1}{2\beta +2}} \) for some constants \(c_1, c_2 >0\), we infer that
This ends the proof of (19).
1.2 Proof of Theorem 2
As in the proof of Theorem 1, we can assume that \(x_0 \in {\mathbf {I}}\), so that \(x'_0= x_0\).
By our condition, the error \( e(x,x_0) = {{\bar{\rho }}}^2 (x_0,x) - \rho ^2 (x_0,x)\) satisfies \( |e(x,x_0) | \le \eta _n\), where \(\eta _n = \max _{{x_0},x \in {\mathbf {I}}} |e(x,x_0)| = O(n^{-\frac{\beta }{\beta +1}})\). Using the elementary inequality \( (a-b)^2 \le |a^2 - b^2| \) for \(a,b \ge 0\), we obtain that
Therefore,
As \((a+b)^2 \le 2a^2 + 2b^2\), we have
Hence,
From the proof of Theorem 1, we deduce that
Since \( \eta _n = O(n^{-\frac{\beta }{2+2\beta }}) \), we obtain
1.3 Proof of Theorem 3
We first give an expression of \(\widehat{\rho ^2} (x_0,x)\) defined by (23), which will be suitable for the estimation. For convenience, let
and
With these notations and using (3), we see that the function in the definition of \(\widehat{\rho ^2}(x_0,x) \) (cf. (23)) can be written as:
where
Therefore, by the definition of \(\widehat{\rho ^2}(x_0,x) \) (see (23)),
We will need two lemmas for the estimation of the two sums in (46).
Lemma 1
Under the local Hölder condition (17), with \(\varDelta \) and \(\delta \) defined by (18) and (26), we have
The proof of this lemma can be found in [19, 21].
Lemma 2
There are two positive constants \(c_{1}\) and \(c_{2}\), depending only on L and \(\varGamma ,\) such that for any \(0< z\le c_{1}d,\)
Proof
Note that the variables
are identically distributed with \({\mathbb {E}}X_{t} = 0\). We prove below that the variance \( {\mathbb {E}}X^2_{t}\) satisfies \( \max _{t\in {\mathcal {N}}_{0,d }} {\mathbb {E}}X^2_{t}\le b\) for some constant \(b>0.\) As v(x) has Poisson law with parameter u(x), it holds that
Hence, for each \(x\in {\mathcal {N}}_{x_0,D}\) and each \(t\in {\mathcal {N}}_{0,d}\),
where the last inequality follows by the definition of \(\varGamma \) (see (16)). From (48) and the inequality \((a+b)^4 \le 8a^4 +8b^4\) for \(a,b \in {\mathbf {R}}\), we have
As \({\mathbb {E}}(\varepsilon (x))=0\) and \({\mathbb {V}}ar(\varepsilon (x))=u(x)\), by the independence of \(\varepsilon (x_0+t)\) and \(\varepsilon (x+t),\) it follows that
As the function u satisfies the local Hölder condition (17), for \(x\in {\mathcal {N}}_{x_0,D}\),
Therefore, taking into account (49), (50) and (51), we obtain, uniformly in \(t\in {\mathcal {N}}_{0,d }\),
We have therefore proved that \( {\mathbb {E}}X^2_{t}\le b\), where \(b:= 8(2+L^2)\varGamma + 72\varGamma ^2 +48\varGamma ^3 +32\varGamma ^4\).
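A Poisson variable with mean \(u\) has variance \(u\) and fourth central moment \(u+3u^2\); standard identities of this kind underlie the moment bounds above, and they can be checked numerically. A sketch (the parameter `lam=4.0` and the truncation level `kmax` are illustrative choices):

```python
import math

def poisson_central_moment(lam, order, kmax=100):
    """E[(X - lam)**order] for X ~ Poisson(lam), via truncated pmf summation."""
    return sum((k - lam) ** order * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(kmax + 1))

lam = 4.0
m2 = poisson_central_moment(lam, 2)   # variance: lam
m4 = poisson_central_moment(lam, 4)   # fourth central moment: lam + 3*lam**2
```

For `lam = 4.0` the truncation at `kmax = 100` leaves a negligible tail, so `m2` and `m4` agree with \(u\) and \(u+3u^2\) to high precision.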
The difficulty in handling the sum \(S(x_0, x) =\sum \limits _{t\in {\mathcal {N}}_{0,d }}X_{t} \) is that the variables \(X_{t}, t\in {\mathcal {N}}_{0,d },\) are not necessarily independent. Note that \(\zeta _{x_0,x}(t)\) and \(\zeta _{x_0,x} \left( s\right) \) are correlated if and only if \(t-s = \pm (x_0 - x):\) indeed, it can easily be checked that
By the definition of \(\zeta _{x_{0},x}( t )\), if \( t - s \ne \pm (x-x_0),\) then \(\zeta _{x_{0},x}( t )\) and \(\zeta _{x_{0},x}( s )\) are independent, so that \(X_ t \) and \(X_{ s}\) are also independent. Consequently
By the Cauchy–Schwarz inequality, \({\mathbb {E}} ( X_{ t } X_{ s}) \le b\). Hence,
Therefore, by Chebyshev’s inequality
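The omitted variance and Chebyshev displays can be sketched as follows, under the assumption that each index \(t\) has at most two correlated partners \(s\) (so at most \(2\,\#{\mathcal {N}}_{0,d}\) correlated pairs enter the second sum):

\[
{\mathbb {V}}ar\, S(x_0,x)
\le \sum _{t\in {\mathcal {N}}_{0,d}} {\mathbb {E}}X_t^2
+ \sum _{t-s=\pm (x_0-x)} {\mathbb {E}}\left( X_t X_s\right)
\le 3b\,\#{\mathcal {N}}_{0,d},
\]
and therefore, for any \(z>0\),
\[
{\mathbb {P}}\left( |S(x_0,x)| \ge z\sqrt{\#{\mathcal {N}}_{0,d}}\right) \le \frac{3b}{z^2}.
\]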
\(\square \)
Now we turn to the proof of Theorem 3. Below, \(c_1, c_2, \ldots \) stand for constants independent of n. By Eq. (26) and the assumption on \(\delta \), we have \(d \ge c_{1}n^{\frac{1}{2}-\alpha }.\)
Applying Lemma 2 with \(z=\sqrt{\frac{1}{c_3}\ln n} \le c_2 d\), we see that
Therefore,
By Lemma 1 and the conditions on \(\varDelta \) and \(\delta \), we have,
From (46), we see that
and
so that
Therefore, from (57), we obtain
Combining (56) and (58), we get
Since the condition \(\frac{1}{2(\beta +1)^2}<\alpha <\frac{1}{2}\) implies \( \frac{\beta }{2\beta +2} + \alpha \beta> \frac{1}{2}-\alpha > 0\), the inequality (27) follows.
Jin, Q., Grama, I. & Liu, Q. Poisson Shot Noise Removal by an Oracular Non-Local Algorithm. J Math Imaging Vis 63, 855–874 (2021). https://doi.org/10.1007/s10851-021-01033-3