Abstract
Mixed Gaussian-Poisson noise is common in many imaging systems. Sparsity-based methods are now considered state-of-the-art for reconstructing noisy images, and augmenting sparse reconstruction with further image priors such as nonlocal similarity is attracting increasing attention. However, most related work targets a single noise model. Moreover, by its very definition the sparse representation confines the image to a low-dimensional subspace. The cosparse model was therefore proposed to shift the emphasis to the number of zeros in the representation, which enlarges the dimension of the admissible subspace. For the first time, we combine sparsity, nonlocal similarity and cosparsity to improve reconstruction quality. First, nonlocal similarity is used as a melioration of the sparse constraint. Then the data fidelity term and the cosparsity constraint are added. The objective function is solved alternately and iteratively by IRLSM and GAP. Experimental results indicate that the proposed method attains higher reconstruction quality.
1 Introduction
Images are inevitably degraded by noise during formation, transmission and storage, and in many applications more than one kind of noise is present. For example, in low-photon-counting systems such as micro-focus X-ray detection, astronomy and fluorescence microscopy, only a limited number of photons is collected because of system requirements or physical constraints. The resulting noise is usually modelled by a Poisson distribution, while intrinsic thermal and electronic fluctuations are commonly assumed to be additive Gaussian noise [1]. Thus, Poisson noise and Gaussian noise co-exist in such systems, and removing the mixed noise is essential for further processing. Methods for mixed Gaussian-Poisson noise removal are relatively few; PURE-LET [1], GAT+BM3D [2] and GAT+BLS-GSM [3] are considered state-of-the-art, and they are mostly based on transform domains such as wavelets.
Among noisy-image reconstruction methods, compressive-sensing-based approaches are perhaps the most popular. In the compressive sensing framework, the original clean image is sparse while the noisy one is not, so denoising amounts to recovering the sparsity of the image [4], i.e. decomposing it into a linear combination of a small number of dictionary atoms [5]. This is also called synthesis sparsity; it exploits the local information of images and has shown excellent results. Sparse reconstruction involves two main problems: dictionary learning and sparse decomposition [6]. The most representative sparse method is K-SVD [7]. To improve sparse reconstruction quality, combining it with further image priors is attracting growing attention. The most common such prior is nonlocal similarity, which states that an image contains many similar patches even when they do not lie in the same local neighbourhood. Research shows that nonlocal similarity helps reconstruct image structures [2, 8]. Clustering-based dictionary learning such as K-LLD [9] is an indirect way of using nonlocal similarity in sparse reconstruction. More recently, the CSR model [10] proposed by Dong et al. has received wide attention; it combines nonlocal similarity and sparsity into double sparse constraints that help preserve more details during reconstruction. Another work [11] combines ideas from nonlocal sparse models with the Gaussian scale mixture model to retain sharpness and suppress undesirable artifacts.

However, the methods above are aimed at a single noise model. In addition, sparsity itself is limited because it confines images to a low-dimensional subspace. Given this, a dual analysis viewpoint of sparse representation, called cosparsity or analysis sparsity, was proposed and has attracted increasing attention. Cosparsity shifts the focus from the nonzero decomposition coefficients of a synthesis dictionary to the zeros produced by an analysis dictionary [12]; the differences between the two models are compared in detail in [12]. Nevertheless, unlike synthesis sparsity, the understanding of analysis sparsity is still relatively shallow. In this line of work, [13] introduced several greedy-like methods in the cosparse framework, including Greedy Analysis Pursuit (GAP), Analysis CoSaMP (ACoSaMP) and Analysis SP (ASP). Rubinstein et al. proposed the analysis counterpart of K-SVD, called AK-SVD [14], in 2013, which outperformed K-SVD in removing Gaussian noise. In [15] the cosparse model is cast into row-by-row optimizations, and quadratic programming is used to obtain sparseness-maximization results. Since sparsity and cosparsity are complementary, it is reasonable to combine them to promote reconstruction quality. In this paper, we propose a new reconstruction method for images degraded by mixed Gaussian-Poisson noise based on sparsity, cosparsity and nonlocal similarity. In the remainder of this article, 'sparse/sparsity' refers to the synthesis case and 'cosparse/cosparsity' to the analysis case.
2 The Mixed Gaussian-Poisson Noise Model
Assume that \(y\in \mathbb {R}^N\) is an observation corrupted by mixed Gaussian-Poisson noise. The observation model is as follows:

$$\begin{aligned} y=\text {Poisson}(u)+n \end{aligned}$$

(1)
where \(\text {Poisson}(u)\) means that the original image \(u\in \mathbb {R}^N\) is corrupted by Poisson noise; \(n\sim \mathcal {N}(0,\sigma ^2I)\) is the additive Gaussian noise; and N is the number of pixels, i.e. the product of the image height and width.
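To make the observation model concrete, the following minimal Python sketch simulates formula (1); the flat test patch and the peak intensity are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def add_mixed_noise(u, sigma, rng=None):
    """Simulate y = Poisson(u) + n with n ~ N(0, sigma^2 I), as in formula (1).

    u     : clean image with nonnegative intensities (acts as the Poisson mean)
    sigma : standard deviation of the additive Gaussian component
    """
    rng = np.random.default_rng() if rng is None else rng
    poissonian = rng.poisson(u).astype(np.float64)   # photon-counting part
    gaussian = rng.normal(0.0, sigma, size=u.shape)  # thermal/electronic part
    return poissonian + gaussian

# Illustrative usage: a flat patch at intensity 100 with sigma = 5.
u = np.full((64, 64), 100.0)
y = add_mixed_noise(u, sigma=5.0)
```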
Usually, the probability density function (PDF) built from the joint probability distribution of formula (1) is complicated, and the corresponding objective function is therefore difficult to solve. A common way to tackle this is to transform the mixed noise into approximately Gaussian noise using the Generalized Anscombe Transform (GAT) [16], which turns the problem into the easier additive case. In [17], by contrast, the authors built the PDF from an independent probability distribution to simplify the objective function within a dual adaptive regularization (DAR) scheme for removing the mixed noise. In this paper, we adopt the same strategy as [17]. Under the independent probability distribution, the PDF changes into:

$$\begin{aligned} P(y|u)=\prod \limits _{k=1}^N\frac{u_k^{y_k}e^{-u_k}}{y_k!}\cdot \frac{1}{\sqrt{2\pi \sigma ^2}}\exp \left( -\frac{(y_k-u_k)^2}{2\sigma ^2}\right) \end{aligned}$$

(2)
where \(u_k\) and \(y_k\) are the \(k\)th components of u and y, respectively.
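For reference, the GAT route mentioned above can be sketched as follows. This is a minimal illustration of the standard variance-stabilizing transform \(f(y)=2\sqrt{y+3/8+\sigma ^2}\) under the assumptions of unit gain and a zero-mean Gaussian part; it is not part of the proposed method, and the exact unbiased inverse of [16] is more involved than the naive algebraic inversion shown here.

```python
import numpy as np

def gat_forward(y, sigma):
    """Generalized Anscombe Transform for Poisson-Gaussian data
    (unit gain, zero-mean Gaussian part assumed)."""
    return 2.0 * np.sqrt(np.maximum(y + 0.375 + sigma**2, 0.0))

def gat_naive_inverse(z, sigma):
    """Naive algebraic inverse; the exact unbiased inverse of
    Makitalo and Foi [16] should be preferred in practice."""
    return (z / 2.0) ** 2 - 0.375 - sigma**2
```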
Returning to the independent-distribution model, the negative log-likelihood of formula (2) under the MAP criterion is:

$$\begin{aligned} \min \limits _{u}\ \sum \limits _{k=1}^N(u_k-y_k\log u_k)+\frac{1}{2\sigma ^2}\Vert y-u\Vert _2^2 \end{aligned}$$

(3)
Motivated by [18], a Taylor approximation is used to turn the logarithmic term into a quadratic one. The objective function at the \(i\)th iteration therefore becomes:

$$\begin{aligned} \min \limits _{u}\ \frac{\eta ^i}{2}\left\| u-\Big (u^i-\frac{1}{\eta ^i}\nabla Poi(u^i)\Big )\right\| _2^2+\frac{1}{2\sigma ^2}\Vert y-u\Vert _2^2 \end{aligned}$$

(4)
where \(\eta ^i\in \mathbb {R}\) is the second-order coefficient, calculated via the Barzilai-Borwein rule [19]; \(\nabla \) is the gradient operator; and \(Poi(u)=\sum \limits _{k=1}^N(u_k-{y_k}\log {u_k})\) is the Poisson component.
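As a concrete illustration of these quantities, the sketch below evaluates the Poisson component \(Poi(u)\), its elementwise gradient \(\nabla Poi(u)=\mathbf {1}-y/u\), and a Barzilai-Borwein estimate of the second-order coefficient. The variable names are ours, and a small floor on u is assumed to keep the logarithm finite.

```python
import numpy as np

EPS = 1e-8  # assumed floor keeping log(u) and y/u finite

def poi(u, y):
    """Poisson component Poi(u) = sum_k (u_k - y_k * log(u_k))."""
    u = np.maximum(u, EPS)
    return np.sum(u - y * np.log(u))

def grad_poi(u, y):
    """Elementwise gradient of Poi(u): 1 - y / u."""
    return 1.0 - y / np.maximum(u, EPS)

def bb_coefficient(u_cur, u_prev, y):
    """Barzilai-Borwein estimate of the second-order coefficient eta^i [19]:
    eta = s^T g / s^T s, with s the iterate difference, g the gradient difference."""
    s = (u_cur - u_prev).ravel()
    g = (grad_poi(u_cur, y) - grad_poi(u_prev, y)).ravel()
    return float(s @ g) / max(float(s @ s), EPS)
```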
3 The Proposed Method
We now formulate our objective function on the basis of Sect. 2. First, a melioration based on nonlocal similarity is used to further constrain the synthesis sparse coefficients. Then, the mixed Gaussian-Poisson data fidelity term and the cosparse constraint are combined with the modified synthesis model.
The objective function thus makes use of synthesis sparsity, analysis sparsity and nonlocal similarity, combining these priors to improve reconstruction quality.
To make it more intuitive, we rewrite the objective function (6) as:
where \(R_m\) in (6) is the operator extracting the patch in \(\mathbb {R}^n\) at location m; \(\eta ,\tau \) and \(\lambda \) are parameters balancing the data fidelity term and the constraint terms; \(D=[D_1;D_2;\cdots D_c]\in \mathbb {R}^{n*c}\) is the synthesis dictionary, trained with the same clustering strategy as in [20]; \(\varOmega \in \mathbb {R}^{p*N}\) is the analysis dictionary, updated by GAP; \(\alpha \)/\(\alpha _m\) are the sparse coefficients; and \(\beta \)/\(\beta _m\) is the melioration of the sparse coefficients based on nonlocal similarity, determined via the following scheme (a code sketch is given after the steps below):
Step 1: Find the Q image patches with the smallest Euclidean distance to the input patch over the whole image;
Step 2: Calculate the similarity via formula (9) [8] and define the set of similar patches \(\theta '\):

$$\begin{aligned} w(m,j)=\frac{1}{Z(m)}\exp \left( -\frac{\Vert u_{in}-u_j^{le}\Vert _{2,a}^2}{h^2}\right) ,\qquad \theta '=\left\{ j\;|\;w(m,j)>\varsigma \right\} \end{aligned}$$

(9)
where \(u_{in}\) denotes the input image from which \(\beta \) is computed; \(u_j^{le}\) represents the \(j\)th patch from Step 1; \(\varsigma \) is the threshold; a is the standard deviation of the Gaussian kernel; h is the scaling parameter; and Z(m) is the normalizing coefficient;
Step 3: Calculate the similar image patch:
Step 4: Calculate \(\beta \):
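The following Python sketch illustrates the scheme under our own assumptions: a plain Euclidean patch distance stands in for the Gaussian-kernel-weighted distance of formula (9), and the mapping from the aggregated similar patch to \(\beta _m\) in Step 4 is shown as least-squares coding over D, which is only one plausible reading of the paper's formula.

```python
import numpy as np

def nonlocal_melioration(patches, m, D, Q=10, h=10.0, varsigma=1e-3):
    """Steps 1-4 (sketch): build the nonlocal melioration beta_m for patch m.

    patches  : (P, n) array of all vectorized image patches
    D        : (n, K) synthesis dictionary
    Q, h     : number of candidates and scaling parameter
    varsigma : similarity threshold defining the set theta'
    """
    # Step 1: the Q patches closest to patch m in Euclidean distance.
    d2 = np.sum((patches - patches[m]) ** 2, axis=1)
    candidates = np.argsort(d2)[:Q]

    # Step 2: similarity weights (formula (9)); keep those above the threshold.
    w = np.exp(-d2[candidates] / h**2)
    w /= max(w.sum(), 1e-12)          # Z(m), the normalizing coefficient
    keep = w > varsigma
    theta, w = candidates[keep], w[keep]
    w /= max(w.sum(), 1e-12)

    # Step 3: the aggregated similar patch (weighted average over theta').
    u_sim = w @ patches[theta]

    # Step 4 (assumed form): code the aggregated patch over D to obtain beta_m.
    beta_m, *_ = np.linalg.lstsq(D, u_sim, rcond=None)
    return beta_m
```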
To solve formula (7), we divide it into two sub-problems that are optimized alternately:
Sub-problem 1: with u fixed, solve \(\alpha \):
Here we use Iterative Re-weighted Least Squares Minimization (IRLSM) [21] for simplicity.
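A minimal IRLS sketch for this kind of sub-problem is given below. We assume a patch-wise objective of the form \(\Vert x-D\alpha \Vert _2^2+\tau \Vert \alpha -\beta \Vert _1\), which is our reading of sub-problem 1 rather than the paper's exact formula; the \(\ell _1\) term is handled through the usual reweighted quadratic surrogate.

```python
import numpy as np

def irlsm(x, D, beta, tau=0.5, n_iter=20, eps=1e-6):
    """IRLS for min_a ||x - D a||_2^2 + tau * ||a - beta||_1 (assumed form).

    Each iteration solves a weighted least-squares problem with weights
    w_j = 1 / (|a_j - beta_j| + eps), the standard l1 surrogate.
    """
    a = beta.copy()                       # warm start at the melioration
    DtD, Dtx = D.T @ D, D.T @ x
    for _ in range(n_iter):
        w = 1.0 / (np.abs(a - beta) + eps)
        # Normal equations of ||x - D a||^2 + tau * (a - beta)^T W (a - beta):
        # (D^T D + tau * W) a = D^T x + tau * W beta, with W = diag(w).
        A = DtD + tau * np.diag(w)
        b = Dtx + tau * (w * beta)
        a = np.linalg.solve(A, b)
    return a
```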
Sub-problem 2: with \(\alpha \) fixed, solve u:
We choose GAP to solve (14), as it serves as the counterpart of IRLSM in the cosparsity framework.
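For completeness, a bare-bones Greedy Analysis Pursuit sketch in the denoising setting is shown below. Following [13], it maintains a cosupport estimate and peels off the analysis rows with the largest responses; the augmented least-squares formulation with weight lam is our simplification, not the exact solver used for (14).

```python
import numpy as np

def gap_denoise(y, Omega, cosparsity, lam=0.4, peel=1):
    """Greedy Analysis Pursuit (sketch): min ||y - u||^2 s.t. Omega_L u ~ 0.

    y          : noisy signal (vector of length N)
    Omega      : (p, N) analysis dictionary
    cosparsity : target number of rows kept in the cosupport L
    """
    p, N = Omega.shape
    L = np.arange(p)                      # start with the full cosupport
    u = y.copy()
    while L.size > cosparsity:
        # Least-squares step: fit y while pushing Omega_L u towards zero.
        A = np.vstack([np.eye(N), np.sqrt(lam) * Omega[L]])
        b = np.concatenate([y, np.zeros(L.size)])
        u, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Remove the rows least consistent with the cosparse assumption.
        worst = np.argsort(np.abs(Omega[L] @ u))[-peel:]
        L = np.delete(L, worst)
    return u
```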
The complete method, called SRMM, is summarized in Table 1.
4 Experiments
In this section, experiments demonstrate the effectiveness and superiority of the proposed method (SRMM). First, experiments are conducted on the commonly used natural images in Fig. 1(a)-(d). We add two strengths of mixed noise (\(\sigma =5\) and \(\sigma =25\)) to test applicability; examples are shown in Fig. 1(e)-(f). Moreover, two micro-focus X-ray noisy images serve as real-data experimental subjects, see Fig. 1(g)-(h). Several methods are selected for comparison: (1) mixed noise removal: PURE-LET (P-L) [1], GAT+BM3D (G+B) [2], GAT+BLS-GSM (G+B-G) [3] and DAR [17]; (2) sparsity-based: K-SVD [7] and AK-SVD [14]; (3) multiple-constraint-based: CSR [10].
Our proposed denoising approach has seven vital parameters to be set: the patch size, the number of atoms in the learned dictionary, the number of cluster centers, the number of iterations \(T_{iter}\), and three regularization parameters \(\mu \), \(\tau \) and \(\lambda \). We set the patch size to \(7\times 7\) and the number of cluster centers to 8, with 200 atoms per cluster. The regularization parameters \(\mu \), \(\tau \) and \(\lambda \) are set to 1, 0.5 and 0.4, respectively, and the maximum total number of iterations \(T_{iter}\) is set to 10. These parameters were selected empirically for high performance in our experiments, but suitable variations are acceptable depending on image size and noise strength.
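For convenience, the reported settings can be gathered into a single configuration; this is merely a transcription of the values above, and the key names are our own shorthand.

```python
# Parameter settings reported in the text (key names are our own shorthand).
SRMM_PARAMS = {
    "patch_size": (7, 7),      # size of the extracted patches
    "num_clusters": 8,         # cluster centers of the clustering dictionary D
    "atoms_per_cluster": 200,  # atoms in each sub-dictionary
    "mu": 1.0,                 # regularization parameter
    "tau": 0.5,                # regularization parameter
    "lambda": 0.4,             # regularization parameter
    "T_iter": 10,              # maximum total number of iterations
}
```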
Due to space limitations, we only show enlarged portions of two natural-image results in Figs. 2 and 3. For the micro-focus X-ray images, we show edge-detection results in Figs. 4 and 5, since these images are low-contrast and hard to compare directly. Objective indexes are listed in Tables 2, 3 and 4: full-reference PSNR and MSSIM are used for natural images, while the no-reference measures MSR for smooth regions, LS for detail regions [17] and BRISQUE [22] are applied to the micro-focus X-ray images, since noise-free references are unavailable for them.
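As a reminder of the full-reference index used for the natural images, PSNR for images on a \([0,255]\) scale can be computed as below; MSSIM, MSR, LS and BRISQUE require their respective reference implementations and are omitted here.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    mse = np.mean((reference.astype(np.float64) - estimate) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```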
For natural images, PURE-LET not only blurs details but also introduces artifacts that degrade reconstruction quality. GAT+BLS-GSM suffers from over-smoothing and loses many details. DAR suffers from detail loss and leaves slight residual noise. GAT+BM3D outperforms PURE-LET, GAT+BLS-GSM and DAR in visual quality; however, some details, especially weak edges, are over-smoothed. K-SVD preserves most details but leaves residual noise. The results of AK-SVD are not ideal because the edge transitions are not smooth, and as the noise level, especially the Poisson noise strength, grows, the noise in non-smooth areas becomes hard to remove sufficiently. With its two constraints, CSR achieves better results than the above methods; nevertheless, some details are still lost. The proposed method removes most of the mixed Gaussian-Poisson noise while preserving more edges and details.
For the reconstruction results on micro-focus X-ray images in Fig. 4, PURE-LET fails to reconstruct 'Capacitance'. GAT+BM3D, DAR, K-SVD, AK-SVD and CSR suffer from over-smoothing, so fewer details of the pins indicated in Fig. 4 survive. GAT+BLS-GSM does not eliminate the noise well. The proposed SRMM, however, removes most of the noise and preserves more detail in the pins. For 'Bubbles' in Fig. 5, the proposed method and GAT+BM3D reconstruct most details, including the small, weak bubbles on the left side, but GAT+BM3D's result is smoother than SRMM's. The other methods fail to reconstruct the shapes of the weak, small bubbles as faithfully as SRMM does.
Tables 2, 3 and 4 list the objective indexes of all experiments. In Table 2, SRMM achieves the highest PSNR and MSSIM in most cases. For the micro-focus X-ray images, SRMM obtains the lowest BRISQUE, indicating the best overall quality, and its relatively high MSR (more noise removed) together with high LS (more structure kept) means that SRMM attains a better balance between noise removal and edge preservation.
5 Conclusions
In this paper, we propose a new sparse image reconstruction method for mixed Gaussian-Poisson noise with multiple constraints. In our model, three priors, namely synthesis sparsity, analysis sparsity and nonlocal similarity, are combined as sparse constraints to obtain better results. The synthesis and analysis sparsity are complementary and reconstruct noisy images by recovering their sparsity, while nonlocal similarity exploits structural information that helps reconstruct details. The objective function is solved via IRLSM and GAP alternately. Experimental results show that the proposed method outperforms the comparison methods in both visual quality and objective indexes.
References
Luisier, F., Blu, T., Unser, M.: Image denoising in mixed Poisson-Gaussian noise. IEEE Trans. Image Process. 20(3), 696–708 (2011)
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
Portilla, J., Strela, V., Wainwright, M.J., Simoncelli, E.P.: Image denoising using Gaussian scale mixtures in the wavelet domain. IEEE Trans. Image Process. 12(11), 1338–1351 (2003)
Lian, Q., Shi, B., Chen, S.Z.: Research advances on dictionary learning models, algorithms and applications. Acta Automatica Sin. 41(2), 240–260 (2015)
Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)
Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12), 3736–3745 (2006)
Aharon, M., Elad, M., Bruckstein, A.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006)
Buades, A., Coll, B., Morel, J.M.: A non-local algorithm for image denoising. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 60–65. IEEE Computer Society, Washington, DC (2005)
Chatterjee, P., Milanfar, P.: Clustering-based denoising with locally learned dictionaries. IEEE Trans. Image Process. 18(7), 1438–1451 (2009)
Dong, W., Li, X., Zhang, L., et al.: Sparsity-based image denoising via dictionary learning and structural clustering. In: 24th Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 457–464. IEEE Press, Colorado Springs (2011)
Dong, W., Shi, G., Ma, Y., Li, X.: Image restoration via simultaneous sparse coding: where structured sparsity meets Gaussian scale mixture. Int. J. Comput. Vis. 114(2), 217–232 (2015)
Nam, S., Davies, M.E., Elad, M., Gribonval, R.: The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 34(1), 30–56 (2013)
Giryes, R., Nam, S., Elad, M., et al.: Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. 441(1), 22–60 (2014)
Rubinstein, R., Peleg, T., Elad, M.: Analysis K-SVD: a dictionary-learning algorithm for the analysis sparse model. IEEE Trans. Signal Process. 61(3), 661–677 (2013)
Li, Y., Ding, S., Li, Z.: Dictionary learning with the cosparse analysis model based on summation of blocked determinants as the sparseness measure. Digit. Signal Process. 48, 298–309 (2016)
Makitalo, M., Foi, A.: Optimal inversion of the generalized Anscombe transformation for Poisson-Gaussian noise. IEEE Trans. Image Process. 22(1), 91–103 (2013)
Wu, Z., Gao, H., Ma, G., Wan, Y.: A dual adaptive regularization method to remove mixed Gaussian-Poisson noise. In: Chen, C.-S., Lu, J., Ma, K.-K. (eds.) ACCV 2016. LNCS, vol. 10116, pp. 206–221. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54407-6_14
Harmany, Z.T., Marcia, R.F., Willett, R.M.: This is SPIRAL-TAP: sparse Poisson intensity reconstruction algorithms - theory and practice. IEEE Trans. Image Process. 21(3), 1084–1096 (2012)
Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)
Dong, W., Zhang, L., Shi, G., et al.: Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 20(7), 1838–1857 (2011)
Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.S.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63(1), 1–38 (2010)
Mittal, A., Moorthy, A.K., Bovik, A.C.: No reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 21(12), 4695–4708 (2012)
Acknowledgments
This work was supported by the Natural Science Foundation of China under Grant 61403146, the Fundamental Research Funds for the Central Universities under Grant 2015ZM128, and the Science and Technology Program of Guangzhou, China under Grants 201707010054 and 201704030072.