Abstract
In this paper, a reduced half thresholding algorithm and its accelerated version, a reduced fast half thresholding algorithm, are proposed for solving large-scale \(L_{1/2}\)-regularized problems. At each iteration, the large-scale original problem is reduced to a sequence of small-scale subproblems, which are solved by the (fast) half thresholding algorithm. Global convergence of the reduced half thresholding algorithm is proven, and two techniques, a Newton acceleration technique and a shrinking technique, are presented to improve performance. Numerical results show that the presented algorithms are promising compared with state-of-the-art algorithms. As an example, our algorithms efficiently solve a signal recovery problem with 19,264,097 samples and signal length 29,890,095.
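The building block of (fast) half thresholding methods is the closed-form half thresholding operator of Xu et al. (2012), which plays the role that soft thresholding plays in ISTA/FISTA. The following Python sketch is an illustration of that operator and of a plain (unreduced) half thresholding iteration, not of the authors' reduced algorithm; the function names and the step size `mu` are our own choices.

```python
import numpy as np

def half_threshold(z, lam):
    """Closed-form half thresholding operator (Xu et al., 2012).

    Componentwise solves  min_x (x - z)^2 + lam * |x|^(1/2).
    Entries with |z| below (54^(1/3)/4) * lam^(2/3) are set exactly to zero.
    """
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.zeros_like(z)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # zeroing threshold
    mask = np.abs(z) > t
    zm = z[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(zm) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * zm * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

def iterative_half_thresholding(A, b, lam, mu, iters=500):
    """Plain half thresholding iteration for  min_x ||Ax - b||^2 + lam * ||x||_{1/2}^{1/2}.

    Each step is a gradient step on the data-fit term followed by the
    half thresholding operator with parameter lam * mu.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = half_threshold(x - mu * A.T @ (A @ x - b), lam * mu)
    return x
```

Like hard thresholding, the operator jumps discontinuously from 0 to a nonzero value at the threshold, which is what produces exactly sparse iterates.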
Data Availability
All data and codes included in this study are available upon request by contacting the corresponding author.
References
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2, 183–202 (2009)
Buccini, A., De la Cruz Cabrera, O., Donatelli, M., et al.: Large-scale regression with non-convex loss and penalty. Appl. Numer. Math. 157, 590–601 (2020)
Candes, E.J., Randall, P.A.: Highly robust error correction by convex programming. IEEE Trans. Inf. Theory 54, 2829–2840 (2008)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)
Candès, E.J., Romberg, J.: Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math. 6, 227–254 (2006)
Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
Candes, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)
Candes, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006)
Chan, R.H., Liang, H.X.: Half-quadratic algorithm for \(\ell _p-\ell _q\) problems with applications to TV-\(\ell _1\) image restoration and compressive sensing. In: Efficient Algorithms for Global Optimization Methods in Computer Vision, pp. 78–103. Springer, Berlin, Heidelberg (2014)
Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14, 707–710 (2007)
Chartrand, R.: Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 262–265. IEEE (2009)
Chartrand, R., Staneva, V.: Restricted isometry properties and nonconvex compressive sensing. Inverse Prob. 24, 035020 (2008)
Chen, X., Ge, D., Wang, Z., Ye, Y.: Complexity of unconstrained \(l_2-l_p\) minimization. Math. Program. 143, 371–383 (2014)
Daubechies, I., Defrise, M., Mol, C.D.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
Donoho, D.L.: High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Dis. Comput. Geom. 35, 617–652 (2006)
Esser, E., Lou, Y., Xin, J.: A method for finding structured sparse solutions to nonnegative least squares problems with applications. SIAM J. Imag. Sci. 6, 2010–2046 (2013)
Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348–1360 (2001)
Ge, D., Jiang, X., Ye, Y.: A note on the complexity of \(l_p\) minimization. Math. Program. 21, 1721–1739 (2011)
Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-laplacian priors. Adv. Neural. Inf. Process. Syst. 22, 1033–1041 (2009)
Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: Advances in Neural Information Processing Systems, pp. 801–808. MIT Press, Vancouver, Canada (2006)
Lanza, A., Morigi, S., Reichel, L., Sgallari, F.: A generalized Krylov subspace method for \(\ell _p-\ell _q\) minimization. SIAM J. Sci. Comput. 37(5), S30–S50 (2015)
Meinshausen, N., Yu, B., et al.: Lasso-type recovery of sparse representations for high-dimensional data. Ann. Stat. 37, 246–270 (2009)
Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)
Needell, D., Vershynin, R.: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 4, 310–316 (2010)
Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
Nocedal, J., Wright, S.: Numerical Optimization. Springer, New York (1999)
Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103, 127–152 (2005)
Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140, 125–161 (2013)
Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
Olshausen, B.A., Field, D.J.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609 (1996)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58, 267–288 (1996)
Tropp, J.A., Gilbert, A.C.: Signal recovery from partial information via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007)
Wang, G., Wei, X., Yu, B., Xu, L.: An efficient proximal block coordinate homotopy method for large-scale sparse least squares problems. SIAM J. Sci. Comput. 42, A395–A423 (2020)
Xu, Z., Chang, X., Xu, F., Zhang, H.: \(l_{1/2}\) regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23, 1013–1027 (2012)
Xu, Z.B., Zhang, H., Wang, Y., Chang, X.Y., Yong, L.: \(L_{1/2}\) regularization. Sci. China Inf. Sci. 53, 1159–1169 (2010)
Xu, Z.B., Guo, H., Wang, Y., Zhang, H.: The representation of \(L_{1/2}\) regularizer among \(L_q\) \((0 < q < 1)\) regularizer: an experimental study based on phase diagram. Acta Autom. Sin. (2011), to be published
Xie, L.L.: A new algorithm for fast solving \(L_{1/2}\)-regularized problem (in Chinese). Master's thesis, Dalian University of Technology (2014)
Yuan, G.X., Chang, K.W., Hsieh, C.J., Lin, C.J.: A comparison of optimization methods and software for large-scale \(l_1\)-regularized linear classification. J. Mach. Learn. Res. 11, 3183–3234 (2010)
Yin, P., Lou, Y., He, Q., Xin, J.: Minimization of \(l_{1/2}\) for compressed sensing. SIAM J. Sci. Comput. 37, A536–A563 (2015)
Xu, Z., Liang, G., Yao, W., Zhang, H.: Representative of \(l_{1/2}\) regularization among \(L_q\) \((0 < q\le 1)\) regularizations: an experimental study based on phase diagram. Acta Autom. Sin. 38, 1225–1228 (2012)
Zeng, J., Xu, Z., Zhang, B., Hong, W., Wu, Y.: Accelerated \(l_{1/2}\) regularization based SAR imaging via BCR and reduced Newton skills. Signal Process. 93, 1831–1844 (2013)
Zhang, Y., Lee, J.D., Jordan, M.I.: \(l_1\)-regularized neural networks are improperly learnable in polynomial time. In: International Conference on Machine Learning, pp. 993–1001. PMLR (2016)
Zou, H.: The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 101, 1418–1429 (2006)
Funding
This research was supported by the National Natural Science Foundation of China (11971092).
Ethics declarations
Conflict of interest
The authors declare that there are no potential conflicts of interest in this study.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Liang, X., Wang, G. & Yu, B. A Reduced Half Thresholding Algorithm. J Sci Comput 96, 61 (2023). https://doi.org/10.1007/s10915-023-02250-1