
A Reduced Half Thresholding Algorithm

Published in: Journal of Scientific Computing

Abstract

In this paper, a reduced half thresholding algorithm and its accelerated version, a reduced fast half thresholding algorithm, are proposed for solving large-scale \(L_{1/2}\)-regularized problems. At each iteration, the large-scale original problem is reduced to a sequence of small-scale subproblems, which are solved by the (fast) half thresholding algorithm. Global convergence of the reduced half thresholding algorithm is proven, and two techniques, a Newton acceleration technique and a shrinking technique, are presented to improve performance. Numerical results show that, compared with state-of-the-art algorithms, the proposed algorithms are promising. As an example, they efficiently solve a signal recovery problem with 19,264,097 samples and signal length 29,890,095.
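For context, the building block the proposed algorithms accelerate is the componentwise half thresholding operator from Xu et al.'s \(L_{1/2}\) thresholding representation theory, which drives the plain (unreduced) iterative half thresholding scheme. The following NumPy sketch illustrates that background operator and iteration only; the function names, toy problem, and parameter choices are illustrative assumptions, not the paper's reduced algorithm.

```python
import numpy as np

def half_threshold(t, lam):
    """Componentwise half thresholding operator H_{lam,1/2} (Xu et al., 2012):
    entries with |t| <= (54^(1/3)/4) * lam^(2/3) are set to zero; larger
    entries are shrunk via a closed-form cosine expression."""
    t = np.asarray(t, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(t)
    mask = np.abs(t) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(t[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * t[mask] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def iterative_half_thresholding(A, b, lam, mu, iters=200):
    """Plain iterative half thresholding for
    min_x ||Ax - b||^2 + lam * ||x||_{1/2}^{1/2}:
    a gradient step on the least-squares term, then H_{lam*mu,1/2}.
    The step size mu should satisfy mu < 1 / ||A||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = half_threshold(x + mu * A.T @ (b - A @ x), lam * mu)
    return x
```

Note the operator's jump discontinuity at the threshold: unlike soft thresholding, entries just above it map to two thirds of their value rather than shrinking continuously from zero, which is characteristic of the nonconvex \(L_{1/2}\) penalty.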


Data Availability

All data and code used in this study are available from the corresponding author upon request.

Notes

  1. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets.

  2. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets.


Funding

This research was supported by the National Natural Science Foundation of China (11971092).

Author information


Corresponding author

Correspondence to Bo Yu.

Ethics declarations

Conflict of interest

The authors declare that there are no potential conflicts of interest in this study.



About this article


Cite this article

Liang, X., Wang, G. & Yu, B. A Reduced Half Thresholding Algorithm. J Sci Comput 96, 61 (2023). https://doi.org/10.1007/s10915-023-02250-1
