
The Equivalence of Three Types of Error Bounds for Weakly and Approximately Convex Functions

Journal of Optimization Theory and Applications

Abstract

We start by establishing the equivalence of three types of error bounds, namely weak sharp minima, level-set subdifferential error bounds and Łojasiewicz (Ł for short) inequalities, for weakly convex functions with exponent \(\alpha \in [0,1]\) and for approximately convex functions. We then apply these equivalence results to a class of nonconvex optimization problems whose objective function is the sum of a convex function and the composition of a locally Lipschitz function with a smooth vector-valued function. Finally, applying a characterization of lower-order regularization problems, we show that the level-set subdifferential error bound with exponent 1 and the Ł inequality with exponent \(\frac{1}{2}\) hold at a local minimum point.
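
For orientation, the three error bounds are commonly stated in the following forms in the error-bound literature; the displays below are a sketch under standard conventions, and the precise neighbourhoods, constants and exponent conventions used in the paper may differ. Here \(\bar{x}\) is a local minimizer of \(f\), \(S\) denotes the relevant solution set, \(d\) is the distance function, \(\partial f\) is a subdifferential of \(f\), and \(c, \varepsilon > 0\) are constants.

Weak sharp minima of order \(p\):
\[ f(x) \;\ge\; f(\bar{x}) + c\, d(x, S)^{p} \quad \text{for all } x \in B(\bar{x}, \varepsilon). \]

Level-set subdifferential error bound with exponent \(\beta\):
\[ d\bigl(x, [f \le f(\bar{x})]\bigr) \;\le\; c\, d\bigl(0, \partial f(x)\bigr)^{\beta} \quad \text{for all } x \in B(\bar{x}, \varepsilon) \text{ with } f(\bar{x}) < f(x) < f(\bar{x}) + \varepsilon. \]

Łojasiewicz (Ł) inequality with exponent \(\theta \in [0,1)\):
\[ \bigl(f(x) - f(\bar{x})\bigr)^{\theta} \;\le\; c\, d\bigl(0, \partial f(x)\bigr) \quad \text{for all } x \in B(\bar{x}, \varepsilon) \text{ with } f(x) > f(\bar{x}). \]

The lower-order regularization problems mentioned in the last sentence of the abstract are typically of the form \(\min_{x} \Vert Ax - b \Vert^{2} + \lambda \Vert x \Vert_{p}^{p}\) with \(p \in (0,1)\); this model is given here only as a representative example.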


Acknowledgements

The authors would like to thank the anonymous referees and the associate editor for their valuable suggestions, which have helped to improve the presentation of this manuscript. Bai’s work was supported in part by Chongqing University of Arts and Sciences (M2019 ME07) and Chongqing Jiaotong University (2020S0055). Li’s work was supported in part by the Chongqing National Applied Mathematics Center (CQ-NCAM-2021-02) and the foundation for high-level talents of Chongqing University of Arts and Sciences (P2017SC01). Zhu’s work was supported by the Natural Science Foundation of China (71871140).

Author information


Corresponding author

Correspondence to Minghua Li.

Additional information

Communicated by Aris Daniilidis.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Bai, S., Li, M., Lu, C. et al. The Equivalence of Three Types of Error Bounds for Weakly and Approximately Convex Functions. J Optim Theory Appl 194, 220–245 (2022). https://doi.org/10.1007/s10957-022-02016-z

