Abstract
We first establish the equivalence of three types of error bounds, namely weak sharp minima, level-set subdifferential error bounds, and Łojasiewicz (Ł for short) inequalities, for weakly convex functions with exponent \(\alpha \in [0,1]\) and for approximately convex functions. We then apply these equivalence results to a class of nonconvex optimization problems whose objective functions are the sum of a convex function and the composition of a locally Lipschitz function with a smooth vector-valued function. Finally, applying a characterization of lower-order regularization problems, we show that the level-set subdifferential error bound with exponent 1 and the Ł inequality with exponent \(\frac{1}{2}\) hold at a local minimum point.
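For orientation, the three notions can be sketched in commonly used Hölder-type forms near a local minimizer \(\bar{x}\) of \(f\); this is a hedged sketch with generic notation (the set \(S\), the subdifferential \(\partial f\), and the exponent conventions are assumptions here, and the paper's precise definitions may differ in detail):

```latex
% Sketch of the three error-bound notions near a local minimizer \bar{x}.
% S is the set of local minimizers with value f(\bar{x}), \partial f a suitable
% subdifferential, d(x,S) the distance to S, and c, \epsilon > 0 constants.
\begin{align*}
  &\text{weak sharp minima (exponent } p\text{):}
    && f(x) - f(\bar{x}) \ge c \, d(x, S)^{p},\\
  &\text{level-set subdifferential error bound:}
    && d\bigl(x, [f \le f(\bar{x})]\bigr) \le c \, d\bigl(0, \partial f(x)\bigr)^{p},\\
  &\text{\L{}ojasiewicz inequality (exponent } \theta\text{):}
    && \lvert f(x) - f(\bar{x})\rvert^{\theta} \le c \, d\bigl(0, \partial f(x)\bigr),
\end{align*}
% each required to hold for all x with \|x - \bar{x}\| \le \epsilon.
```

In this notation, the Ł inequality with exponent \(\frac{1}{2}\) mentioned in the abstract corresponds to \(\theta = \frac{1}{2}\).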
Acknowledgements
The authors would like to thank the anonymous referees and the associate editor for their valuable suggestions, which have helped to improve the presentation of this manuscript. Bai's work was supported in part by Chongqing University of Arts and Sciences (M2019 ME07) and Chongqing Jiaotong University (2020S0055). Li's work was supported in part by Chongqing National Applied Mathematics Center (CQ-NCAM-2021-02) and the foundation for high-level talents of Chongqing University of Arts and Sciences (P2017SC01). Zhu's work was supported by the Natural Science Foundation of China (71871140).
Communicated by Aris Daniilidis.
Cite this article
Bai, S., Li, M., Lu, C. et al. The Equivalence of Three Types of Error Bounds for Weakly and Approximately Convex Functions. J Optim Theory Appl 194, 220–245 (2022). https://doi.org/10.1007/s10957-022-02016-z
Keywords
- Weak sharp minima
- Level-set subdifferential error bounds
- Łojasiewicz inequalities
- Lower-order regularization problems