Abstract
In Baraldi and Kouri (Math Program 20:1–40, 2022), we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space, a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated excellent performance and scalability with problem size. In this paper, we enrich the convergence analysis of this algorithm, proving strong convergence of the iterates with guaranteed rates. In particular, we demonstrate that the trust-region algorithm recovers superlinear, and even quadratic, convergence rates when a second-order Taylor approximation of the smooth objective term is used.
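For readers who want the setting at a glance, the display below is a minimal sketch of the composite problem class and the second-order trust-region model the abstract refers to; the symbols (X for the Hilbert space, f for the smooth term, φ for the nonsmooth convex term, H_k for the curvature operator, Δ_k for the trust-region radius) are illustrative choices and not notation fixed by the paper.

% Composite problem over a Hilbert space X (illustrative notation):
\min_{x \in X} \; f(x) + \varphi(x),
\qquad f \ \text{smooth, possibly nonconvex}, \quad \varphi \ \text{convex, possibly nonsmooth}.

% Trust-region subproblem at the iterate x_k, with the smooth term replaced by a
% second-order Taylor model (H_k \approx \nabla^2 f(x_k)):
\min_{\|s\|_X \le \Delta_k} \; m_k(s), \qquad
m_k(s) = f(x_k) + \langle \nabla f(x_k), s \rangle_X
       + \tfrac{1}{2} \langle H_k s, s \rangle_X + \varphi(x_k + s).

In this reading of the abstract, choosing H_k as a sufficiently accurate (possibly inexact) approximation of the second derivative of f is what yields the superlinear or quadratic local rates for the iterates x_{k+1} = x_k + s_k.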
Data availability
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
References
Baraldi, R.J., Kouri, D.P.: A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations. Math. Program. 20, 1–40 (2022)
Clever, D., Lang, J., Ulbrich, S., Ziems, C.: Generalized multilevel SQP-methods for PDAE-constrained optimization based on space-time adaptive PDAE solvers. In: Leugering, G., Engell, S., Griewank, A., Hinze, M., Rannacher, R., Schulz, V., Ulbrich, M., Ulbrich, S. (eds.) Constrained Optimization and Optimal Control for Partial Differential Equations, pp. 51–74. Springer, Basel (2012)
Garreis, S., Ulbrich, M.: An inexact trust-region algorithm for constrained problems in Hilbert space and its application to the adaptive solution of optimal control problems with PDEs. Technical University of Munich, Munich (2019)
Kouri, D.P., Heinkenschloss, M., Ridzal, D., van Bloemen Waanders, B.G.: A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty. SIAM J. Sci. Comput. 35(4), 1847–1879 (2013)
Kouri, D.P., Heinkenschloss, M., Ridzal, D., van Bloemen Waanders, B.G.: Inexact objective function evaluations in a trust-region algorithm for PDE-constrained optimization under uncertainty. SIAM J. Sci. Comput. 36(6), 3011–3029 (2014)
Kouri, D.P., Ridzal, D.: Inexact trust-region methods for PDE-constrained optimization. In: Frontiers in PDE-Constrained Optimization, pp. 83–121. Springer, New York, NY (2018)
Muthukumar, R., Kouri, D.P., Udell, M.: Randomized sketching algorithms for low-memory dynamic optimization. SIAM J. Optim. 31(2), 1242–1275 (2021)
Zahr, M.J., Carlberg, K.T., Kouri, D.P.: An efficient, globally convergent method for optimization under uncertainty using adaptive model reduction and sparse grids. SIAM/ASA J. Uncertain. Quantif. 7(3), 877–912 (2019)
Ziems, J.C., Ulbrich, S.: Adaptive multilevel inexact SQP methods for PDE-constrained optimization. SIAM J. Optim. 21(1), 1–40 (2011). https://doi.org/10.1137/080743160
Zou, Z., Kouri, D.P., Aquino, W.: A locally adapted reduced-basis method for solving risk-averse PDE-constrained optimization problems. SIAM/ASA J. Uncertain. Quantif. 10(4), 1629–1651 (2022)
Josephy, N.H.: Newton’s method for generalized equations. Mathematics Research Center, University of Wisconsin-Madison, Madison (1979)
Josephy, N.H.: Quasi-Newton methods for generalized equations. Mathematics Research Center, University of Wisconsin-Madison, Madison (1979)
Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974)
Dontchev, A.L.: Generalizations of the Dennis–Moré theorem. SIAM J. Optim. 22(3), 821–830 (2012)
Aragón Artacho, F.J., Belyakov, A., Dontchev, A.L., López, M.: Local convergence of quasi-Newton methods under metric regularity. Comput. Optim. Appl. 58(1), 225–247 (2014)
Dontchev, A.L.: Local convergence of the Newton method for generalized equations. Comptes Rendus Acad. Sci. Ser. Math. 322(4), 327–331 (1996)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings, vol. 543. Springer, Cham (2009)
Cibulka, R., Dontchev, A., Geoffroy, M.H.: Inexact Newton methods and Dennis–Moré theorems for nonsmooth generalized equations. SIAM J. Control Optim. 53(2), 1003–1019 (2015)
Dontchev, A.L., Rockafellar, R.T.: Convergence of inexact Newton methods for generalized equations. Math. Program. 139(1), 115–137 (2013)
Izmailov, A.F., Kurennoy, A.S., Solodov, M.V.: The Josephy–Newton method for semismooth generalized equations and semismooth SQP for optimization. Set Valued Var. Anal. 21(1), 17–45 (2013)
Izmailov, A.F., Solodov, M.V.: Inexact Josephy–Newton framework for generalized equations and its applications to local analysis of Newtonian methods for constrained optimization. Comput. Optim. Appl. 46(2), 347–368 (2010)
Kanzow, C., Lechner, T.: Globalized inexact proximal Newton-type methods for nonconvex composite functions. Comput. Optim. Appl. 78(2), 377–410 (2021)
Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 24(3), 1420–1443 (2014). https://doi.org/10.1137/130921428
Byrd, R.H., Nocedal, J., Oztoprak, F.: An inexact successive quadratic approximation method for L-1 regularized optimization. Math. Program. 157(2), 375–396 (2016)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, Cham (2018)
Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust Region Methods. SIAM, Philadelphia, PA (2000)
Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia, PA (2017)
Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim. 21(4), 1721–1739 (2011). https://doi.org/10.1137/11082381X
Dembo, R.S., Eisenstat, S.C., Steihaug, T.: Inexact Newton methods. SIAM J. Numer. Anal. 19(2), 400–408 (1982)
Lin, C.-J., Moré, J.J.: Newton’s method for large bound-constrained optimization problems. SIAM J. Optim. 9(4), 1100–1127 (1999)
Nocedal, J., Wright, S.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering, Springer, Cham (2006)
Dennis, J.E., Jr., Mei, H.H.W.: Two new unconstrained optimization algorithms which use function and gradient values. J. Optim. Theory Appl. 28, 453–482 (1979). https://doi.org/10.1007/BF00932218
Kelley, C.T.: Iterative Methods for Optimization, vol. 18. SIAM, Philadelphia (1999)
Acknowledgements
This research was sponsored by the U.S. Department of Energy Office of Science and the U.S. Air Force Office of Scientific Research. This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan.
Funding
U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research; U.S. Air Force Office of Scientific Research.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Baraldi, R.J., Kouri, D.P. Local convergence analysis of an inexact trust-region method for nonsmooth optimization. Optim Lett 18, 663–680 (2024). https://doi.org/10.1007/s11590-023-02092-8