Abstract
The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. Although the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner loop “starting rules” can be just as effective as “stopping rules” in guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term must have an accessible gradient. The algorithm reduces to the classical proximal-gradient algorithm in certain special cases and generalizes several other existing algorithms. In addition, under suitable strong-convexity conditions, we show a linear rate of convergence.
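The following Python sketch is not taken from the paper; the model problem min_x ½‖Kx − b‖² + λ‖Dx‖₁, the variable names, and the step-size choices are illustrative assumptions. It only shows the kind of nested scheme the abstract describes: an outer proximal-gradient step on the smooth term, and an inner dual loop with a number of iterations fixed in advance that approximates the proximal operator of the composite term λ‖D·‖₁, warm-started from the dual variable of the previous outer iteration.

```python
import numpy as np

def project_linf_ball(u, radius):
    """Projection onto the l-infinity ball of the given radius
    (the proximal operator of the Fenchel conjugate of radius*||.||_1)."""
    return np.clip(u, -radius, radius)

def nested_pd_prox_grad(K, b, D, lam, n_outer=200, n_inner=5, warm_start=True):
    """Illustrative nested scheme for min_x 0.5*||K x - b||^2 + lam*||D x||_1.

    Outer loop: proximal-gradient step on the smooth term.
    Inner loop: a fixed number of projected dual gradient steps that
    approximate the prox of the composite term lam*||D.||_1; the dual
    variable is warm-started from the previous outer iteration.
    """
    x = np.zeros(K.shape[1])
    u = np.zeros(D.shape[0])                          # inner-loop dual variable
    tau = 1.0 / np.linalg.norm(K, 2) ** 2             # outer step size, 1/||K||^2
    sigma = 1.0 / (tau * np.linalg.norm(D, 2) ** 2)   # inner step size
    for _ in range(n_outer):
        z = x - tau * (K.T @ (K @ x - b))             # forward (gradient) step
        if not warm_start:
            u = np.zeros(D.shape[0])                  # cold start: discard previous dual iterate
        for _ in range(n_inner):                      # number of inner iterations fixed in advance
            # one projected dual ascent step towards prox_{tau*lam*||D.||_1}(z)
            u = project_linf_ball(u + sigma * (D @ (z - tau * (D.T @ u))), lam)
        x = z - tau * (D.T @ u)                       # approximate backward (proximal) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    K = rng.standard_normal((80, n))
    b = rng.standard_normal(80)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # first-order finite differences
    print(nested_pd_prox_grad(K, b, D, lam=0.1)[:5])
```

Setting warm_start=False (a cold start of the inner dual variable at every outer iteration) illustrates, within this hypothetical example, the contrast drawn in the abstract: when the inner iteration count is fixed, it is the inner loop “starting rule” (the warm start) rather than a stopping rule that is relied upon for convergence.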
Funding
JC is sponsored by the China Scholarship Council. IL is a Research Associate of the Fonds de la Recherche Scientifique - FNRS and is also supported by a ULB ARC grant.
Cite this article
Chen, J., Loris, I. On starting and stopping criteria for nested primal-dual iterations. Numer Algor 82, 605–621 (2019). https://doi.org/10.1007/s11075-018-0616-x