
On starting and stopping criteria for nested primal-dual iterations

  • Original Paper
  • Published in Numerical Algorithms

Abstract

The importance of an adequate inner-loop starting point (as opposed to a sufficient inner-loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. Although the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner-loop “starting rules” can be just as effective as “stopping rules” for guaranteeing convergence. The algorithm applies to the numerical solution of convex optimization problems whose objective is the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms must take the form of the composition of a linear map and a proximable function, while the differentiable term must have an accessible gradient. The algorithm reduces to the classical proximal-gradient algorithm in certain special cases and generalizes several existing algorithms. In addition, under strong convexity assumptions, a linear rate of convergence is established.
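As an illustration only (the paper's precise iteration, step-size rules, and convergence conditions are not reproduced here), the following sketch applies the general idea described in the abstract, an outer proximal-gradient step on the smooth term combined with a fixed number of warm-started inner primal-dual iterations for the non-smooth prox subproblem, to a fused-lasso-type objective min_x ½‖x − b‖² + λ₁‖x‖₁ + λ₂‖Dx‖₁. All function names, step sizes, and operator choices are this sketch's assumptions, not the authors':

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the prox of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def nested_pd(b, lam1, lam2, n_outer=200, n_inner=5, gamma=1.0):
    # Illustrative nested scheme (an assumption, not the paper's exact method) for
    #   min_x 0.5*||x - b||^2 + lam1*||x||_1 + lam2*||D x||_1,
    # with D the forward-difference operator. The outer loop takes a gradient
    # step on the smooth term; the inner loop runs a FIXED number of
    # primal-dual (Chambolle-Pock-type) iterations on the prox subproblem,
    # WARM-STARTED from the previous dual variable u.
    n = b.size
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
    s, sigma = 0.5, 0.5                   # inner steps: s*sigma*||D||^2 <= 1 (||D||^2 <= 4)
    x = np.zeros(n)
    xb = x.copy()                         # extrapolated primal point
    u = np.zeros(n - 1)                   # dual variable, kept across outer iterations
    for _ in range(n_outer):
        z = x - gamma * (x - b)           # gradient step on f(x) = 0.5*||x - b||^2
        for _ in range(n_inner):          # approximate prox of gamma*(g + h o D) at z
            # dual step: the prox of the conjugate of gamma*lam2*||.||_1 is a clip
            u = np.clip(u + sigma * (D @ xb), -gamma * lam2, gamma * lam2)
            # primal step: prox of s*(0.5*||. - z||^2 + gamma*lam1*||.||_1)
            v = x - s * (D.T @ u)
            x_new = soft((v + s * z) / (1.0 + s), s * gamma * lam1 / (1.0 + s))
            xb = 2.0 * x_new - x          # extrapolation
            x = x_new
    return x
```

With lam1 = lam2 = 0 the inner prox reduces to the identity and the scheme collapses to a plain gradient iteration on the quadratic term, mirroring the abstract's remark that the method reduces to the classical proximal-gradient algorithm in special cases.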



Funding

JC is sponsored by the China Scholarship Council. IL is a Research Associate of the Fonds de la Recherche Scientifique - FNRS and is also supported by a ULB ARC grant.

Author information

Correspondence to Ignace Loris.


Cite this article

Chen, J., Loris, I. On starting and stopping criteria for nested primal-dual iterations. Numer Algor 82, 605–621 (2019). https://doi.org/10.1007/s11075-018-0616-x

