ESAIM: COCV, Volume 28 (2022), Article Number 33, 34 pages.
DOI: https://doi.org/10.1051/cocv/2022032
Open Access. Published online 02 June 2022.
  1. T. Aspelmeier, C. Charitha and D.R. Luke, Local linear convergence of the ADMM/Douglas-Rachford algorithms without strong convexity and application to statistical imaging. SIAM J. Imaging Sci. 9 (2016) 842–868.
  2. H. Attouch, Z. Chbani, J. Peypouquet and P. Redont, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. Ser. B 168 (2018) 123–175.
  3. H. Attouch, X. Goudou and P. Redont, The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2 (2000) 1–34.
  4. H. Attouch, J. Peypouquet and P. Redont, Fast convex optimization via inertial dynamics with Hessian driven damping. J. Differ. Equ. 261 (2016).
  5. A. Beck, First-Order Methods in Optimization, volume 1 of MOS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics and the Mathematical Optimization Society (2017).
  6. A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2 (2009) 183–202.
  7. A. Beck and M. Teboulle, Gradient-based algorithms with applications to signal-recovery problems, in Convex Optimization in Signal Processing and Communications, edited by D. Palomar and Y. Eldar. Cambridge University Press, Cambridge (2009) 42–88.
  8. D. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (2014).
  9. S. Bonettini and V. Ruggiero, On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis. 44 (2012) 236–253.
  10. S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3 (2010) 1–122.
  11. H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland Mathematics Studies, No. 5, Notas de Matemática (50). North-Holland Publishing Co., Amsterdam (1973).
  12. J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing. Math. Comput. 78 (2009) 1515–1536.
  13. C. Clason and T. Valkonen, Nonsmooth Analysis and Optimization. Preprint https://arxiv.org/abs/2001.00216 (2020).
  14. E. Candès and M. Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. (2008) 21–30.
  15. A. Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20 (2004) 89–97.
  16. A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40 (2011) 120–145.
  17. A. Chambolle and T. Pock, An introduction to continuous optimization for imaging. Acta Numer. 25 (2016) 161–319.
  18. A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 159 (2016) 253–287.
  19. L. Chen and H. Luo, A unified convergence analysis of first order convex optimization methods via strong Lyapunov functions. Preprint arXiv:2108.00132 (2021).
  20. L. Chen and H. Luo, First order optimization methods based on Hessian-driven Nesterov accelerated gradient flow. Preprint arXiv:1912.09276 (2019).
  21. S. Chen, D. Donoho and M. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20 (1999) 33–61.
  22. A. Cherukuri, B. Gharesifard and J. Cortés, Saddle-point dynamics: conditions for asymptotic stability of saddle points. SIAM J. Control Optim. 55 (2017) 486–511.
  23. A. Cherukuri, E. Mallada and J. Cortés, Asymptotic convergence of constrained primal-dual dynamics. Syst. Control Lett. 87 (2016) 10–15.
  24. F. Clarke, Optimization and Nonsmooth Analysis. Number 5 in Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (1987).
  25. D. Davis and W. Yin, Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions. Preprint arXiv:1407.5210 (2015).
  26. W. Deng and W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66 (2016) 889–916.
  27. J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Number 16 in Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia (1996).
  28. B. Djafari-Rouhani and H. Khatibzadeh, Nonlinear Evolution and Difference Equations of Monotone Type in Hilbert Spaces, 1st edition. CRC Press, Boca Raton (2019).
  29. J. Douglas and H.H. Rachford, On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82 (1956) 421–439.
  30. J. Eckstein, Augmented Lagrangian and alternating direction methods for convex optimization: a tutorial and some illustrative computational results. Technical report, Rutgers University (2012).
  31. E. Esser, X. Zhang and T.F. Chan, A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3 (2010) 1015–1046.
  32. F. Facchinei and J. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 2. Springer, New York (2006).
  33. D. Feijer and F. Paganini, Stability of primal-dual gradient dynamics and applications to network optimization. Automatica 46 (2010) 1974–1981.
  34. G. França, D. Robinson and R. Vidal, ADMM and accelerated ADMM as continuous dynamical systems, in Proc. 35th Int. Conf. Mach. Learn. (ICML 2018) (2018) 2528–2536.
  35. M. Fortin and R. Glowinski, On decomposition-coordination methods using an augmented Lagrangian, in Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, volume 15 of Studies in Mathematics and Its Applications. North-Holland Publishing, Amsterdam (1983).
  36. D. Gabay and B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2 (1976) 17–40.
  37. P. Giselsson and S. Boyd, Linear convergence and metric selection for Douglas-Rachford splitting and ADMM. IEEE Trans. Automat. Contr. 62 (2017) 532–544.
  38. J. Han and D. Sun, Newton and quasi-Newton methods for normal maps with polyhedral sets. J. Optim. Theory Appl. 94 (1997) 659–676.
  39. D. Han, D. Sun and L. Zhang, Linear rate convergence of the alternating direction method of multipliers for convex composite quadratic and semi-definite programming. Preprint arXiv:1508.02134 (2015).
  40. A. Haraux, Systèmes Dynamiques Dissipatifs et Applications, vol. 17 of Recherches en Mathématiques Appliquées [Research in Applied Mathematics]. Masson, Paris (1991).
  41. X. He, R. Hu and Y. Fang, Convergence rates of inertial primal-dual dynamical methods for separable convex optimization problems. Preprint arXiv:2007.12428 (2020).
  42. B. He, F. Ma and X. Yuan, An algorithmic framework of generalized primal-dual hybrid gradient methods for saddle point problems. J. Math. Imaging Vis. 58 (2017) 279–293.
  43. B. He, Y. You and X. Yuan, On the convergence of primal-dual hybrid gradient algorithm. SIAM J. Imaging Sci. 7 (2014) 2526–2537.
  44. B. He and X. Yuan, Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 5 (2012) 119–149.
  45. B. Huang, S. Ma and D. Goldfarb, Accelerated linearized Bregman method. J. Sci. Comput. 54 (2013) 428–453.
  46. F. Jiang, X. Cai, Z. Wu and D. Han, Approximate first-order primal-dual algorithms for saddle point problems. Math. Comput. 90 (2021) 1227–1262.
  47. M. Kang, M. Kang and M. Jung, Inexact accelerated augmented Lagrangian methods. Comput. Optim. Appl. 62 (2015) 373–404.
  48. M. Kang, S. Yun, H. Woo and M. Kang, Accelerated Bregman method for linearly constrained ℓ1-ℓ2 minimization. J. Sci. Comput. 56 (2013) 515–534.
  49. G. Lan and R. Monteiro, Iteration-complexity of first-order penalty methods for convex programming. Math. Program. 138 (2013) 115–139.
  50. Y.J. Lee, J. Wu, J. Xu and L. Zikatanov, Robust subspace correction methods for nearly singular systems. Math. Models Methods Appl. Sci. 17 (2007) 1937–1963.
  51. D. Li, X. Sun and K. Toh, On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope. Math. Program. 179 (2020) 419–446.
  52. H. Li, C. Fang and Z. Lin, Convergence rates analysis of the quadratic penalty method and its applications to decentralized distributed optimization. Preprint arXiv:1711.10802 (2017).
  53. X. Li, D. Sun and K. Toh, An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for linear programming. SIAM J. Optim. 30 (2020) 2410–2440.
  54. T. Lin and M.I. Jordan, A control-theoretic perspective on optimal high-order optimization. Preprint arXiv:1912.07168 (2019).
  55. Y. Liu, X. Yuan, S. Zeng and J. Zhang, Partial error bound conditions and the linear convergence rate of the alternating direction method of multipliers. SIAM J. Numer. Anal. 56 (2018) 2095–2123.
  56. H. Lu, An O(s^r)-resolution ODE framework for discrete-time optimization algorithms and applications to convex-concave saddle-point problems. Preprint arXiv:2001.08826 (2020).
  57. H. Luo, Accelerated differential inclusion for convex optimization. Optimization (2021) https://doi.org/10.1080/02331934.2021.2002327.
  58. H. Luo and L. Chen, From differential equation solvers to accelerated first-order methods for convex optimization. Math. Program. (2021) https://doi.org/10.1007/s10107-021-01713-3.
  59. Y. Nesterov, Gradient methods for minimizing composite functions. Math. Program. Ser. B 140 (2013) 125–161.
  60. D. Niu, C. Wang, P. Tang, Q. Wang and E. Song, A sparse semismooth Newton based augmented Lagrangian method for large-scale support vector machines. Preprint arXiv:1910.01312 (2019).
  61. D. O’Connor and L. Vandenberghe, On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting. Math. Program. (2020) https://doi.org/10.1007/s10107-018-1321-1.
  62. S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4 (2005) 460–489.
  63. N. Parikh and S. Boyd, Proximal algorithms. Found. Trends Optim. 1 (2014).
  64. T. Pock, D. Cremers, H. Bischof and A. Chambolle, An algorithm for minimizing the Mumford-Shah functional, in 2009 IEEE 12th International Conference on Computer Vision. IEEE, Kyoto (2009) 1133–1140.
  65. D.W. Peaceman and H.H. Rachford, The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3 (1955) 28–41.
  66. L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 18 (1993) 227–244.
  67. L. Qi and J. Sun, A nonsmooth version of Newton’s method. Math. Program. 58 (1993) 353–367.
  68. L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D 60 (1992) 259–268.
  69. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition. Society for Industrial and Applied Mathematics, USA (2003).
  70. S. Sabach and M. Teboulle, Faster Lagrangian-based methods in convex optimization. Preprint arXiv:2010.14314 (2020).
  71. W. Su, S. Boyd and E. Candès, A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17 (2016) 1–43.
  72. M. Tao and X. Yuan, Accelerated Uzawa methods for convex optimization. Math. Comput. 86 (2016) 1821–1845.
  73. Q. Tran-Dinh, A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm. Optim. Lett. (2021) https://doi.org/10.1007/s11590-021-01775-4.
  74. Q. Tran-Dinh, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization. Comput. Optim. Appl. 72 (2019) 1–43.
  75. Q. Tran-Dinh and V. Cevher, Constrained convex minimization via model-based excessive gap, in Proc. Neural Information Processing Systems (NIPS), Vol. 27. Montreal, Canada (2014) 721–729.
  76. Q. Tran-Dinh, O. Fercoq and V. Cevher, A smooth primal-dual optimization framework for nonsmooth composite convex minimization. SIAM J. Optim. 28 (2018) 96–134.
  77. Q. Tran-Dinh and Y. Zhu, Augmented Lagrangian-based decomposition methods with non-ergodic optimal rates. Preprint arXiv:1806.05280 (2018).
  78. Q. Tran-Dinh and Y. Zhu, Non-stationary first-order primal-dual algorithms with faster convergence rates. SIAM J. Optim. 30 (2020) 2866–2896.
  79. T. Valkonen, Inertial, corrected, primal-dual proximal splitting. SIAM J. Optim. 30 (2020) 1391–1420.
  80. A. Wibisono, A. Wilson and M. Jordan, A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. 113 (2016) E7351–E7358.
  81. A.C. Wilson, B. Recht and M.I. Jordan, A Lyapunov analysis of accelerated methods in optimization. J. Mach. Learn. Res. 22 (2021) 1–34.
  82. Y. Xu, Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming. SIAM J. Optim. 27 (2017) 1459–1484.
  83. J. Xu and L. Zikatanov, Algebraic multigrid methods. Acta Numer. 26 (2017) 591–721.
  84. W.H. Yang and D. Han, Linear convergence of the alternating direction method of multipliers for a class of convex optimization problems. SIAM J. Numer. Anal. 54 (2016) 625–640.
  85. M. Yan and W. Yin, Self equivalence of the alternating direction method of multipliers. Preprint arXiv:1407.7400 (2015).
  86. W. Yin, Analysis and generalizations of the linearized Bregman method. SIAM J. Imaging Sci. 3 (2010) 856–877.
  87. X. Yuan, S. Zeng and J. Zhang, Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis. J. Mach. Learn. Res. 21 (2020) 1–74.
  88. X. Zeng, J. Lei and J. Chen, Dynamical primal-dual accelerated method with applications to network optimization. Preprint arXiv:1912.03690 (2019) 1–22.
  89. M. Zhu and T. Chan, An efficient primal-dual hybrid gradient algorithm for total variation image restoration. Technical report CAM Report 08-34, UCLA, Los Angeles, CA, USA (2008).
