Abstract
Classical extragradient schemes and their stochastic counterparts represent a cornerstone for resolving monotone variational inequality problems. Yet such schemes have a per-iteration complexity of two projections onto a convex set and two evaluations of the map, the former of which can be relatively expensive. We consider two related avenues where the per-iteration complexity is significantly reduced: (i) a stochastic projected reflected gradient method requiring a single evaluation of the map and a single projection; and (ii) a stochastic subgradient extragradient method that requires two evaluations of the map, a single projection onto the associated feasibility set, and a significantly cheaper projection (onto a halfspace) computable in closed form. Under a variance-reduced framework reliant on a sample average of the map based on an increasing batch size, we prove almost sure convergence of the iterates to a random point in the solution set for both schemes. Additionally, non-asymptotic rate guarantees are derived for both schemes in terms of the gap function; notably, both rates match the best-known rates obtained in deterministic regimes. To address feasibility sets given by the intersection of a large number of convex constraints, we adapt both of the aforementioned schemes to a random projection framework. We then show that the random projection analogs of both schemes also display almost sure convergence under a weak-sharpness requirement; furthermore, without imposing the weak-sharpness requirement, both schemes are characterized by the optimal rate in terms of both the gap function of the projection of the averaged sequence onto the set and the infeasibility of this sequence. Preliminary numerics support the theoretical findings, and both schemes outperform standard extragradient schemes in terms of per-iteration complexity.
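To make the two single-projection templates concrete, the sketch below pairs the classical projected reflected gradient update [32] and the subgradient extragradient update with its closed-form halfspace projection [10] with a mini-batch sample average of the map whose batch size grows with the iteration count. This is a minimal illustrative sketch, not the exact algorithms analyzed in the paper: the bilinear monotone map, the box feasibility set, the constant step size, and the batch-size schedule N_k = k^2 are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact schemes):
# variance-reduced variants of (i) the projected reflected gradient method and
# (ii) the subgradient extragradient method, with the map F replaced by a
# mini-batch sample average whose batch size N_k = k^2 grows with k.
import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
A = A - A.T                                   # skew-symmetric => F(x) = Ax is monotone

def F_sample(x, batch):                       # sample average of a noisy evaluation of F
    noise = rng.standard_normal((batch, n)).mean(axis=0)
    return A @ x + noise

def proj_X(x):                                # projection onto the unit box [-1, 1]^n
    return np.clip(x, -1.0, 1.0)

def reflected_gradient(x0, alpha=0.05, iters=200):
    """(i) Single projection and a single (sample-averaged) map evaluation per iteration."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        Fk = F_sample(2 * x - x_prev, batch=k ** 2)   # increasing batch size N_k = k^2
        x_prev, x = x, proj_X(x - alpha * Fk)
    return x

def subgradient_extragradient(x0, alpha=0.05, iters=200):
    """(ii) One projection onto X plus a closed-form projection onto a halfspace."""
    x = x0.copy()
    for k in range(1, iters + 1):
        Fx = F_sample(x, batch=k ** 2)
        y = proj_X(x - alpha * Fx)
        # Halfspace T_k = {z : <x - alpha*F(x) - y, z - y> <= 0} contains X;
        # projecting onto it is available in closed form.
        w = x - alpha * Fx - y
        z = x - alpha * F_sample(y, batch=k ** 2)
        viol = w @ (z - y)
        x = z - max(viol, 0.0) / max(w @ w, 1e-12) * w
    return x

print(reflected_gradient(rng.standard_normal(n)))
print(subgradient_extragradient(rng.standard_normal(n)))
```

Since F(0) = 0 and the origin lies in the box, both sequences should settle near a solution as the batch size (and hence the accuracy of the sample-averaged map) increases.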
References
Antipin, A.S.: Method of convex programming using a symmetric modification of Lagrange function. Matekon 14, 23–38 (1978)
Bauschke, H.H., Combettes, P.L.: Convex analysis and monotone operator theory in Hilbert spaces. Springer Science & Business Media (2011)
Bertsekas, D.P.: Incremental proximal methods for large scale convex optimization. Math. Program. 129, 163–195 (2011)
Bertsekas, D.P.: Temporal difference methods for general projected equations. IEEE Trans. Autom. Control 56, 2128–2139 (2011)
Birge, J.R., Louveaux, F.: Introduction to stochastic programming. Springer Science & Business Media (2011)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Chen, X., Wets, R.J.-B., Zhang, Y.: Stochastic variational inequalities: residual minimization smoothing sample average approximations. SIAM J. Optim. 22, 649–673 (2012)
Chen, Y., Lan, G., Ouyang, Y.: Accelerated schemes for a class of variational inequalities. Math. Program. 165, 113–149 (2017)
Cui, S., Shanbhag, U.V.: On the analysis of reflected gradient and splitting methods for monotone stochastic variational inequality problems. In: 2016 IEEE 55th Conference on Decision and Control (CDC), IEEE, pp. 4510–4515 (2016)
Dang, C.D., Lan, G.: On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators. Comput. Optim. Appl. 60, 277–310 (2015)
Defazio, A., Bach, F., Lacoste-Julien, S.: SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inform. Process. Syst. 27, 1646–1654 (2014)
Facchinei, F., Kanzow, C.: Generalized Nash equilibrium problems. Ann. Oper. Res. 175, 177–211 (2010)
Facchinei, F., Pang, J.S.: Finite-dimensional variational inequalities and complementarity problems. Springer Science & Business Media (2007)
Yousefian, F., Nedić, A., Shanbhag, U.V.: On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: Randomized block coordinate and optimal averaging schemes. Set-Valued and Variational Analysis, pp. 789–819 (2018)
Hsieh, Y.-G., Iutzeler, F., Malick, J., Mertikopoulos, P. (2019)
Iusem, A.N., Jofré, A., Oliveira, R.I., Thompson, P.: Extragradient method with variance reduction for stochastic variational inequalities. SIAM J. Optim. 27, 686–724 (2017)
Iusem, A.N., Jofré, A., Thompson, P.: Incremental constraint projection methods for monotone stochastic variational inequalities. Math. Oper. Res. 44, 236–263 (2019)
Jalilzadeh, A., Shanbhag, U.V.: A proximal-point algorithm with variable sample-sizes (PPAWSS) for monotone stochastic variational inequality problems. In: 2019 Winter Simulation Conference (WSC), IEEE, pp. 3551–3562 (2019)
Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53, 1462–1475 (2008)
Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. Adv. Neural Inform. Process. Syst. 26, 315–323 (2013)
Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems 1, 17–58 (2011)
Kannan, A., Shanbhag, U.V.: Distributed computation of equilibria in monotone Nash games via iterative regularization techniques. SIAM J. Optim. 22, 1177–1205 (2012)
Kannan, A., Shanbhag, U.V.: Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Comput. Optim. Appl. 74, 779–820 (2019)
Kannan, A., Shanbhag, U.V., Kim, H.M.: Addressing supply-side risk in uncertain power markets: stochastic Nash models, scalable algorithms and error analysis. Optim. Methods Softw. 28, 1095–1138 (2013)
Konnov, I.: Equilibrium Models and Variational Inequalities. ISSN, Elsevier Science (2007)
Korpelevich, G.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Koshal, J., Nedic, A., Shanbhag, U.V.: Regularized iterative stochastic approximation methods for stochastic variational inequality problems. IEEE Trans. Autom. Control 58, 594–609 (2013)
Malitsky, Y.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
Mokhtari, A., Ozdaglar, A., Pattathil, S.: A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In: International Conference on Artificial Intelligence and Statistics, PMLR, pp. 1497–1507 (2020)
Nedić, A.: Random projection algorithms for convex set intersection problems. In: 49th IEEE Conference on Decision and Control (CDC), IEEE, pp. 7655–7660 (2010)
Nedić, A.: Random algorithms for convex minimization problems. Math. Program. 129, 225–253 (2011)
Nemirovski, A.: Prox-method with rate of convergence \(\mathcal {O}(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004)
Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)
Polyak, B.T.: Introduction to optimization. Optimization Software, New York (1987)
Ravat, U., Shanbhag, U.V.: On the characterization of solution sets of smooth and nonsmooth convex stochastic Nash games. SIAM J. Optim. 21, 1168–1199 (2011)
Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)
Rockafellar, R.T., Wets, R. J.-B.: Variational analysis, vol. 317, Springer Science & Business Media (2009)
Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on stochastic programming: modeling and theory, vol. 16, SIAM (2014)
Wang, M., Bertsekas, D.P.: Incremental constraint projection methods for variational inequalities. Math. Program. 150, 321–363 (2015)
Wang, M., Bertsekas, D.P.: Stochastic first-order methods with random constraint projection. SIAM J. Optim. 26, 681–717 (2016)
Wang, M., Chen, Y., Liu, J., Gu, Y.: Random multi-constraint projection: Stochastic gradient methods for convex optimization with many constraints. arXiv:1511.03760 (2015)
Xu, H.: Sample average approximation methods for a class of stochastic variational inequality problems. Asia-Pacific Journal of Operational Research 27, 103–119 (2010)
Yin, H., Shanbhag, U.V., Mehta, P.G.: Nash equilibrium problems with scaled congestion costs and shared constraints. IEEE Trans. Autom. Control 56, 1702–1708 (2011)
Yousefian, F., Nedić, A., Shanbhag, U.V.: A regularized smoothing stochastic approximation (RSSA) algorithm for stochastic variational inequality problems. In: Proceedings of the 2013 Winter Simulation Conference: Simulation: Making Decisions in a Complex World, IEEE Press, pp. 933–944 (2013)
Yousefian, F., Nedić, A., Shanbhag, U.V.: Optimal robust smoothing extragradient algorithms for stochastic variational inequality problems. In: 53rd IEEE Conference on Decision and Control, IEEE, pp. 5831–5836 (2014)
Yousefian, F., Nedić, A., Shanbhag, U.V.: Self-tuned stochastic approximation schemes for non-Lipschitzian stochastic multi-user optimization and Nash games. IEEE Trans. Autom. Control 61, 1753–1766 (2016)
Yousefian, F., Nedić, A., Shanbhag, U.V.: On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems. Math. Program. 165, 391–431 (2017)
Acknowledgements
This paper has greatly benefited from the helpful comments of the associate editor as well as the two referees. In particular, we want to thank one of the referees, whose careful reading aided in resolving errors and whose recommendations helped add further clarity to the proofs.
The authors have been partly funded by NSF Grants 1246887 and 1400217, DOE ARPA-E DE-AR0001076, as well as the Gary and Sheila Bello Chair funds. A preliminary conference version of this work appeared in [9].
Cite this article
Cui, S., Shanbhag, U.V. On the Analysis of Variance-reduced and Randomized Projection Variants of Single Projection Schemes for Monotone Stochastic Variational Inequality Problems. Set-Valued Var. Anal 29, 453–499 (2021). https://doi.org/10.1007/s11228-021-00572-6