An infeasible-start framework for convex quadratic optimization, with application to constraint-reduced interior-point and other methods

  • Full Length Paper
  • Series A

Mathematical Programming

Abstract

A framework is proposed for solving general convex quadratic programs (CQPs) from an infeasible starting point by invoking an existing feasible-start algorithm tailored for inequality-constrained CQPs. The central tool is an exact penalty function scheme equipped with a penalty-parameter updating rule. The feasible-start algorithm merely has to satisfy certain general requirements, and so does the updating rule. Under mild assumptions, the framework is proved to converge on CQPs with both inequality and equality constraints and, at negligible additional cost per iteration, to produce an infeasibility certificate, together with a feasible point for an (approximately) \(\ell _1\)-least relaxed feasible problem, when the given problem has no feasible solution. The framework is applied to a feasible-start constraint-reduced interior-point algorithm previously shown to be highly performant on problems with many more inequality constraints than variables ("imbalanced" problems). A numerical comparison with popular codes (OSQP, qpOASES, MOSEK) is reported on both randomly generated problems and support-vector machine classifier training problems. The results show that the former typically outperforms the latter on imbalanced problems. Finally, application of the proposed infeasible-start framework to other feasible-start algorithms is briefly considered and is tested on a simplex iteration.


Notes

  1. Note however that, in the context of feasible-direction methods for general nonlinear optimization problems, methods that craftily combine the two phases have been known for decades; see [34, 36].

  2. One option, not considered in the present paper, may be to resort to homogeneous self-dual (HSD) embedding methods, first proposed in [44] for the solution of linear optimization problems and later extended in [43] to that of linear complementarity problems (which include convex quadratic programming as a special case). We are unaware of studies combining HSD with a simplex algorithm or a constraint-reduction method. Development, analysis, and numerical testing of such combinations may be of much interest.

  3. Earlier algorithms with such features include, in the context of sequential quadratic programming (in nonlinear optimization), that of [10].

  4. Only Proposition 1 invokes boundedness of the primal optimal solution set. It is not clear at this point whether such an assumption is indeed necessary. In any case, if it is, boundedness can of course be achieved by imposing large bounds on the components of \(\mathbf{x}\).

  5. Note that, while, for primal linear optimization problems in standard form, dual strict feasibility implies boundedness of the primal optimal solution set (see, e.g., [42, Theorem 2.3]), it does not for problem (P).

  6. An \(\ell _\infty \) penalty function can be substituted for this \(\ell _1\) penalty function with minor adjustments: see [22, 23] for details.

  7. Iterations that require a strictly primal-feasible \(x^\ell \) are automatically catered to, subject to providing a strictly feasible initial \(({\mathbb {x}},{\mathbb {z}})\) in the Initialization step of Meta-Algorithm IS. Due to the relaxation, such strictly feasible points for (P\(_\varphi \)) are readily available.

  8. A constraint-reduced version of Mehrotra’s Predictor Corrector.

  9. While the authors of [28] (who are also the authors of the present paper) were not able to do away with such a linear-independence assumption in proving that Theorem 1 in [28] holds, intuition and extensive numerical testing suggest that Assumption 3 can be dropped.

  10. Numerical experimentation, including tests with small-size problems that do not satisfy Assumption 4, suggests that mere boundedness of the optimal solution set of (P\('\)), as implied by Assumption 1\('\), is sufficient for the results to hold. A proof, however, remains elusive at this time. In any case, boundedness of the feasible set can of course be achieved by imposing appropriately large bounds on the components of \(\mathbf{x}\).

  11. Alternatively, \(\mathbf{x}^{\hat{k}}\) is also feasible for the adjusted (rather than relaxed) problem obtained by still replacing \(\mathbf{b}\) with \(\mathbf{b}'\) but then including instead the equality constraints \(\mathbf{C}\mathbf{x}=\mathbf{d}'\), with \(\mathbf{d}'=\mathbf{C}\mathbf{x}^{\hat{k}}\).

  12. Approximate primal feasibility \((\mathbf{s},\mathbf{t}_+,\mathbf{t}_-)\ge \mathbf{0}\) (or \(\approx \mathbf{0}\)) is implicitly taken into account in the last three terms in the numerator.

  13. A possibility would be to freeze \(\mathbf{z}\) at zero when \(\mathbf{x}^0\) is primal feasible and indeed, when a component \(z_i^k\) of \(\mathbf{z}^k\) reaches zero at some iteration k, freeze that component to zero thereafter. This was not done in the tests reported here.

  14. Here we denote the CR-MPC algorithm with constraint reduction turned off as MPC\(^{*}\) (rather than MPC) to avoid confusion with the original MPC algorithm in [31].

  15. To avoid biases due to different stopping criteria, we also experimented with tolerances set to \(10^{-6}\) (while keeping \(\texttt {tol} = 10^{-8}\) for all MPC versions) and observed results with slightly fewer iterations and nearly identical computation time.

  16. Here the iteration counts of OSQP and qpOASES are significantly higher than those of the other solvers, as expected. However, OSQP failed to converge within 10 000 iterations on several problem instances. We observed that OSQP converges on all problem instances when the tolerance is relaxed from \(10^{-8}\) to \(10^{-4}\), but its computation time remains higher than that of the other solvers compared.

  17. We used an implementation due to Luke Winternitz, who kindly made it available to us.

  18. Two-phase RPS is not included in Fig. 1b because when \(p\not =0\), the dual of (P) is not in standard dual form, a form it requires.

  19. Alternatively, following the suggestion made at the very end of Sect. 5, an \((n+1)\)-variable problem with feasible start could be solved.

  20. A Matlab-formatted version of these data sets was kindly made available to us by Jin Jung, first author of [26].

  21. Typical base iterations accept, and some require, problems expressed in standard form.

  22. Slightly modified versions of penalty_parameter_update(), of the Reqs PU’ of Sect. 4 and, accordingly, of the penalty-parameter updating rule of Sect. 6.1 are to be used here. Specifically, all instances of \(\mathbf{z}\) and the associated dual variable \(\varvec{\xi }\) must be removed.

  23. This unbounded feasible primal sequence can be constructed by taking \(\{{\mathbb {x}}^{\ell }\}\) such that \({\mathbb {x}}_B^{\ell }:= \hat{x}_B + t^\ell \Delta \hat{x}_B\) and \({\mathbb {x}}_{B^c}^{\ell }:= t^\ell e_j\), where \(\hat{x}\), \(\Delta \hat{x}_B\), and B are respectively the primal BFS, direction, and basis at which unboundedness is detected in Iteration RPS (Pseudocode 6), \(e_j\) is a standard unit vector that takes value 1 at the entering column index j and 0 elsewhere, and \(t^\ell \rightarrow \infty \) as \(\ell \rightarrow \infty \). Here the direction \(\Delta \hat{x}_B\) and the entering column index j would be appended to the output of RPS_iteration() as part of \(\texttt {info}^{\mathrm{out}}\) (together with \(\varvec{\lambda }_1^{\mathrm{out}}\) and \(\varvec{\lambda }_2^{\mathrm{out}}\)) when unboundedness is detected. In Meta-Algorithm IS, this unbounded sequence would trigger penalty_parameter_update() to increase the penalty parameter \(\varphi \). Once \(\varphi \) has been increased, the primal variable needs to be reinitialized to the BFS \(\hat{x}\) (at which unboundedness was detected) so that RPS_iteration() can be applied to solve the updated, possibly bounded problem. (A small sketch of this construction follows these notes.)
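For illustration, the ray construction of footnote 23 can be written out in a few lines. The sketch below is ours, not part of the paper's implementation; it assumes dense NumPy arrays and the quantities \(\hat{x}\), \(\Delta \hat{x}_B\), B, and j returned by RPS_iteration() upon detecting unboundedness, and the function name unbounded_point is hypothetical.

```python
import numpy as np

def unbounded_point(x_hat, dx_B, B, j, t):
    """Point x(t) on the ray of footnote 23; feasible for (61) for every t >= 0."""
    x = np.zeros_like(x_hat)
    idx = np.array(B)
    x[idx] = x_hat[idx] + t * dx_B   # basic part: x_B(t) = x_hat_B + t * dx_B
    x[j] = t                         # entering (nonbasic) coordinate grows linearly in t
    return x                         # C @ x == d and x >= 0 hold for all t >= 0
```

Feasibility follows since \(C_B\Delta \hat{x}_B + A_j = 0\) by construction of the simplex direction, and \(\Delta \hat{x}_B \ge 0\) whenever unboundedness is detected.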

References

  1. Antczak, T.: Exact penalty functions method for mathematical programming problems involving invex functions. Eur. J. Oper. Res. 198, 29–36 (2009)

  2. Andersen, E.D., Andersen, K.D.: The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In: Frenk, H., Roos, K., Terlaky, T., Zhang, S. (eds) High performance optimization. Applied optimization, vol 33. Springer, Boston, MA (2000). https://doi.org/10.1007/978-1-4757-3216-0_8

  3. Andersen, E.D., Roos, C., Terlaky, T.: On implementing a primal-dual interior-point method for conic quadratic optimization. Math. Program. 95(2), 249–277 (2003)

  4. Andersson, J.A.E., Gillis, J., Horn, G., Rawlings, J.B., Diehl, M.: CasADi—a software framework for nonlinear optimization and optimal control. Math. Program. Comput. 11(1), 1–36 (2019). https://doi.org/10.1007/s12532-018-0139-4

  5. Anstreicher, K.: A standard form variant, and safeguarded linesearch, for the modified Karmarkar algorithm. Math. Program. 47, 337–351 (1990)

  6. Benson, H.Y., Shanno, D.F.: An exact primal-dual penalty method approach to warmstarting interior-point methods for linear programming. Comput. Optim. Appl. 38(3), 371–399 (2007). https://doi.org/10.1007/s10589-007-9048-6

  7. Bertsekas, D., Nedic, A., Ozdaglar, A.: Convex Analysis and Optimization. Athena Scientific, Belmont (2003)

  8. Bertsimas, D., Tsitsiklis, J.: Introduction to Linear Optimization. Athena, Belmont (1997)

  9. Boyd, S., Parikh, N., Chu, E.: Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Now Publishers Inc, Boston (2011)

  10. Byrd, R., Curtis, F., Nocedal, J.: Infeasibility detection and SQP methods for nonlinear optimization. SIAM J. Optim. 20(5), 2281–2299 (2010)

  11. Byrd, R.H., Nocedal, J., Waltz, R.A.: Steering exact penalty methods for nonlinear programming. Optim. Methods Softw. 23(2), 197–213 (2008). https://doi.org/10.1080/10556780701394169

  12. Chapelle, O.: Training a support vector machine in the primal. Neural Comput. 19(5), 1155–1178 (2007)

  13. Coleman, T., Conn, A.: Nonlinear programming via an exact penalty method: asymptotic analysis. Math. Program. 24, 123–136 (1982)

  14. Conn, A.: Constrained optimization using a nondifferentiable penalty function. SIAM J. Numer. Anal. 10(4), 760–784 (1973)

  15. Conn, A.R., Mongeau, M.: Discontinuous piecewise linear optimization. Math. Program. 80(3), 315–380 (1998). https://doi.org/10.1007/BF01581171

  16. Coulibaly, Z., Orban, D.: An \(\ell _1\) elastic interior-point method for mathematical programs with complementarity constraints. SIAM J. Optim. 22, 187–211 (2012)

  17. Di Pillo, G., Facchinei, F., Grippo, L.: An RQP algorithm using a differentiable exact penalty function for inequality constrained problems. Math. Program. 55, 49–68 (1992)

  18. Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput. 6(4), 327–363 (2014)

  19. Fine, S., Scheinberg, K.: Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res. 2, 243–264 (2001)

  20. Gertz, E.M., Griffin, J.D.: Support vector machine classifiers for large data sets. Technical Report ANL/MCS-TM-289, Argonne National Laboratory (2005). https://doi.org/10.2172/881587

  21. Hassan, H., Baharum, A.: Generalized logarithmic penalty function method for solving smooth nonlinear programming involving invex functions. Arab J. Basic Appl. Sci. 26, 202–214 (2019)

  22. He, M.: Infeasible constraint reduction for linear and convex quadratic optimization. Ph.D. thesis, University of Maryland (2011). http://hdl.handle.net/1903/12772. Accessed 2019

  23. He, M.Y., Tits, A.L.: Infeasible constraint-reduced interior-point methods for linear optimization. Optim. Methods Softw. 27(4–5), 801–825 (2012). https://doi.org/10.1080/10556788.2011.589056

  24. Hettich, C.B.S., Merz, C.: UCI repository of machine learning databases (1998). http://www.ics.uci.edu/~mlearn/MLRepository.html. Accessed 2019

  25. Hungerländer, P.: Algorithms for convex quadratic programming. Ph.D. thesis, Alpen-Adria-Universität Klagenfurt (2009)

  26. Jung, J.H., O’Leary, D.P., Tits, A.L.: Adaptive constraint reduction for training support vector machines. Electron. Trans. Numer. Anal. 31, 156–177 (2008)

  27. Jung, J.H., O’Leary, D.P., Tits, A.L.: Adaptive constraint reduction for convex quadratic programming. Comput. Optim. Appl. 51(1), 125–157 (2012)

  28. Laiu, M., Tits, A.: A constraint-reduced MPC algorithm for convex quadratic programming, with a modified active set identification scheme. Comput. Optim. Appl. 72, 727–768 (2019). https://doi.org/10.1007/s10589-019-00058-0

  29. Matoušek, J., Gärtner, B.: Understanding and Using Linear Programming. Springer, New York (2007)

  30. Mayne, D.Q., Polak, E.: Feasible directions algorithms for optimization problems with equality and inequality constraints. Math. Program. 11(1), 67–80 (1976). https://doi.org/10.1007/BF01580371

  31. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)

  32. Melo, T., Matias, J., Monteiro, M.: A penalty methods for solving the MPCC problem. J. Math. Anal. 7(1), 91–101 (2016)

  33. Monteiro, R., Adler, I.: Interior path following primal-dual algorithms. Part II: convex quadratic programming. Math. Program. 44, 43–66 (1989)

  34. Polak, E., He, L.: Unified steerable phase I–phase II method of feasible directions for semi-infinite optimization. J. Optim. Theory Appl. 69(1), 83–107 (1991). https://doi.org/10.1007/BF00940462

  35. Polak, E., Tits, A.L.: A globally convergent, implementable multiplier method with automatic penalty limitation. Appl. Math. Optim. 6(1), 335–360 (1980). https://doi.org/10.1007/BF01442901

  36. Polak, E., Trahan, R., Mayne, D.Q.: Combined phase I–phase II methods of feasible directions. Math. Program. 17(1), 61–73 (1979). https://doi.org/10.1007/BF01588225

  37. Schölkopf, B., Smola, A.J., Bach, F., et al.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2002)

  38. Stellato, B., Banjac, G., Goulart, P., Bemporad, A., Boyd, S.: OSQP: an operator splitting solver for quadratic programs. Math. Program. Comput. 12(4), 637–672 (2020). https://doi.org/10.1007/s12532-020-00179-2

  39. Tits, A., Wächter, A., Bakhtiari, S., Urban, T., Lawrence, C.: A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J. Optim. 14(1), 173–199 (2003). https://doi.org/10.1137/S1052623401392123

  40. Winternitz, L.: Primal-dual interior-point algorithms for linear programming problems with many inequality constraints. Ph.D. thesis, University of Maryland (2010). http://hdl.handle.net/1903/10400

  41. Wright, S.: On the convergence of the Newton/log-barrier method. Math. Program. 90, 71–100 (2001). https://doi.org/10.1007/PL00011421

  42. Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)

  43. Ye, Y.: On homogeneous and self-dual algorithms for LCP. Math. Program. 76, 211–221 (1996)

  44. Ye, Y., Todd, M., Mizuno, S.: An \({O}(\sqrt{n}{L})\)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res. 19, 53–67 (1994)

Author information

Corresponding author

Correspondence to M. Paul Laiu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This manuscript has been authored, in part, by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

M. Paul Laiu’s research was sponsored by the Office of Advanced Scientific Computing Research and performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725.

Appendices

Two algorithms considered in this paper are presented here for the sake of completeness. Some notation used in the appendix is “local”, i.e., only applies to this appendix; indeed, some of the same symbols are used in the main body of the paper for unrelated quantities.

A Feasible-start algorithm CR-MPC

In this appendix, we present a brief description of Algorithm CR-MPC of [28], as applied to a generic inequality-constrained CQP of the form (6). We refer the reader to [28] for the algorithm in full detail.

For ease of reference, we restate the generic inequality-constrained CQP (6) here:

$$\begin{aligned} \mathop {\mathrm{minimize}}\limits _{x} \, f(x):=\frac{1}{2}x^THx+c^Tx \text{ s.t. } \, Ax \ge b. \end{aligned}$$
(58)

Algorithm CR-MPC is a feasible-start, descent CQP variant of Mehrotra's predictor–corrector algorithm [31]; computational efficiency on imbalanced problems is enhanced by solving only an approximate Newton-KKT system for the search directions ("constraint reduction"). Specifically, the Newton-KKT system is approximated by including only a selected subset of the constraints, chosen so as to estimate the set of constraints active at a solution to (58). In addition, an adaptive positive-definite regularization W of the original Hessian H is substituted into that system.

Let Q denote the index set of the selected constraints. The affine-scaling search direction is then given by the solution of the approximate Newton-KKT system for (58)

$$\begin{aligned} \begin{bmatrix} W &{} -A_Q^T &{} \mathbf{0}\\ A_Q &{} \mathbf{0} &{} -I\\ \mathbf{0} &{} S_Q &{} \Lambda _Q \end{bmatrix} \begin{bmatrix} \Delta x^{\mathrm{a}}\\ \Delta \lambda _Q^{\mathrm{a}}\\ \Delta s_Q^{\mathrm{a}} \end{bmatrix} = \begin{bmatrix} -\nabla f(x)+A_Q^T\lambda _Q \\ \mathbf{0}\\ -S_Q\lambda _Q \end{bmatrix}, \end{aligned}$$
(59)

and the corrector direction is computed by solving

$$\begin{aligned} \begin{bmatrix} W &{} -A_Q^T &{} \mathbf{0}\\ A_Q &{} \mathbf{0} &{} -I\\ \mathbf{0} &{} S_Q &{} \Lambda _Q \end{bmatrix} \begin{bmatrix} \Delta x^{\mathrm{c}}\\ \Delta \lambda _Q^{\mathrm{c}}\\ \Delta s_Q^{\mathrm{c}} \end{bmatrix} = \begin{bmatrix} \mathbf{0}\\ \mathbf{0}\\ \sigma \mu _{(Q)}\mathbf {1}-\Delta S_Q^{\mathrm{a}}\Delta \lambda _Q^{\mathrm{a}} \end{bmatrix} \end{aligned}$$
(60)

where \(\mu _{(Q)}:=\begin{cases} s_Q^T\lambda _Q/|Q|, &{} \text {if } Q\ne \emptyset \\ 0, &{} \text {otherwise} \end{cases}\) and \(\sigma =(1-\alpha ^{\mathrm{a}})^3\), with \(\alpha ^{\mathrm{a}}\) the maximum feasible step along the affine-scaling direction. A pseudocode for one iteration of Algorithm CR-MPC is given in Pseudocode 5; it can serve as base_iteration() in Meta-Algorithm IS after minimal modifications that tailor it to penalized relaxed problems of the form (P\(_\varphi \)).

[Pseudocode 5: Iteration CR-MPC (two-part listing omitted; see [28])]
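For concreteness, the following is a minimal sketch, ours rather than the implementation of [28], of how systems (59) and (60) might be solved for the predictor and corrector directions. It assembles the full block matrix with dense NumPy arrays, whereas the actual algorithm solves a reduced system and includes safeguards (and the final mixing of the two directions) omitted here; the function name crmpc_directions is hypothetical, and grad_f stands for \(\nabla f(x)=Hx+c\).

```python
import numpy as np

def crmpc_directions(W, A_Q, grad_f, lam_Q, s_Q):
    """Solve (59) for the affine-scaling direction, then (60) for the corrector."""
    m, n = A_Q.shape
    # Full (unreduced) Newton-KKT block matrix shared by (59) and (60).
    K = np.block([
        [W,                -A_Q.T,           np.zeros((n, m))],
        [A_Q,              np.zeros((m, m)), -np.eye(m)],
        [np.zeros((m, n)), np.diag(s_Q),     np.diag(lam_Q)],
    ])
    # Affine-scaling (predictor) right-hand side, as in (59).
    rhs_a = np.concatenate([-grad_f + A_Q.T @ lam_Q, np.zeros(m), -s_Q * lam_Q])
    dx_a, dlam_a, ds_a = np.split(np.linalg.solve(K, rhs_a), [n, n + m])

    # Maximum feasible step alpha_a keeping (lam_Q, s_Q) nonnegative.
    steps = np.concatenate([-lam_Q[dlam_a < 0] / dlam_a[dlam_a < 0],
                            -s_Q[ds_a < 0] / ds_a[ds_a < 0]])
    alpha_a = min(1.0, steps.min()) if steps.size else 1.0

    sigma = (1.0 - alpha_a) ** 3            # centering parameter
    mu_Q = s_Q @ lam_Q / m if m > 0 else 0.0  # mu_(Q) as defined above

    # Corrector right-hand side, as in (60).
    rhs_c = np.concatenate([np.zeros(n), np.zeros(m),
                            sigma * mu_Q * np.ones(m) - ds_a * dlam_a])
    dx_c, dlam_c, ds_c = np.split(np.linalg.solve(K, rhs_c), [n, n + m])
    return (dx_a, dlam_a, ds_a), (dx_c, dlam_c, ds_c), alpha_a, sigma
```

Solving both systems with the same matrix K is the point of the predictor–corrector scheme: the second solve reuses the factorization and costs little beyond the first.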

B Revised primal simplex (RPS) algorithm

In this appendix, we give a brief description of the revised primal simplex method, which is considered in Sect. 6.3.1 in the comparison with the CR-MPC method discussed in Appendix A, as well as in Sect. 7 as a base iteration in the IS framework. We consider the RPS variant that handles linear optimization problems in primal standard form

$$\begin{aligned} \mathop {\mathrm{minimize}}\limits _{x} \, c^Tx \text{ s.t. } \, Cx = d,\, x\ge \mathbf{0}. \end{aligned}$$
(61)

The RPS method considered here starts from an initial primal BFS for (61) and, at each iteration, reduces the objective value by moving to an adjacent primal BFS with a lower value. Specifically, each RPS iteration forms a descent search direction by adding an entering column to the basis associated with the current BFS, then takes a step along that direction until one of the basic entries reaches zero; the index of that entry is identified as the leaving column and is removed from the basis. Pseudocode 6 presents the main steps of an RPS iteration, which can be used as iteration_RPS() in the wrapped RPS base iteration in Pseudocode 4 by incorporating the penalty parameter \(\varphi \) into the input.
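As a complement to Pseudocode 6 (not reproduced here), the following is a minimal sketch, ours and under simplifying assumptions, of one such iteration: dense data, Dantzig pricing, explicit basis inversion (real implementations update a factorization instead), and no anti-cycling rule. The function name rps_iteration and the return conventions are hypothetical.

```python
import numpy as np

def rps_iteration(C, c, x, B):
    """One revised-primal-simplex step from the BFS x with basis (column index list) B."""
    n = C.shape[1]
    Binv = np.linalg.inv(C[:, B])      # sketch only; real codes maintain a factorization
    y = Binv.T @ c[B]                  # simplex multipliers (dual estimate)
    reduced = c - C.T @ y              # reduced costs (zero on basic columns)
    nonbasic = [k for k in range(n) if k not in B]
    j = min(nonbasic, key=lambda k: reduced[k]) if nonbasic else None
    if j is None or reduced[j] >= 0:
        return ("optimal", x, B)       # no improving entering column: BFS is optimal
    dx_B = -Binv @ C[:, j]             # change in basic variables per unit increase of x_j
    if np.all(dx_B >= 0):
        return ("unbounded", (j, dx_B), B)  # descent ray: objective unbounded below
    # Ratio test: largest t keeping x_B + t*dx_B >= 0 determines the leaving column.
    mask = dx_B < 0
    ratios = -x[np.array(B)][mask] / dx_B[mask]
    t = ratios.min()
    leave = np.where(mask)[0][ratios.argmin()]
    x = x.copy()
    x[np.array(B)] += t * dx_B         # move the basic variables along the direction
    x[j] = t                           # entering variable takes the step length
    B = B[:leave] + [j] + B[leave + 1:]  # swap the leaving basic index for the entering one
    return ("step", x, B)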

About this article

Cite this article

Laiu, M.P., Tits, A.L. An infeasible-start framework for convex quadratic optimization, with application to constraint-reduced interior-point and other methods. Math. Program. 195, 327–366 (2022). https://doi.org/10.1007/s10107-021-01692-5
