Abstract
We introduce a feasible corrector–predictor interior-point algorithm (CP IPA) for solving linear optimization problems which is based on a new search direction. The search directions are obtained by using the algebraic equivalent transformation (AET) of the Newton system which defines the central path. The AET of the Newton system is based on the map that is the difference of the identity function and the square root function. We prove global convergence of the method and derive an iteration bound that matches the best iteration bounds known for these types of methods. Furthermore, we demonstrate the practical efficiency of the new algorithm by presenting numerical results. This is the first CP IPA which is based on the above-mentioned search direction.
1 Introduction
Karmarkar (1984) presented the first projective IPA for solving LO problems with polynomial-time complexity. Since then, numerous results related to this theory have been published. The theory and practice of IPAs can be found in the monographs of Roos et al. (1997), Wright (1997), Ye (1997) and Nesterov and Nemirovski (1994). The IPAs for LO can be classified in multiple ways. One way to classify these algorithms is based on the step length; in this way, we can distinguish between short- and long-step IPAs. In theory, short-step algorithms usually yield better complexity results with simpler analysis, while in practice the long-step versions generally perform better. Another way to categorize IPAs is related to the feasibility of the iterates; hence, we can consider feasible and infeasible IPAs. Another type of IPA that has proven to be efficient in practice is the predictor–corrector IPA. These algorithms consist of iterations using two types of steps: one predictor step and one or more corrector steps. For further details on the classification of different IPAs see Illés and Terlaky (2002), Roos et al. (1997), Wright (1997) and Ye (1997).
The determination of the search directions plays a key role in the theory of IPAs. The most widely used technique for obtaining the search directions is based on barrier functions. By considering self-regular functions, Peng et al. (2002) reduced the theoretical complexity of large-step IPAs. Darvay (2002) introduced a new technique for finding search directions for these algorithms, namely the algebraic equivalent transformation of the system which defines the central path. The central path was introduced independently by Sonnevend (1985, 1986) and by Megiddo (1989). The importance of the central path in the literature of IPAs is highlighted by the fact that it is unique, see Roos et al. (1997), Terlaky (2001), Wright (1997) and Ye (1997). Sonnevend (1985, 1986) proved that the central path leads to a unique optimal solution called the analytic center. The general idea of an IPA is to approximately trace the central path and to compute an interior point, called an \(\varepsilon \)-optimal solution, that well approximates the analytic center. From an \(\varepsilon \)-optimal solution with small enough \(\varepsilon > 0\), using the so-called rounding procedure (Illés and Terlaky 2002; Roos et al. 1997), an optimal solution can be computed in strongly polynomial time.
The function of the AET is applied to both sides of the nonlinear equation of the system that defines the unique central path. After the transformation the central path remains unique, and Newton’s method is applied to the transformed system in order to determine the displacements. In the literature, the most widely used function for the AET is the identity map; that is, in most IPAs the central path is not transformed. In the papers of Darvay (2002, 2003) \(\psi (t)=\sqrt{t}\) is used, while in Darvay et al. (2016) the authors introduced an IPA for LO based on the direction given by a new function, namely \(\psi (t)=t-\sqrt{t}\), where the domain is \(D_{\psi }=\left( \frac{1}{4},\infty \right) .\) IPAs based on the AET (Darvay 2002, 2003; Darvay et al. 2016) usually achieve the best known iteration bounds, alongside many other IPAs (Roos et al. 1997; Wright 1997; Ye 1997). Further research is needed to investigate whether IPAs that use AETs are more efficient than those that do not. It would also be interesting to identify a class of LO problems for which the application of AET-based IPAs may be beneficial.
The first PC IPA was developed independently by Mehrotra (1992) and Sonnevend et al. (1990). PC IPAs consist of a predictor and several corrector steps in a main iteration. The aim of the predictor step is to approach the optimal solution of the problem in a greedy way. The usual consequence of the greedy predictor step is that the obtained strictly feasible solution no longer belongs to the given neighborhood of the central path. The goal of the corrector steps is to return the iterate to the designated neighborhood. Mizuno et al. (1993) proposed the first PC IPA which uses only one corrector step in a main iteration. Darvay (2005, 2009) introduced PC IPAs for LO that are based on the AET technique; he used the function \(\psi (t)=\sqrt{t}\) with domain \(D_{\psi }=\left( 0, \infty \right) \) in order to determine the transformed central path and the modified Newton system. The unique solution of this system led to a new search direction. Kheirfam (2015, 2016) proposed CP IPAs for convex quadratic symmetric cone optimization and second-order cone optimization, respectively.
Before summarizing the structure and results of this paper, it is worthwhile to mention that IPAs for LO have been extensively generalized to linear complementary problems (LCPs) (Cottle et al. 1992; Illés et al. 2010a, b; Kojima et al. 1991; Lešaja and Roos 2010; Potra and Sheng 1996; Yoshise 1996). There are several generalizations of the Mizuno-Todd-Ye PC IPA (Mizuno et al. 1993) from LO to sufficient LCPs like that of Potra (2002), and Illés and Nagy (2007). Recently, Potra (2014) published a new PC IPA for sufficient LCPs using wide neighborhood with optimal iteration complexity. The AET method for determining search directions for IPAs has been also extended to LCPs (Achache 2010; Asadi and Mansouri 2013; Kheirfam 2014; Wang et al. 2009) and LCPs over symmetric cones (Asadi et al. 2017a, b; Mohammadi et al. 2015; Wang 2012).
In this paper, a new CP IPA for LO is introduced. We use the AET method for the system which defines the central path, based on the function \(\psi (t)=t-\sqrt{t}\). Newton’s method is then applied to the transformed system in order to find the search directions. The analysis of the algorithm is more complicated with this function. Nevertheless, we were able to prove global convergence of the method and derive an iteration bound that matches the best known iteration bound for these types of methods. We also present some numerical results and compare our CP IPA with the classical primal-dual method, which is based on the same search direction and uses only one step in each iteration.
The paper is organized as follows. In Sect. 2, the primal-dual LO problem and the main concepts of the AET of the system defining the central path are given. In the following section the new CP IPA is presented. Section 4 contains the analysis of the proposed algorithm, while in Sect. 5 the iteration bound for the algorithm is derived. In Sect. 6, we provide numerical results that demonstrate the efficiency of this algorithm. In the last section some concluding remarks are provided.
2 Preliminaries
Consider the LO problem in the standard form
$$ \min \left\{ c^Tx :\; Ax=b,\; x\ge 0 \right\} , \qquad \mathrm{(P)} $$
and its dual problem
$$ \max \left\{ b^Ty :\; A^Ty+s=c,\; s\ge 0 \right\} , \qquad \mathrm{(D)} $$
where \(A\in {R}^{m\times n}\) with \(\mathrm{rank}(A)=m\), \(b\in R^m\) and \(c\in R^n\). We assume that the interior-point condition (IPC) holds for both problems; that is, there exists \((x^0, y^0, s^0)\) such that
$$ Ax^0=b,\quad x^0>0,\qquad A^Ty^0+s^0=c,\quad s^0>0. $$
Using the self-dual embedding model presented by Ye et al. (1994), Roos et al. (1997) and Terlaky (2001), we conclude that the IPC can be assumed without loss of generality. In this case, the all-one vector can be taken as a starting point.
Under the IPC, finding an optimal solution of the primal-dual pair is equivalent to solving the following system:
$$ Ax=b,\quad x\ge 0,\qquad A^Ty+s=c,\quad s\ge 0,\qquad xs=0. \qquad (1) $$
The main idea of primal-dual IPAs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the perturbed equation \(xs=\mu e\) with \(\mu >0\). Hence, we obtain the following system of equations:
$$ Ax=b,\quad x\ge 0,\qquad A^Ty+s=c,\quad s\ge 0,\qquad xs=\mu e. \qquad (2) $$
It is proved in Roos et al. (1997) that the system (2) has a unique solution \((x(\mu ), y(\mu ), s(\mu ))\) for any \(\mu >0\), assuming the IPC holds. The set of all such solutions forms a homotopy path, which is called the central path (see Megiddo 1989; Sonnevend 1986). As \(\mu \) tends to zero, the central path converges to an optimal solution of the problem.
In what follows, we recall the AET introduced by Darvay et al. (2016) for LO, which leads to the calculation of new search directions for IPAs. For this purpose, we consider the continuously differentiable function \(\psi : \mathbb {R}_+\rightarrow \mathbb {R}_+\), and assume that its inverse \(\psi ^{-1}\) exists. Note that the system (2) can be rewritten in the following form:
$$ Ax=b,\quad x\ge 0,\qquad A^Ty+s=c,\quad s\ge 0,\qquad \psi \left( \frac{xs}{\mu }\right) =\psi (e), \qquad (3) $$
where \(\psi \) is applied componentwise. Applying Newton’s method to system (3) at a strictly feasible solution (x, y, s) produces the following system for the search direction \((\varDelta x, \varDelta y, \varDelta s)\):
$$ A\varDelta x=0,\qquad A^T\varDelta y+\varDelta s=0,\qquad s\varDelta x+x\varDelta s=\mu \,\frac{\psi (e)-\psi \left( \frac{xs}{\mu }\right) }{\psi ^{'}\left( \frac{xs}{\mu }\right) }. \qquad (4) $$
Let
$$ v=\sqrt{\frac{xs}{\mu }}. \qquad (5) $$
Defining the scaled search directions as
$$ d_x=\frac{v\varDelta x}{x}, \qquad d_s=\frac{v\varDelta s}{s}, $$
one easily verifies that system (4) can be written in the form
$$ {\bar{A}} d_x=0,\qquad {\bar{A}}^T\frac{\varDelta y}{\mu }+d_s=0,\qquad d_x+d_s=p_v, \qquad (6) $$
where \({\bar{A}}:=A \, \mathrm{diag}(\frac{x}{v})\) and \(p_v:=\frac{\psi (e)-\psi (v^2)}{v\psi ^{'}(v^2)}.\) For different functions \(\psi \) (see Darvay 2002, 2003; Darvay et al. 2016; Roos et al. 1997), one gets different values of the vector \(p_v\), which lead to different search directions. Following the idea of Darvay et al. (2016), we take \(\psi (t)=t-\sqrt{t}\), which gives
$$ p_v=\frac{2\left( v-v^2\right) }{2v-e}. \qquad (7) $$
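To make the role of \(p_v\) concrete, the following numerical sketch (ours, not part of the paper's implementation) evaluates the generic formula \(p_v=\frac{\psi (e)-\psi (v^2)}{v\psi ^{'}(v^2)}\) componentwise and checks it against closed forms for the three choices of \(\psi \) discussed here; the closed form for \(\psi (t)=t-\sqrt{t}\) is our own derivation.

```python
import numpy as np

def p_vector(v, psi, dpsi):
    """Componentwise p_v = (psi(e) - psi(v^2)) / (v * psi'(v^2))."""
    return (psi(1.0) - psi(v**2)) / (v * dpsi(v**2))

v = np.array([0.8, 1.0, 1.3])   # sample scaled iterate with v > 1/2

# psi(t) = t            ->  p_v = v^{-1} - v
p_id = p_vector(v, lambda t: t, lambda t: np.ones_like(t))

# psi(t) = sqrt(t)      ->  p_v = 2(e - v)
p_sqrt = p_vector(v, np.sqrt, lambda t: 0.5 / np.sqrt(t))

# psi(t) = t - sqrt(t)  ->  p_v = 2(v - v^2)/(2v - e)   (our derivation)
p_new = p_vector(v, lambda t: t - np.sqrt(t), lambda t: 1.0 - 0.5 / np.sqrt(t))
```

Note that at \(v=e\) all three choices give \(p_v=0\), so the Newton step vanishes exactly on the central path.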
For the analysis of our algorithm, we define a norm-based proximity measure \(\delta (x, s; \mu )\) as follows:
$$ \delta (x, s; \mu ):=\delta (v)=\frac{1}{2}\Vert p_v\Vert , \qquad (8) $$
which was first considered for feasible IPAs in Darvay et al. (2016). Considering (6), we have \( d_x^T d_s=0\); thus, the vectors \( d_x\) and \( d_s\) are orthogonal. Using (8) and \(v>0\), one can easily verify that
$$ \delta (v)=0 \;\Longleftrightarrow \; v=e \;\Longleftrightarrow \; xs=\mu e. $$
Hence, the value of \(\delta (v)\) can be considered as an appropriate measure for the distance between the given triple (x, y, s) and \((x(\mu ), y(\mu ), s(\mu ))\). Moreover, note that if \(\psi (t)=t\) then \(p_v=v^{-1}-v\) and we obtain the standard proximity measure \(\delta (v)=\frac{1}{2}\Vert v-v^{-1}\Vert \) given in Roos et al. (1997). From \(\psi (t)=\sqrt{t}\) it follows that \(p_v=2(e-v)\), thus \(\delta (v)=\Vert e-v\Vert \), which was discussed in Darvay (2002, 2003). Let \(q_v=d_x-d_s.\)
Then, the orthogonality of the vectors \( d_x\) and \( d_s\) implies \(\Vert q_v\Vert =\Vert p_v\Vert .\)
As a consequence of this relation, we can also express the proximity measure using \(q_v\), thus \(\delta (v)=\frac{1}{2}\Vert q_v\Vert .\)
Furthermore,
$$ d_x=\frac{p_v+q_v}{2} \quad \text{and}\quad d_s=\frac{p_v-q_v}{2}, $$
thus
$$ d_xd_s=\frac{p_v^2-q_v^2}{4} $$
holds.
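The product \(d_xd_s\) can then be bounded directly in terms of the proximity measure; a short derivation (ours, using only \(d_x+d_s=p_v\), \(q_v=d_x-d_s\), the orthogonality \(d_x^Td_s=0\) and the elementary inequality \(\Vert u^2\Vert \le \Vert u\Vert ^2\)):
$$ \Vert d_xd_s\Vert =\frac{1}{4}\left\Vert p_v^2-q_v^2\right\Vert \le \frac{1}{4}\left( \Vert p_v\Vert ^2+\Vert q_v\Vert ^2\right) =2\,\delta (v)^2, $$
since \(\Vert q_v\Vert =\Vert p_v\Vert =2\delta (v)\) follows from \(4\,d_x^Td_s=\Vert p_v\Vert ^2-\Vert q_v\Vert ^2=0\).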
The lower and upper bounds on the components of the vector v are given in the following lemma.
Lemma 1
[cf. Lemma 2 in Kheirfam (2018)] If \(\delta :=\delta (v)\), then
$$ \frac{1}{2}+\frac{1}{4\rho (\delta )}\;\le \; v_i\;\le \;\frac{1}{2}+\rho (\delta ),\qquad i=1,\ldots ,n, $$
where \(\rho (\delta )=\delta +\sqrt{\frac{1}{4}+\delta ^2}\).
3 Corrector–predictor algorithm
In this section, we present a CP path-following algorithm for LO problems based on the idea of Darvay et al. (2016). For this purpose, we define a \(\tau \)-neighborhood of the central path as follows:
$$ {{\mathcal {N}}}(\tau )=\left\{ (x, y, s) :\; Ax=b,\; A^Ty+s=c,\; x>0,\; s>0,\; \delta (x, s; \mu )\le \tau \right\} , $$
where \(0< \tau < 1\). The algorithm begins with a given strictly feasible primal-dual solution \((x^0, y^0,s^0) \in {{\mathcal {N}}}(\tau )\). If for the current iterate (x, y, s) we have \(n\mu >\epsilon \), then the algorithm calculates a new iterate by performing corrector and predictor steps. In the corrector step, we define \(v=\sqrt{\frac{xs}{\mu }}\), \({{\bar{A}}}=A \, \mathrm{diag}(\frac{x}{v})\) and we obtain the scaled search directions \(d_x\) and \(d_s\) by solving (6) with \(p_v\) given in (7), namely
$$ {\bar{A}} d_x=0,\qquad {\bar{A}}^T\frac{\varDelta y}{\mu }+d_s=0,\qquad d_x+d_s=\frac{2\left( v-v^2\right) }{2v-e}. \qquad (11) $$
The Newton directions of the original system (4), i.e., \(\varDelta x=\frac{x}{v}d_x\) and \(\varDelta s=\frac{s}{v}d_s\), can be expressed easily, and the corrector iterate is obtained by a full-Newton step as follows:
$$ x^+=x+\varDelta x,\qquad y^+=y+\varDelta y,\qquad s^+=s+\varDelta s. $$
In the predictor step, we define
$$ v^+=\sqrt{\frac{x^+s^+}{\mu }} \quad \text{and}\quad {\bar{A}}^+=A \, \mathrm{diag}\left( \frac{x^+}{v^+}\right) , $$
and we obtain the search directions \(d^p_x\) and \(d^p_s\) by solving the following system:
Note that the right-hand side of the system (12) is inspired by the predictor step proposed in Darvay (2005). Similarly to \(\varDelta x\) and \(\varDelta s\), we define \(\varDelta ^px=\frac{x^+}{v^+}d^p_x, \varDelta ^ps=\frac{s^+}{v^+}d^p_s\) and the predictor iterate is obtained by
where \(\theta \in (0, \frac{1}{2})\) and \(\mu ^p=(1-2\theta )\mu \). At the beginning of the algorithm, we assume that \((x^0, y^0,s^0) \in {{\mathcal {N}}}(\tau )\). We would like to determine the values of \(\tau \) and \(\theta \) in such a way that after a corrector step \((x^+,y^+,s^+) \in {{\mathcal {N}}}(\omega (\tau ))\) (where \(\omega (\tau ) < \tau \) will be defined later) and after a predictor step \((x^p,y^p,s^p) \in {{\mathcal {N}}}(\tau )\). The algorithm repeats corrector and predictor steps alternately until \(x^Ts\le \epsilon \) is satisfied. A formal description of the algorithm is given in Fig. 1.
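A corrector–predictor loop of this type can be sketched numerically as follows. This is our own minimal sketch, not the algorithm of Fig. 1: the corrector is the full Newton step for the AET with \(\psi (t)=t-\sqrt{t}\) (where the transformed complementarity equation gives \(s\varDelta x+x\varDelta s=\mu v p_v\)), while the predictor right-hand side \(-2xs\) is our guess, chosen only so that the duality gap contracts by the factor \(1-2\theta \) stated in the text; it is not claimed to reproduce system (12). The tiny perfectly centered test instance is also our construction.

```python
import numpy as np

def solve_newton(A, x, s, rhs3):
    """Solve  A dx = 0,  A^T dy + ds = 0,  s*dx + x*ds = rhs3."""
    m, n = A.shape
    K = np.zeros((m + 2 * n, m + 2 * n))
    K[:m, :n] = A                       # primal feasibility is preserved
    K[m:m + n, n:n + m] = A.T           # dual feasibility is preserved
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)          # linearized complementarity
    K[m + n:, n + m:] = np.diag(x)
    d = np.linalg.solve(K, np.concatenate([np.zeros(m + n), rhs3]))
    return d[:n], d[n:n + m], d[n + m:]

def cp_ipa(A, x, y, s, theta, eps=1e-8):
    mu = x @ s / len(x)
    while x @ s > eps:
        # corrector: full Newton step with psi(t) = t - sqrt(t)
        v = np.sqrt(x * s / mu)
        p_v = 2.0 * (v - v**2) / (2.0 * v - 1.0)
        dx, dy, ds = solve_newton(A, x, s, mu * v * p_v)
        x, y, s = x + dx, y + dy, s + ds
        # predictor: greedy damped step; rhs -2xs is OUR assumption, chosen
        # so that x^T s contracts by exactly (1 - 2*theta), matching mu^p
        dx, dy, ds = solve_newton(A, x, s, -2.0 * x * s)
        x, y, s = x + theta * dx, y + theta * dy, s + theta * ds
        mu *= 1.0 - 2.0 * theta
    return x, y, s

# tiny strictly feasible instance, started exactly on the central path
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
x0, y0, s0 = np.ones(3), np.zeros(2), np.ones(3)
b, c = A @ x0, A.T @ y0 + s0
x, y, s = cp_ipa(A, x0, y0, s0, theta=1.0 / (5.0 * np.sqrt(3.0)))
```

With \(\theta =\frac{1}{5\sqrt{n}}\) the gap shrinks geometrically, illustrating the \(O(\sqrt{n})\) dependence of the iteration count.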
4 Analysis of the algorithm
The following technical lemma, introduced by Wright (1997), is a generalization of Lemma C.4 (the first \(u{-}v\) lemma) in Roos et al. (1997). We will use it to estimate the norm of the product of the scaled search directions.
Lemma 2
[Lemma 5.3 in Wright (1997)] Let u and v be two arbitrary vectors in \(\mathbb {R}^n\) with \(u^Tv\ge 0\). Then
$$ \Vert uv\Vert \le 2^{-\frac{3}{2}}\Vert u+v\Vert ^2. $$
In the following two subsections, we analyse the predictor and the corrector steps in detail. Note that the first step performed by the algorithm is in fact a corrector step.
4.1 The predictor step
The next lemma provides a sufficient condition for strict feasibility after a predictor step.
Lemma 3
Let \((x^+, y^+, s^+)\) be a strictly feasible primal-dual solution obtained after a corrector step and \(\mu >0\). Furthermore, let \(0<\theta <\frac{1}{2}\), and
denote the iterates after a predictor step. Then \((x^p, y^p, s^p)\) is a strictly feasible primal-dual solution if
where \(\delta _+:=\delta (x^+, s^+; \mu ).\)
Proof
For each \(0\le \alpha \le 1\), denote \(x^p(\alpha )=x^++\alpha \theta \varDelta ^px\) and \(s^p(\alpha )=s^++\alpha \theta \varDelta ^ps.\) Therefore, using the third equation in (12), we obtain
From (13), it follows that
The second inequality is due to the fact that \(f(\alpha ):=\frac{\alpha ^2\theta ^2}{1-2\alpha \theta }\) is monotonically increasing with respect to \(\alpha \); that is, \(f(\alpha )\le f(1)\). The third inequality follows from Lemmas 1 and 2. The second equality can be derived from the third equation of (12). The inequality before the last line follows from the upper bound given in Lemma 1.
The above inequality implies that \(x^p(\alpha )s^p(\alpha )>0\) for all \(0\le \alpha \le 1\). Therefore, \(x^p(\alpha )\) and \(s^p(\alpha )\) do not change sign on \(0\le \alpha \le 1\). Since \(x^p(0)=x^+>0\) and \(s^p(0)=s^+>0\), we conclude that \(x^p(1)=x^++\theta \varDelta ^px=x^p>0\) and \(s^p(1)=s^++\theta \varDelta ^ps=s^p>0\), and the proof is complete. \(\square \)
We define
It follows from (13), with \(\alpha =1\), that
and
Lemma 4
Let \((x^+, y^+, s^+)\) be a strictly feasible primal-dual solution and \(\mu ^p=(1-2\theta )\mu \) with \(0<\theta <\frac{1}{2}\). Moreover, let \(h(\delta _+, \theta , n)> \frac{1}{4}\) and assume that \((x^p, y^p, s^p)\) denotes the iterate after a predictor step. Then \(v^p > \frac{1}{2} e\) and
Proof
Since \(h(\delta _+, \theta , n)>\frac{1}{4}\), from (16) we have \(\min \big (v^p\big )^2> \frac{1}{4}\), which yields \(v^p>\frac{1}{2}e\). Moreover, from Lemma 3 we deduce that the predictor step is strictly feasible; that is, \(x^p>0\) and \(s^p>0\). Now, by the definition of the proximity measure, we have
where the first two inequalities are due to Lemma 5.2 in Darvay et al. (2016) and (16), respectively. The last equality follows from (15) and the last inequality is due to the triangle inequality.
We will give an upper bound for \(\big \Vert e-\left( v^+\right) ^2 \big \Vert \). Using the definition of \(v^{+}=\sqrt{\frac{x^+ s^+}{\mu }}\) and Eq. (10), we have
Moreover, the third equation of system (11) yields
Using (18), (19) and the fact that \(\Vert x^2\Vert \le \Vert x\Vert ^2\), we have
Thus, using (20) and Lemmas 1 and 2, we obtain
Substitution of this bound into (17) yields the desired inequality. \(\square \)
4.2 The corrector step
In this subsection, we deal with the corrector step. One can observe that the algorithm presented in Fig. 1 performs a full-Newton step as a corrector step, which can be obtained in the same way as in the primal-dual algorithm of Darvay et al. (2016). Thus, for the analysis of this case the lemmas proved in Darvay et al. (2016) can be applied. The next lemma gives a condition for the strict feasibility of the full-Newton step.
Lemma 5
[Lemma 5.1 in Darvay et al. (2016)] Let \(\delta :=\delta (x, s, \mu )<1\) and assume that \(v\ge \frac{1}{2}e\). Then, \(x^+>0\) and \(s^+>0\).
In the next lemma we show local quadratic convergence of the Newton process.
Lemma 6
[Lemma 5.3 in Darvay et al. (2016)] Suppose that \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Then, \(v^+>\frac{1}{2}e\) and
The following lemma gives an upper bound of the duality gap after a corrector step.
Lemma 7
[Lemma 5.4 in Darvay et al. (2016)] Let \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Then
In the next subsection, we analyse the update of the duality gap after a main iteration of the algorithm.
4.3 The effect on duality gap after a main iteration
The purpose of the algorithm is to reduce the duality gap of the primal-dual pair. Therefore, in order to measure this reduction, in the next lemma we give an upper bound for the gap obtained after performing a main iteration of the algorithm.
Lemma 8
Suppose that \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Moreover, let \(0<\theta <\frac{1}{2}\). Then
Proof
Using (14) and (15), we obtain
where the inequality is due to Lemma 7 and \((d^p_x)^Td^p_s=0\). This completes the proof. \(\square \)
4.4 Determining appropriate values of parameters
In this subsection, we fix suitable values of the parameters \(\tau \) and \(\theta \), which guarantee that after a main iteration the proximity measure does not exceed \(\tau \).
Let \((x, y, s) \in {{\mathcal {N}}}(\tau )\) be the iterate at the start of a main iteration with \(x>0\) and \(s>0\) such that \(\delta =\delta (x, s; \mu )\le \tau <\frac{1}{2}\). After a corrector step, by Lemma 6, we have
It is obvious that the right-hand side of the above inequality is monotonically increasing with respect to \(\delta \), and this implies that
Following the predictor step and the \(\mu \)-update, by Lemma 4, we have
where \(\delta _+\) and \(h(\delta _+, \theta , n)\) are defined as in Lemma 3. It can be easily verified that \(h(\delta _+, \theta , n)\) is decreasing with respect to \(\delta _+\), so \(h(\delta _+, \theta , n)\ge h(\omega (\tau ), \theta , n)\). Let us consider the function \(f(t)=\frac{\sqrt{t}}{2t+\sqrt{t}-1}\), for \(t>\frac{1}{4}\). From \(f^{'}(t)<0\), it follows that f is decreasing, therefore
Using (22), (23), (24) and the fact that \(\rho \) is increasing with respect to \(\delta _+\), we obtain
when \(h(\omega (\tau ), \theta , n)>\frac{1}{4}.\) If we take \(\tau =\frac{1}{4}\) and \(\theta = \frac{1}{5\sqrt{n}}\), then \(\delta ^p < \tau \) and \(h(\delta _+,\theta ,n) > \frac{1}{4}\). It should be mentioned that the iterates after the corrector steps are in the \(\mathcal {N}(\omega (\tau ))\) neighbourhood, while the iterates after the predictor steps are in the \(\mathcal {N}(\tau )\) neighbourhood.
5 Iteration bound
The next lemma gives an upper bound for the number of iterations produced by the algorithm.
Lemma 9
Let \((x^0, y^0, s^0)\) be a strictly feasible primal-dual solution, \(\mu ^0 = \frac{\left( x^0\right) ^Ts^0}{n}\) and \(\delta (x^0, s^0, \mu ^0 )\le \tau \). Moreover, let \(x^k\) and \(s^k\) be the iterates obtained after k iterations. Then, \(\left( x^k\right) ^T s^k \le \epsilon \) for
$$ k\ge \frac{1}{2\theta }\,\log \frac{n\mu ^0}{\epsilon }. $$
Proof
From Lemma 8, it follows that
This means that \((x^k)^Ts^k\le \epsilon \) holds if
If we take logarithms, we get
Since \(\log (1+\xi )\le \xi \), \(\xi >-1\), using \(\xi =-2\theta \), we obtain that the above inequality holds if
This proves the lemma. \(\square \)
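In summary, the chain of estimates in this proof can be condensed into one line (a sketch, assuming only that the duality gap contracts by the factor \(1-2\theta \) in each main iteration):
$$ \left( x^k\right) ^Ts^k\le (1-2\theta )^k\left( x^0\right) ^Ts^0=(1-2\theta )^k n\mu ^0\le \epsilon \quad \Longleftarrow \quad k\log (1-2\theta )+\log (n\mu ^0)\le \log \epsilon , $$
and since \(\log (1-2\theta )\le -2\theta \), the last inequality holds whenever \(k\ge \frac{1}{2\theta }\log \frac{n\mu ^0}{\epsilon }\); with \(\theta =\frac{1}{5\sqrt{n}}\) this gives the \(O\big (\sqrt{n}\,\log \frac{n\mu ^0}{\epsilon }\big )\) bound of Theorem 1.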
The main result of our paper follows from the above lemma.
Theorem 1
Let \(\tau =\frac{1}{4}\) and \(\theta =\frac{1}{5\sqrt{n}}\). Then, the corrector–predictor interior-point algorithm given in Fig. 1 is well defined and requires at most
$$ \left\lceil \frac{5}{2}\sqrt{n}\,\log \frac{n\mu ^0}{\epsilon }\right\rceil $$
iterations. The output is a strictly feasible primal-dual solution (x, y, s) satisfying \(x^Ts\le \epsilon \).
It is worth mentioning that this CP algorithm has an advantage over the one-step method presented in Darvay et al. (2016): in the case of the one-step algorithm \(\theta =\frac{1}{27\sqrt{n}}\), which is smaller than the value of \(\theta \) used by us. Hence, this paper leads to a slightly better complexity result.
6 Numerical results
In order to demonstrate the efficiency of our CP algorithm, we implemented it in the C++ programming language (Darvay and Takó 2012) in such a way that it can be compared with the primal-dual (PD) method proposed in Darvay et al. (2016). To do so, we made some changes in the algorithm. In the case of both algorithms, we used the normalized duality gap \(\frac{x^T s}{n}\) to obtain the value of the barrier parameter \(\mu \) in the next iterate. In the implementation of the PD algorithm, we multiplied the normalized duality gap by \(\sigma = 0.95\) in order to reduce the value of \(\mu \) in each iterate and to maintain the short-step strategy of the method. As is usual in implementations of predictor–corrector algorithms, for our CP variant we applied Mehrotra’s (1992) heuristic to get the value of \(\mu \) for the corrector step. Our CP algorithm performed one corrector and one predictor step in each iteration. After calculating the search direction, we determined the maximal step size to the boundary of the feasible region. We reduced the obtained step by multiplying it by the parameter \(\rho = 0.5\). For both algorithms, we set the value of the accuracy parameter \(\epsilon \) to \(10^{-5}\). We tested both algorithms on two sets of problems. The first set contains randomly generated problems of size at most \(50\times 50\). We generated ten problems of maximum size \(10\times 10\), \(20\times 20\) and \(50\times 50\), respectively. In each case, we calculated the average number of iterations (Avg. It.) and CPU times (Avg. CPU) in seconds. The obtained results are presented in Table 1.
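The step-size computation described above can be sketched as follows (the function name, the cap at step length 1 and the two sample calls are our choices; only the ratio test and the damping factor \(\rho =0.5\) come from the text):

```python
import numpy as np

def max_step_to_boundary(x, dx, rho=0.5):
    """Largest alpha with x + alpha*dx >= 0 (ratio test over the
    decreasing components), damped by rho and capped at a full step."""
    neg = dx < 0
    alpha = np.min(-x[neg] / dx[neg]) if np.any(neg) else np.inf
    return min(1.0, rho * alpha)

a1 = max_step_to_boundary(np.array([1.0, 2.0]), np.array([-2.0, 1.0]))
a2 = max_step_to_boundary(np.array([1.0]), np.array([3.0]))  # no blocking component
```

The ratio test only involves components with \(\varDelta x_i<0\); if the direction has no decreasing component, a full (damped) step is taken.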
The second set of problems on which we tested the algorithms was chosen from the Netlib test collection (Gay 1985). The number of iterations (Nr. It.) and CPU times (CPU) in seconds are summarized in Table 2.
Based on the obtained results for the two sets of problems, we conclude that our CP algorithm outperforms the classical primal-dual method which uses the same search direction.
7 Conclusions and future research
We proposed a CP IPA for LO. We used the AET method for the system which defines the central path, based on the function \(\psi (t)=t-\sqrt{t}\) (Darvay et al. 2016). We then applied Newton’s method to the transformed system in order to get the new search directions. Furthermore, we presented the analysis of the proposed algorithm and proved that the method finds an \(\varepsilon \)-optimal solution in polynomial time. To the best of our knowledge, this is the first CP IPA where the function \(\psi (t)=t-\sqrt{t}\) is used to derive the search directions. The novelty of this paper consists of the techniques used in the analysis of the algorithm. We had to ensure that the components of the v-vectors of the scaled space are greater than \(\frac{1}{2}\). We highlighted the practical efficiency of the method by providing numerical results on a selected set of test problems from NETLIB. We compared the number of iterations performed by our algorithm and the classical one-step method, and concluded that our algorithm is more efficient than the classical one.
Future theoretical research directions include extending our CP IPA to more general problems. The computational direction for further research includes implementation with different functions \(\psi \) used for the AET. We would like to compare the effect of these functions on the computational results of the algorithms on sufficiently large and well-selected LO test problems. Moreover, this kind of computational study might give some insight into a good selection strategy for the target point of the search direction for different LO problems. Ideally, the algorithm would be able to choose automatically among different \(\psi \) functions depending on the structure of the problem. Furthermore, such a computational study on the effect of different AETs could be based on a similar LO test set to the one used in the study of Illés and Nagy (2014) for pivot algorithms.
References
Achache M (2010) Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems. Appl Math Comput 216(7):1889–1895
Asadi S, Mansouri H (2013) Polynomial interior-point algorithm for \({P}_*(\kappa )\) horizontal linear complementarity problems. Numer Algorithms 63(2):385–398
Asadi S, Mansouri H, Darvay Zs (2017a) An infeasible full-NT step IPM for horizontal linear complementarity problem over Cartesian product of symmetric cones. Optimization 66(2):225–250
Asadi S, Zangiabadi M, Mansouri H (2017b) A predictor–corrector interior-point algorithm for \(P_*(\kappa )-HLCPs\) over Cartesian product of symmetric cones. Numer Funct Anal Optim 38:20–38
Cottle RW, Pang J-S, Stone RE (1992) The linear complementarity problem, computer science and scientific computing. Academic Press Inc., Boston
Darvay Zs (2002) A new algorithm for solving self-dual linear optimization problems. Studia Univ Babeş-Bolyai Ser Inform 47(1):15–26
Darvay Zs (2003) New interior-point algorithms in linear programming. Adv Model Optim 5(1):51–92
Darvay Zs (2005) A new predictor–corrector algorithm for linear programming. Alkalm Mat Lapok 22:135–161 (in Hungarian)
Darvay Zs (2009) A predictor–corrector algorithm for linearly constrained convex optimization. Studia Univ Babeş-Bolyai Ser Inform 54(2):121–138
Darvay Zs, Takó I (2012) Computational comparison of primal-dual algorithms based on a new software. Unpublished manuscript
Darvay Zs, Papp IM, Takács PR (2016) Complexity analysis of a full-Newton step interior-point method for linear optimization. Period Math Hung 73(1):27–42
Gay D (1985) Electronic mail distribution of linear programming test problems. Math Program Soc COAL Newsl 3:10–12
Illés T, Nagy M (2007) A new variant of the Mizuno–Todd–Ye predictor–corrector algorithm for sufficient matrix linear complementarity problem. Eur J Oper Res 181(3):1097–1111
Illés T, Nagy A (2014) Computational aspects of simplex and MBU-simplex algorithms using different anti-cycling pivot rules. Optimization 63(1):49–66
Illés T, Terlaky T (2002) Pivot versus interior point methods: pros and cons. Eur J Oper Res 140:6–26
Illés T, Nagy M, Terlaky T (2010a) A polynomial path-following interior point algorithm for general linear complementarity problems. J Glob Optim 47(3):329–342
Illés T, Nagy M, Terlaky T (2010b) Polynomial interior point algorithms for general linear complementarity problems. Algorithmic Oper Res 5:1–12
Karmarkar NK (1984) A new polynomial-time algorithm for linear programming. Combinatorica 4(4):373–395
Kheirfam B (2014) A predictor–corrector interior-point algorithm for \(P_{*}(\kappa )\)-horizontal linear complementarity problem. Numer Algorithms 66(2):349–361
Kheirfam B (2015) A corrector–predictor path-following method for convex quadratic symmetric cone optimization. J Optim Theory Appl 164(1):246–260
Kheirfam B (2016) A corrector–predictor path-following method for second-order cone optimization. Int J Comput Math 93(12):2064–2078
Kheirfam B (2018) An infeasible interior point method for the monotone SDLCP based on a transformation of the central path. J Appl Math Comput 57(1):685–702
Kojima M, Megiddo N, Noma T, Yoshise A (1991) A unified approach to interior point algorithms for linear complementarity problems, vol 538. Lecture notes in computer science. Springer, Berlin
Lešaja G, Roos C (2010) Unified analysis of kernel-based interior-point methods for \(P_*(\kappa )\)-linear complementarity problems. SIAM J Optim 20(6):3014–3039
Mohammadi N, Mansouri H, Zangiabadi M, Asadi S (2015) A full Nesterov–Todd step infeasible-interior-point algorithm for Cartesian \(P_*(\kappa )\) horizontal linear complementarity problem over symmetric cones. Optimization 65:539–565
Megiddo N (1989) Pathways to the optimal set in linear programming. In: Megiddo N (ed) Progress in mathematical programming. Interior-point and related methods. Springer, New York, pp 131–158
Mehrotra S (1992) On the implementation of a primal-dual interior point method. SIAM J Optim 2(4):575–601
Mizuno S, Todd MJ, Ye Y (1993) On adaptive-step primal-dual interior-point algorithms for linear programming. Math Oper Res 18:964–981
Nesterov YE, Nemirovski A (1994) Interior point polynomial methods in convex programming. SIAM studies in applied mathematics. SIAM Publications, Philadelphia
Peng J, Roos C, Terlaky T (2002) Self-regular functions: a new paradigm for primal-dual interior-point methods. Princeton University Press, Princeton
Potra FA (2002) The Mizuno-Todd-Ye algorithm in a larger neighborhood of the central path. Eur J Oper Res 143:257–267
Potra FA (2014) Interior point methods for sufficient LCP in a wide neighborhood of the central path with optimal iteration complexity. SIAM J Optim 24(1):1–28
Potra FA, Sheng R (1996) Predictor–corrector algorithm for solving \(P_*(\kappa )\)-matrix LCP from arbitrary positive starting points. Math Program 76(1):223–244
Roos C, Terlaky T, Vial J-Ph (1997) Theory and algorithms for linear optimization, an interior-point approach. Wiley, Chichester
Sonnevend Gy (1985) A new method for solving a set of linear (convex) inequalities and its applications. Technical report, Department of Numerical Analysis, Institute of Mathematics, Eötvös Loránd University, Budapest
Sonnevend Gy (1986) An ”analytic center” for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa A, Szelezsán J, Strazicky B (eds) System modelling and optimization: proceedings of the 12th IFIP-conference held in Budapest, Hungary, Sept 1985. Lecture notes in control and information sciences, vol 84. Springer, Berlin, pp 866–876
Sonnevend Gy, Stoer J, Zhao G (1990) On the complexity of following the central path by linear extrapolation in linear programming. Methods Oper Res 62:19–31
Terlaky T (2001) An easy way to teach interior-point methods. Eur J Oper Res 130(1):1–19
Wang GQ (2012) A new polynomial interior-point algorithm for the monotone linear complementarity problem over symmetric cones with full NT-steps. Asia-Pac J Oper Res 29(2)
Wang GQ, Yue YJ, Cai XZ (2009) Weighted-path-following interior-point algorithm to monotone mixed linear complementarity problem. Fuzzy Inf Eng 1(4):435–445
Wright SJ (1997) Primal-dual interior-point methods. SIAM, Philadelphia
Ye Y (1997) Interior point algorithms, Wiley-interscience series in discrete mathematics and optimization. Theory and analysis. Wiley, New York
Ye Y, Todd M, Mizuno S (1994) An \(O(\sqrt{n} L)\)-iteration homogeneous and self-dual linear programming algorithm. Math Oper Res 19(1):53–67
Yoshise A (1996) Complementarity problems. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer Academic Publishers, Dordrecht, pp 297–367
Acknowledgements
Open access funding provided by Budapest University of Technology and Economics (BME). This research has been partially supported by the Hungarian Research Fund, OTKA (Grant No. NKFIH 125700). The research of T. Illés and P.R. Rigó has been partially supported by the Higher Education Excellence Program of the Ministry of Human Capacities in the frame of Artificial Intelligence research area of Budapest University of Technology and Economics (BME FIKP-MI/FM). The research of Zs. Darvay and P.R. Rigó was supported by a Grant of Romanian Ministry of Research and Innovation, CNCS - UEFISCDI, Project Number PN-III-P4-ID-PCE-2016-0190, within PNCDI III.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Darvay, Z., Illés, T., Kheirfam, B. et al. A corrector–predictor interior-point method with new search direction for linear optimization. Cent Eur J Oper Res 28, 1123–1140 (2020). https://doi.org/10.1007/s10100-019-00622-3