Abstract
This paper generalizes inverse optimization to multi-objective linear programming, where we look for the least problem modification that makes a given feasible solution a weak efficient solution. This is a natural extension of inverse optimization for single-objective linear programming, with regular "optimality" replaced by "Pareto optimality". This extension, however, leads to a non-convex optimization problem. We prove some special characteristics of the problem that allow us to solve the non-convex problem by solving a series of convex problems.
Keywords: Multi-objective linear programming, Linear programming, Inverse optimization, Efficiency
1. Introduction
Multi-objective linear programming (MOLP) deals with multi-objective optimization problems in which all the objectives and constraints are linear. These problems arise in many fields, including engineering, finance, and medicine [3, 12, 21]. In multi-objective optimization, the optimality concept used in single-objective optimization is replaced with Pareto optimality. A feasible solution is Pareto optimal if it is impossible to improve some objective functions without compromising others.
Inverse optimization (IO), initiated by Burton and Toint [6] in 1992, is a field of optimization dealing with problems in which some of the parameters are not precisely known and only estimates of these parameters are available. Instead, some information about the problem, such as certain solutions or objective-function values, may be available from experience or from experiments. The aim is then to determine values of the parameters consistent with this information. Over the past years, several types of IO problems have been studied [7, 13, 16, 23].
This paper deals with an important type of IO in which we look for the least problem modification that makes a given feasible solution of the problem an optimal solution. In other words, if x̂ is a feasible solution of the problem, how can we modify the problem (for example, by changing the cost function) as little as possible, according to some metric, so that x̂ becomes an optimal solution. This problem has been well studied for different types of single-objective optimization problems including linear programming [1, 2, 22, 23], combinatorial optimization [13], conic programming [14], integer programming [19], and countably infinite linear programming [11]. Roland et al. [18] studied inverse optimization for a special class of multi-objective combinatorial optimization problems where the least modification of the criteria matrix is sought to turn a set of feasible points into a set of Pareto optimal points. They demonstrated that inverse multi-objective combinatorial optimization under some norms can be solved by algorithms based on mixed integer programming. This paper deals with inverse multi-objective linear programming (IMOLP), where the least criteria matrix modification is sought to turn a given feasible solution into a Pareto optimal solution.
For the IO problems studied in this paper, and in all the aforementioned papers, while the given feasible point x̂ could in principle be an interior point of the feasible region, this would turn the IO problem into a trivial one in which the cost function is modified to zero, so that all feasible solutions, including x̂, become optimal. There is a new type of IO, inspired by noisy data appearing in some applications, that has been recently studied by Chan et al. [9], where x̂ could be an interior point of the feasible region. For this type of IO, both the cost vector and the point x̂ are modified to achieve optimality. Chan et al. [7, 8] have also studied a very special multi-objective version of their IO, inspired by an application in radiotherapy cancer treatment, where the criteria matrix remains unchanged and objective weights are sought to turn a given point x̂, which could be an interior or even infeasible point, into a near-optimal solution. This paper deals with the natural extension of the classical IO problem, introduced earlier, to multi-objective programs, where the least perturbation of the criteria matrix is sought to turn a feasible solution into a Pareto optimal solution.
The paper is organized as follows. Section 2 includes some preliminaries. Section 3 reviews inverse linear programming (ILP), and Section 4 generalizes ILP to inverse multi-objective linear programming (IMOLP) and discusses the non-convexity of the problem. Afterwards, some special characteristics of IMOLP are proved, providing the necessary tools to develop an efficient convex-optimization-based algorithm. A simple numerical example and a geometrical interpretation are also provided in Section 4. Finally, Section 5 concludes the paper.
2. Preliminaries
In this paper, row and column vectors are distinguishable from the context. Let ℝⁿ and ℝ^(m×n) denote the set of all real n-vectors and m × n matrices, respectively, and let ai denote the i-th row of the matrix A. For x ∈ ℝⁿ, ‖x‖p denotes the ℓp-norm of the vector x. Consider the following two problems:
(LP(c)) min { cx | x ∈ S },
(MOLP(C)) min { Cx | x ∈ S },
where c ∈ ℝⁿ is a row vector, C ∈ ℝ^(k×n), A ∈ ℝ^(m×n), b ∈ ℝ^m, and S = {x ∈ ℝⁿ | Ax ≥ b} is the feasible region. A feasible point x̂ ∈ S is called weak efficient or weak Pareto optimal for MOLP(C) if there exists no x ∈ S such that Cx < Cx̂. All the orderings in this paper are component-wise. The set of all weak efficient points of MOLP(C) is denoted by Swe(C). The set of all optimal solutions of LP(c) is denoted by So(c).
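To make the definition concrete, the following sketch (our own toy data, not from the paper) checks weak Pareto dominance over a finite sample of feasible points: x̂ fails weak efficiency with respect to the sample if some sampled x satisfies Cx < Cx̂ in every component.

```python
# Sketch: weak Pareto dominance check over a finite candidate set.

def matvec(C, x):
    """Multiply the criteria matrix C (list of rows) by the vector x."""
    return [sum(ci * xi for ci, xi in zip(row, x)) for row in C]

def weakly_dominated(C, x_hat, candidates):
    """True if some candidate strictly improves *all* objectives over x_hat."""
    f_hat = matvec(C, x_hat)
    for x in candidates:
        f = matvec(C, x)
        if all(fi < fhi for fi, fhi in zip(f, f_hat)):
            return True
    return False

# Two objectives: minimize x1 and x2 over a few sample points (toy data).
C = [[1.0, 0.0], [0.0, 1.0]]
pts = [[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]]
# (0.5, 0.5) strictly beats (1, 2) on both objectives:
assert weakly_dominated(C, [1.0, 2.0], pts)
# nothing in the sample strictly beats (0.5, 0.5) on both:
assert not weakly_dominated(C, [0.5, 0.5], pts)
```

This only certifies dominance relative to the sampled points; deciding weak efficiency over all of S is exactly what Theorem 1 and Lemma 2 below reduce to a cone condition.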
For a set Y = {y1, . . . , yr} ⊂ ℝⁿ, cone(Y) and conv(Y) denote the conic and convex hull of Y, defined as:
cone(Y) = {∑_{i=1}^r λi yi | λi ≥ 0}, conv(Y) = {∑_{i=1}^r λi yi | λi ≥ 0, ∑_{i=1}^r λi = 1}.
For a feasible point x̂ ∈ S, let I(x̂) = {i | ai x̂ = bi} be the set of all active constraint indices at x̂. The conic hull of the set of active rows is:
K(x̂) = cone({ai | i ∈ I(x̂)}).
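The active set I(x̂) is straightforward to compute; here is a minimal sketch (the tolerance and the data are our own choices, not the paper's).

```python
# Sketch: compute the active-constraint index set I(x_hat) = {i : a_i x_hat = b_i}
# for S = {x | Ax >= b}, with a small tolerance for floating-point equality.

def active_set(A, b, x_hat, tol=1e-9):
    idx = []
    for i, (row, bi) in enumerate(zip(A, b)):
        lhs = sum(aij * xj for aij, xj in zip(row, x_hat))
        if abs(lhs - bi) <= tol:   # constraint holds with equality at x_hat
            idx.append(i)
    return idx

# Toy constraints: -2x1 - x2 >= -4, -3x1 - 4x2 >= -12, and x1 >= 0.
A = [[-2.0, -1.0], [-3.0, -4.0], [1.0, 0.0]]
b = [-4.0, -12.0, 0.0]
# The point (0.8, 2.4) satisfies the first two constraints with equality:
assert active_set(A, b, [0.8, 2.4]) == [0, 1]
```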
As we will see in the next sections, K(x̂) plays an important role in the optimality of x̂ for LP(c) and MOLP(C).
Definition 1
The distance between two non-empty sets A, B ⊆ ℝⁿ is defined as:
d(A, B) = inf {‖x − y‖p | x ∈ A, y ∈ B}.
It is well-known that for a compact set A and a closed set B, we have [5]:
d(A, B) = min {‖x − y‖p | x ∈ A, y ∈ B};
A ∩ B = ∅ if and only if d(A, B) > 0.
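For finite sets the infimum in Definition 1 is attained, so d(A, B) can be computed by brute force over all pairs; a small illustration with the ℓ2-norm (toy data of ours):

```python
# Sketch: distance between two *finite* point sets under the l2-norm.

def dist(A, B):
    return min(
        sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        for a in A for b in B
    )

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(1.0, 0.0), (4.0, 3.0)]
assert dist(A, B) == 0.0              # the sets intersect, so d(A, B) = 0
assert dist(A, [(4.0, 4.0)]) == 5.0   # 3-4-5 triangle from the point (1, 0)
```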
3. Inverse single-objective linear programming
This section reviews an inverse single-objective linear programming (ILP) problem under the ℓp-norm with cost-vector modification. This problem was initially studied by Zhang and Liu [22, 23] under the ℓ1-norm and ℓ∞-norm, and was later extended to weighted norms by Ahuja and Orlin [1].
An ILP problem under the ℓp-norm for LP(c) looks for the minimal modification of the cost vector that makes a given feasible point x̂ an optimal solution. This can be formulated as follows:
(ILP(c, x̂)) min { ‖ĉ − c‖p | x̂ ∈ So(ĉ) }.
Using the Karush-Kuhn-Tucker (KKT) optimality conditions of LP [4], ILP(c, x̂) can be re-written as:
(1) min { ‖ĉ − c‖p | yᵀA = ĉ, yᵀ(Ax̂ − b) = 0, y ≥ 0 }.
Problem (1) is a convex non-linear optimization problem which can be transformed into an LP for p = 1 and ∞ [1, 22, 23]. The problem is always feasible with an optimal solution, and (y*, c) is an optimal solution, with objective value zero, if and only if x̂ ∈ So(c).
Yet another equivalent formulation of ILP(c, x̂) can be obtained, based on the conic hull concept introduced in the last section, by employing the following lemma.
Lemma 1
[4, 17] Let x̂ be a feasible point of LP(c). Then, x̂ ∈ So(c) if and only if c ∈ K(x̂).
Using Lemma 1, we can replace the constraint x̂ ∈ So(ĉ) in ILP(c, x̂) with ĉ ∈ K(x̂), which results in the following equivalent problem:
(2) min { ‖ĉ − c‖p | ĉ ∈ K(x̂) }.
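As an illustration of Problem (2), the following sketch solves the p = 2, n = 2 case in closed form by projecting c onto a two-dimensional cone K(x̂) spanned by two active rows (a pointed cone with linearly independent generators is assumed; the data are our own, not from the paper).

```python
# Sketch of Problem (2) for p = 2 in the plane: the l2-projection of c onto
# cone({g1, g2}) is either c itself (if c is in the cone), the projection
# onto one of the two edge rays, or the origin.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def proj_ray(c, g):
    """Project c onto the ray {t*g : t >= 0}."""
    t = max(0.0, dot(c, g) / dot(g, g))
    return (t * g[0], t * g[1])

def proj_cone2d(c, g1, g2):
    """Project c onto cone({g1, g2}) in R^2 (pointed cone, g1 and g2
    linearly independent)."""
    # c lies in the cone iff c = s*g1 + t*g2 with s, t >= 0 (2x2 system).
    det = g1[0] * g2[1] - g1[1] * g2[0]
    s = (c[0] * g2[1] - c[1] * g2[0]) / det
    t = (g1[0] * c[1] - g1[1] * c[0]) / det
    if s >= 0 and t >= 0:
        return c
    # Otherwise the projection lies on the boundary: one of the edge rays.
    cands = [(0.0, 0.0), proj_ray(c, g1), proj_ray(c, g2)]
    return min(cands, key=lambda p: (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2)

# c already inside cone({(1,0),(0,1)}) stays put (zero modification);
# c below the cone projects onto the x-axis ray.
assert proj_cone2d((1.0, 1.0), (1.0, 0.0), (0.0, 1.0)) == (1.0, 1.0)
assert proj_cone2d((2.0, -1.0), (1.0, 0.0), (0.0, 1.0)) == (2.0, 0.0)
```

In the first case the ILP optimal value is zero (x̂ is already optimal); in the second, the optimal modified cost is ĉ = (2, 0) at distance 1 from c.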
4. Inverse multi-objective linear programming
This section provides a natural extension of the inverse single-objective linear programming reviewed in the previous section to multi-objective programs. In the following subsections, we first introduce the inverse MOLP (IMOLP) and then prove two special characteristics of IMOLPs. Afterwards, we introduce the new algorithm, followed by a numerical example and a geometric interpretation.
4.1. Problem formulation and characteristics
Given that optimality for LP is replaced by Pareto optimality in MOLP and the cost vector is replaced by the criteria matrix, it then makes sense to define inverse MOLP as the minimal modification of the criteria matrix that makes a given feasible solution a weak efficient solution. This leads to the following problem:
(IMOLP(C, x̂)) min { ‖Ĉ − C‖ | x̂ ∈ Swe(Ĉ) },
where Swe(Ĉ) is the set of all weak efficient points of MOLP(Ĉ), and ‖·‖ denotes a matrix norm on ℝ^(k×n) gauging the modification between the two matrices C and Ĉ. It is worth mentioning that IMOLP(C, x̂) simply reduces to ILP(c, x̂) if there is only one objective function.
Now we take advantage of the following weighted-sum theorem to make a connection between ILPs and IMOLPs.
Theorem 1
[10, 15, 20] x̂ ∈ S is a weak efficient solution of MOLP(Ĉ) if and only if there exists a vector w ∈ W = {w ∈ ℝ^k | w ≥ 0, ∑_{i=1}^k wi = 1} such that x̂ is an optimal solution of the weighted-sum problem LP(wᵀĈ).
Let conv(C) denote the convex hull of the rows of the matrix C. The following lemma can be immediately obtained from Lemma 1 and Theorem 1.
Lemma 2
The following statements are equivalent:
(i) x̂ ∈ Swe(Ĉ);
(ii) there exists w ∈ W such that wᵀĈ ∈ K(x̂);
(iii) conv(Ĉ) ∩ K(x̂) ≠ ∅.
By the second part of Lemma 2, IMOLP(C, x̂) is equivalent to:
(3) min { ‖Ĉ − C‖ | wᵀĈ ∈ K(x̂), w ∈ W }.
Problem (3) provides an interpretation for IMOLPs. If x̂ is not a weak efficient solution, then IMOLP looks for the least modification of the criteria matrix C such that a convex combination of its rows belongs to K(x̂), which is the convex cone generated by the active constraints at x̂. We study Problem (3) under the matrix norm ‖Ĉ − C‖ = ∑_{i=1}^k ‖ĉi − ci‖p, although one may study other norms (e.g., Frobenius, maximum absolute row sum, ...).
Now, by simply injecting the definition of the conic hull into (3), we obtain:
(4) min { ∑_{i=1}^k ‖ĉi − ci‖p | ∑_{i=1}^k wi ĉi = ∑_{i∈I(x̂)} βi ai, w ∈ W, β ≥ 0 }.
In analogy to ILP(c, x̂) (1), IMOLP(C, x̂) (4) is always feasible with an optimal solution. For instance, an obvious feasible point is (ĉi = 0 for i = 1, . . . , k, any w ∈ W, β = 0). Furthermore, by employing Lemma 2, the optimal objective value of IMOLP(C, x̂) (4) is zero if and only if x̂ ∈ Swe(C). However, as opposed to ILP(c, x̂) (1), IMOLP(C, x̂) (4) is not convex due to the presence of the variable multiplication wi ĉi.
The following two theorems show that even though IMOLP(C, x̂) (4) is a non-convex optimization problem, it can be solved using a series of convex optimization problems. The first part of Theorem 2 reveals that there always exists an optimal solution in which only one objective function has been modified, and the second part provides a lower bound for the optimal objective value of IMOLP(C, x̂) (4) that can later be employed in the algorithm design as a termination criterion.
Theorem 2
For IMOLP(C, x̂) (4), if x̂ ∉ Swe(C), then:
(i) IMOLP(C, x̂) (4) has an optimal solution (Ĉ*, w*, β*) for which ĉ*i = ci for every i ∈ {1, . . . , k} with i ≠ j;
(ii) d(conv(C), K(x̂)) provides a lower bound for the optimal value ρ* of IMOLP(C, x̂) (4), i.e., d(conv(C), K(x̂)) ≤ ρ*.
Proof
Let ξ′ = (Ĉ′, w′, β′) be an optimal solution of IMOLP(C, x̂) (4) with the optimal objective value ρ′. We prove part (i) by constructing a new optimal solution ξ* = (Ĉ*, w′, β′) whose matrix Ĉ* differs from C in only one row. And we prove part (ii) by showing d(conv(C), K(x̂)) ≤ ρ′.
(i) Since ξ′ is a feasible solution of IMOLP(C, x̂) (4), we have:
(5) ∑_{i=1}^k w′i ĉ′i = ∑_{i∈I(x̂)} β′i ai, w′ ∈ W, β′ ≥ 0.
Now we define Ĉ* as follows:
(6) ĉ*i = ci (i ≠ j), ĉ*j = cj + (1/w′j) ∑_{i=1}^k w′i (ĉ′i − ci),
where j is chosen such that w′j = max_i w′i (hence w′j > 0). Now we have:
∑_{i=1}^k w′i ĉ*i = ∑_{i=1}^k w′i ci + ∑_{i=1}^k w′i (ĉ′i − ci) = ∑_{i=1}^k w′i ĉ′i = ∑_{i∈I(x̂)} β′i ai.
So, ξ* = (Ĉ*, w′, β′) is a feasible solution of IMOLP(C, x̂) (4). Now we just need to show that ξ* is at least as good as ξ′ in terms of the objective function:
‖Ĉ* − C‖ = ‖ĉ*j − cj‖p = (1/w′j) ‖∑_{i=1}^k w′i (ĉ′i − ci)‖p ≤ ∑_{i=1}^k (w′i/w′j) ‖ĉ′i − ci‖p ≤ ∑_{i=1}^k ‖ĉ′i − ci‖p = ρ′,
where the last inequality uses w′i ≤ w′j for all i.
(ii) Let us define h and g as follows:
h = ∑_{i=1}^k w′i ci, g = ∑_{i=1}^k w′i ĉ*i.
Since w′ ∈ W, h is a convex combination of the rows of C, so h ∈ conv(C). On the other hand, since ξ* is a feasible solution of IMOLP(C, x̂) (4), we have g = ∑_{i∈I(x̂)} β′i ai ∈ K(x̂), and so, given the definition of Ĉ* in (6):
d(conv(C), K(x̂)) ≤ ‖h − g‖p = ‖∑_{i=1}^k w′i (ci − ĉ*i)‖p = w′j ‖ĉ*j − cj‖p ≤ ‖Ĉ* − C‖ = ρ′,
which completes the proof. □
If we assume that only the j-th objective function in IMOLP(C, x̂) (4) is modified and the rest remain unchanged, then we reach the following optimization problem:
(Pj) min { ‖ĉj − cj‖p | wj ĉj + ∑_{i≠j} wi ci = ∑_{i∈I(x̂)} βi ai, w ∈ W, β ≥ 0 }.
According to the first part of Theorem 2, we just need to solve Pj for all j; the problem with the smallest optimal objective value then provides the optimal solution of IMOLP(C, x̂). There is only one problem: Pj is still non-convex due to the presence of wj ĉj, although Pj is already much easier than Problem (4), which contains wi ĉi for all i ∈ {1, . . . , k}. The following theorem reveals that Pj can be transformed into an equivalent convex optimization problem. It is worth mentioning that we may assume wj > 0 in Pj; otherwise the optimal objective value of Pj is zero, meaning x̂ is already a weak efficient solution.
Theorem 3
If x̂ ∉ Swe(C), then Pj is equivalent to the following convex optimization problem, Qj:
(Qj) min { ‖ĉj − cj‖p | ĉj + ∑_{i≠j} λi ci = ∑_{i∈I(x̂)} μi ai, λ ≥ 0, μ ≥ 0 }.
Proof
We prove this by showing that there is a one-to-one correspondence between the points in the feasible regions of the problems Pj and Qj. Let (ĉj, w, β) be a feasible point of Pj. By defining λi = wi/wj (i = 1, . . . , k, i ≠ j) and μ = β/wj, (ĉj, λ, μ) is a feasible point of Qj with the same objective value. Now let (ĉj, λ, μ) be a feasible point of Qj. By defining wj = 1/(1 + ∑_{i≠j} λi), wi = λi wj (i ≠ j), and β = μ wj, so that w ∈ W, we obtain (ĉj, w, β) as a feasible point of Pj with the same objective value. □
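The change of variables in the proof above can be checked numerically; a toy sanity check (all numbers are ours, not the paper's):

```python
# Toy numerical check of the P_j -> Q_j change of variables: dividing the
# P_j constraint  w_j*c_hat_j + sum_{i != j} w_i*c_i = sum_i beta_i*a_i
# by w_j > 0 gives the Q_j constraint with lambda_i = w_i/w_j, mu_i = beta_i/w_j.

def combo(coeffs, vecs):
    """Linear combination sum_i coeffs[i] * vecs[i]."""
    return [sum(c * v[d] for c, v in zip(coeffs, vecs)) for d in range(len(vecs[0]))]

# P_j data (j is the first of k = 2 objectives), one active row a1:
w = [0.5, 0.5]
c_hat_j = [2.0, 0.0]
c2 = [0.0, 2.0]
a1 = [1.0, 1.0]
beta1 = 1.0

# The P_j constraint holds: 0.5*(2,0) + 0.5*(0,2) = 1.0*(1,1).
lhs_p = combo([w[0], w[1]], [c_hat_j, c2])
rhs_p = combo([beta1], [a1])
assert lhs_p == rhs_p == [1.0, 1.0]

# Q_j variables after dividing by w_j = 0.5; the Q_j constraint also holds.
lam2 = w[1] / w[0]
mu1 = beta1 / w[0]
lhs_q = combo([1.0, lam2], [c_hat_j, c2])
rhs_q = combo([mu1], [a1])
assert lhs_q == rhs_q == [2.0, 2.0]
```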
Now we provide a geometric interpretation of Qj. Let Kj be the conic hull of {ai | i ∈ I(x̂)} ∪ {−ci | i ≠ j}. Then, Qj can be equivalently re-written as:
min { ‖ĉj − cj‖p | ĉj ∈ Kj }.
So, the optimal solution of Qj is the ℓp-projection of cj onto Kj. This will be demonstrated in the numerical example in the next section.
4.2. Algorithm and a numerical example
The algorithm consists of two phases. The first phase calculates d* = d(conv(C), K(x̂)), which can be used to determine whether or not x̂ is already a weak efficient solution. It can also serve as a termination criterion by employing the second part of Theorem 2. Phase I can be eliminated, however, if one already knows that x̂ is not weak efficient and is looking for the exact optimal solution.
Phase I:
Solve the following convex optimization problem:
(7) d* = min { ‖∑_{i=1}^k wi ci − ∑_{i∈I(x̂)} βi ai‖p | w ∈ W, β ≥ 0 }.
Let d* be the optimal objective value of Problem (7). If d* = 0, then x̂ ∈ Swe(C); go to Step 4. Otherwise, let j = 1 and go to Step 1.
Phase II:
Step 1: Solve Problem (Qj) and let ρj denote its optimal objective value. If ρj ≤ d* + ε, then let j* = j and go to Step 3; otherwise go to Step 2.
Step 2: If j = k, let j* ∈ argmin_{1≤i≤k} ρi and go to Step 3; otherwise let j = j + 1 and go to Step 1.
Step 3: Let (ĉ*j*, λ*, μ*) be an optimal solution of Qj*. Then, (Ĉ*, w*, β*) is an optimal solution of IMOLP(C, x̂) (4), where Ĉ* and C agree in all rows except the j*-th one, and w* and β* are defined as follows:
w*j* = 1/(1 + ∑_{i≠j*} λ*i), w*i = λ*i w*j* (i ≠ j*), β* = μ* w*j*.
Step 4: End.
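A runnable sketch of the Phase II loop for p = 2 (the toy instance, the projected-gradient routine, and the choice of solving each Qj as a cone projection are ours; in practice a general-purpose convex solver would be used):

```python
# Sketch: Phase II under the l2-norm.  Each Q_j is solved as the projection
# of c_j onto K_j = cone({a_i : i in I(x_hat)} U {-c_i : i != j}), here by
# projected gradient descent on the nonnegative cone coefficients.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj_cone(c, gens, iters=20000):
    """l2-projection of c onto cone(gens): minimize ||sum_i mu_i*g_i - c||^2
    over mu >= 0 by projected gradient descent."""
    step = 1.0 / (2.0 * sum(dot(g, g) for g in gens))  # safe 1/L step size
    mu = [0.0] * len(gens)
    n = len(c)
    for _ in range(iters):
        p = [sum(m * g[d] for m, g in zip(mu, gens)) for d in range(n)]
        r = [pd - cd for pd, cd in zip(p, c)]            # residual p - c
        mu = [max(0.0, m - step * 2.0 * dot(g, r)) for m, g in zip(mu, gens)]
    return [sum(m * g[d] for m, g in zip(mu, gens)) for d in range(n)]

def phase2(C, active_rows):
    """Solve Q_j for every j; return (j*, optimal value, new j*-th row)."""
    best = None
    for j, cj in enumerate(C):
        # K_j is generated by the active rows and the negated other criteria rows.
        gens = [list(a) for a in active_rows]
        gens += [[-x for x in ci] for i, ci in enumerate(C) if i != j]
        p = proj_cone(cj, gens)
        diff = [pd - cd for pd, cd in zip(p, cj)]
        val = dot(diff, diff) ** 0.5
        if best is None or val < best[1]:
            best = (j, val, p)
    return best

# Toy instance (ours, not the paper's): one active row (0, 1); criteria
# rows c1 = (1, -1), c2 = (-2, -1).
C = [[1.0, -1.0], [-2.0, -1.0]]
j_star, val, new_row = phase2(C, [[0.0, 1.0]])
assert j_star == 0                          # modifying c1 is cheapest
assert abs(val - 1.8 ** 0.5) < 1e-4         # optimal value sqrt(1.8)
assert abs(new_row[0] - 0.4) < 1e-4 and abs(new_row[1] - 0.2) < 1e-4
```

Here the projection of c1 onto K1 = cone({(0, 1), (2, 1)}) is (0.4, 0.2), so the least modification replaces only the first criteria row.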
Example 1 Consider the MOLP problem min {Cx | Ax ≥ b} and its IMOLP(C, x̂) counterpart with the following data:
We have I(x̂) = {1, 2} and K(x̂) = cone({a1, a2}), where a1 = (−2, −1) and a2 = (−3, −4).
It is obvious from Figure 1a that the distance between K(x̂), the hatched area, and conv(C), the triangle area, is positive, and so x̂ is not weak efficient. The distance is depicted as d*.
IMOLP(C, x̂) is:
We apply the proposed algorithm for p = 2. We assume that we are looking for the exact solution, with the threshold ε = 0.
Phase I:
Solving the convex problem (7) yields the optimal value d* together with the corresponding closest points of conv(C) and K(x̂) (see Figure 1a). Since d* > 0, let j = 1 and go to Step 1.
Phase II:
Step 1: We solve Q1 as follows:
Figure 1b illustrates the geometric interpretation of Q1: the problem finds the minimum distance between c1 and K1. Since the optimal objective value ρ1 exceeds d*, we solve Q2:
Figure 1c corresponds to Q2, and the optimal solution here is the point of K2 at minimum distance from c2. The optimal objective value ρ2 also exceeds d*, so we need to solve Q3.
Figure 1d corresponds to Q3. Since Q3 has the smallest optimal objective value among all the problems, j* = 3, and the optimal solution of Q3 gives the new third row ĉ*3. Therefore, the optimal solution of IMOLP(C, x̂) is the matrix Ĉ* that agrees with C in all rows except the third, which is replaced by ĉ*3.
Figure 1e illustrates the new criteria matrix Ĉ*, obtained by moving c3 to ĉ*3. It is seen that conv(Ĉ*) intersects K(x̂), which according to Lemma 2 means that x̂ is weakly efficient for the new criteria matrix. Before we close this section, we would like to show, through a simple example, that the results presented in this paper do not necessarily hold true if there are constraints on how much the criteria matrix can be changed. In the above example, let us assume that c3 cannot be modified by more than 0.95 in the ℓ2-norm (i.e., ‖ĉ3 − c3‖2 ≤ 0.95). This is less than what c3 needs to move in order for conv(Ĉ) to intersect K(x̂). Now, we show that moving c1 and c3 simultaneously leads to less modification than moving c1 or c2 individually. If c1 moves to ĉ1 and c3 moves to ĉ3, then, as can be seen in Figure 1f, conv(Ĉ) intersects K(x̂), meaning x̂ is a weakly efficient point. Therefore, moving c1 and c3 at the same time results in a modification of the criteria matrix which is less than the modification required by moving only c1 or only c2. Since moving c3 alone is not an option due to the imposed restriction on the criteria matrix, individual modification of no single objective function yields the least required modification of the criteria matrix to turn x̂ into a weakly efficient point.
5. Conclusion
We generalized the existing inverse linear programming to inverse multi-objective linear programming. The generalized version specializes to the existing one if there is only one objective function. We discussed the non-convexity challenge of the resulting problem and explained how it can be circumvented by exploiting some very special characteristics of the problem, in particular the fact that there is always an optimal solution in which only one objective function is modified. The results only hold true if there are no constraints on how much the criteria matrix can be modified.
Footnotes
Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record that is published in the journal is kept up to date and so may therefore differ from this version.
Contributor Information
Mostafa Naghavi, Department of Mathematics, University of Qom, Qom, Iran.
Ali Asghar Foroughi, Department of Mathematics, University of Qom, Qom, Iran.
Masoud Zarepisheh, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
References
- 1. Ahuja RK, Orlin JB: Inverse optimization. Operations Research 49(5), 771–783 (2001). DOI 10.1287/opre.49.5.771.10607
- 2. Akbari Z, Peyghami MR: An interior-point algorithm for solving inverse linear optimization problem. Optimization 61(4), 373–386 (2012). DOI 10.1080/02331934.2011.637111
- 3. Annetts JE, Audsley E: Multiple objective linear programming for environmental farm planning. Journal of the Operational Research Society 53(9), 933–943 (2002). DOI 10.1057/palgrave.jors.2601404
- 4. Bazaraa MS, Jarvis JJ, Sherali HD: Linear Programming and Network Flows. John Wiley & Sons (2011)
- 5. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press (2004)
- 6. Burton D, Toint PL: On an instance of the inverse shortest paths problem. Mathematical Programming 53(1), 45–61 (1992). DOI 10.1007/BF01585693
- 7. Chan TCY, Craig T, Lee T, Sharpe MB: Generalized inverse multiobjective optimization with application to cancer therapy. Operations Research 62(3), 680–695 (2014). DOI 10.1287/opre.2014.1267
- 8. Chan TCY, Lee T: Trade-off preservation in inverse multi-objective convex optimization. European Journal of Operational Research 270(1), 25–39 (2018). DOI 10.1016/j.ejor.2018.02.045
- 9. Chan TCY, Lee T, Terekhov D: Inverse optimization: Closed-form solutions, geometry, and goodness of fit. Management Science pp. 1–21 (2018). DOI 10.1287/mnsc.2017.2992
- 10. Ehrgott M: Multicriteria Optimization. Springer Science & Business Media (2006)
- 11. Ghate A: Inverse optimization in countably infinite linear programs. Operations Research Letters 43(3), 231–235 (2015). DOI 10.1016/j.orl.2015.02.004
- 12. Hamacher HW, Küfer KH: Inverse radiation therapy planning - a multiple objective optimization approach. Discrete Applied Mathematics 118(1), 145–161 (2002). DOI 10.1016/S0166-218X(01)00261-X
- 13. Heuberger C: Inverse combinatorial optimization: A survey on problems, methods, and results. Journal of Combinatorial Optimization 8(3), 329–361 (2004). DOI 10.1023/B:JOCO.0000038914.26975.9b
- 14. Iyengar G, Kang W: Inverse conic programming with applications. Operations Research Letters 33(3), 319–330 (2005). DOI 10.1016/j.orl.2004.04.007
- 15. Luc DT: Multiobjective Linear Programming. Springer International Publishing (2016)
- 16. Mostafaee A, Hladík M, Černý M: Inverse linear programming with interval coefficients. Journal of Computational and Applied Mathematics 292, 591–608 (2016). DOI 10.1016/j.cam.2015.07.034
- 17. Murty KG: Linear Programming. John Wiley & Sons (1983)
- 18. Roland J, Smet YD, Figueira JR: Inverse multi-objective combinatorial optimization. Discrete Applied Mathematics 161(16), 2764–2771 (2013). DOI 10.1016/j.dam.2013.04.024
- 19. Schaefer AJ: Inverse integer programming. Optimization Letters 3(4), 483–489 (2009). DOI 10.1007/s11590-009-0131-z
- 20. Steuer RE: Multiple Criteria Optimization: Theory, Computation, and Applications. Wiley (1986)
- 21. Steuer RE, Na P: Multiple criteria decision making combined with finance: A categorized bibliographic study. European Journal of Operational Research 150(3), 496–515 (2003). DOI 10.1016/S0377-2217(02)00774-9
- 22. Zhang J, Liu Z: Calculating some inverse linear programming problems. Journal of Computational and Applied Mathematics 72(2), 261–273 (1996). DOI 10.1016/0377-0427(95)00277-4
- 23. Zhang J, Liu Z: A further study on inverse linear programming problems. Journal of Computational and Applied Mathematics 106(2), 345–359 (1999). DOI 10.1016/S0377-0427(99)00080-1