NIHPA Author Manuscripts
Author manuscript; available in PMC: 2022 Jun 8.
Published in final edited form as: Optim Lett. 2019;13(2):281–294. doi: 10.1007/s11590-019-01394-0

Inverse optimization for multi-objective linear programming

Mostafa Naghavi 1, Ali Asghar Foroughi 2, Masoud Zarepisheh 3
PMCID: PMC9175571  NIHMSID: NIHMS1784924  PMID: 35685276

Abstract

This paper generalizes inverse optimization for multi-objective linear programming where we are looking for the least problem modifications to make a given feasible solution a weak efficient solution. This is a natural extension of inverse optimization for single-objective linear programming with regular “optimality” replaced by the “Pareto optimality”. This extension, however, leads to a non-convex optimization problem. We prove some special characteristics of the problem, allowing us to solve the non-convex problem by solving a series of convex problems.

Keywords: Multi-objective linear programming, Linear programming, Inverse optimization, Efficiency

1. Introduction

Multi-objective linear programming (MOLP) deals with multi-objective optimization problems where all the objectives and constraints are linear. These problems arise in many fields, including engineering, finance, and medicine [3, 12, 21]. In multi-objective optimization, the usual optimality concept of single-objective optimization is replaced with Pareto optimality: a feasible solution is Pareto optimal if it is impossible to improve some objective functions without compromising others.

Inverse optimization (IO), first studied by Burton and Toint [6] in 1992, is a relatively young field of optimization dealing with problems in which some of the parameters are not precisely known and only estimates are available. Instead, some information about the problem, such as solutions or objective function values, is given from experience or from experiments, and the aim is to determine the parameter values from this information. Over the past years, several types of IO have been studied [7, 13, 16, 23].

This paper deals with an important type of IO in which we look for the least problem modification that makes a given feasible solution of the problem an optimal solution. In other words, if $\hat x$ is a feasible solution of the problem, how can we modify the problem, for example by changing the cost function, as little as possible according to some metric so that $\hat x$ becomes an optimal solution? This problem has been well studied for different types of single-objective optimization problems, including linear programming [1, 2, 22, 23], combinatorial optimization [13], conic programming [14], integer programming [19], and countably infinite linear programming [11]. Roland et al. [18] studied inverse optimization for a special class of multi-objective combinatorial optimization problems where the least modification of the criteria matrix is sought to turn a set of feasible points into a set of Pareto optimal points. They demonstrated that inverse multi-objective combinatorial optimization under some norms can be solved by algorithms based on mixed integer programming. This paper deals with inverse multi-objective linear programming (IMOLP), where the least criteria matrix modification is sought to turn a given feasible solution into a Pareto optimal solution.

For the IO problems studied in this paper, and in all the aforementioned papers, while a feasible point $\hat x$ could in principle be an interior point of the feasible region, this would turn the IO problem into a trivial one: the cost function could be modified to zero, so that all feasible solutions, including $\hat x$, become weakly efficient. There is a new type of IO, inspired by noisy data appearing in some applications, recently studied by Chan et al. [9], where $\hat x$ may be an interior point of the feasible region; there, both the cost vector and $\hat x$ are modified to achieve optimality. Chan et al. [7, 8] have also studied a very special multi-objective version of their IO, inspired by applications in radiotherapy cancer treatment, where the criteria matrix remains unchanged and objective weights are sought that turn a given point $\hat x$, which could be an interior or even an infeasible point, into a near-optimal solution. This paper deals with the natural extension of the classical IO problem introduced earlier to multi-objective programs, where the least perturbation of the criteria matrix is sought to turn a feasible solution into a Pareto optimal solution.

The paper is organized as follows. Section 2 includes some preliminaries. Section 3 reviews inverse linear programming (ILP), and Section 4 generalizes ILP to inverse multi-objective linear programming (IMOLP) and discusses the non-convexity of the problem. Afterwards, some special characteristics of IMOLP are proved, providing the necessary tools to develop an efficient convex-optimization-based algorithm. A simple numerical example and a geometrical interpretation are also provided in Section 4. Finally, Section 5 concludes the paper.

2. Preliminaries

In this paper, row and column vectors are distinguishable from the context. Let $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the set of all real $n$-vectors and $m\times n$ matrices, respectively, and let $a_i$ denote the $i$-th row of the matrix $A$. For $x\in\mathbb{R}^n$, $\|x\|_p$ denotes the $p$-norm of the vector $x$. Consider the following two problems:

$$\min\{\,cx \mid x\in S\,\}, \tag{LP(c)}$$
$$\min\{\,Cx=(c_1x,\ldots,c_kx) \mid x\in S\,\}, \tag{MOLP(C)}$$

where $A\in\mathbb{R}^{m\times n}$, $C\in\mathbb{R}^{k\times n}$, $c\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, and $S=\{x\in\mathbb{R}^n \mid Ax\ge b\}$ is the feasible region. A feasible point $\hat x\in S$ is called weakly efficient or weakly Pareto optimal for MOLP(C) if there exists no $x\in S$ such that $Cx<C\hat x$. All orderings in this paper are component-wise. The set of all weakly efficient points of MOLP(C) is denoted by $S_{we}(C)$, and the set of all optimal solutions of LP(c) is denoted by $S_o(c)$.

For a set $Y=\{y^1,\ldots,y^l\}\subset\mathbb{R}^n$, $\mathrm{cone}(Y)$ and $\mathrm{conv}(Y)$ denote the conic and convex hulls of $Y$, defined as:

$$\mathrm{cone}(Y)=\Big\{x\in\mathbb{R}^n \;\Big|\; x=\sum_{i=1}^l \beta_i y^i,\ \beta_i\ge 0\Big\},$$
$$\mathrm{conv}(Y)=\Big\{x\in\mathbb{R}^n \;\Big|\; x=\sum_{i=1}^l \beta_i y^i,\ \sum_{i=1}^l \beta_i=1,\ \beta_i\ge 0\Big\}.$$

For a feasible point $\hat x\in S$, let $I(\hat x)=\{i \mid a_i\hat x=b_i\}$ be the index set of the constraints active at $\hat x$. The conic hull of the set $\{a_i \mid i\in I(\hat x)\}$ is:

$$\hat K=\mathrm{cone}(\{a_i \mid i\in I(\hat x)\})=\Big\{x\in\mathbb{R}^n \;\Big|\; x=\sum_{i\in I(\hat x)}\beta_i a_i,\ \beta_i\ge 0\Big\}.$$

As we will see in the next sections, $\hat K$ plays an important role in the optimality of $\hat x$ for LP(c) and MOLP(C).

Definition 1

The distance between two non-empty sets $A,B\subset\mathbb{R}^n$ is defined as:

$$d(A,B)=\inf\{\,\|x-y\|_p \mid x\in A,\ y\in B\,\}.$$

It is well known that for a compact set $A$ and a closed set $B$, we have [5]:

  1. $d(A,B)=\min\{\,\|x-y\|_p \mid x\in A,\ y\in B\,\}$, i.e., the infimum is attained.

  2. $A\cap B=\emptyset$ if and only if $d(A,B)>0$.

3. Inverse single-objective linear programming

This section reviews the inverse single-objective linear programming (ILP) problem under the $p$-norm with cost vector modification. This problem was initially studied by Zhang and Liu [22, 23] under the $1$-norm and $\infty$-norm, and later extended to weighted norms by Ahuja and Orlin [1].

An ILP problem under the $p$-norm for LP(c) looks for the minimal modification of the cost vector that makes a given feasible point $\hat x$ an optimal solution. This can be formulated as follows:

$$\min\ \|c-\hat c\|_p \quad \text{s.t.}\quad \hat x\in S_o(\hat c),\ \hat c\in\mathbb{R}^n. \tag{ILP($c,\hat x$)}$$

Using the Karush-Kuhn-Tucker (KKT) optimality conditions of LP [4], ILP($c,\hat x$) can be re-written as:

$$\begin{aligned} \min\ & \|c-\hat c\|_p\\ \text{s.t.}\ & y(A\hat x-b)=0 &&\text{(complementary slackness)},\\ & yA=\hat c &&\text{(dual feasibility)},\\ & y\ge 0 &&\text{(dual feasibility)},\\ & \hat c\in\mathbb{R}^n. \end{aligned} \tag{1}$$

Problem (1) is a convex non-linear optimization problem which can be transformed into an LP for $p=1$ and $p=\infty$ [1, 22, 23]. The problem is always feasible and attains an optimal solution, and $(y^*,c)$ is an optimal solution if and only if $\hat x\in S_o(c)$, i.e., the optimal objective value is zero exactly when $\hat x$ is already optimal.

Yet another equivalent formulation of ILP($c,\hat x$) can be obtained, based on the conic hull concept introduced in the last section, by employing the following lemma.

Lemma 1

[4, 17] Let $\hat x\in S$ be a feasible point of LP($\hat c$). Then $\hat x\in S_o(\hat c)$ if and only if $\hat c\in\hat K$.

Using Lemma 1, we can replace the constraint $\hat x\in S_o(\hat c)$ in ILP($c,\hat x$) with $\hat c\in\hat K$, which results in the following equivalent problem:

$$\min\ \|c-\hat c\|_p \quad \text{s.t.}\quad \hat c\in\hat K. \tag{2}$$
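For $p=2$, problem (2) is simply the Euclidean projection of $c$ onto the cone $\hat K$, which is a nonnegative least-squares problem over the active rows. The following is a minimal sketch of this view, not code from the paper; the data, function name, and the use of scipy's NNLS solver are our illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def ilp_2norm(c, A, b, x_hat, tol=1e-9):
    """Solve problem (2) for p = 2: project c onto the cone K-hat
    generated by the rows of A that are active at x_hat (S = {x : Ax >= b})."""
    active = np.abs(A @ x_hat - b) <= tol   # index set I(x_hat)
    G = A[active]                           # generators of K-hat
    # min ||G^T beta - c||_2 subject to beta >= 0 (nonnegative least squares)
    beta, residual = nnls(G.T, c)
    c_hat = G.T @ beta                      # projection of c onto K-hat
    return c_hat, residual                  # residual = ||c - c_hat||_2

# toy instance: S = {x : x1 >= 1, x2 >= 1}, x_hat = (1, 1) is a vertex,
# and c = (2, -1), for which x_hat is not optimal
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([2.0, -1.0])
c_hat, rho = ilp_2norm(c, A, b, np.array([1.0, 1.0]))  # c_hat = (2, 0), rho = 1
```

Under the modified cost $\hat c=(2,0)$, every point with $x_1=1$, and in particular $\hat x$, is optimal, as Lemma 1 predicts.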

4. Inverse multi-objective linear programming

This section provides a natural extension of the inverse single-objective linear programming of the previous section to multi-objective programs. In the following subsections, we first introduce inverse MOLP (IMOLP) and then prove two special characteristics of IMOLPs. Afterwards, we introduce the new algorithm, followed by a numerical example and a geometric interpretation.

4.1. Problem formulation and characteristics

Given that optimality for LP is replaced by Pareto optimality in MOLP and the cost vector is replaced by the criteria matrix, it makes sense to define inverse MOLP as the minimal modification of the criteria matrix that makes a given feasible solution $\hat x$ a weakly efficient solution. This leads to the following problem:

$$\min\ \|C-\hat C\| \quad \text{s.t.}\quad \hat x\in S_{we}(\hat C),\ \hat C\in\mathbb{R}^{k\times n}, \tag{IMOLP($C,\hat x$)}$$

where $S_{we}(\hat C)$ is the set of all weakly efficient points of MOLP($\hat C$), and $\|\cdot\|$ denotes a matrix norm on $\mathbb{R}^{k\times n}$ gauging the modification between the two matrices $C$ and $\hat C$. It is worth mentioning that IMOLP($C,\hat x$) simply reduces to ILP($c,\hat x$) if there is only one objective function.

Now we take advantage of the following weighted-sum theorem to establish a connection between ILPs and IMOLPs.

Theorem 1

[10, 15, 20] $\hat x\in S$ is a weakly efficient solution of MOLP($\hat C$) if and only if there exists a vector $w\in W=\{w\in\mathbb{R}^k \mid \sum_{i=1}^k w_i=1,\ w_i\ge 0\}$ such that $\hat x$ is an optimal solution of the weighted-sum LP $\min\{w\hat Cx \mid x\in S\}$.

Let $\mathrm{conv}(C)$ denote the convex hull of the rows of the matrix $C$. The following lemma is immediately obtained from Lemma 1 and Theorem 1.

Lemma 2

The following statements are equivalent.

  1. $\hat x\in S_{we}(\hat C)$.

  2. There exists $w\in W$ such that $w\hat C\in\hat K$.

  3. $d(\mathrm{conv}(\hat C),\hat K)=0$.

By the second part of Lemma 2, IMOLP($C,\hat x$) is equivalent to:

$$\min\ \|C-\hat C\| \quad \text{s.t.}\quad w\hat C\in\hat K,\ w\in W,\ \hat C\in\mathbb{R}^{k\times n}. \tag{3}$$

Problem (3) provides an interpretation of IMOLPs: if $\hat x$ is not a weakly efficient solution, then IMOLP looks for the least modification of the criteria matrix $C$ such that a convex combination of its rows belongs to $\hat K$, the convex cone generated by the constraints active at $\hat x$. We study Problem (3) under the matrix norm $\|C-\hat C\|=\sum_{i=1}^k\|c_i-\hat c_i\|_p$, although one may study other norms (e.g., Frobenius, maximum absolute row sum, ...).

Now, by substituting the definition of the conic hull $\hat K$ into (3), we obtain:

$$\begin{aligned} \min\ & \rho=\sum_{i=1}^k\|c_i-\hat c_i\|_p\\ \text{s.t.}\ & \sum_{i=1}^k w_i\hat c_i-\sum_{r\in I(\hat x)}\beta_r a_r=0,\\ & \sum_{i=1}^k w_i=1,\quad w_i\ge 0,\ i=1,\ldots,k,\\ & \beta_r\ge 0,\ r\in I(\hat x),\\ & \hat c_i\in\mathbb{R}^n,\ i=1,\ldots,k. \end{aligned} \tag{4}$$

In analogy to ILP($c,\hat x$) (1), IMOLP($C,\hat x$) (4) is always feasible and attains an optimal solution; for instance, an obvious feasible point is ($\hat C=0$, $w_i=1/k$ for $i=1,\ldots,k$, $\beta=0$). Furthermore, by Lemma 2, the optimal objective value of IMOLP($C,\hat x$) (4) is zero if and only if $\hat x\in S_{we}(C)$. However, as opposed to ILP($c,\hat x$) (1), IMOLP($C,\hat x$) (4) is not convex due to the presence of the products of variables $w_i\hat c_i$.

The following two theorems show that even though IMOLP($C,\hat x$) (4) is a non-convex optimization problem, it can be solved using a series of convex optimization problems. The first part of Theorem 2 reveals that there always exists an optimal solution for which only one objective function has been modified, and the second part provides a lower bound on the optimal objective value of IMOLP($C,\hat x$) (4) that is later employed in the algorithm design as a termination criterion.

Theorem 2

For IMOLP($C,\hat x$) (4), if $d(\mathrm{conv}(C),\hat K)>0$, then

  1. IMOLP($C,\hat x$) (4) has an optimal solution $\hat C^*$ that differs from $C$ in at most one row, i.e., there exists an index $j$ such that $\hat c_i^*=c_i$ for all $i\in\{1,\ldots,k\}$, $i\ne j$.

  2. $d(\mathrm{conv}(C),\hat K)$ provides a lower bound on the optimal value $\rho^*$ of IMOLP($C,\hat x$) (4), i.e., $\rho^*\ge d(\mathrm{conv}(C),\hat K)$.

Proof

Let $\xi'=(\hat C',w,\beta)$ be an optimal solution of IMOLP($C,\hat x$) (4) with optimal objective value $\rho'$. We prove part (i) by constructing a new optimal solution $\xi^*=(\hat C^*,w,\beta)$ for which $\hat C^*$ differs from $C$ in only one row, and part (ii) by showing $\rho'\ge d(\mathrm{conv}(C),\hat K)$.

(i): Since $\xi'$ is a feasible solution of IMOLP($C,\hat x$) (4), we have:

$$\sum_{i=1}^k w_i\hat c_i'-\sum_{r\in I(\hat x)}\beta_r a_r=0. \tag{5}$$

Now we define $\hat C^*$ as follows:

$$\hat c_i^*=\begin{cases} c_i & i\ne s,\\[2pt] c_s+\sum_{i=1}^k \dfrac{w_i}{w_s}v_i & i=s, \end{cases} \tag{6}$$

where $v_i=\hat c_i'-c_i$ and $w_s=\max\{w_i \mid i=1,\ldots,k\}>0$. Now we have:

$$\sum_{i=1}^k w_i\hat c_i^*=w_s\hat c_s^*+\sum_{\substack{i=1\\ i\ne s}}^k w_i\hat c_i^* =w_s\Big(c_s+\sum_{i=1}^k\frac{w_i}{w_s}(\hat c_i'-c_i)\Big)+\sum_{\substack{i=1\\ i\ne s}}^k w_ic_i =w_sc_s+\sum_{i=1}^k w_i(\hat c_i'-c_i)+\sum_{\substack{i=1\\ i\ne s}}^k w_ic_i =\sum_{i=1}^k w_i\hat c_i'=\sum_{r\in I(\hat x)}\beta_r a_r \quad \text{(by (5))}.$$

So $\xi^*$ is a feasible solution of IMOLP($C,\hat x$) (4). It remains to show that $\xi^*$ is at least as good as $\xi'$ in terms of the objective function:

$$\|C-\hat C^*\|=\sum_{i=1}^k\|c_i-\hat c_i^*\|_p=\Big\|\sum_{i=1}^k\frac{w_i}{w_s}v_i\Big\|_p \ \text{(by (6))} \ \le\sum_{i=1}^k\frac{w_i}{w_s}\|v_i\|_p \ \text{(triangle inequality)} \ \le\sum_{i=1}^k\|v_i\|_p=\|C-\hat C'\| \quad (\text{since } w_s\ge w_i).$$
(ii): Let us define $h$ and $\hat k$ as follows:

$$h=\sum_{i=1}^k w_i\hat c_i^*-w_s(\hat c_s^*-c_s), \qquad \hat k=\sum_{r\in I(\hat x)}\beta_r a_r.$$

Given the definition of $\hat C^*$ in (6), we have

$$h=\sum_{i=1}^k w_i\hat c_i^*-w_s(\hat c_s^*-c_s) =\sum_{\substack{i=1\\ i\ne s}}^k w_i\hat c_i^*+(w_s\hat c_s^*-w_s\hat c_s^*)+w_sc_s =\sum_{\substack{i=1\\ i\ne s}}^k w_ic_i+w_sc_s=\sum_{i=1}^k w_ic_i.$$

So $h\in\mathrm{conv}(C)$. On the other hand, since $\xi^*$ is a feasible solution of IMOLP($C,\hat x$) (4), $\sum_{i=1}^k w_i\hat c_i^*=\sum_{r\in I(\hat x)}\beta_r a_r=\hat k$, and so:

$$d(\mathrm{conv}(C),\hat K)\le\|h-\hat k\|_p \ \text{(by Definition 1 in Section 2)} \ =w_s\|c_s-\hat c_s^*\|_p\le\|c_s-\hat c_s^*\|_p=\|C-\hat C^*\| \quad (\text{since } w_s\le 1),$$

which completes the proof. □

If we assume that only the $j$-th objective function in IMOLP($C,\hat x$) (4) is modified while the rest remain unchanged, we reach the following optimization problem:

$$\begin{aligned} \min\ & \|c_j-\hat c_j\|_p\\ \text{s.t.}\ & w_j\hat c_j+\sum_{\substack{i=1\\ i\ne j}}^k w_ic_i-\sum_{r\in I(\hat x)}\beta_r a_r=0,\\ & \sum_{i=1}^k w_i=1,\quad w_i\ge 0,\ i=1,\ldots,k,\\ & \beta_r\ge 0,\ r\in I(\hat x),\\ & \hat c_j\in\mathbb{R}^n. \end{aligned} \tag{$P_j$}$$

According to the first part of Theorem 2, we just need to solve $P_j$ for all $j$; the problem with the smallest optimal objective value then provides the optimal solution of IMOLP($C,\hat x$). There is only one catch: $P_j$ is still a non-convex problem due to the presence of the product $w_j\hat c_j$, although it is already much easier than Problem (4), which contains $w_i\hat c_i$ for all $i\in\{1,\ldots,k\}$. The following theorem reveals that $P_j$ can be transformed into an equivalent convex optimization problem. It is worth mentioning that $w_j>0$ in $P_j$; otherwise, the optimal objective value of $P_j$ would be zero, meaning $\hat x$ is already a weakly efficient solution.

Theorem 3

If d(conv(C),K^)>0, then Pj is equivalent to the following convex optimization problem, Qj:

$$\begin{aligned} \rho_j^*=\min\ & \|c_j-\hat c_j\|_p\\ \text{s.t.}\ & \hat c_j+\sum_{\substack{i=1\\ i\ne j}}^k\lambda_ic_i-\sum_{r\in I(\hat x)}\alpha_r a_r=0,\\ & \lambda_i\ge 0,\ i=1,\ldots,k,\ i\ne j,\\ & \alpha_r\ge 0,\ r\in I(\hat x),\\ & \hat c_j\in\mathbb{R}^n. \end{aligned} \tag{$Q_j$}$$

Proof

We prove this by establishing a one-to-one correspondence between the feasible regions of $P_j$ and $Q_j$. Let $\xi_j=(\hat c_j,w,\beta)$ be a feasible point of $P_j$. By defining $\lambda_i=w_i/w_j$ ($i=1,\ldots,k$, $i\ne j$) and $\alpha_r=\beta_r/w_j$ ($r\in I(\hat x)$), $\zeta_j=(\hat c_j,\lambda,\alpha)$ is a feasible point of $Q_j$ with the same objective value.

Conversely, let $\zeta_j=(\hat c_j,\lambda,\alpha)$ be a feasible point of $Q_j$. By defining $w_j=1/\tau$, $w_i=\lambda_i/\tau$ ($i=1,\ldots,k$, $i\ne j$), and $\beta_r=\alpha_r/\tau$ ($r\in I(\hat x)$), where $\tau=1+\sum_{i=1,i\ne j}^k\lambda_i$, we obtain $\xi_j=(\hat c_j,w,\beta)$ as a feasible point of $P_j$ with the same objective value. □

Now we provide a geometric interpretation of $Q_j$. Let $K_j$ be the conic hull of $\{a_r\}_{r\in I(\hat x)}\cup\{-c_i\}_{i=1,\,i\ne j}^{k}$ (the constraint of $Q_j$ gives $\hat c_j=\sum_r\alpha_ra_r+\sum_{i\ne j}\lambda_i(-c_i)$). Then $Q_j$ can be equivalently re-written as

$$d(c_j,K_j)=\min\ \|c_j-\hat c_j\|_p \quad \text{s.t.}\quad \hat c_j\in K_j.$$

So the optimal solution of $Q_j$ is the $p$-norm projection of $c_j$ onto $K_j$. This is demonstrated in the numerical example in the next section.
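For $p=2$, this projection view makes each $Q_j$ a small nonnegative least-squares problem over the generators $\{a_r\}\cup\{-c_i\}_{i\ne j}$. A hedged sketch (function and variable names are ours, not the paper's; scipy is assumed):

```python
import numpy as np
from scipy.optimize import nnls

def solve_Qj(C, G, j):
    """Solve Q_j for p = 2: project row c_j onto K_j, the cone generated by
    the active rows G = {a_r : r in I(x_hat)} and {-c_i : i != j}."""
    gens = np.vstack([G, -np.delete(C, j, axis=0)])  # generators of K_j
    coef, rho_j = nnls(gens.T, C[j])                 # min ||gens^T t - c_j||_2, t >= 0
    c_j_hat = gens.T @ coef                          # 2-norm projection of c_j onto K_j
    alpha, lam = coef[:G.shape[0]], coef[G.shape[0]:]
    return c_j_hat, lam, alpha, rho_j

# toy check: one active row (1, 0) and a second objective c2 = (0, -1), so
# K_1 = cone({(1, 0), (0, 1)}) is the first quadrant; projecting c1 = (2, -1)
# onto it gives (2, 0) at distance 1
G = np.array([[1.0, 0.0]])
C = np.array([[2.0, -1.0], [0.0, -1.0]])
c1_hat, lam, alpha, rho1 = solve_Qj(C, G, 0)
```

The returned `lam` and `alpha` are exactly the $\lambda$ and $\alpha$ of $Q_j$, so the weights $w$ and multipliers $\beta$ of $P_j$ can be recovered through the correspondence in the proof of Theorem 3.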

4.2. Algorithm and a numerical example

The algorithm consists of two phases. The first phase calculates $d(\mathrm{conv}(C),\hat K)$, which determines whether or not $\hat x$ is already a weakly efficient solution; it also serves as a termination criterion through the second part of Theorem 2. Phase I can be skipped, however, if one already knows that $\hat x$ is not weakly efficient and is looking for the exact optimal solution.

Phase I:

Solve the following convex optimization problem

$$d(\mathrm{conv}(C),\hat K)=\min\{\,\|x-y\|_p \mid x\in\mathrm{conv}(C),\ y\in\hat K\,\}. \tag{7}$$

Let $d^*:=d(\mathrm{conv}(C),\hat K)$ be the optimal objective value of Problem (7). If $d^*=0$, then $\hat x\in S_{we}(C)$; go to Step 4. Otherwise, set $j=1$ and go to Step 1.

Phase II:

Step 1: Solve Problem ($Q_j$). If $|\rho_j^*-d^*|\le\varepsilon$, then set $j^*=j$ and go to Step 3; otherwise go to Step 2.

Step 2: If $j=k$, set $j^*=\arg\min_{j=1,\ldots,k}\rho_j^*$ and go to Step 3; otherwise set $j=j+1$ and go to Step 1.

Step 3: Let $\zeta_{j^*}=(\hat c_{j^*}^*,\lambda^*,\alpha^*)$ be an optimal solution of $Q_{j^*}$. Then $\xi^*=(\hat C^*,w^*,\beta^*)$ is an optimal solution of IMOLP($C,\hat x$) (4), where $w^*$ and $\beta^*$ are defined as follows, and $\hat C^*$ and $C$ agree in all rows except the $j^*$-th one:

$$w_i^*=\frac{\lambda_i^*}{1+\sum_{i=1,i\ne j^*}^k\lambda_i^*},\ i=1,\ldots,k,\ i\ne j^*, \qquad w_{j^*}^*=\frac{1}{1+\sum_{i=1,i\ne j^*}^k\lambda_i^*}, \qquad \beta_r^*=\frac{\alpha_r^*}{1+\sum_{i=1,i\ne j^*}^k\lambda_i^*},\ r\in I(\hat x).$$

Step 4: End.
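For $p=2$, the whole procedure above can be sketched compactly: Phase I is a small convex QP over $(w,\beta)$, and each $Q_j$ is a nonnegative least-squares problem. This is our illustrative sketch, not the authors' code; the function names and the use of scipy are assumptions:

```python
import numpy as np
from scipy.optimize import minimize, nnls

def phase1_distance(C, G):
    """Phase I: d(conv(C), K-hat) for p = 2, i.e. problem (7), as the convex
    QP  min ||C^T w - G^T beta||_2  over w in the simplex and beta >= 0."""
    k, m = C.shape[0], G.shape[0]

    def obj(z):
        w, beta = z[:k], z[k:]
        r = C.T @ w - G.T @ beta
        return r @ r                       # squared distance

    cons = [{"type": "eq", "fun": lambda z: z[:k].sum() - 1.0}]
    bounds = [(0.0, None)] * (k + m)
    z0 = np.concatenate([np.full(k, 1.0 / k), np.zeros(m)])
    res = minimize(obj, z0, bounds=bounds, constraints=cons, method="SLSQP")
    return np.sqrt(max(res.fun, 0.0))

def imolp(C, G, eps=1e-6):
    """Phase II: solve each Q_j as nonnegative least squares; return
    (j*, modified row c_hat_j*, rho*). G holds the active rows a_r."""
    d_star = phase1_distance(C, G)
    if d_star <= eps:                      # x_hat already weakly efficient
        return None, None, 0.0
    best = None
    for j in range(C.shape[0]):
        gens = np.vstack([G, -np.delete(C, j, axis=0)])   # generators of K_j
        coef, rho_j = nnls(gens.T, C[j])
        if best is None or rho_j < best[2]:
            best = (j, gens.T @ coef, rho_j)
        if abs(rho_j - d_star) <= eps:     # lower bound attained: stop early
            break
    return best
```

On the data of Example 1 below, this sketch reproduces $d^*=6/\sqrt{73}$, $j^*=3$ (index 2), and $\rho^*=4/\sqrt{17}$.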

Example 1 Consider the MOLP problem $\min\{Cx \mid Ax\ge b\}$ and its IMOLP($C,\hat x$) counterpart with $\hat x=(8,7)$ and the following data:

$$C=\begin{pmatrix}-6 & -1.5\\ -3 & 0.5\\ 2 & 1.5\end{pmatrix},\quad A=\begin{pmatrix}-2 & -1\\ -3 & -4\\ -1 & 0\\ 0 & -1\\ 1 & 0\\ 0 & 1\end{pmatrix},\quad\text{and}\quad b=\begin{pmatrix}-23\\ -52\\ -10\\ -10\\ 0\\ 0\end{pmatrix}.$$

We have $I(\hat x)=\{1,2\}$ and $\hat K=\{x\in\mathbb{R}^2 \mid x=\beta_1a_1+\beta_2a_2,\ \beta_1,\beta_2\ge 0\}$, where $a_1=(-2,-1)$ and $a_2=(-3,-4)$.

It is obvious from Figure 1a that the distance between $\hat K$ (the hatched area) and $\mathrm{conv}(C)$ (the triangular area) is positive, so $\hat x$ is not weakly efficient. The distance is depicted as $d^*=\|h-\hat k\|_p$.

Fig. 1: (a)-(d): Different steps of the algorithm for Example 1. (e): The final result of the algorithm. (f): Changing multiple objective functions could result in less modification if there are restrictions on the criteria matrix modifications.

IMOLP($C,\hat x$) is:

$$\begin{aligned} \min\ & \rho=\|c_1-\hat c_1\|_p+\|c_2-\hat c_2\|_p+\|c_3-\hat c_3\|_p\\ \text{s.t.}\ & w_1\hat c_1+w_2\hat c_2+w_3\hat c_3-\beta_1a_1-\beta_2a_2=0,\\ & w_1+w_2+w_3=1,\\ & w_1,w_2,w_3,\beta_1,\beta_2\ge 0,\quad \hat c_1,\hat c_2,\hat c_3\in\mathbb{R}^2. \end{aligned}$$

We apply the proposed algorithm for $p=2$ and look for the exact solution, i.e., with threshold $\varepsilon=0$.

Phase I:

Solving the convex problem (7) yields $d^*=\|h-\hat k\|_2=6/\sqrt{73}$, where $h=(-18/73,\ 48/73)$ and $\hat k=(0,0)$ (see Figure 1a). Since $d^*>0$, set $j=1$ and go to Step 1.

Phase II:

Step 1: We solve $Q_1$ as follows:

$$\begin{aligned} \min\ & \rho_1=\|c_1-\hat c_1\|_2\\ \text{s.t.}\ & \hat c_1+\lambda_2c_2+\lambda_3c_3-\alpha_1a_1-\alpha_2a_2=0,\\ & \lambda_2,\lambda_3,\alpha_1,\alpha_2\ge 0,\quad \hat c_1\in\mathbb{R}^2. \end{aligned}$$

Figure 1b illustrates the geometric interpretation of $Q_1$. This problem finds the minimum distance between $c_1$ and $K_1=\mathrm{cone}(\{a_1,a_2,-c_2,-c_3\})=\mathrm{cone}(\{a_1,-c_2\})$. The optimal objective value is $\rho_1^*=3/\sqrt{5}$, and since $\rho_1^*-d^*>0$, we solve $Q_2$:

$$\begin{aligned} \min\ & \rho_2=\|c_2-\hat c_2\|_2\\ \text{s.t.}\ & \hat c_2+\lambda_1c_1+\lambda_3c_3-\alpha_1a_1-\alpha_2a_2=0,\\ & \lambda_1,\lambda_3,\alpha_1,\alpha_2\ge 0,\quad \hat c_2\in\mathbb{R}^2. \end{aligned}$$

Figure 1c corresponds to $Q_2$; the optimal value here is the minimum distance between $c_2$ and $K_2=\mathrm{cone}(\{a_1,a_2,-c_1,-c_3\})=\mathrm{cone}(\{a_1,-c_1\})$. The optimal objective value is $\rho_2^*=4/\sqrt{5}>d^*$, so we need to solve $Q_3$:

$$\begin{aligned} \min\ & \rho_3=\|c_3-\hat c_3\|_2\\ \text{s.t.}\ & \hat c_3+\lambda_1c_1+\lambda_2c_2-\alpha_1a_1-\alpha_2a_2=0,\\ & \lambda_1,\lambda_2,\alpha_1,\alpha_2\ge 0,\quad \hat c_3\in\mathbb{R}^2. \end{aligned}$$

Figure 1d corresponds to $Q_3$, and the optimal objective value is $\rho_3^*=4/\sqrt{17}$. Since $Q_3$ has the smallest objective value among the three problems, $j^*=3$. The optimal solution of $Q_3$ is $\zeta_3^*=(\hat c_3^*,\lambda_1^*,\lambda_2^*,\alpha_1^*,\alpha_2^*)=\big((38/17,\ 19/34),\ 19/51,\ 0,\ 0,\ 0\big)$. Therefore, the optimal solution of IMOLP($C,\hat x$) is $\xi^*=(\hat c_1^*,\hat c_2^*,\hat c_3^*,w_1^*,w_2^*,w_3^*,\beta_1^*,\beta_2^*)=\big(c_1,\ c_2,\ \hat c_3^*,\ 19/70,\ 0,\ 51/70,\ 0,\ 0\big)$.

Figure 1e illustrates the new criteria matrix $\hat C^*$, obtained by moving $c_3$ to $\hat c_3^*$. It is seen that $\mathrm{conv}(\hat C^*)$ intersects $\hat K$, which by Lemma 2 means that $\hat x$ is weakly efficient for the new criteria matrix. Before we close this section, we would like to show, through a simple example, that the results presented in this paper do not necessarily hold if there are constraints on how much the criteria matrix can be changed. In the above example, suppose $c_3$ cannot be modified by more than $0.95$ in the $2$-norm (i.e., $\|c_3-\hat c_3\|_2\le 0.95$). This is less than what $c_3$ needs to move ($4/\sqrt{17}\approx 0.97$) in order for $\mathrm{conv}(\hat C)$ to intersect $\hat K$. We now show that moving $c_1$ and $c_3$ simultaneously leads to less modification than moving $c_1$ or $c_2$ individually. If $c_1$ moves to $\hat c_1=(-2610/449,\ -1827/898)$ and $c_3$ moves to $\hat c_3=(1010/449,\ 707/898)$, then, as can be seen in Figure 1f, $\mathrm{conv}(\hat C)$ intersects $\hat K$, meaning $\hat x$ is a weakly efficient point. Therefore, moving $c_1$ and $c_3$ at the same time results in $\|c_1-\hat c_1\|_2+\|c_3-\hat c_3\|_2\approx 1.32$ modification of the criteria matrix, which is less than the modification required by moving only $c_1$ ($\rho_1^*=3/\sqrt{5}\approx 1.34$) or only $c_2$ ($\rho_2^*=4/\sqrt{5}\approx 1.79$). Since moving $c_3$ alone is not an option due to the imposed restriction, individual modification of any single objective function no longer yields the least required modification of the criteria matrix to turn $\hat x$ into a weakly efficient point.
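The figures quoted in this paragraph can be checked numerically. A quick verification with numpy (the variable names are ours; the modified rows are the ones quoted above):

```python
import numpy as np

c1 = np.array([-6.0, -1.5])
c3 = np.array([2.0, 1.5])
c1_new = np.array([-2610 / 449, -1827 / 898])   # modified c1
c3_new = np.array([1010 / 449, 707 / 898])      # modified c3

move1 = np.linalg.norm(c1 - c1_new)   # = 12/sqrt(449) ~ 0.57
move3 = np.linalg.norm(c3 - c3_new)   # = 16/sqrt(449) ~ 0.76, within the 0.95 cap
total = move1 + move3                 # = 28/sqrt(449) ~ 1.32

# c1_new and c3_new are collinear with the origin and lie on opposite sides of
# it, so the segment between them passes through the apex of K-hat; hence
# conv(C-hat) intersects K-hat and x-hat becomes weakly efficient
cross = c1_new[0] * c3_new[1] - c1_new[1] * c3_new[0]
```

The combined move of about $1.32$ is indeed smaller than $\rho_1^*=3/\sqrt{5}\approx 1.34$, confirming that the one-row result of Theorem 2 fails under row-wise modification caps.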

5. Conclusion

We generalized existing inverse linear programming to inverse multi-objective linear programming. The generalized version specializes to the existing one if there is only one objective function. We discussed the non-convexity challenge of the resulting problem and explained how it can be circumvented by exploiting some very special characteristics of the problem, in particular the fact that there is always an optimal solution for which only one objective function is modified. The results only hold if there are no constraints on how much the criteria matrix can be modified.

Footnotes

Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record that is published in the journal is kept up to date and so may therefore differ from this version.

Contributor Information

Mostafa Naghavi, Department of Mathematics, University of Qom, Qom, Iran.

Ali Asghar Foroughi, Department of Mathematics, University of Qom, Qom, Iran.

Masoud Zarepisheh, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA.

References

  1. Ahuja RK, Orlin JB: Inverse optimization. Operations Research 49(5), 771–783 (2001). DOI 10.1287/opre.49.5.771.10607.
  2. Akbari Z, Peyghami MR: An interior-point algorithm for solving inverse linear optimization problem. Optimization 61(4), 373–386 (2012). DOI 10.1080/02331934.2011.637111.
  3. Annetts JE, Audsley E: Multiple objective linear programming for environmental farm planning. Journal of the Operational Research Society 53(9), 933–943 (2002). DOI 10.1057/palgrave.jors.2601404.
  4. Bazaraa MS, Jarvis JJ, Sherali HD: Linear Programming and Network Flows. John Wiley & Sons (2011).
  5. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press (2004).
  6. Burton D, Toint PL: On an instance of the inverse shortest paths problem. Mathematical Programming 53(1), 45–61 (1992). DOI 10.1007/BF01585693.
  7. Chan TCY, Craig T, Lee T, Sharpe MB: Generalized inverse multiobjective optimization with application to cancer therapy. Operations Research 62(3), 680–695 (2014). DOI 10.1287/opre.2014.1267.
  8. Chan TCY, Lee T: Trade-off preservation in inverse multi-objective convex optimization. European Journal of Operational Research 270(1), 25–39 (2018). DOI 10.1016/j.ejor.2018.02.045.
  9. Chan TCY, Lee T, Terekhov D: Inverse optimization: Closed-form solutions, geometry, and goodness of fit. Management Science pp. 1–21 (2018). DOI 10.1287/mnsc.2017.2992.
  10. Ehrgott M: Multicriteria Optimization. Springer Science & Business Media (2006).
  11. Ghate A: Inverse optimization in countably infinite linear programs. Operations Research Letters 43(3), 231–235 (2015). DOI 10.1016/j.orl.2015.02.004.
  12. Hamacher HW, Küfer KH: Inverse radiation therapy planning — a multiple objective optimization approach. Discrete Applied Mathematics 118(1), 145–161 (2002). DOI 10.1016/S0166-218X(01)00261-X.
  13. Heuberger C: Inverse combinatorial optimization: A survey on problems, methods, and results. Journal of Combinatorial Optimization 8(3), 329–361 (2004). DOI 10.1023/B:JOCO.0000038914.26975.9b.
  14. Iyengar G, Kang W: Inverse conic programming with applications. Operations Research Letters 33(3), 319–330 (2005). DOI 10.1016/j.orl.2004.04.007.
  15. Luc DT: Multiobjective Linear Programming. Springer International Publishing (2016).
  16. Mostafaee A, Hladík M, Černý M: Inverse linear programming with interval coefficients. Journal of Computational and Applied Mathematics 292, 591–608 (2016). DOI 10.1016/j.cam.2015.07.034.
  17. Murty KG: Linear Programming. John Wiley & Sons (1983).
  18. Roland J, Smet YD, Figueira JR: Inverse multi-objective combinatorial optimization. Discrete Applied Mathematics 161(16), 2764–2771 (2013). DOI 10.1016/j.dam.2013.04.024.
  19. Schaefer AJ: Inverse integer programming. Optimization Letters 3(4), 483–489 (2009). DOI 10.1007/s11590-009-0131-z.
  20. Steuer RE: Multiple Criteria Optimization: Theory, Computation, and Applications. Wiley (1986).
  21. Steuer RE, Na P: Multiple criteria decision making combined with finance: A categorized bibliographic study. European Journal of Operational Research 150(3), 496–515 (2003). DOI 10.1016/S0377-2217(02)00774-9.
  22. Zhang J, Liu Z: Calculating some inverse linear programming problems. Journal of Computational and Applied Mathematics 72(2), 261–273 (1996). DOI 10.1016/0377-0427(95)00277-4.
  23. Zhang J, Liu Z: A further study on inverse linear programming problems. Journal of Computational and Applied Mathematics 106(2), 345–359 (1999). DOI 10.1016/S0377-0427(99)00080-1.
