A Space Decomposition-Based Deterministic Algorithm for Solving Linear Optimization Problems
"> Figure 1
<p>Possibilities for two constraints combined to limit the objective function of two-dimensional problems. (<b>a</b>): Two constraints blocking the objective from their upper regions (blue areas). The objective function is not bounded. (<b>b</b>): Two constraints blocking the objective from their lower regions (red areas). The objective function is not bounded. (<b>c</b>): A constraint blocks the objective from entering its upper region (blue area), and another constraint blocks the objective from entering its lower region (red area). The objective function is bounded. The figure also shows the closest points for several constraints and their corresponding coordinates <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>c</mi> <msub> <mi>p</mi> <mrow> <mi>R</mi> <mi>i</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>c</mi> <msub> <mi>p</mi> <mrow> <mi>R</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> over the axes <span class="html-italic">i</span> and <span class="html-italic">g</span>.</p> "> Figure 2
<p>Identifying upper and lower-constraints for the 2D-projected problems. (<b>a</b>): A constraint projected of on the principal plane <span class="html-italic">ig</span> is a 2D-upper-constraint if it is in the range of angles between zero and <math display="inline"><semantics> <mrow> <mfrac bevelled="true"> <mi>π</mi> <mn>2</mn> </mfrac> </mrow> </semantics></math> radians with respect to the objective-function’s vector. (<b>b</b>): For each 2D-upper-constraint identified, the projection on the principal plane <span class="html-italic">ig</span> is a 2D-lower-constraint if the projected angle is within the range from zero to −<span class="html-italic">π</span> radians respect to 2D-upper-constraint’s normal vector.</p> "> Figure 3
<p>Graphic representation of the elements of a generic two-dimensional linear optimization problem. The shaded areas indicate the feasible region for each constraint. (<b>a</b>) <span class="html-italic">m</span> constraints defining a multidimensional feasible space (<b>b</b>) constraints 1 to <span class="html-italic">k</span> representing inward constraints. (<b>c</b>) constraints <span class="html-italic">k</span> + 1 to <span class="html-italic">m</span> representing outward constraints.</p> "> Figure 4
<p>Moving the extreme point in the vicinity of the intersection of the objective function vector and constraint <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>u</mi> </msub> </mrow> </semantics></math>. (<b>a</b>) Shows a perspective of the constraint <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>u</mi> </msub> </mrow> </semantics></math>, and the direction of the closest point <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">x</mi> <mi mathvariant="bold-italic">c</mi> <mi mathvariant="bold-italic">p</mi> </mrow> </semantics></math>. The direction of movements, represented by the blue arrow, are limited to the surface of generic constraint <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>u</mi> </msub> </mrow> </semantics></math> and only varying values for dimension <span class="html-italic">i</span> and the dominant dimension <span class="html-italic">g</span>. Thus, these movements are parallel to plane <span class="html-italic">ig</span>. The violet arrow represents these movements projected on the plane <span class="html-italic">ig</span>. (<b>b</b>) Shows these movements’ projections and the projection factor <math display="inline"><semantics> <mrow> <mi>P</mi> <msub> <mi>F</mi> <mi>u</mi> </msub> </mrow> </semantics></math>.</p> "> Figure 5
<p>Identifying possibly active upper-constraints. Shaded (upper-constraints) areas indicate unfeasible regions. The white area represents the region not restricted by upper-constraints. Constraints <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>u</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mrow> <mi>u</mi> <mi>h</mi> </mrow> </msub> </mrow> </semantics></math> both can be the active upper-constraint because they cut within the quadrant <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>g</mi> </msub> </mrow> </semantics></math> > 0 and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>i</mi> </msub> </mrow> </semantics></math> > 0. Constraint <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>k</mi> </msub> </mrow> </semantics></math> is a redundant constraint in this context, since it does not cut the shortest-hypotenuse constraint <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mrow> <mi>u</mi> <mi>h</mi> </mrow> </msub> </mrow> </semantics></math> within the quadrant <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>g</mi> </msub> </mrow> </semantics></math> > 0 and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>i</mi> </msub> </mrow> </semantics></math> > 0.</p> "> Figure 6
<p>Identifying the active lower-constraint. (<b>a</b>) The criterion to identify the active lower-constraint is the value <math display="inline"><semantics> <mrow> <mi>w</mi> <mi>C</mi> <mi>r</mi> <mi>i</mi> <mi>t</mi> <mi>e</mi> <mi>r</mi> <mi>i</mi> <mi>o</mi> <mi>n</mi> </mrow> </semantics></math> of dim <span class="html-italic">g</span> at the intersection of the projected point-movement around the lower-constraint closest point and axis <span class="html-italic">g</span>. The smallest value <math display="inline"><semantics> <mrow> <mi>w</mi> <mi>C</mi> <mi>r</mi> <mi>i</mi> <mi>t</mi> <mi>e</mi> <mi>r</mi> <mi>i</mi> <mi>o</mi> <mi>n</mi> </mrow> </semantics></math> is an indication of the constraint to be selected as active. (<b>b</b>) For lower-constraint passing over the origin, the value <math display="inline"><semantics> <mrow> <mi>w</mi> <mi>C</mi> <mi>r</mi> <mi>i</mi> <mi>t</mi> <mi>e</mi> <mi>r</mi> <mi>i</mi> <mi>o</mi> <mi>n</mi> </mrow> </semantics></math> is considered equal to the value of dim <span class="html-italic">g</span> at the intersection of the upper-constraint and axis <span class="html-italic">g</span>.</p> "> Figure A1
<p>Spatial representation of Problem A.1. (<b>Left</b>) Objective function’s normal vector. (<b>Right</b>) Represents the eight constraint boundaries. The planes representing these constraint boundaries are infinite. They are shown truncated to allow the viewing of hidden objects.</p> "> Figure A2
<p>Constraints R0, R3, R4 and R5 projected over the principal planes to work as upper and lower constraints for two-dimensional problems. Projected upper-constraints are blue, and projected lower-constraints are red. Bright red is used for constraint <span class="html-italic">R</span>0 projected (<span class="html-italic">wCritDistG</span> = 5.174) over plane <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <msub> <mi>x</mi> <mn>2</mn> </msub> </mrow> </semantics></math> to indicate it has priority over projected constraint <span class="html-italic">R</span>3 projected (<span class="html-italic">wCritDistG</span> = 8) shown with dark red.</p> ">
Abstract
1. Introduction
2. The Method
2.1. General Approach and Definitions
2.2. The Dominant Dimension
2.3. Problem Decomposition
2.4. Constraint Closest Point
2.5. Upper and Lower Constraints
2.6. Inward and Outward Constraints
2.7. Principal Planes and Constraint Projections
2.7.1. Identifying the 2D Projected Problem Active Upper-Constraints
2.7.2. Identifying the 2D Projected Problem Active Lower-Constraints
2.8. Selecting Active Constraints and Solving the Problem
3. The Algorithm
3.1. Define and Dimension Set Variables and Arrays
- 3.1.1. Check dimensional coherence. Define variables and arrays. Allocating values for the matrices and vectors describing the problem, namely the objective function's n coefficients, the m × n constraint coefficients, the m resources, and the m constraint operator-comparators (greater than or equal, less than or equal), takes m·n + n + 2m allocation steps. (A short sketch of steps 3.1.1 to 3.1.3 is given after this list.)
- 3.1.2. Determine the dominant dimension g, i.e., the fastest-growing dimension. Identifying the fastest-growing dimension g is performed by applying Criterion (2) to each dimensional component of each constraint; thus, Criterion (2) is evaluated m·n times.
- 3.1.3. Determine the coordinate xcp_i for each dimension of the closest point for all constraints j. In this segment, the code first computes the shortest distance from the origin to each of the m constraints; then the coordinate value for each of the n dimensions is determined, so the process requires on the order of m·n steps.
- 3.1.4. Determine the objective function's angles and the on-principal-planes-projected constraint angles.
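To make steps 3.1.1 to 3.1.3 concrete, the sketch below sets up placeholder problem data and computes each constraint's closest point, assuming the closest point is the orthogonal projection of the origin onto the constraint boundary, which matches the "shortest distance from the origin" wording of step 3.1.3. The values of c, A and b and the rule used for the dominant dimension g are illustrative stand-ins; they are not Problem A.1's data nor the paper's Criterion (2).

```python
import numpy as np

# Placeholder data for  max c·x  subject to  A x <= b  (step 3.1.1).
# These values are illustrative only; they are not the coefficients of Problem A.1.
c = np.array([2.0, 5.0, 2.0])                  # objective-function coefficients
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])                # constraint coefficients, one row per constraint Rj
b = np.array([4.0, 8.0, 5.2])                  # resources (right-hand sides)
comparators = np.array(["<=", "<=", "<="])     # constraint operator-comparators

# Step 3.1.2 (placeholder only): Criterion (2) is not reproduced in this text, so the
# dominant dimension g is taken here as the index of the largest objective coefficient.
g = int(np.argmax(c))

# Step 3.1.3: shortest distance from the origin to each constraint boundary A[j]·x = b[j],
# and the coordinates xcp_i of the corresponding closest point, assuming "closest point"
# means the orthogonal projection of the origin onto that boundary.
norms = np.linalg.norm(A, axis=1)
dist = np.abs(b) / norms                       # shortest distance per constraint
xcp = (b / norms**2)[:, None] * A              # closest-point coordinates, one row per Rj

print("g =", g)
print("distances:", dist)
print("closest points:\n", xcp)
```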
3.2. Study the 2D-Projected Problem’s Upper-Constraints
- 3.2.1. Recognize upper-constraints for all projections over the principal planes. Apply Equation (3). The variable UpperBoundsDIMg[j, i] is set to true or false for each constraint j and for each dimension i except the dominant dimension g. The computational complexity of this segment is m·(n − 1).
- 3.2.2. Identify the upper-constraint with the shortest hypotenuse for each principal-plane projection. Apply Equation (7). 3.2.2.a. Compute the projection factor for upper-constraints (Equation (6)). In this segment the hypotenuse length is determined for all constraint projections having a value UpperBoundsDIMg[j, i] = true. There are n − 1 constraint projections for each constraint, so the worst-case scenario leads to m·(n − 1) steps. (See the sketch after this list.)
- 3.2.3. Build an array with the possible active upper-constraints in each projection over the principal planes. Rank upper-constraints according to the constraint gradient criterion.
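The sketch below outlines steps 3.2.1 and 3.2.2. Because Equations (3), (6) and (7) are not reproduced in this text, the angle test follows a plausible reading of Figure 2a (the projected constraint normal lies within π/2 of the projected objective direction), and the "hypotenuse" is assumed to be the segment of the projected constraint boundary between its intercepts with axes i and g; both are assumptions, not the paper's exact formulas.

```python
import numpy as np

def study_upper_constraints(c, A, b, g):
    """Flag 2D-projected upper-constraints (step 3.2.1) and compute a hypotenuse-style
    length used to pick a shortest-hypotenuse candidate per principal plane (step 3.2.2).
    The tests below are stand-ins: Equations (3), (6) and (7) are not reproduced here."""
    m, n = A.shape
    upper = np.zeros((m, n), dtype=bool)        # UpperBoundsDIMg[j, i]
    hyp = np.full((m, n), np.inf)               # projected hypotenuse length per (j, i)
    for i in range(n):
        if i == g:                              # the dominant dimension has no own plane
            continue
        for j in range(m):
            a_i, a_g = A[j, i], A[j, g]
            # Stand-in angle test (Figure 2a): the projected constraint normal (a_i, a_g)
            # lies within [0, pi/2] of the projected objective direction (c_i, c_g).
            if a_g > 0 and a_i * c[i] + a_g * c[g] >= 0:
                upper[j, i] = True
                # Stand-in hypotenuse: segment of the projected constraint boundary between
                # its intercepts with axes i and g (infinite if parallel to an axis).
                if a_i != 0:
                    hyp[j, i] = np.hypot(b[j] / a_i, b[j] / a_g)
    # Shortest-hypotenuse upper-constraint candidate for each principal plane i.
    shortest = {i: int(np.argmin(hyp[:, i]))
                for i in range(n) if i != g and np.isfinite(hyp[:, i]).any()}
    return upper, hyp, shortest
```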
3.3. Study the 2D-Projected Problem’s Lower-Constraints
3.4. Select Active Constraints
3.5. Find the Extreme Vertex’s Coordinates by Intersecting Active Constraints
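Once one active constraint has been selected per dimension, the extreme vertex is obtained by intersecting their boundaries, i.e., by solving an n × n linear system. A minimal sketch follows; the index list `active` is hypothetical.

```python
import numpy as np

def extreme_vertex(A, b, active):
    """Intersect the boundaries of the n selected active constraints:
    solve A[active] x = b[active] for the extreme vertex's coordinates."""
    return np.linalg.solve(A[active], b[active])

# Hypothetical usage, assuming three active constraints were selected for a 3D problem:
# x_star = extreme_vertex(A, b, active=[0, 2, 4])
```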
4. Computational Study
5. Discussion
- A better understanding of the effect each constraint exerts on the problem's feasible region, obtained by classifying the role of each constraint.
- Returning lists of active, non-active and redundant constraints.
- Ranking non-active constraints according to their distance to the feasible extreme vertex found. This ranking provides a useful view of the sequence of conditions that bound a real situation beyond the extreme point found, as sketched below.
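The ranking mentioned in the last item can be computed with the standard point-to-hyperplane distance. The sketch below assumes x_star is the extreme vertex already found and inactive is a list of non-active constraint indices; both names are illustrative.

```python
import numpy as np

def rank_inactive_constraints(A, b, x_star, inactive):
    """Rank non-active constraints Rj by their distance to the extreme vertex x_star.
    The distance from x_star to the hyperplane A[j]·x = b[j] is |b[j] - A[j]·x_star| / ||A[j]||."""
    rows = A[inactive]
    dist = np.abs(b[inactive] - rows @ x_star) / np.linalg.norm(rows, axis=1)
    order = np.argsort(dist)                     # nearest bounding condition first
    return [(int(np.asarray(inactive)[k]), float(dist[k])) for k in order]
```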
6. Conclusions
Funding
Conflicts of Interest
Appendix A
Appendix A.1. Example 1: Problem A.1: 3 Dims, 8 Constraints. Cradle
max z =
subject to:
R1  ≤  4.0,
R2  ≤  8.0,
R3  ≤  5.2,
R4  ≤  8.0,
R5  ≤ 10.0,
R6  ≤  0,
R7  ≤  0,
R8  ≤  0.
Appendix A.1.1. Define and Dimension Set Variables and Arrays
- A.1.1.1. Check dimensional coherence. Define variables and arrays.
- A.1.1.2. Determine the dominant dimension g, i.e., the fastest-growing dimension. Applying Equation (2), the dominant dimension is the second one; thus g = 2.
- A.1.1.3 and A.1.1.4. Determine the coordinate xcp_i for each dimension of the closest point for all constraints j. Determine the objective function's angles and the on-principal-planes-projected constraint angles. The results are shown in the table below.
Closest point coordinates (left columns) and projection angles to the axes, in radians (right columns).

| Constraint Rj | Distance to Origin (by Objective-Function Direction) | xcp0 | xcp1 | xcp2 | Angle, Dim 0 | Angle, Dim 1 | Angle, Dim 2 |
|---|---|---|---|---|---|---|---|
| R0 | 6.5652 | 2.296 | 5.714 | 2.296 | π/4 | 0 | 0 |
| R1 | 22.9783 | 8 | 20 | 8 | π/2 | 0 | ∞ |
| R2 | 14.9359 | 5.2 | 13 | 5.2 | π/2 | 0 | ∞ |
| R3 | 9.1913 | 3.2 | 8 | 3.2 | 0 | 0 | 0 |
| R4 | 28.7228 | 10 | 25 | 10 | ∞ | 0 | π/2 |
| R5 | NR | NR | NR | NR | −π/2 | 0 | ∞ |
| R6 | NR | NR | NR | NR | −π | 0 | −π |
| R7 | NR | NR | NR | NR | ∞ | 0 | −π/2 |
Appendix A.1.2. Study the 2D-Projected Problem’s Upper-Constraints and Lower-Constraints
Over the principal planes: projected extreme-point-movement hypotenuse length (left columns), and true if the projected upper-constraint limits dim i, where stack position (1 = top) indicates higher probability (right columns).

| Constraint Rj | Hypotenuse Length, Dim 0 | Hypotenuse Length, Dim 1 (*) | Hypotenuse Length, Dim 2 | Limits Dim 0 | Limits Dim 1 (*) | Limits Dim 2 |
|---|---|---|---|---|---|---|
| R0 | 11.3137 | NR | DNUC | true (1) | NR | false |
| R1 | ∞ | NR | DNUC | false | NR | false |
| R2 | ∞ | NR | DNUC | true (2) | NR | false |
| R3 | DNUC | NR | DNUC | false | NR | false |
| R4 | DNUC | NR | ∞ | false | NR | true (1) |
| R5 | DNUC | NR | DNUC | false | NR | false |
| R6 | DNUC | NR | DNUC | false | NR | false |
| R7 | DNUC | NR | DNUC | false | NR | false |
Appendix A.1.3. Study the 2D-Projected Problem’s Lower-Constraints
Over the principal planes: criterion distance wCritDistG (left columns), and true if the projected lower-constraint limits dim i (right columns).

| Constraint Rj | wCritDistG, Dim 0 | wCritDistG, Dim 1 (*) | wCritDistG, Dim 2 | Limits Dim 0 | Limits Dim 1 (*) | Limits Dim 2 |
|---|---|---|---|---|---|---|
| R0 | DNWC | NR | 5.174 | false | NR | true |
| R1 | DNWC | NR | DNWC | false | NR | false |
| R2 | DNWC | NR | DNWC | false | NR | false |
| R3 | 8 | NR | 8 | true | NR | true |
| R4 | DNWC | NR | DNWC | false | NR | false |
| R5 | 8 | NR | DNWC | true | NR | false |
| R6 | DNWC | NR | DNWC | false | NR | false |
| R7 | DNWC | NR | DNWC | false | NR | false |
Appendix A.1.4. Select Active Constraints
Appendix A.1.5. Find the Extreme Vertex’s Coordinates by Intersecting Active Constraints