Abstract
Secure energy transport is highly relevant to the basic infrastructure of today's society and economy. To satisfy increasing demands and to handle more diverse transport situations, operators of energy networks regularly expand the capacity of their networks by building new network elements, a task known as the expansion planning problem. A key constraint function in expansion planning problems is a nonlinear and nonconvex potential loss function. In order to improve the algorithmic performance of state-of-the-art MINLP solvers, this paper presents an algebraic description of the convex envelope of this function. Through a thorough computational study, we show that this tighter relaxation tremendously improves the performance of the MINLP solver SCIP on a large test set of practically relevant instances of the expansion planning problem. In particular, the results show that our contribution improves the performance of a development version of the solver by up to 58%.
1 Introduction
Expansion planning problems of energy networks (see Sect. 2) are becoming more relevant as well as more challenging due to increasing demands of modern society. The goal of this paper is to improve the algorithmic performance of an LP-based branch-and-bound MINLP solver for the expansion planning problem. In particular, we focus on building a tighter LP relaxation by exploiting the convex envelope of the main nonlinearity present in this problem. To this end, we compute the convex envelope and implement it in the MINLP solver SCIP.
The state of the art for solving nonconvex MINLPs to global optimality is spatial branch-and-bound. Within spatial branch-and-bound one of the most important ingredients is the construction of tight convex relaxations. For a nonconvex constraint \(\{ x \in D \, | \, f(x) \le 0 \}\), where \(D \subseteq \mathbb {R}^n\) is convex, the most common method for constructing a convex relaxation is to compute a convex underestimator, that is, a function \(g :D \rightarrow \mathbb {R}\) that satisfies \(g(x) \le f(x)\) for all \(x \in D\). Then, \(g(x) \le 0\) yields a convex relaxation of the constraint \(f(x) \le 0\).
The best possible convex underestimator of a nonconvex function f over the domain D is given by the convex envelope \(\text {vex}_D(f)\). For general functions, computing the convex envelope is a difficult task, but for specific functions it might be tractable. Note that constraints can also be of the form \(f(x) \ge 0\), in which case concave overestimators are required to build a convex relaxation. In particular, for equality constraints, both convex underestimators and concave overestimators are needed.
In this paper, we analytically derive the convex and concave envelope of the potential loss function \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) with \(\alpha > 1\). This is the only source of nonlinearity and nonconvexity of the expansion planning problems that we consider, see Sect. 2. Thus, the convex relaxations depend on the convexification of these constraints. As we will see in Sect. 4.1, the concave envelope of f can be deduced from the convex envelope of f, and thus, we concentrate on the convex envelope in the following. To the best of our knowledge, the convex envelope of this function is unknown in the literature.
Contributions
(i) We derive an explicit description of the convex and concave envelopes of f over a rectangular domain \(D = [\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ]\) with \(\smash [b]{\underline{x}}< 0 < \smash [t]{\overline{x}} \) and \(0< \smash [b]{\underline{y}} < \smash [t]{\overline{y}} \).
(ii) In an exhaustive computational study, we show that these tighter relaxations tremendously improve the performance of the MINLP solver SCIP on a large test set of practically relevant instances that stem from real-world applications. Our results show that the performance improves by up to 58% for hard instances.
Outline The rest of the paper is structured as follows: Sect. 2 introduces the expansion planning problem. In Sect. 3, we introduce preliminary theoretical results from convex analysis that we need to derive the convex envelope of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\). In Sect. 4, we present our main contribution, which is the derivation of a closed-form expression for the convex envelope of f. In Sect. 5, we compare the convex envelope with the standard factorable relaxation for the constraint function f followed by an extensive computational study in Sect. 6. We test the impact of the convex envelope over real-world expansion planning problem instances using the MINLP solver SCIP. To this end, we first study the impact on the dual bound in the root node and secondly the overall performance within the spatial branch-and-bound search. Finally, Sect. 7 provides conclusions.
2 Expansion Planning Problem
Transmission system operators of energy networks such as natural gas, hydrogen and water networks regularly face both increasing demand and more diverse transport situations. To deal with these challenges, they must expand the capacity of the existing networks without having to resort to the expensive options of setting up new pipeline corridors or replacing pipelines by larger ones. In practice, operators prefer to build new pipelines parallel to existing ones, a process referred to as “looping” in the terminology of the industry. It is significantly cheaper than, for example, building completely new pipelines, since existing rights of way can be used, which simplifies the regulatory and planning processes; moreover, the additional encroachment on the environment is rather moderate.
In the literature, two different strands have been considered for modeling the looping problem and related expansion planning problems: the split-pipe approach and the discrete approach. In the split-pipe approach, the entire length of the pipeline is split into several segments, each of which may have its own length and diameter (see, e.g., [9, 18, 41]); in the discrete approach, the optimal diameters are chosen from a set of commercially available diameters (see, e.g., [4, 7, 17]).
In this paper, we consider expansion planning based on the split-pipe approach. More precisely, we use the formulation of Model E for the looping problem from [21], since the authors show that it performed best among all studied model formulations. In the following, we briefly explain the split-pipe approach and state its model formulation for the loop expansion planning problem.
Split-pipe approach The starting point is the concept of the so-called equivalent diameter. It allows one to replace multiple parallel pipelines by a single pipeline without changing any physical properties of the network, see, for example, [4, 7, 19] for two parallel pipelines in gas and water networks. For a generalization of the equivalent diameter to more than two pipelines in parallel, we refer to [20]. A type of equivalent diameter also exists for the case of multiple serial pipelines with no intermediate source or sink. When modeling split-pipes, it suffices to consider only those equivalent diameters resulting from parallel pipelines (multiple loops) and serial pipelines (split-pipe setting) that are not dominated by others with lower costs. These nondominated points together with their nondominated convex combinations form the so-called efficient frontier. This means that all points on the efficient frontier represent the cost-minimal way of equipping a pipeline with a certain equivalent diameter. For more detailed information, we refer to Section 2 in [21].
Model formulation for the split-pipe approach Let \(\mathcal {G} = \left( \mathcal {V}, \mathcal {A} \right) \) be a directed and connected graph with node set \(\mathcal {V}\) and pipe set \(\mathcal {A}\). A balanced demand vector \(b \in \mathbb {R}^{\left| \mathcal {V}\right| }\) specifies nodal inflow and outflow values of the network. Here, we deal with a stationary model; this means that the overall network demand satisfies \(\sum _{v \in \mathcal {V}} b_{v} = 0\). The physical state of the network is described by flow variables \(x_a \in [\underline{x}_a, \overline{x}_a]\) for all arcs \(a = (v,w) \in \mathcal {A}\) and nonnegative potential variables \(\pi _v \in [\underline{\pi }_v, \overline{\pi }_v]\) at each node \(v \in \mathcal {V}\). The arc flows arise from a friction-induced potential difference between the arcs’ adjacent nodes. In stationary models, it is a common approach to represent this interdependency by virtue of the Weymouth equation, see [38], with \(\alpha = 2\) for gas network operations, and by virtue of Darcy–Weisbach with \(\alpha = 2\) or Hazen–Williams with \(\alpha = 1.852\) for water network operations, see [37], in any case taking the shape of Eq. (1a). Note that \(\text {sgn}(x_a)\) indicates the direction of the arc flow. Given that, in general, bi-directional arc flow is possible, we assume that \(\smash [b]{\underline{x}} < 0\) and \(\smash [t]{\overline{x}} > 0\). Here, for each pipe \(a \in \mathcal {A}\), positive variables \(y_a \in [\smash [b]{\underline{y}} _a, \smash [t]{\overline{y}} _a]\) and \(c_a \ge 0\) model the efficient frontier of the split-pipe approach described above, where \(y_a\) represents the physical impact of the expansion on the potential loss along arc a with corresponding costs \(c_{a}\), which can be assumed to be proportional to the pipe length \(L_a\). Lenz [20] shows that the best expansion candidates in the split-pipe setting are determined by the efficient frontier [Eq. (1b)], which is given by \(k_a\) nondominated points in the \((y_a,c_a)\)-system. If pipe a is not looped, then by construction \(y_a = \smash [b]{\underline{y}} _a\) with associated costs \(c_a = 0\); otherwise, \(y_a > \smash [b]{\underline{y}} _a\) and \(c_a > 0\). Finally, as in classical network flow problems, Eq. (1c) models the flow conservation at every network node. The complete model (1) then combines a cost-minimizing objective with constraints (1a)–(1c) and the variable bounds.
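A plausible sketch of model (1), based on the description above and on Model E in [21], is the following; the objective, the flow-balance sign convention in (1c), and the algebraic form of the efficient-frontier constraint (1b) are stated only schematically here and may differ in detail from [21]:
\[
\begin{aligned}
\min \;\; & \sum_{a \in \mathcal{A}} L_a \, c_a \\
\text{s.t.} \;\; & y_a \, \text{sgn}(x_a) \left| x_a \right|^{\alpha} = \pi_v - \pi_w && \text{for all } a = (v,w) \in \mathcal{A}, \qquad (1a)\\
& (y_a, c_a) \ \text{on the efficient frontier spanned by the } k_a \text{ nondominated points} && \text{for all } a \in \mathcal{A}, \qquad (1b)\\
& \sum_{a \in \delta^{\text{out}}(v)} x_a - \sum_{a \in \delta^{\text{in}}(v)} x_a = b_v && \text{for all } v \in \mathcal{V}, \qquad (1c)\\
& x_a \in [\underline{x}_a, \overline{x}_a], \;\; y_a \in [\underline{y}_a, \overline{y}_a], \;\; c_a \ge 0, \;\; \pi_v \in [\underline{\pi}_v, \overline{\pi}_v] .
\end{aligned}
\]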
In this paper, we compute the convex hull of the constraint (1a) by studying the convex and concave envelope of the nonlinear function \((x,y) \mapsto y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) and then investigating its impact on the performance of expansion planning problem instances.
Finally, let us mention that the nonlinear nature of the expansion planning problem stimulated a number of different model formulations and solution approaches. For example, [3] introduced the linear programming gradient (LPG) method, a two-stage heuristic that alternates between solving a linear program with fixed flows in order to obtain pipeline diameters and modifying the flow values on the basis of a sensitivity analysis. André et al. [4] tackle the discrete version of the problem heuristically in two stages, where a first step solves the continuous relaxation in order to identify pipes to be looped and a second step determines discrete-valued diameters for these selected pipes based on branch-and-bound. Bragalli et al. [7] utilize a continuous reformulation of the cost function to solve the model directly with an MINLP solver. Raghunathan [27] proposes a disjunctive program together with a convex relaxation and then solves it to global optimality. Humpola [15] formulates a model and solves it to global optimality by using convex reformulation techniques and specially tailored cuts. Borraz-Sánchez et al. [6] explicitly model forward and backward flow directions by using binary variables and present a solution approach based on a second-order cone relaxation.
3 Preliminaries on Convex Envelopes
In general, the derivation of convex and concave envelopes is a challenging task, depending on the nonlinear constraint function and on the domain of the variables involved, see, for example, [14]. For a lower semicontinuous function f over a compact and convex domain D, it can be shown that \(\min _{x \in D} f(x) = \min _{x \in D} \text {vex}_{D}[f](x)\); since minimizing a general nonconvex function f over D is \(\mathcal {NP}\)-hard while minimizing a known convex function is tractable, computing the convex envelope is in general \(\mathcal {NP}\)-hard as well. Unless \(\mathcal {P} = \mathcal {NP}\), no efficient method exists to compute convex envelopes of general nonlinear functions.
In order to build tight convex relaxations, convex envelopes have been widely studied for particular functions or classes of functions. For example, [25] constructs a linear relaxation over a box domain for the bilinear term \(x \, y\), which is proven to be the convex envelope by [2]. In the last decade, several approaches arose to convexify the bilinear term \(x \, y\) over special domains, see, e.g., [13, 23, 24, 26, 30]. The convex envelope of the fractional term y/x has been studied, e.g., in [33, 40].
Moreover, [33] computes the convex envelope of the function \(f(x,y) = g(y) \, x^2\) over a rectangular domain \(D \subseteq \mathbb {R}^2 \), where \(f :D \rightarrow \mathbb {R}\) is convex in x and concave in y. Even though this function is close to our constraint function \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\), this approach is not applicable in our setting, given that our function involves the nonconvex term \(\text {sgn}(x) \left| x\right| ^{\alpha }\) instead of \(x^2\).
Note that \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) is twice differentiable only for \(\alpha > 2\) and thus it is not possible to compute the convex envelope using techniques based on the Hessian, as done, for example, for \((n-1)\)-convex functions in [16]. In principle, when \(f \in C^2\) (i.e., \(\alpha > 2\)), we could generate valid tangential hyperplanes of function f to separate violated LP solutions without explicitly calculating the convex envelope, as done in [5] for twice continuously differentiable functions with fixed convexity behavior over a box. However, our analysis is valid for any \(\alpha > 1\).
In the remainder of this section, we introduce some mathematical notation and necessary theoretical results that are required to derive \(\text {vex}_D(f)\). Even though our function is continuous, many of the following results hold for lower semicontinuous functions, which we define as follows.
Definition 3.1
Let \(f :D \rightarrow \mathbb {R}\) with \(D \subseteq \mathbb {R}^n\). The epigraph of f is
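the set
\[
\text{epi}_D f := \{\, (x, \mu) \in D \times \mathbb{R} \;:\; \mu \ge f(x) \,\}.
\]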
Definition 3.2
A function \(f :D \rightarrow \mathbb {R}\) is lower semicontinuous (l.s.c.) if its epigraph is closed.
We continue by formally defining the convex envelope of a function.
Definition 3.3
The convex envelope of a function \(f :D \rightarrow \mathbb {R}\) over a convex set \(D \subseteq \mathbb {R}^n\) is given by the tightest convex underestimator \(\eta :D \rightarrow \mathbb {R}\) of f, and the concave envelope of f over D is given by the tightest concave overestimator \(\eta :D \rightarrow \mathbb {R}\) of f; both are defined pointwise.
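In standard notation, for \(x \in D\),
\[
\text{vex}_D[f](x) = \sup \{\, \eta(x) \,:\, \eta :D \rightarrow \mathbb{R} \ \text{convex},\ \eta \le f \ \text{on } D \,\}, \qquad
\text{cave}_D[f](x) = \inf \{\, \eta(x) \,:\, \eta :D \rightarrow \mathbb{R} \ \text{concave},\ \eta \ge f \ \text{on } D \,\}.
\]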
A geometrical characterization of the convex envelope is as follows.
Theorem 3.1
[28, Theorem 5.3] Let E be any convex set in \(\mathbb {R}^{n+1}\), and let
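g be given by (presumably, as in the cited theorem)
\[
g(x) := \inf \{\, \mu \in \mathbb{R} \;:\; (x, \mu) \in E \,\}.
\]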
Then, \(g :\mathbb {R}^n \rightarrow \mathbb {R}\) is a convex function.
The convex envelope of a l.s.c. function f corresponds to the function g, obtained by applying Theorem 3.1 to the convex set \(\text {conv}(\text {epi}_D f)\). Therefore, it follows
see also [28]. In order to determine the convex envelope of a l.s.c. function f at point \(x \in D\), its representation in (2) leads to the following minimization problem
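which presumably takes the form
\[
\text{vex}_D[f](x) = \min \Big\{ \sum_{k=1}^{n+1} \lambda_k \, f(x_k) \;:\; \sum_{k=1}^{n+1} \lambda_k \, x_k = x,\ \sum_{k=1}^{n+1} \lambda_k = 1,\ \lambda_k \ge 0,\ x_k \in D,\ k = 1, \dots, n+1 \Big\}. \qquad (3)
\]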
Note that the \(n+1\) points are justified by Carathéodory's theorem [28, Theorem 17.1]. Problem (3) is nonconvex and nonlinear, as both the multipliers \(\lambda _k\) and the points \(x_k\) are unknown, so that products of variables appear. However, it is possible to reduce the degree of nonlinearity and nonconvexity of Problem (3) by studying the convex set \(\text {conv}(\text {epi}_D f)\) in more detail. In particular, we reduce the domain D by determining a subset \(A \subseteq D\) such that \(\text {conv}(\text {epi}_D f) = \text {conv}(\text {epi}_A f)\) holds.
Definition 3.4
Let D be a compact and convex subset of \(\mathbb {R}^n\), and let \(f :D \rightarrow \mathbb {R}\) be a l.s.c. function.
The generating set of the convex envelope of f is defined by
Given that a closed convex set containing no lines is characterized by the set of its extreme points and extreme directions [28, Theorem 18.5], we have the following result, see also [34].
Theorem 3.2
Let D be a compact and convex subset of \(\mathbb {R}^n\), \(f :D \rightarrow \mathbb {R}\) be a l.s.c. function, and \(A \subseteq D\). Then, \(\text {conv}\, (\text {epi}_D f) = \text {conv}\, (\text {epi}_A f)\) if and only if \(\mathcal {G}_D(f) \subseteq A\).
Even though there exists no general formula to compute \(\mathcal {G}_D(f)\) of a nonconvex function f, [34] provides a condition that determines when a point \(x \in D\) is not in \(\mathcal {G}_D(f)\). For this, we need the relative interior of a convex set D, \({{\,\mathrm{ri}\,}}(D) := \{ x \in D : \forall y \in D \; \exists \, \lambda > 1 : \lambda \, x + (1 - \lambda ) \, y \in D \} \).
Corollary 3.1
[34, Corollary 5] Let \(f :D \rightarrow \mathbb {R}\) be l.s.c. and \(D \subseteq \mathbb {R}^n\) compact and convex. If for \(x \in D\) there exists a line segment \(l \subseteq D\) such that \(x \in {{\,\mathrm{ri}\,}}(l)\) and f restricted to l is concave, then \(x \notin \mathcal {G}_D(f)\).
4 Computing the Convex Envelope of \(f(x,y) = y \, \mathrm{sgn}(x) \left| x\right| ^{\alpha }\)
In this section, we show how to compute the convex and concave envelope of \(f :D \subseteq \mathbb {R}^2 \rightarrow \mathbb {R}\), with \(f :(x,y) \mapsto y \,\text {sgn}(x) \left| x\right| ^{\alpha }\), where \(\alpha > 1\), over the box \(D = [\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ] \subseteq \mathbb {R}^2\). In the remainder of this paper, we assume \(\smash [b]{\underline{y}} > 0\), \(\smash [b]{\underline{x}} < 0\) and \(\smash [t]{\overline{x}} >0\), as it typically occurs in practical applications, see Sect. 2. For a visualization of the nonconvex function f with \(\alpha = 2\), we refer to Fig. 5.
4.1 Preliminaries about the Envelopes of \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\)
Problem (3) for \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) over the domain \(D = [\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ] \subseteq \mathbb {R}^2\) reads as
Note that the above problem has nine variables (\(x_k, y_k, \lambda _k\) for \(1 \le k \le 3\)). In the next section, we show how to reduce this problem to a one-dimensional problem.
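As an illustration only (this is not the paper's method, and all helper names below are ours), the nine-variable problem can be solved approximately with a multistart local solver; the following sketch uses SciPy's SLSQP and assumes \(\alpha = 2\) by default.

```python
# Numerical sketch: approximate vex_D[f](x, y) for f(x, y) = y * sgn(x) * |x|^alpha
# by solving the nine-variable problem above with a multistart local solver.
# This is only an illustration; it does not reproduce the closed form derived later.
import numpy as np
from scipy.optimize import minimize


def f(x, y, alpha=2.0):
    return y * np.sign(x) * np.abs(x) ** alpha


def approx_vex(x, y, xl, xu, yl, yu, alpha=2.0, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    best = np.inf
    # decision vector z = (x1, y1, x2, y2, x3, y3, lam1, lam2, lam3)
    obj = lambda z: sum(z[6 + k] * f(z[2 * k], z[2 * k + 1], alpha) for k in range(3))
    cons = [
        {"type": "eq", "fun": lambda z: sum(z[6 + k] * z[2 * k] for k in range(3)) - x},
        {"type": "eq", "fun": lambda z: sum(z[6 + k] * z[2 * k + 1] for k in range(3)) - y},
        {"type": "eq", "fun": lambda z: z[6:].sum() - 1.0},
    ]
    bounds = [(xl, xu), (yl, yu)] * 3 + [(0.0, 1.0)] * 3
    for _ in range(n_starts):
        z0 = np.concatenate(
            [rng.uniform([xl, yl] * 3, [xu, yu] * 3), rng.dirichlet(np.ones(3))]
        )
        res = minimize(obj, z0, bounds=bounds, constraints=cons, method="SLSQP")
        if res.success and res.fun < best:
            best = res.fun
    return best


# Example: approx_vex(1.0, 0.5, -2.0, 2.0, 0.1, 1.0) should be close to the value
# of the convex envelope at (1.0, 0.5) on D = [-2, 2] x [0.1, 1].
```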
Here, the structure of f allows us to deduce both envelopes from each other, see Fig. 1: the function \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) is odd in x when y is fixed. This allows us to retrieve the concave envelope from the convex envelope, and thus, from now on, we restrict ourselves to the derivation of the convex envelope.
Proposition 4.1
Let \(f :\mathbb {R} \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ] \rightarrow \mathbb {R}\) be an odd function in x, i.e., \(f(x,y) = -f(-x,y)\) and let \(D := [\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ]\). Let \(D_g := [-\smash [t]{\overline{x}},-\smash [b]{\underline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ]\) and \(g :D_g \rightarrow \mathbb {R}\) be given by \(g(x,y) := -f(-x,y)\). Then, the concave envelope of f over D is given by \(\text {cave}_{D}[f](x,y) = -\text {vex}_{D_g}[g](-x,y)\).
Proof
By definition of g and \(\text {vex}_{D_g}[g](x,y)\), we have that
Thus,
\(\square \)
4.2 Problem Reduction by Using the Generating Set
To simplify Problem (3), we use Corollary 3.1 to find a set \(A \subseteq D\) such that \(\mathcal {G}_D(f) \subseteq A\) for \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\).
First, notice that after fixing x to a value in \([\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ]\), the function f(x, y) is linear and thus concave over the line segment \(l = \left\{ (x,y) \, | \, y \in [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ] \right\} \). Then, Corollary 3.1 implies that all points in \({{\,\mathrm{ri}\,}}(l) = \{ (x,y) \, | \, y \in (\smash [b]{\underline{y}}, \smash [t]{\overline{y}}) \}\) are not in \(\mathcal {G}_D(f)\). Thus,
Let \(D_{\smash [b]{\underline{y}}} := \{(x,\smash [b]{\underline{y}}) \, | \, \smash [b]{\underline{x}} \le x \le \smash [t]{\overline{x}} \}\) and \(D_{\smash [t]{\overline{y}}} := \{(x,\smash [t]{\overline{y}}) \, | \, \smash [b]{\underline{x}} \le x \le \smash [t]{\overline{x}} \}\). Given that \(\mathcal {G}_D(f) \subset D_{\smash [b]{\underline{y}}} \cup D_{\smash [t]{\overline{y}}}\), Theorem 3.2 allows us to express the convex hull of \(\text {epi}_D f\) as
The convex hulls \(\text {conv}(\text {epi}_{D_{\smash [b]{\underline{y}}}} f)\) and \(\text {conv}(\text {epi}_{D_{\smash [t]{\overline{y}}}} f)\) correspond to the epigraphs of the convex envelopes of the one-dimensional functions \(f_{\smash [b]{\underline{y}}} (x) := f(x,\smash [b]{\underline{y}})\) and \(f_{\smash [t]{\overline{y}}} (x) := f(x,\smash [t]{\overline{y}})\). The following proposition introduces these one-dimensional convex envelopes, see Fig. 2 for an illustration.
Proposition 4.2
[35, Section 7.5.2] The convex envelope of the function \(f_y :[\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \rightarrow \mathbb {R}\), \(f_y (x) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\), where \(\smash [b]{\underline{x}}< 0 < \smash [t]{\overline{x}} \), \(\alpha > 1\), and \(y > 0\), is given by \(\varphi _y :[\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \rightarrow \mathbb {R}\), with
where \(\gamma = \min \{\smash [t]{\overline{x}}, \beta \smash [b]{\underline{x}} \}\) and \(\beta \) is the unique negative root of \((\alpha -1) (-\beta )^{\alpha } + \alpha (-\beta )^{\alpha -1} -1\).
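For concreteness, \(\beta \) can be computed numerically from this defining equation; the following minimal sketch (illustrative code, the function name is ours) brackets the unique negative root.

```python
# Compute the unique negative root beta of (alpha-1)(-beta)^alpha + alpha(-beta)^(alpha-1) - 1
# for a given alpha > 1, using SciPy's Brent method on a bracketing interval.
from scipy.optimize import brentq


def beta_root(alpha):
    g = lambda b: (alpha - 1) * (-b) ** alpha + alpha * (-b) ** (alpha - 1) - 1
    # g(-1) = 2*alpha - 2 > 0 and g(b) -> -1 as b -> 0-, so the root lies in (-1, 0)
    return brentq(g, -1.0, -1e-12)


print(beta_root(2.0))    # approx -0.4142, i.e., 1 - sqrt(2) (Weymouth exponent)
print(beta_root(1.852))  # Hazen-Williams exponent
```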
Remark 4.1
The function \((x,y) \mapsto \varphi _y(x)\) is not the convex envelope of f(x, y) as can be seen from Fig. 6 (middle). This function is not even convex! \(\square \)
Given that \(\text {conv}(\text {epi}_{D_{\smash [b]{\underline{y}}}} f) = \text {epi}_{D_{\smash [b]{\underline{y}}}} \varphi _{\smash [b]{\underline{y}}}\) and \(\text {conv}(\text {epi}_{D_{\smash [t]{\overline{y}}}} f) = \text {epi}_{D_{\smash [t]{\overline{y}}}} \varphi _{\smash [t]{\overline{y}}}\), we can further rewrite (4) to
Therefore, we have
Equivalently,
Notice that \(\text {vex}_D[f](x,\smash [b]{\underline{y}}) = \varphi _{\smash [b]{\underline{y}}}(x)\) and \(\text {vex}_D[f](x,\smash [t]{\overline{y}}) = \varphi _{\smash [t]{\overline{y}}}(x)\). Hence, we just need to determine the convex envelope of f for \(y \in (\smash [b]{\underline{y}}, \smash [t]{\overline{y}})\). In the following, we assume that \(y \in (\smash [b]{\underline{y}}, \smash [t]{\overline{y}})\).
4.3 Reduction to a One-Dimensional Optimization Problem
We can further reduce the optimization Problem (6) to a one-dimensional problem.
Firstly, (6b) enables us to fix the multiplier \(\lambda \). Given that \(\lambda \) is fixed, we can use (6a) to define \(x_2\) in terms of \(x_1\). For this, we define the functions \(t_{x,y}, T_{x,y} :[\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \rightarrow \mathbb {R}\).
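In view of the convex-combination constraints (6a) and (6b), and consistent with the expression for \(T_{x,y}(\smash [b]{\underline{x}})\) used later in the proof of Proposition 5.1, these quantities presumably read
\[
\lambda_y = \frac{y - \underline{y}}{\overline{y} - \underline{y}}, \qquad
t_{x,y}(x_1) = \frac{x - (1 - \lambda_y)\, x_1}{\lambda_y}, \qquad
T_{x,y}(x_2) = \frac{x - \lambda_y\, x_2}{1 - \lambda_y},
\]
so that \(x = (1 - \lambda_y)\, x_1 + \lambda_y\, x_2\) and \(y = (1 - \lambda_y)\, \underline{y} + \lambda_y\, \overline{y} \).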
These functions satisfy \(x_2 = t_{x,y}(x_1)\) and \(x_1 = T_{x,y}(x_2)\) for every feasible solution \(x_1, x_2\) of Problem (6). Both functions are affine linear, strictly decreasing and inverse to each other. For an interpretation of \(T_{x,y}\), see Fig. 3. Note that \(t_{x,y}\) and \(T_{x,y}\) are well defined, since we assume \(\lambda _y \in (0,1)\), as described above.
Projecting out \(x_2\) in Problem (6) yields a one-dimensional problem, which we can express with the help of \(t_{x,y}\) and \(T_{x,y}\) as
The expressions for the bounds come from combining both bounds on \(x_1\), namely \(\smash [b]{\underline{x}} \le x_1 \le \smash [t]{\overline{x}} \) and \(T_{x,y}(\smash [t]{\overline{x}}) \le x_1 \le T_{x,y}(\smash [b]{\underline{x}})\). The latter bounds come from \(\smash [b]{\underline{x}} \le x_2 \le \smash [t]{\overline{x}} \), which after projecting out \(x_2\) becomes \(\underline{x} \le t_{x,y}(x_1) \le \overline{x}\). Applying the decreasing function \(T_{x,y}\) to this inequality, which reverses the inequality signs, yields \(T_{x,y}(\smash [t]{\overline{x}}) \le x_1 \le T_{x,y}(\smash [b]{\underline{x}})\).
Let \(\smash [b]{\underline{x}}_1:= \max \left\{ \smash [b]{\underline{x}}, T_{x,y}(\smash [t]{\overline{x}}) \right\} \), \(\smash [t]{\overline{x}}_1:= \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}}) \right\} \), and let \(F_{x,y} :[\smash [b]{\underline{x}}_1, \smash [t]{\overline{x}}_1 ] \rightarrow \mathbb {R}\) be
Then, Problem (9) is equivalent to
Remark 4.2
Problem (11) allows for a geometrical interpretation as illustrated in Fig. 4: For a given point \(\left( x,y \right) \in D\), Problem (11) corresponds to finding the points \((x_1, \smash [b]{\underline{y}})\) and \((t_{x,y}(x_1),\smash [t]{\overline{y}})\) such that the line segment between these points contains (x, y), and the line segment with endpoints \(\left( x_1, \smash [b]{\underline{y}}, \varphi _{\smash [b]{\underline{y}}}(x_1) \right) \) and \(\left( t_{x,y}(x_1), \smash [t]{\overline{y}}, \varphi _{\smash [t]{\overline{y}}}(t_{x,y}(x_1)) \right) \) has the lowest height at \(\left( x, y \right) \). Note that the compact formulation of Problem (11) only contains \(x_1\) as a variable, where constraint (7) allows us to recover \(x_2 = t_{x,y}(x_1)\). \(\square \)
At this point, we can already compute the convex envelope for a simple case.
Proposition 4.3
If \(\smash [t]{\overline{x}} < \beta \smash [b]{\underline{x}} \), then \(\text {vex}_D[f](x,y) = F_{x,y}( \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} )\).
Proof
When \(\smash [t]{\overline{x}} < \beta \smash [b]{\underline{x}} \) holds, then \(\varphi _y(x) = \alpha \smash [t]{\overline{x}} ^{\alpha - 1} x y + (1 - \alpha )\smash [t]{\overline{x}} ^{\alpha }y\). Therefore, \(F_{x,y}\) is a linear function and (11) reduces to minimizing a linear function over an interval. A simple computation shows that the coefficient of \(x_1\) in \(F_{x,y}(x_1)\) is negative, and so the minimum is achieved at the upper bound, i.e., \(\text {vex}_D[f](x,y) = F_{x,y}( \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} )\). \(\square \)
In light of the previous proposition, we assume in the remainder of this section that \(\smash [t]{\overline{x}} \ge \beta \smash [b]{\underline{x}} \). In particular, \(\gamma = \beta \smash [b]{\underline{x}} \) (see Proposition 4.2).
4.4 Properties of the Objective Function
The objective function of Problem (11), \(F_{x,y}\), is convex as it is a convex combination of convex functions. Note that the function \(\varphi _{\smash [t]{\overline{y}}}(t_{x,y}(\cdot ))\) is convex since it is the composition of a convex (\(\varphi _{\smash [t]{\overline{y}}}\)) with an affine linear function (\(t_{x,y}\)).
The following proposition characterizes the optimal solution for a large class of problems of a form similar to Problem (11).
Proposition 4.4
Suppose that a function \(F :\mathbb {R} \rightarrow \mathbb {R}\) is convex and has a global minimum \(z^* \in \mathbb {R}\). Then, the solution of \(\min \{ F(z) : a \le z \le b \}\) is given by
where \({\text {mid}}(a, b, z^*)\) selects the middle value between \(a, b, z^*\).
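That is, consistent with the proof below, the minimum is attained at \(z = {\text {mid}}(a, b, z^*)\):
\[
\min \{\, F(z) \,:\, a \le z \le b \,\} = F\big( {\text {mid}}(a, b, z^*) \big).
\]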
Proof
The claim follows directly by comparing F(a), F(b) and \(F(z^*)\). If \(z^*\) is within the bounds, i.e., \(a \le z^* \le b\), then \( F( {\text {mid}}(a, b, z^*) ) = F(z^*)\). If \(z^* \le a\), then the convexity of F implies that F is increasing in \([z^*, \infty )\) and so \(F(a) = F( {\text {mid}}(a, b, z^*) )\) is the minimum. The argument is analogous for the case \(z^* \ge b\). \(\square \)
In order to apply Proposition 4.4, we need to extend \(F_{x,y}\) from \(x_1 \in [\underline{x}_1, \overline{x}_1]\) to \(x_1 \in \mathbb {R}\) and show that it has a global minimum. The extension is immediate, given that all the functions involved can be evaluated in \(\mathbb {R}\). To show that \(F_{x,y}\) has a global minimum over \(\mathbb {R}\), we show that the sublevel sets of \(F_{x,y}\) are bounded. This follows from the following proposition.
Proposition 4.5
The function \(F_{x,y} :\mathbb {R} \rightarrow \mathbb {R}\), given by Eq. (10) with \(\alpha > 1\), is coercive, i.e., it satisfies
Proof
We compute the limit when \(x_1\) goes to \(+\infty \). The case when \(x_1\) goes to \(-\infty \) is similar. By definition,
For every large enough \(x_1\), we have \(x_1\ge \beta \underline{x}\) and \(t_{x,y}(x_1) < \beta \underline{x}\). Thus,
This can be rewritten as \(a x_1^{\alpha } + b x_1 + c\), where \(a > 0\), \(b < 0\) (since \(t_{x,y}\) is strictly decreasing and linear), and \(c \in \mathbb {R}\). Given that \(\alpha > 1\), the sign of a determines the behavior of \(F_{x,y}\) at infinity; hence, \(\lim _{x_1 \rightarrow \infty }{F_{x,y}(x_1)} = \infty \). \(\square \)
Figure 5 visualizes function \(F_{x,y}(x_1)\) for different pairs of points (x, y).
4.5 Solution to Problem (11)
With these properties of \(F_{x,y}\), we are ready to compute its global minimum over \(\mathbb {R}\). To this end, we use that \(F_{x,y}\) is differentiable, since \(\varphi _y \in C^1\) for \(\alpha >1\), and solve \(F_{x,y}'(x_1) = 0\). We have
Then, given the piecewise nature of \(\varphi _y\), see (5), we get
where
To solve \(F_{x,y}'(x_1) = 0\), we restrict \(x_1\) to be in LL, LR, RL or RR.
Case \(x_1 \in LL\): From (12), it follows that \(F_{x,y}'(x_1) = 0\) if and only if
Given that \(\smash [b]{\underline{y}} < \smash [t]{\overline{y}} \), it follows \(F_{x,y}'(x_1) \ne 0\) for all \(x_1 \in LL\). Thus, the global minimum is not in LL.
Case \(x_1 \in LR\): The solution of \(F_{x,y}'(x_1) = 0\) is given by
Note, however, that \(x_{lr} \notin LR\), as \(t_{x,y}(x_{lr}) < \beta \smash [b]{\underline{x}} \). Thus, the global minimum is not in LR.
Case \(x_1 \in RL\): The solution of \(F_{x,y}'(x_1) = 0\) is given by
The point \(x_{rl}\) is in RL if and only if \(x_{rl} \ge \beta \smash [b]{\underline{x}} \) and \(t_{x,y}(x_{rl}) \le \beta \smash [b]{\underline{x}} \). Given that \(\beta , \smash [b]{\underline{x}} < 0\), we have that \(x_{rl} > \beta \smash [b]{\underline{x}} \). Thus, the global minimum is \(x_{rl}\) if and only if \(t_{x,y}(x_{rl}) \le \beta \smash [b]{\underline{x}} \). Otherwise, the global minimum must be in the next case.
Case \(x_1 \in RR\): The solution of \(F_{x,y}'(x_1) = 0\) is given by
From the above case distinction, it follows that the optimal solution is \(x_{rl}\) if \(t_{x,y}(x_{rl}) \le \beta \smash [b]{\underline{x}} \), otherwise it is \(x_{rr}\). Therefore, from Proposition 4.4, we conclude that
Figure 6 shows a visualization of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) and its convex envelope.
4.6 Simplifying the Convex Envelope
We now derive a representation of \(\text {vex}_D[f](x,y)\) more compact than (15).
Theorem 4.1
The convex envelope of \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) over \(D = [\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ]\) is given by
Proof
To prove the claim, we rewrite (15). We first show that
where \(x_{rl}\) and \(x_{rr}\) are given in (13) and (14).
Then, we can rewrite (15) as
Then, we show that
This implies that \({\text {mid}}( \smash [b]{\underline{x}}_1, \smash [t]{\overline{x}}_1, \max \{x_{rl}, x_{rr}\} ) =\min \{\smash [t]{\overline{x}}_1, \max \{x_{rl}, x_{rr}\}\}= \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}}), \max \{ x_{rl}, x_{rr} \} \right\} \), which proves the result.
To prove (17) and (18), define \(G_{x,y} :\mathbb {R} \rightarrow \mathbb {R}\) as
and notice that \(G_{x,y}(x_{rr}) = 0\) and \(G_{x,y}\) is strictly increasing.
The equivalence (17) follows from the following computation,
To prove (18), it is enough to show \(\smash [b]{\underline{x}}_1 \le x_{rr}\). Recall that \(\smash [b]{\underline{x}}_1 = \max \left\{ \smash [b]{\underline{x}}, T_{x,y}(\smash [t]{\overline{x}}) \right\} \). The inequality \(\smash [b]{\underline{x}} \le x_{rr}\) follows from the definition of \(x_{rr}\) and \(\lambda _y, \smash [b]{\underline{y}}, \smash [t]{\overline{y}} > 0\).
Finally, let us show \(T_{x,y}(\smash [t]{\overline{x}}) \le x_{rr}\). This follows from the following computation,
and the fact that \(x \le \smash [t]{\overline{x}} \le (1 - \lambda _y) \smash [t]{\overline{x}} \left( \frac{\smash [t]{\overline{y}}}{\smash [b]{\underline{y}}}\right) ^{\frac{1}{\alpha -1}} + \lambda _y \smash [t]{\overline{x}} \). \(\square \)
Remark 4.3
From (16), we see that if \(x_{rl} \ge \smash [t]{\overline{x}} \), then
The condition \(x_{rl} \ge \smash [t]{\overline{x}} \) depends only on the bounds, and not on the given point (x, y). Therefore, if the bounds are such that \(x_{rl} \ge \smash [t]{\overline{x}} \), then we can further reduce the description of the convex envelope as above. \(\square \)
In summary, we derived an analytic solution for the convex envelope of the function \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^\alpha \), with \(\alpha > 1\), which is stated in (16). Evaluating the convex envelope at a point (x, y) only requires computing the function \(F_{x,y}\) at the minimum of \(\smash [t]{\overline{x}}_1 \), \(x_{rl}\) and \(x_{rr}\).
5 Convexification of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) in SCIP
In this section, we compare the convex underestimator induced by SCIP to the convex envelope of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\). To this end, consider the epigraph of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) over D.
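Writing \(\theta \) for the epigraph variable (used again in the decomposition below), set (19) is presumably
\[
\{\, (x, y, \theta) \in D \times \mathbb{R} \;:\; \theta \ge y \, \text{sgn}(x) \left| x \right|^{\alpha} \,\}.
\]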
By virtue of Theorem 3.1, any convex relaxation of (19) induces a convex underestimator of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\). Therefore, we construct the convex underestimator of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) induced by SCIP’s convex relaxation of (19) and compare it to the convex envelope of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\).
SCIP, like other MINLP solvers, uses a standard reformulation procedure that allows it to generate a convex relaxation for factorable MINLPs. The idea is to decompose a factorable constraint function into simpler functions for which convex underestimators or even convex envelopes are known. For general information about this reformulation method, we refer to [32], and for a description of how it works in SCIP, to [35].
The decomposition is achieved by introducing additional variables, thus forming an extended formulation of the original problem. In our case, SCIP decomposes the constraint \(\theta \ge y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) by introducing an auxiliary variable z.
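The resulting system is presumably
\[
\theta \ge y \, z, \qquad z = \text{sgn}(x) \left| x \right|^{\alpha}.
\]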
Then, SCIP builds a relaxation by computing the convex envelopes of \(f_1 (x) = \text {sgn}(x) \left| x\right| ^{\alpha }\) and \(f_2 (y,z) = y \, z\). Note that by Proposition 4.2, the convex envelope of \(f_1\) is \(\varphi _y\) with \(y = 1\), which we denote by \(\varphi _1\). Moreover, the convex envelope of \(f_2\) is given by the well-known McCormick inequalities [25],
where \(\smash [b]{\underline{z}} = \min _{x \in [\smash [b]{\underline{x}},\smash [t]{\overline{x}} ]} f_1(x)\) and \(\smash [t]{\overline{z}} =\max _{x \in [\smash [b]{\underline{x}},\smash [t]{\overline{x}} ]} f_1(x)\).
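For the product \(y \, z\) over \([\smash [b]{\underline{y}}, \smash [t]{\overline{y}} ] \times [\smash [b]{\underline{z}}, \smash [t]{\overline{z}} ]\), these give the convex and concave envelopes
\[
\max \{\, \underline{y}\, z + \underline{z}\, y - \underline{y}\, \underline{z},\ \ \overline{y}\, z + \overline{z}\, y - \overline{y}\, \overline{z} \,\}
\quad\text{and}\quad
\min \{\, \underline{y}\, z + \overline{z}\, y - \underline{y}\, \overline{z},\ \ \overline{y}\, z + \underline{z}\, y - \overline{y}\, \underline{z} \,\},
\]
respectively.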
The convex relaxation of (19) constructed by SCIP is
Hence, the underestimator induced by C is
Given that \(\psi \) is increasing with respect to z for every y, an optimal solution of Problem (20) satisfies \(z = \varphi _1(x)\). Therefore, the convex underestimator induced by SCIP is \(\phi _S(x,y) = \psi (y,\varphi _1(x))\).
Figure 7 shows the difference between \(\phi _S\) and \(\text {vex}_D[f]\) over D.
From the figure, we can see that, for \(D = [-100, 100] \times [0.01, 1]\), \(\phi _S\) and \(\text {vex}_D[f]\) coincide in \(\{ (x,y) \in D: T_{x,y}(\smash [b]{\underline{x}}) \le \beta \smash [b]{\underline{x}} \}\), see also Fig. 3. Using \(D = [-100, 20] \times [0.01, 1]\) instead would show that \(\phi _S\) and \(\text {vex}_D[f]\) coincide in all of D. The next proposition explains this behavior.
Proposition 5.1
If \(\beta \smash [b]{\underline{x}} \ge \smash [t]{\overline{x}} \), then \(\text {vex}_D[f](x,y) = \phi _S(x,y)\) for all \((x,y) \in D\).
Proof
Proposition 4.3 implies that \(\text {vex}_D[f](x,y) = F_{x,y}( \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} )\). As \(\beta \smash [b]{\underline{x}} \ge \smash [t]{\overline{x}} \), we have that \(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} \le \beta \smash [b]{\underline{x}} \) and \(t_{x,y}(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} )\le \beta \smash [b]{\underline{x}} \). Therefore, in \(F_{x,y}( \min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} )\) we have that \(\varphi _{\smash [b]{\underline{y}}}\) and \(\varphi _{\smash [t]{\overline{y}}}\) are evaluated on points smaller than \(\beta \smash [b]{\underline{x}} \), and so they are affine linear.
Assume that \(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} = T_{x,y}(\smash [b]{\underline{x}})\). Notice that \(T_{x,y}(\smash [b]{\underline{x}})\) is an affine combination of x and \(\smash [b]{\underline{x}} \) as \(T_{x,y}(\smash [b]{\underline{x}}) = \frac{1}{1 - \lambda _y}x + \frac{-\lambda _y}{1 - \lambda _y}\smash [b]{\underline{x}} \) and \(\frac{1}{1 - \lambda _y} + \frac{-\lambda _y}{1 - \lambda _y} =1\). As \(\varphi _{\smash [b]{\underline{y}}}\) is affine linear, we have that \((1 -\lambda _y)\varphi _{\smash [b]{\underline{y}}}(T_{x,y}(\smash [b]{\underline{x}})) = \varphi _{\smash [b]{\underline{y}}}(x) -\lambda _y \varphi _{\smash [b]{\underline{y}}}(\smash [b]{\underline{x}})\).
Therefore, \(F_{x,y}(T_{x,y}(\smash [b]{\underline{x}})) = \varphi _{\smash [b]{\underline{y}}}(x) - \lambda _y \varphi _{\smash [b]{\underline{y}}}(\smash [b]{\underline{x}}) + \lambda _y \varphi _{\smash [t]{\overline{y}}}(\smash [b]{\underline{x}}).\)
From \(\varphi _{y}(x)=y \varphi _{1}(x)\) and the definition of \(\lambda _y\), we obtain
Notice that \(\smash [b]{\underline{z}} = \varphi _1(\smash [b]{\underline{x}})\) (as \(\varphi _1\) is the convex envelope of \(f_1\)) and so
However, \(F_{x,y}(T_{x,y}(\smash [b]{\underline{x}}))\) is the convex envelope of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\), while \(\phi _S(x, y)\) is just a convex underestimator, which implies that \(\phi _S(x, y) \le F_{x,y}(T_{x,y}(\smash [b]{\underline{x}}))\). From here, we conclude that \(\phi _S(x, y) = F_{x,y}(T_{x,y}(\smash [b]{\underline{x}}))\) as we wanted to show.
The case of \(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} = \smash [t]{\overline{x}} \) follows from a similar argument. \(\square \)
We now explain the behavior from Fig. 7.
Proposition 5.2
Assume that \(\beta \smash [b]{\underline{x}} < \smash [t]{\overline{x}} \) and let \(A := \{ (x,y) \in D: T_{x,y}(\smash [b]{\underline{x}}) \le \beta \smash [b]{\underline{x}} \}\). Then, for every \((x,y) \in A\),
Furthermore, \(\text {vex}_D[f]\) is linear in A.
Proof
Given that \(\beta \smash [b]{\underline{x}} < \smash [t]{\overline{x}} \) and \(T_{x,y}(\smash [b]{\underline{x}}) \le \beta \smash [b]{\underline{x}} \), we have that \(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} = T_{x,y}(\smash [b]{\underline{x}})\) and \(t_{x,y}(\min \left\{ \smash [t]{\overline{x}}, T_{x,y}(\smash [b]{\underline{x}})\right\} ) = \smash [b]{\underline{x}} \le \beta \smash [b]{\underline{x}} \). Thus, we can apply the same reasoning as the one given in the above proof of Proposition 5.1. \(\square \)
Remark 5.1
A similar behavior can be observed between the concave overestimator induced by SCIP and the concave envelope of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\). \(\square \)
6 Computational Study
In this section, we present a computational study that investigates the impact of the presented convex envelope on the performance of the expansion planning problem (see Sect. 2). For this, we extend SCIP with the convex envelope and compare it against default SCIP. We conduct two experiments to answer the following questions:
1. ROOTGAP: How much gap can be additionally closed in the root node of the branch-and-bound tree when using the envelopes of \(y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) with aggressive separation settings?
2. TREE: To what extent do the presented envelopes affect the performance of SCIP, both in running time and in solvability?
6.1 Experimental Setup
In order to measure the impact of the convex envelope over the standard relaxation in terms of improving the dual bound, we use an aggressive emphasis setting for the separation in the ROOTGAP experiment. To reduce the impact of side effects, we disable restarts (Footnote 1) and all primal heuristics (Footnote 2), and we provide the value of the best known primal solution.
In contrast to the ROOTGAP experiment, the TREE experiment compares SCIP default against SCIP with the additional separation routine that generates gradient cuts to the envelopes of f in every local node during the entire tree search. We remark that these gradient cuts are, in general, only locally valid since we compute them using the local bounds of the node.
In both experiments, the gradient cuts for the constraint \(y \, \text {sgn}(x) \left| x\right| ^{2} = \pi _v - \pi _w\) are derived from the convex and from the concave envelope, respectively; these are cuts (21) and (22).
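At an LP solution \((\hat{x}, \hat{y})\), such gradient cuts take the standard shape of a linearization of the envelope (with a subgradient in place of the gradient where the envelope is not differentiable); writing \(\eta := \text {vex}_D[f]\) and \(\mu := \text {cave}_D[f]\), they presumably read
\[
\eta(\hat{x}, \hat{y}) + \nabla \eta(\hat{x}, \hat{y})^{\top} \begin{pmatrix} x - \hat{x} \\ y - \hat{y} \end{pmatrix} \le \pi_v - \pi_w
\qquad \text{and} \qquad
\mu(\hat{x}, \hat{y}) + \nabla \mu(\hat{x}, \hat{y})^{\top} \begin{pmatrix} x - \hat{x} \\ y - \hat{y} \end{pmatrix} \ge \pi_v - \pi_w .
\]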
6.2 Test Sets
The computational study is carried out on three test sets taken from [20]: Belgium, GasLib-40 and Circuit rank. These test sets are based upon different gas networks, demand scenarios and expansion candidates. In total, all test sets contain 6500 instances, with Belgium and GasLib-40 each having 2000 instances and Circuit rank having 2500 instances. As we use gas instances, the exponent \(\alpha \) is set to \(\alpha = 2\) in Eq. (1a).
Networks The underlying networks of the test sets vary in size and structure. Here, we consider the rather small and almost tree-shaped Belgian gas network from the GAMS model library (Footnote 3) and the larger GasLib-40 network from [29]. In general, optimization problems get more challenging to solve on potential-driven networks that have a higher circuit rank, which is the number \(\left| \mathcal {A} \right| - \left| \mathcal {V} \right| + 1\), because the existence of cycles leads to more complex patterns of flow directions, see [31] and the references therein. To also account for this type of computational difficulty, the Circuit rank test set contains instances where the circuit rank is successively increased by adding up to ten new pipelines to the Belgian network, resulting in five new networks Belgium + (2, 4, 6, 8, 10) arcs. The rationale is to add arcs that connect different regions of the network and have a high impact on the network topology. Details about the different networks, their visualization and information on how to generate the new pipelines are found in Table 1, Figure 6 and Section 6 in [21]. We note that the original Belgian and GasLib-40 networks contain compressor stations. However, their treatment is outside the scope of this paper. In the case of the Belgian network, we modeled them as normal pipelines by using available data from the GAMS model library, while in the case of the GasLib-40 network, compressor station arcs had to be contracted by combining their adjacent nodes due to a lack of available data.
Demands It is known in practice that increasing network demand typically results in more complex transport situations that stress the networks. Therefore, we generated instances for the Belgium and GasLib-40 test sets that represent a wide range of possible network demand vectors \(b \in \mathbb {R}^{\left| \mathcal {V}\right| }\) starting from “easier” instances with lower demands up to “harder” instances with higher demands. For the Circuit rank experiment, we utilize demand scenarios with a constant overall network demand. An overview of the total network demand loads (in \([10^6 \; \text {m}^3 / \text {day}]\)) used for the particular networks is given in Table 1.
Throughout, we randomly generated 500 scenarios for each network and for each given total network demand according to a uniform distribution of the demand at sources and sinks. For more detailed information about the deployed method, we refer to [21]. Due to the variety of instances, we can assume that the results will provide us with a representative picture of the impact of the convex envelope on gas network related instances.
Expansion capacities In all instances of the three test sets, the expansion capacities \([\smash [b]{\underline{y}} _a, \smash [t]{\overline{y}} _a]\) of all arcs \(a \in \mathcal {A}\) result from building at most three pipelines in parallel to the already existing one. The data points for the cost function used follow a quadratic form, see Figure 1 in [21], and were provided to us by practitioners.
Implementation We extended a development version (Footnote 4) of SCIP with a so-called nonlinear handler, in which the separation is applied in the separation and enforcement callbacks. With this handler, we add the cuts (21) and (22) in addition to the cuts that SCIP generates by default (see Sect. 5).
Given that the considered gas instances tend to be numerically unstable due to variables on very different scales, we observed in some preliminary experiments that many invalid cuts were generated. The following two measures prevented the addition of invalid cuts to SCIP.
- We only add the cuts (21) and (22) to SCIP if they are violated by at least \(10^{-4}\).
- If the LP solution \((\hat{x}, \hat{y})\) satisfies \(\hat{y}= \smash [b]{\underline{y}} \) or \(\hat{y}= \smash [t]{\overline{y}} \), then we compute a slightly weakened gradient cut by considering the envelopes over the enlarged domain \([\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}-s, \smash [t]{\overline{y}} ]\) or \([\smash [b]{\underline{x}}, \smash [t]{\overline{x}} ] \times [\smash [b]{\underline{y}}, \smash [t]{\overline{y}} +s]\), respectively. We take \(s = 10^{-4}\).
Moreover, Propositions 5.1 and 5.2 give conditions under which \(\phi _S(x,y) = \text {vex}_D[f](x,y)\). In those cases, we omit the generation of gradient cuts of \(\text {vex}_D[f](x,y)\) because they will be generated in any case by SCIP. Specifically,
- If the variable bounds in the current LP relaxation satisfy \(\smash [t]{\overline{x}} < \beta \smash [b]{\underline{x}} \), then we do not generate gradient cuts for \(\text {vex}_D[f](x,y)\) (Proposition 5.1).
- If the current LP solution satisfies \(T_{x,y}(\smash [b]{\underline{x}}) < \beta \smash [b]{\underline{x}} \) and \(\beta \smash [b]{\underline{x}} < \smash [t]{\overline{x}} \), i.e., \((x,y) \in A\), then we do not generate gradient cuts for \(\text {vex}_D[f](x,y)\) (Proposition 5.2).
Hardware and software The experiments were conducted on a cluster of 64-bit Intel Xeon CPU E5-2670 v2 CPUs at 2.5 GHz with 25 MB cache and 128 GB main memory. To safeguard against a potential mutual slowdown of parallel processes, we bind the processes to specific cores and run at most one job per node at a time. We used a development version of SCIP 6.0.2.4 [10] with CPLEX 12.7.1.0 as LP solver [8] and Ipopt 3.12.13 as NLP solver [36].
6.3 Computational Results
In this section, we present the results for the ROOTGAP and TREE experiments. In the following, Enabled refers to adding the gradient cuts of the convex and concave envelopes, as opposed to Disabled, which refers to SCIP default settings.
ROOTGAP Experiment From the 6500 instances, we do not consider those that have been detected to be infeasible (396 instances), those for which no primal solution is known (1113 instances), or those for which neither version could increase the dual bound by more than \(10^{-6}\) (1847 instances). This restriction results in 3144 instances for the ROOTGAP experiment.
To compare the dual bounds of Enabled and Disabled relative to a given primal bound, we use the following measure. For an instance, let \(d_1 \in \mathbb {R}\) be the dual bound of Enabled, and let \(d_2 \in \mathbb {R}\) be the dual bound of Disabled. Furthermore, let \(p \in \mathbb {R}\) be a reference primal bound, for example, the optimal or best known objective value of that instance. The function \(GC :\mathbb {R}^3 \rightarrow [-1,1]\) defined as
measures the improvement of the gap closed when comparing the dual bounds \(d_1\) and \(d_2\) relative to p, see also [26].
Note that a positive value \(GC(d_1, d_2, p) > 0\) implies that the dual bound improved by adding the gradient cuts of the convex envelope \(\text {vex}_D[f](x,y)\). Analogously, a negative value \(GC(d_1, d_2, p) < 0\) indicates that the addition of the gradient cuts worsened the dual bound.
Aggregated results for the ROOTGAP experiment are shown in Table 2 and Fig. 8. Table 2 shows that the addition of the cuts closes more gap on 1033 out of the 3144 instances and closes less gap only on 86 instances. (For an explanation, see below.)
Additionally, the table presents the average gap closed for instances, where either Enabled improves (better) or deteriorates (worse) by more than 0%, 2%, or 10%, whereas change encompasses the union of those instances that are better or worse by more than 0%, 2%, or 10%.
Among all instances, the average gap closed improvement is 0.87%. However, when one considers all instances for which the gap closed is different, the gap closed improvement increases to 2.44%. Furthermore, if we consider instances for which the gap closed differs by 2% and 10%, then the gap closed improvement increases to 6.75% and 19.69%, respectively.
The presented results show that our convexification of the considered constraint function has a significant impact on the quality of the relaxation. In theory, the dual bounds obtained by Enabled should always be at least as good as the ones from Disabled, since we apply the gradient cuts on top of SCIP default.
Nevertheless, as mentioned above, there are 86 instances for which disabling the cuts yields improved dual bounds. The explanation for this behavior is that some of the gradient cuts from the convex envelope increase the computational cost of solving LPs in the OBBT propagator [11]. As a consequence, the internal limit on simplex iterations in OBBT is hit faster, which has the side effect that fewer genvbounds can be learned. However, fewer genvbounds reduce the number of variable bound reductions and, as a consequence, lead to worse dual bounds. Indeed, when turning off the genvbounds propagator for Enabled and Disabled on these 86 instances, better dual bounds are obtained by Enabled for all but 14 of them. Furthermore, if we disable all propagation methods, then the behavior of Enabled is at least as good as that of Disabled on these 14 instances (Footnote 5).
TREE Experiment Aggregated results for the TREE experiment are shown in Tables 3, 4 and 5 for the three test sets under consideration. We compare the number of solved instances and the solver performance when activating the cut generation for the convex envelope (Enabled) with SCIP using default settings (Disabled). The number of solved instances and the computation time is split into different categories: ALL, ALL OPT, \([1, \texttt {tlim}]\), \([10, \texttt {tlim}]\), \([100, \texttt {tlim}]\) and \([1000, \texttt {tlim}]\). ALL consists of all instances. ALL OPT consists of the instances that are solved to optimality or detected to be infeasible by both settings. \([t, \texttt {tlim}]\) consists of the instances that could be solved by at least one of both settings, Enabled or Disabled, in more than t seconds within the time limit of one hour. Note that the instances in \([t_1, \texttt {tlim}]\) are contained in \([t_2, \texttt {tlim}]\) whenever \(t_1 \ge t_2\) and that one can consider the instances in \([t, \texttt {tlim}]\) to be harder to solve the larger t is.
Most importantly, on all test sets, enabling the cut generation for the convex envelope increases the number of solved instances and decreases the average solving time compared to SCIP default. Concerning the number of solved instances, Enabled solves 39 more instances on Belgium, 19 more instances on GasLib-40 and 68 more instances on Circuit rank. From the tables, we deduce that all the extra instances that Enabled solves belong to the groups \([1000, \texttt {tlim}]\). This means that our separation routine renders these instances solvable but not trivially solvable.
To compare the total running time between Enabled and Disabled, we use the shifted geometric mean (sgm) with a shift value of \(s = 10\) for the average solving time in seconds. For instances that are solved to optimality by both settings (ALL OPT), we additionally present the sgm of the number of explored branch-and-bound nodes with a shift value of \(s = 100\). As can be seen in Tables 3, 4 and 5, there is an improvement in the performance when using Enabled over SCIP default on all test sets and across all time categories. This improvement increases toward the more “difficult” instances, up to a speed-up of 58% on Belgium. Even for ALL OPT, there is a consistent speed-up of 19% (Belgium) and 12% (GasLib-40 and Circuit rank) when using the Enabled separation routine. Likewise, the number of branch-and-bound nodes decreases significantly, by 22%, 25% and 14% for ALL OPT on these three test sets.
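For reference, the shifted geometric mean of values \(t_1, \dots, t_n\) with shift s is \(\big(\prod _{i=1}^{n} (t_i + s)\big)^{1/n} - s\); a minimal sketch of this aggregation (illustrative code, not the evaluation scripts used to produce the tables) is:

```python
# Minimal sketch of the shifted geometric mean (sgm) used to aggregate running
# times (shift s = 10) and branch-and-bound node counts (shift s = 100).
import numpy as np


def shifted_geometric_mean(values, shift):
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values + shift))) - shift)


# Example: shifted_geometric_mean([3600.0, 12.5, 0.8], shift=10)
```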
Finally, we remark that the Wilcoxon signed-rank test [39] judges the performance improvements on the Belgium, GasLib-40 and Circuit rank test sets as highly statistically significant, with the improvement in the shifted geometric mean being consistent across all three test sets (Footnote 6).
7 Conclusion
In this paper, we derived the convex envelope of the nonconvex and bivariate function \(f(x,y) = y \, \text {sgn}(x) \left| x\right| ^{\alpha }\) for \(\alpha > 1\). This contribution was motivated by the fact that strong relaxations of nonconvex constraints are a key component in solving MINLPs, see [12]. As this function frequently occurs in expansion planning problems, it is also of significant practical relevance. We performed an extensive computational study on real-world instances, which showed the benefit of the convex envelope over the standard relaxation used in state-of-the-art solvers. Given that we provide a closed-form expression for the envelopes, our implementation in the MINLP solver SCIP allows us to calculate and exploit the envelopes at every node in the branch-and-bound tree, which has a crucial impact on the solving process. Apart from yielding improved dual bounds that reduce the gap already in the root node, our procedure drastically boosts the solving process in the branch-and-bound tree. It significantly reduces the solving time on all test sets, for example, by up to 58% for hard instances on Belgium. Even more importantly, this tighter relaxation makes it possible to solve more instances on all test sets.
One possible avenue for further research is to consider the convexification of multiple constraints simultaneously. In this fashion, [22] already reported promising results for quadratic functions with absolute value terms for gas network related problems.
Notes
In restarts, SCIP aborts the current search after encountering sufficient variable bound reductions and starts preprocessing the problem again. For more details about restarts in SCIP, we refer to Section 10.9 in [1].
We used the following SCIP settings: limits/totalnodes = 1, separation/emphasis/aggressive = True, limits/restarts = 0 as well as heuristics/emphasis = Off.
General Algebraic Modeling System (GAMS) Model Library https://www.gams.com/modlib/modlib.htm.
To be released in SCIP 8.
Furthermore, we did an experiment where we additionally turn off propagation and we saw that Enabled was never worse than Disabled. For more details, see [20].
In this paper, we test against the null hypothesis, which corresponds to the assumption that the approach yielding a better performance is actually not better. For the significance testing, we use a threshold of 1% for the p-value, which describes the (approximate) probability of observing the given performance difference under the null hypothesis. Small p-values indicate that the null hypothesis is unlikely to hold, and we may assert that the difference of the performance values in the shifted geometric mean is indeed significant.
References
Achterberg, T.: Constraint integer programming. Ph.D. thesis, Technische Universität Berlin (2007)
Al-Khayyal, F.A., Falk, J.E.: Jointly constrained biconvex programming. Math. Oper. Res. 8(2), 273–286 (1983)
Alperovits, E., Shamir, U.: Design of optimal water distribution systems. Water Resour. Res. 13(6), 885–900 (1977). https://doi.org/10.1029/WR013i006p00885
André, J., Bonnans, F., Cornibert, L.: Optimization of capacity expansion planning for gas transportation networks. Eur. J. Oper. Res. 197(3), 1019–1027 (2009). https://doi.org/10.1016/j.ejor.2007.12.045
Ballerstein, M., Michaels, D., Vigerske, S.: Linear underestimators for bivariate functions with a fixed convexity behavior. Tech. Rep. 13-02, ZIB, Takustr. 7, 14195 Berlin (2013). https://nbn-resolving.de/urn:nbn:de:0297-zib-17641
Borraz-Sánchez, C., Bent, R., Backhaus, S., Hijazi, H., Hentenryck, P.V.: Convex relaxations for gas expansion planning. INFORMS J. Comput. 28(4), 645–656 (2016). https://doi.org/10.1287/ijoc.2016.0697
Bragalli, C., D’Ambrosio, C., Lee, J., Lodi, A., Toth, P.: On the optimal design of water distribution networks: a practical MINLP approach. Optim. Eng. 13(2), 219–246 (2012). https://doi.org/10.1007/s11081-011-9141-7
IBM ILOG CPLEX: High-performance software for mathematical programming and optimization. https://www.ibm.com/analytics/cplex-optimizer (2019). Accessed December 2019
Fujiwara, O., Khang, D.B.: A two-phase decomposition method for optimal design of looped water distribution networks. Water Resour. Res. 26(4), 539–549 (1990). https://doi.org/10.1029/WR026i004p00539
Gleixner, A., Bastubbe, M., Eifler, L., Gally, T., Gamrath, G., Gottwald, R.L., Hendel, G., Hojny, C., Koch, T., Lübbecke, M.E., Maher, S.J., Miltenberger, M., Müller, B., Pfetsch, M.E., Puchert, C., Rehfeldt, D., Schlösser, F., Schubert, C., Serrano, F., Shinano, Y., Viernickel, J.M., Walter, M., Wegscheider, F., Witt, J.T., Witzig, J.: The SCIP Optimization Suite 6.0. ZIB-Report 18-26, Zuse Institute Berlin (2018). https://nbn-resolving.de/urn:nbn:de:0297-zib-69361
Gleixner, A.M., Weltge, S.: Learning and propagating Lagrangian variable bounds for mixed-integer nonlinear programming. In: Gomes, C., Sellmann, M. (eds.) International Conference on AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems. Lecture Notes in Computer Science, vol. 7874, pp. 355–361. Springer, Berlin (2013)
Grossmann, I.E., Kravanja, Z.: Mixed-integer nonlinear programming: a survey of algorithms and applications. In: Biegler, L., Coleman, T., Conn, A., Santosa, F. (eds.) Large-scale Optimization with Applications, The IMA Volumes in Mathematics and its Applications, vol. 93, pp. 73–100. Springer, New York (1997)
Hijazi, H.: Perspective envelopes for bilinear functions. In: AIP Conference Proceedings, vol. 2070. AIP Publishing (2019)
Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches. Springer, Berlin (1990)
Humpola, J.: Gas network optimization by MINLP. Ph.D. thesis, Technische Universität Berlin (2014). https://doi.org/10.14279/depositonce-4255
Jach, M., Michaels, D., Weismantel, R.: The convex envelope of \((n-1)\)-convex functions. SIAM J. Optim. 19(3), 1451–1466 (2008)
Jacoby, S.L.: Design of optimal hydraulic networks. J. Hydraul. Div. 94(3), 641–662 (1968)
Karmeli, D., Gadish, Y., Meyers, S.: Design of optimal water distribution networks. J. Pipeline Div. 94(1), 1–10 (1968)
Katz, D.L.V.: Handbook of Natural Gas Engineering. McGraw-Hill, New York (1959)
Lenz, R.: Optimization of stationary expansion planning and transient network control by mixed-integer nonlinear programming. Ph.D. thesis, Technische Universität Berlin (2021). https://doi.org/10.14279/depositonce-12765
Lenz, R., Becker, K.H.: Optimization of capacity expansion in potential-driven networks including multiple looping: a comparison of modelling approaches. OR Spectrum 44(1), 179–224 (2021). https://doi.org/10.1007/s00291-021-00648-7
Liers, F., Martin, A., Merkert, M., Mertens, N., Michaels, D.: Solving mixed-integer nonlinear optimization problems using simultaneous convexification: a case study for gas networks. J. Global Optim. 80(2), 1–34 (2021). https://doi.org/10.1007/s10898-020-00974-0
Linderoth, J.: A simplicial branch-and-bound algorithm for solving quadratically constrained quadratic programs. Math. Program. 103(2), 251–282 (2005). https://doi.org/10.1007/s10107-005-0582-7
Locatelli, M.: Convex envelopes of bivariate functions through the solution of KKT systems. J. Global Optim. 72(2), 277–303 (2018). https://doi.org/10.1007/s10898-018-0626-1
McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: part 1—convex underestimating problems. Math. Program. 10(1), 147–175 (1976)
Müller, B., Muñoz, G., Gasse, M., Gleixner, A., Lodi, A., Serrano, F.: On generalized surrogate duality in mixed-integer nonlinear programming. Math. Program. (2021). https://doi.org/10.1007/s10107-021-01691-6
Raghunathan, A.U.: Global optimization of nonlinear network design. SIAM J. Optim. 23(1), 268–295 (2013). https://doi.org/10.1137/110827387
Rockafellar, R.T.: Convex Analysis. Princeton Mathematical Series, vol. 28. Princeton University Press, Princeton (1970)
Schmidt, M., Aßmann, D., Burlacu, R., Humpola, J., Joormann, I., Kanelakis, N., Koch, T., Oucherif, D., Pfetsch, M.E., Schewe, L., Schwarz, R., Sirvent, M.: Gaslib—a library of gas network instances. Data 2(4) (2017). https://doi.org/10.3390/data2040040
Sherali, H.D., Alameddine, A.: An explicit characterization of the convex envelope of a bivariate bilinear function over special polytopes. Ann. Oper. Res. 25(1), 197–209 (1990)
Shiono, N., Suzuki, H.: Optimal pipe-sizing problem of tree-shaped gas distribution networks. Eur. J. Oper. Res. 252(2), 550–560 (2016). https://doi.org/10.1016/j.ejor.2016.01.008
Smith, E.M., Pantelides, C.C.: A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs. Comput. Chem. Eng. 23(4–5), 457–478 (1999)
Tawarmalani, M., Sahinidis, N.V.: Semidefinite relaxations of fractional programs via novel convexification techniques. J. Global Optim. 20, 137–158 (2001)
Tawarmalani, M., Sahinidis, N.V.: Convex extensions and convex envelopes of lower semi-continuous functions. Math. Program. 93, 247–263 (2002)
Vigerske, S.: Decomposition in multistage stochastic programming and a constraint integer programming approach to mixed-integer nonlinear programming. Ph.D. thesis, Humboldt-Universität zu Berlin (2012)
Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006)
Walski, T.M., Chase, D.V., Savic, D.A.: Water distribution modeling. Tech. rep., University of Dayton, Civil and Environmental Engineering and Engineering Mechanics Faculty Publications, Paper 17 (2001)
Weymouth, T.R.: Problems in natural gas engineering. Trans. Am. Soc. Mech. Eng. 34(1349), 185–231 (1912)
Wilcoxon, F.: Individual comparisons by ranking methods. In: Kotz, S., Johnson, N.L. (eds.) Breakthroughs in Statistics, pp. 196–202. Springer, Berlin (1992)
Zamora, J.M., Grossmann, I.E.: A branch and contract algorithm for problems with concave univariate, bilinear and linear fractional terms. J. Global Optim. 14(3), 217–249 (1999)
Zhang, J., Zhu, D.: A bilevel programming method for pipe network optimization. SIAM J. Optim. 6(3), 838–857 (1996). https://doi.org/10.1137/S1052623493260696
Acknowledgements
This work was supported by the Research Campus MODAL (Mathematical Optimization and Data Analysis Laboratories) funded by the German Federal Ministry of Education and Research (fund number 05M14ZAM).
Funding
Open Access funding enabled and organized by Projekt DEAL.
Additional information
Communicated by Paul I. Barton.