
Distributionally robust bottleneck combinatorial problems: uncertainty quantification and robust decision making

  • Full Length Paper
  • Series B
Mathematical Programming

Abstract

In a bottleneck combinatorial problem, the objective is to minimize the highest cost of elements of a subset selected from the combinatorial solution space. This paper studies data-driven distributionally robust bottleneck combinatorial problems (DRBCP) with stochastic costs, where the probability distribution of the cost vector is contained in a ball of distributions centered at the empirical distribution and specified by the Wasserstein distance. We study two distinct versions of DRBCP arising from different applications: (i) Motivated by the multi-hop wireless network application, we first study the uncertainty quantification of DRBCP (denoted DRBCP-U), where decision-makers would like an accurate estimate of the worst-case value of DRBCP. The difficulty of DRBCP-U lies in its max–min–max form. Fortunately, similar to the strong duality of linear programming, the alternative forms of bottleneck combinatorial problems using clutters and blocking systems allow us to derive equivalent deterministic reformulations, which can be computed via mixed-integer programs. In addition, by drawing the connection between DRBCP-U and its sample average approximation counterpart under the empirical distribution, we show that the Wasserstein radius can be chosen of the order of the reciprocal of the square root of the sample size, improving on existing results; and (ii) Next, motivated by the ride-sharing application, decision-makers choose the best service-and-passenger matching that minimizes unfairness; that is, we study the decision-making version of DRBCP, denoted DRBCP-D. For DRBCP-D, we show that its optimal solution is also optimal to its sample average approximation counterpart and that the Wasserstein radius can be chosen of a similar order as for DRBCP-U. When the sample size is small, we propose to use the optimal value of DRBCP-D to construct an indifferent solution space and propose an alternative decision-robust model, which finds the best indifferent solution minimizing the empirical variance. We further show that the decision-robust model can be recast as a mixed-integer conic program. Finally, we extend the proposed models and solution approaches to the distributionally robust \(\varGamma \)-sum bottleneck combinatorial problem (\(\hbox {DR}\varGamma \hbox {BCP}\)), where decision-makers are interested in minimizing the worst-case sum of the \(\varGamma \) highest costs of elements.



References

  1. Abadeh, S.S., Nguyen, V.A., Kuhn, D., Esfahani, P.M.M.: Wasserstein distributionally robust Kalman filtering. In: Advances in Neural Information Processing Systems, pp. 8474–8483 (2018)

  2. Ahmed, S.: Convexity and decomposition of mean-risk stochastic programs. Math. Program. 106(3), 433–446 (2006)

  3. Albrecher, H.: A note on the asymptotic behaviour of bottleneck problems. Oper. Res. Lett. 33(2), 183–186 (2005)

  4. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, vol. 2. SIAM, Philadelphia (2001)

  5. Bertsimas, D., Shtern, S., Sturt, B.: A data-driven approach for multi-stage linear optimization. Optimization Online

  6. Blanchet, J., Kang, Y., Murthy, K.: Robust Wasserstein profile inference and applications to machine learning. J. Appl. Prob. 56(3), 830–857 (2019)

  7. Blanchet, J., Murthy, K.: Quantifying distributional model risk via optimal transport. Math. Oper. Res. 44(2), 565–600 (2019)

  8. Blanchet, J., Murthy, K., Si, N.: Confidence regions in Wasserstein distributionally robust estimation. arXiv preprint arXiv:1906.01614 (2019)

  9. Camerini, P.M.: The min-max spanning tree problem and some extensions. Inf. Process. Lett. 7(1), 10–14 (1978)

  10. Chen, Z., Kuhn, D., Wiesemann, W.: Data-driven chance constrained programs over Wasserstein balls. arXiv preprint arXiv:1809.00210 (2018)

  11. Chen, Z., Xie, W.: Sharing the value-at-risk under distributional ambiguity. SSRN 3400033 (2019)

  12. Delage, E., Ye, Y.: Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 58(3), 595–612 (2010)

  13. Duchi, J.C., Namkoong, H.: Variance-based regularization with convex objectives. J. Mach. Learn. Res. 20, 68–1 (2019)

  14. Edmonds, J., Fulkerson, D.R.: Bottleneck extrema. J. Combin. Theory 8(3), 299–306 (1970)

  15. Esfahani, P.M., Kuhn, D.: Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations. Math. Program. 171(1–2), 115–166 (2018)

  16. Fournier, N., Guillin, A.: On the rate of convergence in Wasserstein distance of the empirical measure. Probab. Theory Relat. Fields 162(3–4), 707–738 (2015)

  17. Gao, R., Kleywegt, A.J.: Distributionally robust stochastic optimization with Wasserstein distance. arXiv preprint arXiv:1604.02199 (2016)

  18. Gao, Y., Chiu, D.-M., Lui, J.: Determining the end-to-end throughput capacity in multi-hop networks: methodology and applications. In: ACM SIGMETRICS Performance Evaluation Review, vol. 34, pp. 39–50. ACM (2006)

  19. Goldsmith, A.: Wireless communications. Cambridge University Press, Cambridge (2005)

  20. Guigues, V., Juditsky, A., Nemirovski, A.: Non-asymptotic confidence bounds for the optimal value of a stochastic program. Optim. Methods Softw. 32(5), 1033–1058 (2017)

  21. Hanasusanto, G.A., Kuhn, D.: Conic programming reformulations of two-stage distributionally robust linear programs over Wasserstein balls. Oper. Res. 66(3), 849–869 (2018)

  22. Hanasusanto, G.A., Roitch, V., Kuhn, D., Wiesemann, W.: A distributionally robust perspective on uncertainty quantification and chance constrained programming. Math. Program. 151, 35–62 (2015)

  23. Hanasusanto, G.A., Roitch, V., Kuhn, D., Wiesemann, W.: Ambiguous joint chance constraints under mean and dispersion information. Oper. Res. 65(3), 751–767 (2017)

  24. Hsu, W.-L., Nemhauser, G.L.: Easy and hard bottleneck location problems. Discrete Appl. Math. 1(3), 209–215 (1979)

  25. Jiang, R., Guan, Y.: Risk-averse two-stage stochastic program with distributional ambiguity. Oper. Res. 66(5), 1390–1405 (2018)

  26. Kaibel, V., Peinhardt, M.: On the bottleneck shortest path problem (2006)

  27. Kasperski, A., Zieliński, P.: Possibilistic bottleneck combinatorial optimization problems with ill-known weights. Int. J. Approx. Reason. 52(9), 1298–1311 (2011)

  28. Kasperski, A., Zieliński, P.: Bottleneck combinatorial optimization problems with uncertain costs and the Owa criterion. Oper. Res. Lett. 41(6), 639–643 (2013)

  29. Kuhn, D., Esfahani, P.M., Nguyen, V.A., Shafieezadeh-Abadeh, S.: Wasserstein distributionally robust optimization: Theory and applications in machine learning. In: Operations Research & Management Science in the Age of Analytics, pp. 130–166. INFORMS (2019)

  30. Lam, H.: Recovering best statistical guarantees via the empirical divergence-based distributionally robust optimization. Oper. Res. 67(4), 1090–1105 (2019)

  31. Liu, X., Ravindran, K., Loguinov, D.: Multi-hop probing asymptotics in available bandwidth estimation: stochastic analysis. In: Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement, pp. 15–15. USENIX Association (2005)

  32. Martin, S., Ouelhadj, D., Smet, P., Berghe, G.V., Özcan, E.: Cooperative search for fair nurse rosters. Expert Syst. Appl. 40(16), 6674–6683 (2013)

  33. Natarajan, K., Shi, D., Toh, K.-C.: A probabilistic model for minmax regret in combinatorial optimization. Oper. Res. 62(1), 160–181 (2013)

  34. Parent, B., Caplan, A.L.: Fair is fair: We must re-allocate livers for transplant. BMC Med. Ethics 18(1), 26 (2017)

  35. Pérez-Galarce, F., Candia-Véjar, A., Astudillo, C., Bardeen, M.: Algorithms for the minmax regret path problem with interval data. Inf. Sci. 462, 218–241 (2018)

  36. Pferschy, U.: The random linear bottleneck assignment problem. RAIRO-Oper. Res. 30(2), 127–142 (1996)

  37. Rahimian, H., Mehrotra, S.: Distributionally robust optimization: a review. Optimization Online (2019)

  38. Schrijver, A.: Combinatorial Optimization: Polyhedra and Efficiency, vol. 24. Springer, Berlin (2003)

  39. Shen, S., Kurt, M., Wang, J.: Chance-constrained programming models and approximations for general stochastic bottleneck spanning tree problems. INFORMS J. Comput. 27(2), 301–316 (2015)

  40. Shinn, T., Takaoka, T.: Efficient graph algorithms for network analysis. In: 1st International Conference on Resource Efficiency in Interorganizational Networks-ResEff 2013, p. 236 (2013)

  41. Spivey, M.Z.: Asymptotic moments of the bottleneck assignment problem. Math. Oper. Res. 36(2), 205–226 (2011)

  42. Stancu-Minasian, I., Caballero, R., Cerdá, E., Muñoz, M.: The stochastic bottleneck linear programming problem. Top 7(1), 123–143 (1999)

  43. Trillos, N.G., Slepčev, D.: On the rate of convergence of empirical measures in \(\infty \) transportation distance. Can. J. Math. 67(6), 1358–1383 (2015)

  44. Xie, W.: On distributionally robust chance constrained programs with Wasserstein distance. Math. Program. 1–41 (2019)

  45. Xie, W.: Tractable reformulations of distributionally robust two-stage stochastic programs with \(\infty -\) Wasserstein distance. arXiv preprint arXiv:1908.08454 (2019)

  46. Xie, W., Ahmed, S.: On deterministic reformulations of distributionally robust joint chance constrained optimization problems. SIAM J. Optim. 28(2), 1151–1182 (2018)

  47. Yoo, J.-Y., Kim, J.: Maximum end-to-end throughput of chain-topology wireless multi-hop networks. In: 2007 IEEE Wireless Communications and Networking Conference, pp. 4279–4283. IEEE (2007)

  48. Zhang, L., Chen, S., Jian, Y.: Achieving global end-to-end maxmin in multihop wireless networks. In: 2008 The 28th International Conference on Distributed Computing Systems, pp. 225–232. IEEE (2008)

  49. Zymler, S., Kuhn, D., Rustem, B.: Distributionally robust joint chance constraints with second-order moment information. Math. Program. 137, 167–198 (2013)

Acknowledgements

The authors would like to thank Editor-in-Chief Dr. Sven Leyffer and two anonymous reviewers for their valuable comments on improving this paper.

Author information

Correspondence to Weijun Xie.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A. DRBCP-U (2a) under the \(q\)-Wasserstein ambiguity set: equivalent reformulation, MIP representation, and confidence bounds

For completeness, this appendix provides parallel results for DRBCP-U (2a) under the \(q\)-Wasserstein ambiguity set (3) with \(q\in [1,\infty )\).

1.1 A.1 Equivalent reformulation

We first derive a deterministic reformulation of DRBCP-U (2a) under the \(q\)-Wasserstein ambiguity set.

Theorem 10

Suppose that \(\Vert \cdot \Vert =\Vert \cdot \Vert _r\) with \(r\ge 1\). Then DRBCP-U (2a) under the \(q\)-Wasserstein ambiguity set is equivalent to

$$\begin{aligned} v_U=\min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{t^k}\left\{ t^k-\lambda \left( \sum _{j\in I^k(t^k,y)}(t^k-{{\bar{c}}}_j^k)^r\right) ^{\frac{q}{r}}\right\} \right\} , \end{aligned}$$
(29)

where F is defined in Theorem 1 and set \(I^k(t,y)=\{j\in y: t-{\overline{c}}_j^k\ge 0\}\).

Proof

According to Theorem 1 and Theorem 1 in [17], we have

$$\begin{aligned} v_U&=\sup _{{\mathbb {P}}\in {{\mathcal {P}}}_{q}}{{\mathbb {E}}}_{{{\mathcal {P}}}}\left[ Z(\tilde{\varvec{c}})\right] =\min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{\varvec{c}^k}\max _{y\in F}\left( \min _{j\in y}c_j^k-\lambda \Vert \varvec{c}^k-\bar{\varvec{c}}^k\Vert _r^q\right) \right\} . \end{aligned}$$

By swapping the two maximization operators and linearizing the inner minimization operator, we further have

$$\begin{aligned} v_U&=\min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{t^k\le c_j^k, \forall j\in y}\left( t^k-\lambda \Vert \varvec{c}^k-\bar{\varvec{c}}^k\Vert _r^q\right) \right\} . \end{aligned}$$

To simplify the inner maximization, we let \(\beta _j^k=c_j^k-t^k\) for all \(j\in y\) and define set \(I^k(t,y)=\{j\in y: t-{\overline{c}}_j^k\ge 0\}\), then we have

$$\begin{aligned} v_U&=\min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{\varvec{\beta }^k,t^k}\left\{ t^k-\lambda \Vert t^k\mathbf {e}+\varvec{\beta }^k-\bar{\varvec{c}}^k\Vert _r^q:\beta _j^k \ge 0, \forall j\in y\right\} \right\} \nonumber \\&=\min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{t^k}\left\{ t^k-\lambda \left( \sum _{j\in I^k(t^k,y)}(t^k-{{\bar{c}}}_j^k)^r\right) ^{\frac{q}{r}}\right\} \right\} , \end{aligned}$$
(30)

where the second equality is due to the optimality condition that if \(j\in y\setminus I^k(t^k,y) \), we must have \(\beta _j^k=-t^k+{{\bar{c}}}_j^k>0\); otherwise, \(\beta _j^k=0\). \(\square \)

1.2 A.2 Mixed integer programming representation

Note that in Theorem 10, the outer minimization problem is a univariate convex program and thus can be solved efficiently via the bisection method. We then observe that for any given \(\lambda \), the inner maximizations in DRBCP-U (29) can be decomposed into N separate MIPs.

Proposition 7

Suppose that the blocker F admits a binary programming representation \({\widehat{F}}\subseteq \{0,1\}^n\). Then \(v_U=\min _{\lambda \ge 0}\{\lambda \theta ^q+1/N\sum _{k\in [N]}v_U^k(\lambda )\}\) and \(v_U^k(\lambda )\) is equivalent to

$$\begin{aligned} v_U^k(\lambda )=\max _{\varvec{z}^k\in {\widehat{F}}, t^k, \varvec{\beta }^k,\theta ^k}&\left\{ t^k-\lambda (\theta ^{k})^q: \Vert t^k\mathbf {e}-\bar{\varvec{c}}^k+{\varvec{\beta }}^k\Vert _r\le \theta ^{k},\beta _j^k\ge -M_j^k(1-z_j^k), \forall j\in [n]\right\} , \end{aligned}$$
(31)

where \(M_{j}^k= {\widehat{M}}^k-{\bar{c}}_{j}^k\) and \( {\widehat{M}}^k=\max _{\tau \in [n]}{\bar{c}}_\tau ^k+(\lambda q)^{-1/(q-1)}\).

Proof

According to the proof of Theorem 10, \(v_U=\min _{\lambda \ge 0}\{\lambda \theta ^q+1/N\sum _{k\in [N]}v_U^k(\lambda )\}\), where

$$\begin{aligned} v_U^k(\lambda ):=\max _{y\in F,t^k,\varvec{\beta }^k,\theta ^k}\left\{ t^k-\lambda (\theta ^{k})^q:\Vert t^k\mathbf {e}-\bar{\varvec{c}}^k+{\varvec{\beta }}^k\Vert _r\le \theta ^{k}, \beta _j^k\ge 0, \forall j\in y\right\} . \end{aligned}$$

Above, we can augment the vector \(\varvec{\beta }^k\) by defining a free variable \(\beta _j^k\) for each \(j\in [n]\setminus y\) and rewrite \(v_U^k(\lambda )\) as

$$\begin{aligned} v_U^k(\lambda ):=\max _{y\in F,t^k,\varvec{\beta }^k,\theta ^k}\left\{ t^k-\lambda (\theta ^{k})^q:\Vert t^k\mathbf {e}-\bar{\varvec{c}}^k+{\varvec{\beta }}^k\Vert _r\le \theta ^{k}, \beta _j^k\ge 0, \forall j\in y, \beta _j^k \in {{\mathbb {R}}}, \forall j\in [n]\setminus y\right\} , \end{aligned}$$

since at optimality, we must have \(\beta _{j}^k=-t^k+{{\bar{c}}}_j^k\) for each \(j\in [n]\setminus y\).

As the blocker F admits a binary programming representation \({\widehat{F}}\), the value \(v_U^k(\lambda )\) is further equal to

$$\begin{aligned} v_U^k(\lambda ):=\max _{\varvec{z}^k\in {\widehat{F}},t^k,\varvec{\beta }^k,\theta ^k}\left\{ t^k-\lambda (\theta ^{k})^q:\Vert t^k\mathbf {e}-\bar{\varvec{c}}^k+{\varvec{\beta }}^k\Vert _r\le \theta ^{k}, z_j^k\beta _j^k\ge 0, \forall j\in [n]\right\} . \end{aligned}$$
(32)

To prove that (32) and (31) are equivalent, we only need to show that \(\beta _{j}^k\ge -M_j^k\) for each \(j\in [n]\). Since at optimality \(\beta _{j}^k\) is equal to either 0 or \(-t^k+{{\bar{c}}}_j^k\), it suffices to find an upper bound on \(t^k\). In (29), suppose that the optimal \(t^k\ge \max _{\tau \in [n]}{\bar{c}}_\tau ^k\); then \(I^k(t^k,y)=y\). The first-order optimality condition of the inner maximization of (29) yields

$$\begin{aligned} 1&= \lambda q\left( \sum _{j\in y}(t^k-{{\bar{c}}}_j^k)^r\right) ^{\frac{q-r}{r}}\left( \sum _{j\in y}(t^k-{{\bar{c}}}_j^k)^{r-1}\right) \ge \lambda q\left( t^k-\max _{\tau \in [n]}{\bar{c}}_\tau ^k\right) ^{q-1}, \end{aligned}$$

where the inequality is due to \(t^k-{{\bar{c}}}_j^k\ge t^k-\max _{\tau \in [n]}{\bar{c}}_\tau ^k\) and \(|y|\ge 1\). Thus, we can define \({\widehat{M}}^k\) as

$$\begin{aligned} {\widehat{M}}^k=\max _{\tau \in [n]}{\bar{c}}_\tau ^k+(\lambda q)^{-1/(q-1)}. \end{aligned}$$

\(\square \)

We remark that: (i) to obtain \(v_U\), one needs to fix \(\lambda \), solve the N separate MIPs (31), and then optimize over \(\lambda \), which requires more computational time than DRBCP-U (2a) under the \(\infty \)-Wasserstein ambiguity set; and (ii) as long as r is a rational number, using the conic quadratic representation results in [4], the MIP (31) can be formulated as a mixed-integer second-order conic program (MISOCP).
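
As a numerical sanity check on reformulation (29), the following is a minimal brute-force sketch for a tiny hypothetical instance with \(q=r=2\) (the collection F, the cost samples, and the search bounds are all made up for illustration). Since the inner problem over \(t^k\) is concave and the outer problem over \(\lambda \) is convex, both are solved here by ternary search instead of the MIPs (31) and the bisection method:

```python
# Toy evaluation of (29) with q = r = 2: the inner concave problem over t
# and the outer convex problem over lambda are both solved by ternary search.

def inner_value(lam, c_bar, y, q=2.0, r=2.0, lo=-50.0, hi=50.0):
    """max_t  t - lam * ( sum_{j in I(t,y)} (t - c_bar_j)^r )^(q/r)."""
    def f(t):
        s = sum((t - c_bar[j]) ** r for j in y if t >= c_bar[j])
        return t - lam * s ** (q / r)
    for _ in range(200):  # ternary search on a concave 1-D function
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return f(0.5 * (lo + hi))

def v_U(theta, samples, F, q=2.0, r=2.0):
    """min_{lam >= 0} lam*theta^q + (1/N) sum_k max_{y in F} inner_value."""
    def g(lam):
        return lam * theta ** q + sum(
            max(inner_value(lam, c, y, q, r) for y in F) for c in samples
        ) / len(samples)
    lo, hi = 1e-6, 50.0  # ternary search on the convex outer problem
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if g(m1) > g(m2):
            lo = m1
        else:
            hi = m2
    return g(0.5 * (lo + hi))

# Two cost samples over n = 3 elements; F holds two candidate subsets y.
samples = [[1.0, 2.0, 3.0], [2.0, 1.0, 4.0]]
F = [(0, 1), (1, 2)]
v_saa = sum(max(min(c[j] for j in y) for y in F) for c in samples) / len(samples)
print(v_saa, v_U(0.1, samples, F))
```

On this instance the gap \(v_U-v_U^{SAA}\) comes out to roughly \(\theta \), consistent with the bounds of Proposition 8.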

1.3 A.3 Confidence bounds

In this subsection, we compare \(v_U\) with its sample average approximation counterpart, which further motivates a choice of the Wasserstein radius \(\theta \). Recall that \(v_U^{SAA}\) and \(v_U^{T}\) are defined in (14a) and (14b), respectively. Our goal is to determine a proper \(\theta \) such that \(v_U\ge v_U^{T}\) with high probability. To begin with, we establish the relationship between \(v_U\) and \(v_U^{SAA}\).

Proposition 8

Let \(v_U^{SAA}\) be defined in (14a). Then

$$\begin{aligned} \frac{\theta }{\max _{y\in F}|y|^{\frac{1}{r}}}\le v_U-v_U^{SAA}\le \theta . \end{aligned}$$

Proof

  1. (i)

    To prove \(v_U-v_U^{SAA}\le \theta \), using the fact that \(t^k \ge \min _{j\in y}{\bar{c}}_j^k\), we can upper bound \(v_U\) as

    $$\begin{aligned} v_U&\le \min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{t^k\ge \min _{j\in y}{\bar{c}}_j^k}\left\{ t^k-\lambda \left( t^k-\min _{j\in y}{\bar{c}}_j^k\right) ^{q}\right\} \right\} . \end{aligned}$$
    (33)

    To optimize the inner maximization, there are two cases:

    1. Case 1.

      If \(q=1\), then the inequality (33) is equivalent to

      $$\begin{aligned} v_U&\le \min _{\lambda \ge 0}\left\{ \lambda \theta +\frac{1}{N}\sum _{k\in [N]}\max _{y\in F} \min _{j\in y}{\bar{c}}_j^k: \lambda \ge 1\right\} =v_U^{SAA}+\theta . \end{aligned}$$
    2. Case 2.

      If \(q>1\), then the inequality (33) is equivalent to

      $$\begin{aligned} v_U&\le \min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{q-1}{q^{\frac{q}{q-1}}}\lambda ^{-\frac{1}{q-1}}+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F} \min _{j\in y}{\bar{c}}_j^k\right\} \\&=v_U^{SAA}+\theta . \end{aligned}$$
  2. (ii)

    To prove \(v_U-v_U^{SAA}\ge \frac{\theta }{{\max }_{y\in F}|y|^{\frac{1}{r}}}\), using the fact that \(t^k \ge \min _{j\in y}{\bar{c}}_j^k\) again, we can bound \(v_U\) from below as

    $$\begin{aligned} v_U&\ge \min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F}\max _{t^k\ge \min _{j\in y}{\bar{c}}_j^k}\left\{ t^k-\lambda |y|^{\frac{q}{r}}\left( t^k-\min _{j\in y}{\bar{c}}_j^k\right) ^{q}\right\} \right\} . \end{aligned}$$
    (34)

    To optimize the inner maximization, there are two cases:

    1. Case 1.

      If \(q=1\), then the inequality (34) is equivalent to

      $$\begin{aligned} v_U&\ge \min _{\lambda \ge 0}\left\{ \lambda \theta +\frac{1}{N}\sum _{k\in [N]}\max _{y\in F} \min _{j\in y}{\bar{c}}_j^k: \lambda \ge \frac{1}{\min _{h\in F}|h|^{\frac{q}{r}}}\right\} =v_U^{SAA}+\frac{\theta }{\min _{y\in F}|y|^{\frac{q}{r}}}\\&\ge v_U^{SAA}+\frac{\theta }{\max _{y\in F}|y|^{\frac{q}{r}}}. \end{aligned}$$
    2. Case 2.

      If \(q>1\), using the fact that \(|y|^{\frac{q}{r}}\le \max _{h\in F}|h|^{\frac{q}{r}}\) for all \(y\in F\), then the right-hand side of inequality (34) is further lower bounded by

      $$\begin{aligned} v_U&\ge \min _{\lambda \ge 0}\left\{ \lambda \theta ^q+\frac{q-1}{q^{\frac{q}{q-1}}}\left( \frac{1}{\lambda \max _{h\in F}|h|^{\frac{q}{r}}}\right) ^{\frac{1}{q-1}}+\frac{1}{N}\sum _{k\in [N]}\max _{y\in F} \min _{j\in y}{\bar{c}}_j^k\right\} \\&=v_U^{SAA}+\frac{\theta }{\max _{y\in F}|y|^{\frac{1}{r}}}. \end{aligned}$$

      \(\square \)
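
Both Case 2 computations above rely on the same one-dimensional optimization. For completeness, here is the underlying closed form (a short derivation consistent with the steps above, where the constant \(a>0\) plays the role of 1 in part (i) and of \(\max _{h\in F}|h|^{\frac{q}{r}}\) in part (ii)):

```latex
% For q > 1, the first-order condition 1 = \lambda a q (t-c)^{q-1}
% gives t^{*} = c + (\lambda a q)^{-1/(q-1)}, and hence
\max_{t\ge c}\left\{ t-\lambda a (t-c)^{q}\right\}
  = c + \frac{q-1}{q^{\frac{q}{q-1}}}\,(\lambda a)^{-\frac{1}{q-1}}.
% Minimizing over \lambda \ge 0, the optimum is attained at
% \lambda^{*} = q^{-1}\theta^{-(q-1)}a^{-1/q}, so that
\min_{\lambda\ge 0}\left\{ \lambda\theta^{q}
  + \frac{q-1}{q^{\frac{q}{q-1}}}\,(\lambda a)^{-\frac{1}{q-1}}\right\}
  = \frac{\theta}{a^{1/q}}.
```

Setting \(a=1\) yields the upper bound \(v_U^{SAA}+\theta \), while \(a=\max _{h\in F}|h|^{\frac{q}{r}}\) yields the lower-bound gap \(\theta /\max _{h\in F}|h|^{\frac{1}{r}}\).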

Now we are ready to show the following main result on the relationship between the value \(v_U\) of DRBCP-U and the true value \(v_U^T\).

Theorem 11

Suppose that there exists a positive \(\sigma \) such that \({{\mathbb {E}}}_{{\mathbb {P}}^T}[\exp ((Z(\tilde{\varvec{c}})-v_U^T)^2/\sigma ^2)]\le e\), and let \(\epsilon \in (0,1)\). Then:

  1. (i)

    let \(\theta =N^{-\frac{1}{2}}\sigma \sqrt{-3\log (\epsilon )}{\max }_{y\in F}|y|^{\frac{1}{r}}=O(N^{-\frac{1}{2}})\). Then we have

    $$\begin{aligned} {\mathbb {P}}^T\left\{ v_U\ge v_U^T\right\} \ge 1-\epsilon ; \end{aligned}$$
  2. (ii)

    let \(\theta =N^{-\frac{1}{2}}\sigma \sqrt{-3\log (\epsilon )}=O(N^{-\frac{1}{2}})\). Then we have

    $$\begin{aligned} {\mathbb {P}}^T\left\{ v_U\le v_U^T+2\theta \right\} \ge 1-\epsilon . \end{aligned}$$

Proof

The proof is almost identical to that of Theorem 3 and is thus omitted. \(\square \)
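
The two radius choices in Theorem 11 can be packaged as a small helper (a hypothetical function name; the arguments mirror \(N\), \(\sigma \), \(\epsilon \), \(\max _{y\in F}|y|\), and r from the theorem):

```python
import math

# Radius choices of Theorem 11: part (i) scales by max_{y in F} |y|^{1/r},
# part (ii) does not.  Both decay at the O(N^{-1/2}) rate.

def wasserstein_radius(N, sigma, eps, max_card=1, r=1.0, lower=True):
    scale = max_card ** (1.0 / r) if lower else 1.0
    return sigma * math.sqrt(-3.0 * math.log(eps)) * scale / math.sqrt(N)

# Quadrupling the sample size halves the radius.
t1 = wasserstein_radius(100, sigma=1.0, eps=0.05)
t2 = wasserstein_radius(400, sigma=1.0, eps=0.05)
print(t1, t2)
```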

1.4 A.4 Numerical illustration of DRBCP-U (2a) under the \(2\)-Wasserstein ambiguity set

See Table 3.

Table 3 Numerical results of DRBCP-U (2a) under the \(2\)-Wasserstein ambiguity set with application to the multi-hop network

1.5 A.5 Numerical illustration of DRBCP-D (2b) under \(\phi \)-divergence

Alternatively, we can consider DRBCP-D (2b) under a \(\phi \)-divergence ambiguity set \({{\mathcal {P}}}\). In particular, we construct the ambiguity set based on the total-variation distance with radius \(d\in [0,2]\). According to [25], given \(d\in [0,2]\), the distributionally robust fair matching problem in Example 6 under the total-variation-based ambiguity set admits the following form:

$$\begin{aligned} \begin{aligned} v_{D\phi }=\min _{\varvec{v,{\bar{v}}, z},\beta ,r}\&(1-d/2)\beta +\frac{1}{N}\sum _{k\in [N]}{{\bar{v}}}^k+dr/2\\ \text {s.t.}\&{{\bar{v}}}^k\ge v^k-\beta , {{\bar{v}}}^k\ge 0, \forall k \in [N],\\&r\ge v^k, \forall k \in [N], \\&v^k\ge {\overline{c}}_{ij}^kz_{ij}, \ \forall i, j\in [m],\forall k\in [N], \\&\sum _{i\in [m]}z_{ij}=1, \sum _{j\in [m]}z_{ij}=1, z_{ij}\in \{0,1\}, \ \forall i, j\in [m]. \end{aligned} \end{aligned}$$
(35)

The numerical results using the same datasets as those in Sect. 5.2 are displayed in Fig. 2 and Table 4.
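
For small instances, model (35) can be checked by brute force (a sketch with hypothetical data): since, for a fixed matching, the objective is piecewise-linear convex in \(\beta \) with breakpoints at \(\{v^1,\dots ,v^N\}\), it suffices to enumerate matchings and breakpoints.

```python
import itertools

def fair_matching_tv(costs, d):
    """costs[k][i][j]: sample-k cost of matching i to j; TV radius d in [0, 2]."""
    N, m = len(costs), len(costs[0])
    best = float("inf")
    for perm in itertools.permutations(range(m)):
        # Bottleneck value of this matching under each sample k.
        v = [max(costs[k][i][perm[i]] for i in range(m)) for k in range(N)]
        # Optimize beta over the breakpoints of the piecewise-linear objective.
        obj = min(
            (1.0 - d / 2.0) * b
            + sum(max(vk - b, 0.0) for vk in v) / N
            + d / 2.0 * max(v)
            for b in v
        )
        best = min(best, obj)
    return best

costs = [
    [[1.0, 4.0], [3.0, 2.0]],  # sample 1
    [[2.0, 5.0], [1.0, 3.0]],  # sample 2
]
print(fair_matching_tv(costs, d=0.0))  # d = 0 recovers the SAA average
```

At \(d=0\) the model reduces to the sample average of the bottleneck values, while at \(d=2\) it returns the worst sample value, matching the interpretation of the total-variation radius.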

Fig. 2

Numerical illustration of fair matching problem under total variation-based ambiguity set (35): using real-world dataset in Sect. 5.2

Table 4 Numerical results of fair matching problem under total variation-based ambiguity set (35): using hypothetical dataset in Sect. 5.2

About this article

Cite this article

Xie, W., Zhang, J. & Ahmed, S. Distributionally robust bottleneck combinatorial problems: uncertainty quantification and robust decision making. Math. Program. 196, 597–640 (2022). https://doi.org/10.1007/s10107-021-01627-0

