
No-Collision Transportation Maps

Journal of Scientific Computing 82, 45 (2020)

Abstract

Transportation maps between probability measures are critical objects in numerous areas of mathematics and applications such as PDE, fluid mechanics, geometry, machine learning, computer science, and economics. Given a pair of source and target measures, one searches for a map that has suitable properties and transports the source measure to the target one. Here, we study maps that possess the no-collision property; that is, particles simultaneously traveling from sources to targets in unit time with uniform velocities do not collide. These maps are particularly relevant for applications in swarm control problems. We characterize these no-collision maps in terms of a half-space-preserving property and establish a direct connection between these maps and binary-space-partitioning (BSP) tree structures. Based on this characterization, we provide explicit BSP algorithms, of cost \(O(n \log n)\), to construct no-collision maps. Moreover, interpreting these maps as approximations of optimal transportation maps, we find that they succeed in computing nearly optimal maps for the q-Wasserstein metric (\(q=1,2\)). In some cases, our maps yield costs within a few percent of the optimal value.
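To make the BSP construction concrete, here is a minimal Python sketch (not the algorithms of the paper) that pairs two equal-size point clouds by recursive median splits along coordinate axes: both clouds are cut along the same axis into halves of equal cardinality, the corresponding halves are matched recursively, and singletons are paired at the leaves. The function name no_collision_match, the choice to cycle the cutting axis with depth, and the use of numpy.argpartition for linear-time median selection are illustrative assumptions made here.

```python
import numpy as np


def no_collision_match(X, Y, depth=0):
    """Pair source points X with target points Y by recursive axis-aligned
    median splits (a BSP-type construction).

    Assumes X and Y are (n, d) arrays with the same n; exact equal-mass
    halving corresponds to a power-of-two number of points.
    """
    n, d = X.shape
    if n == 1:
        return [(X[0], Y[0])]
    axis = depth % d                      # cycle through the coordinate directions
    h = n // 2
    ix = np.argpartition(X[:, axis], h)   # O(n) median split of the sources
    iy = np.argpartition(Y[:, axis], h)   # O(n) median split of the targets
    lower = no_collision_match(X[ix[:h]], Y[iy[:h]], depth + 1)
    upper = no_collision_match(X[ix[h:]], Y[iy[h:]], depth + 1)
    return lower + upper


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2 ** 8
    X = rng.random((n, 2))                # sources in the unit square
    Y = rng.random((n, 2)) + 2.0          # targets, shifted away from the sources
    pairs = no_collision_match(X, Y)
    avg_cost = sum(np.linalg.norm(x - y) for x, y in pairs) / n
    print(f"matched {len(pairs)} pairs, average transport distance {avg_cost:.3f}")
```

Each level of the recursion touches every point a constant number of times and there are \(O(\log n)\) levels, so the cost of such a construction is \(O(n \log n)\), consistent with the complexity quoted above; the pairing sends each half of the sources produced by a cut to the corresponding half of the targets, mirroring the half-space-preserving property discussed in the abstract.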



Author information

Corresponding author

Correspondence to Levon Nurbekyan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

L.N. was supported by Simons Foundation and the Centre de Recherches Mathématiques, through the Simons-CRM scholar-in-residence program, AFOSR MURI FA9550-18-1-0502, and ONR Grant N00014-18-1-2527. This material is based on work supported by the Air Force Office of Scientific Research under Award Number FA9550-18-1-0167.

Appendix

Proof (Theorem 3)

We assume that \(\{e_i\}_{i=1}^d\) is the standard basis in \(\mathbb {R}^d\) because the proof for a general basis is identical up to multiplication by a suitable volume element.

1. Suppose that \(x \in \mathrm {int}\left( \mathrm {supp}(\mu )\right) \) and \(x^{\prime }\in \mathbb {R}^d\), \(x\ne x^{\prime }\), but they never get separated by a hyperplane. Therefore, whenever a common subset containing \(x,x^{\prime }\) gets partitioned, they always stay on the same side. Suppose that \(\{A_k\}\) is the sequence of subsets to which they both belong during the cutting process. By construction, we have that

$$\begin{aligned} A_{k+1}\subset A_k,\quad \mu (A_{k+1})=\frac{\mu (A_k)}{2}. \end{aligned}$$

Since \(\{v_k\}_{k=1}^\infty \subset \{e_1,e_2,\ldots ,e_d\}\) we have that

$$\begin{aligned} A_k=\left( \alpha ^k_1,\beta _1^k\right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] , \end{aligned}$$

where \(-\infty \le \alpha ^k_i <\beta ^k _i \le +\infty \). Denote by

$$\begin{aligned} x=(x_1,x_2,\ldots ,x_d),\quad x^{\prime }=(x^{\prime }_1,x^{\prime }_2,\ldots ,x^{\prime }_d). \end{aligned}$$

Without loss of generality, assume that

$$\begin{aligned} x_i\ne x^{\prime }_i,~1\le i \le l,\quad x_i=x^{\prime }_i,~i>l. \end{aligned}$$

Since \(\{A_k\}\) are rectangles we have that

$$\begin{aligned} R=\bigcap _k A_k, \end{aligned}$$

is also a rectangle, and we denote by

$$\begin{aligned} R=[\alpha _1,\beta _1]\times [\alpha _2,\beta _2]\times \cdots \times [\alpha _d,\beta _d], \end{aligned}$$

and we have that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\alpha _i^k=\alpha _i,\quad \lim \limits _{k\rightarrow \infty }\beta _i^k=\beta _i,\quad 1\le i \le d. \end{aligned}$$

Since \(x,x^{\prime } \in R\) we have that

$$\begin{aligned} \beta _i-\alpha _i \ge |x_i-x^{\prime }_i|>0,\quad 1\le i \le l. \end{aligned}$$

Furthermore, we have that

$$\begin{aligned} \mu (R)=\lim \limits _{k\rightarrow \infty } \mu (A_k)=0. \end{aligned}$$
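Indeed, the halving property gives \(\mu (A_k)=2^{-(k-1)}\mu (A_1)\), and, since \(\mu \) is a finite measure, continuity from above yields

$$\begin{aligned} \mu (R)=\mu \left( \bigcap _k A_k\right) =\lim \limits _{k\rightarrow \infty } 2^{-(k-1)}\mu (A_1)=0. \end{aligned}$$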

If \(\mathcal {L}^d(R)>0\) then we have that \(\mathrm {int}(R)\ne \emptyset \) and \(\mathrm {int}(R)\cap \mathrm {supp}(\mu )=\emptyset \), which contradicts the fact that \(x \in \mathrm {int}\left( \mathrm {supp}(\mu )\right) \). Therefore, we have that \(\mathcal {L}^d(R)=0\), which means that \(\alpha _i=\beta _i\) for some \(i>l\). Without loss of generality assume that

$$\begin{aligned} \alpha _i<\beta _i,~1\le i \le q,\quad \alpha _i=\beta _i,~i>q. \end{aligned}$$

We have that \(q\ge l\). Moreover, \(\alpha _i=\beta _i=x_i=x^{\prime }_i\), and \(-\infty<\alpha _i^k<\beta _i^k<\infty \) for all \(i>q\) and k large enough. Additionally, if \(-\infty<\alpha _i<\beta _i<\infty \) for some \(1\le i \le q\) then \(-\infty<\alpha _i^k<\beta _i^k<\infty \) for k large enough. In what follows, we assume that k is large enough that these statements hold.

Furthermore, assume that \(M>0\) is such that

$$\begin{aligned} \mathrm {supp}(\mu ) \subset [-M,M]^d. \end{aligned}$$

Since \(\mu =fdx\), by construction we have that

$$\begin{aligned} \int _{A_k{\setminus } A_{k+1}} fdx= \int _{A_{k+1}} fdx,\quad \forall k. \end{aligned}$$

Therefore, using \(c\le f \le C\), \(\mu \)-a.e., we get that

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1}\cap \mathrm {supp}(\mu ) ) } \ge \frac{c}{C}>0,\quad \forall k. \end{aligned}$$
(6)
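To spell out this step, read the bounds \(c\le f\le C\) as holding a.e. on \(\mathrm {supp}(\mu )\) (which carries the full mass of \(\mu \)); then the equal-mass identity above gives

$$\begin{aligned} c\,\mathcal {L}^d\left( A_{k+1}\cap \mathrm {supp}(\mu )\right) \le \int _{A_{k+1}} f\,dx=\int _{A_k{\setminus } A_{k+1}} f\,dx\le C\,\mathcal {L}^d\left( \left( A_k{\setminus } A_{k+1}\right) \cap \mathrm {supp}(\mu )\right) , \end{aligned}$$

which rearranges to (6).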

Now, suppose that for some k the set \(A_k\) gets partitioned in the direction \(e_1\). There are three possibilities: (a) \(-\infty<\alpha _1<\beta _1<\infty \), (b) \(-\infty<\alpha _1<\beta _1=\infty \), (c) \(-\infty =\alpha _1<\beta _1<\infty \).

(a) \(-\infty<\alpha _1<\beta _1<\infty \). In this case, we have that \(-\infty<\alpha _1^k<\beta _1^k<\infty \) since k is large enough. Therefore, either

$$\begin{aligned} \begin{aligned} A_{k+1}&=\left( \alpha ^k_1,\gamma \right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] ,\\ A_k{\setminus } A_{k+1}&=\left( \gamma ,\beta ^k_1\right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] , \end{aligned} \end{aligned}$$

or

$$\begin{aligned} \begin{aligned} A_{k+1}&=\left( \gamma ,\beta ^k_1\right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] ,\\ A_k{\setminus } A_{k+1}&=\left( \alpha ^k_1,\gamma \right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] , \end{aligned} \end{aligned}$$

for some \(\alpha _1^k<\gamma <\beta _1^k\). Suppose that we are in the former case.

Since \(x\in \mathrm {int}(\mathrm {supp}(\mu ))\) we have that there exists a \(\sigma >0\) such that

$$\begin{aligned} \times _{i=1}^d [x_i-\sigma ,x_i+\sigma ] \subset \mathrm {supp}(\mu ). \end{aligned}$$

We have that

$$\begin{aligned} A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ) \subset A_k{\setminus } A_{k+1}\cap [-M,M]^d, \end{aligned}$$

and therefore

$$\begin{aligned} \begin{aligned}&\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu )) \\&\quad \le \mathcal {L}^d(A_k{\setminus } A_{k+1}\cap [-M,M]^d)\le \left( \beta _1^k-\gamma \right) \prod _{i=2}^d \min \left\{ \beta _i^k-\alpha _i^k,2M\right\} \\&\quad \le \left( \beta _1^k-\beta _1\right) (2M)^{q-1} \prod _{i>q} \left( \beta _i^k-\alpha _i^k\right) , \end{aligned} \end{aligned}$$

where we used the fact that \(\beta _1\le \gamma < \beta _1^k\) and

$$\begin{aligned} \lim \limits _{k\rightarrow \infty } \alpha _i^k= \lim \limits _{k\rightarrow \infty } \beta _i^k=x_i,\quad i>q. \end{aligned}$$

On the other hand, we have that

$$\begin{aligned} A_{k+1} \cap \mathrm {supp}(\mu ) \supset A_{k+1} \cap \times _{i=1}^d [x_i-\sigma ,x_i+\sigma ], \end{aligned}$$

therefore

$$\begin{aligned} \begin{aligned}&\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))\\&\quad \ge \mathcal {L}^d\left( A_{k+1} \cap \times _{i=1}^d [x_i-\sigma ,x_i+\sigma ]\right) \ge \min \left\{ \gamma -\alpha _1^k,\sigma \right\} \prod _{i=2}^d \min \left\{ \beta _i^k-\alpha _i^k,\sigma \right\} \\&\quad \ge \prod _{i=1}^q \min \left\{ \beta _i-\alpha _i,\sigma \right\} \prod _{i>q} \left( \beta _i^k-\alpha _i^k\right) . \end{aligned} \end{aligned}$$
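The last inequality uses that \(R\subset A_{k+1}\) forces \(\gamma \ge \beta _1\) and \(\alpha _1^k\le \alpha _1\), and that \(\alpha _i^k\le \alpha _i\le \beta _i\le \beta _i^k\) for every i; hence

$$\begin{aligned} \min \left\{ \gamma -\alpha _1^k,\sigma \right\} \ge \min \left\{ \beta _1-\alpha _1,\sigma \right\} ,\qquad \min \left\{ \beta _i^k-\alpha _i^k,\sigma \right\} \ge \min \left\{ \beta _i-\alpha _i,\sigma \right\} ,\quad 2\le i \le q, \end{aligned}$$

while for \(i>q\) we have \(\beta _i^k-\alpha _i^k\rightarrow 0\), so that \(\min \left\{ \beta _i^k-\alpha _i^k,\sigma \right\} =\beta _i^k-\alpha _i^k\) for k large enough.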

Hence, we obtain that

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))} \le (\beta _1^k-\beta _1) \frac{(2M)^{q-1}}{\prod _{i=1}^q \min \{ \beta _i-\alpha _i,\sigma \}}. \end{aligned}$$

Similarly, if \(x,x^{\prime }\) fall in the upper (in \(e_1\) direction) half of \(A_k\) we get that

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))} \le (\alpha _1-\alpha _1^k) \frac{(2M)^{q-1}}{\prod _{i=1}^q \min \{ \beta _i-\alpha _i,\sigma \}}. \end{aligned}$$

(b) \(-\infty<\alpha _1<\beta _1=\infty \). In this case we have that \(-\infty<\alpha _1^k<\beta _1^k=\infty \), and

$$\begin{aligned} \begin{aligned} A_{k+1}&=(\gamma ,\infty ] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] ,\\ A_k{\setminus } A_{k+1}&=\left( \alpha ^k_1,\gamma \right] \times \left( \alpha ^k_2,\beta ^k_2\right] \times \cdots \times \left( \alpha ^k_d,\beta ^k_d\right] , \end{aligned} \end{aligned}$$

for some \(\alpha _1^k<\gamma <\infty \). As before, we have that

$$\begin{aligned} \begin{aligned}&\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))\\&\quad \le (\gamma -\alpha _1^k) (2M)^{q-1} \prod _{i>q}(\beta _i^k-\alpha _i^k) \le (\alpha _1-\alpha _1^k) (2M)^{q-1} \prod _{i>q}(\beta _i^k-\alpha _i^k). \end{aligned} \end{aligned}$$

Similarly, we have that

$$\begin{aligned} \mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu )) \ge \prod _{i=1}^q \min \{ \beta _i-\alpha _i,\sigma \} \prod _{i>q} (\beta _i^k-\alpha _i^k), \end{aligned}$$

and thus

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))} \le (\alpha _1-\alpha _1^k) \frac{(2M)^{q-1}}{\prod _{i=1}^q \min \{ \beta _i-\alpha _i,\sigma \}}. \end{aligned}$$

(c) \(-\infty =\alpha _1<\beta _1<\infty \). In this case, we have that \(-\infty =\alpha _1^k<\beta _1^k<\infty \), and

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))} \le (\beta _1^k-\beta _1) \frac{(2M)^{q-1}}{\prod _{i=1}^q \min \{ \beta _i-\alpha _i,\sigma \}}. \end{aligned}$$

Summarizing, since \(\beta _1^k-\beta _1\rightarrow 0\) and \(\alpha _1-\alpha _1^k\rightarrow 0\) as \(k\rightarrow \infty \), whenever \(A_k\) gets partitioned in the direction \(e_1\) we have that

$$\begin{aligned} \frac{\mathcal {L}^d(A_k{\setminus } A_{k+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k+1} \cap \mathrm {supp}(\mu ))} \le o(1). \end{aligned}$$

We get similar estimates for partitions in any of the directions \(\{e_i\}_{i=1}^q\). Thus, if we take a subsequence \(\{A_{k_m}\}\) of sets that get partitioned in one of these directions, we get that

$$\begin{aligned} \lim \limits _{m\rightarrow \infty } \frac{\mathcal {L}^d(A_{k_m}{\setminus } A_{k_m+1} \cap \mathrm {supp}(\mu ))}{\mathcal {L}^d(A_{k_m+1} \cap \mathrm {supp}(\mu ))} =0, \end{aligned}$$

which contradicts (6). Thus, the first item is proven.

2. First, we show that for every \(x\in \mathrm {int}(\mathrm {supp}(\mu ))\) there exists \(y \in \mathrm {supp}(\nu )\) such that

$$\begin{aligned} \hat{s}(x)=\hat{r}(y). \end{aligned}$$

Assume that \(\{A_k\}\) is the sequence of partition sets that contain x. Again, we have that

$$\begin{aligned} A_{k+1}\subset A_k,\quad \mu (A_{k+1})=\frac{\mu (A_k)}{2}, \end{aligned}$$

and

$$\begin{aligned} A_k=(\alpha ^k_1,\beta _1^k] \times (\alpha ^k_2,\beta ^k_2]\times \cdots \times (\alpha ^k_d,\beta ^k_d], \end{aligned}$$

for some \(-\infty \le \alpha ^k_i <\beta ^k _i \le +\infty \). Moreover, from the previous item we obtain that

$$\begin{aligned} \bigcap _k A_k=\{x\}, \end{aligned}$$

because \(x\in \mathrm {int}(\mathrm {supp}(\mu ))\), and the intersection cannot contain any other point. Therefore, we have that \(-\infty< \alpha ^k_i<\beta ^k _i < +\infty \), and

$$\begin{aligned} \alpha ^k_i \nearrow x_i,\quad \beta ^k_i \searrow x_i,\quad \text{ as }~k\rightarrow \infty , \end{aligned}$$

where \(x=(x_1,x_2,\ldots ,x_d)\).

Denote by \(\{B_k\}\) the dual sequence of \(\{A_k\}\) that partitions \(\nu \). Again, we have that

$$\begin{aligned} B_k=\left( \gamma ^k_1,\delta _1^k\right] \times \left( \gamma ^k_2,\delta ^k_2\right] \times \cdots \times \left( \gamma ^k_d,\delta ^k_d\right] , \end{aligned}$$

for some \(-\infty< \gamma ^k_i<\delta ^k _i < +\infty \). Thus, our first task is to show that

$$\begin{aligned} \bigcap _k B_k \cap \mathrm {supp}(\nu )\ne \emptyset . \end{aligned}$$

Since \(\alpha _i^k<x_i\), the sequence \(\{\alpha _i^k\}_k\) is not eventually constant. Therefore, \(\{\gamma ^k_i\}_k\) is not eventually constant either. Moreover, \(\{\gamma ^k_i\}_k\) and \(\{\delta ^k_i\}_k\) are, respectively, nondecreasing and nonincreasing sequences. Therefore, we have that

$$\begin{aligned} W_x=\bigcap _k B_k =[\gamma _1,\delta _1]\times [\gamma _2,\delta _2] \times \cdots \times [\gamma _d,\delta _d], \end{aligned}$$

where

$$\begin{aligned} \gamma _i=\sup _k \gamma _i^k,\quad \delta _i=\inf _k \delta _i^k,\quad 1\le i \le d. \end{aligned}$$

In fact, we have that

$$\begin{aligned} W_x=\bigcap _k \mathrm {cl}(B_k). \end{aligned}$$

Since \(\nu (B_k)>0\) we have that \(\mathrm {cl}(B_k) \cap \mathrm {supp}(\nu ) \ne \emptyset \). Thus,

$$\begin{aligned} \{\mathrm {cl}(B_k) \cap \mathrm {supp}(\nu )\}_k \end{aligned}$$

is a nested family of nonempty compact sets. Therefore, we have that

$$\begin{aligned} W_x\cap \mathrm {supp}(\nu )= \bigcap _k \mathrm {cl}(B_k) \cap \mathrm {supp}(\nu ) \ne \emptyset . \end{aligned}$$

If \(W_x\cap \mathrm {int}(\mathrm {supp}(\nu )) \ne \emptyset \) then by item 1, we get that

$$\begin{aligned} W_x=\{y\}, \end{aligned}$$

for some \(y \in \mathrm {int}(\mathrm {supp}(\nu ))\). Hence, to complete the proof of item 2, we need to show that there exists an \(F_0 \in \mathcal {B}(\mathbb {R}^d)\) such that \(\mu (F_0)=0\), and

$$\begin{aligned} W_x \cap \mathrm {int}(\mathrm {supp}(\nu )) \ne \emptyset ,\quad \forall x\in \mathrm {int}(\mathrm {supp}(\mu )){\setminus } F_0. \end{aligned}$$

For every k denote by

$$\begin{aligned} \varDelta ^{\prime }_k=\{B \in \varDelta _k ~\text{ s.t. }~B \cap \partial (\mathrm {supp}(\nu )) \ne \emptyset \}, \end{aligned}$$

and

$$\begin{aligned} \varDelta ^{\prime \prime }_k=\varDelta _k {\setminus } \varDelta ^{\prime }_k. \end{aligned}$$

Since

$$\begin{aligned} \bigcup _{\varDelta _k} B =\mathbb {R}^d, \end{aligned}$$

we obtain that

$$\begin{aligned} \partial (\mathrm {supp}(\nu ) ) \subset \bigcup _{\varDelta ^{\prime }_k} B=H_k. \end{aligned}$$

Furthermore, for every \(B \in \varDelta _k^{\prime \prime }\) we have that \(\nu (B)>0\), and therefore \(B \cap \mathrm {supp}(\nu ) \ne \emptyset \). On the other hand, \(B \cap \partial (\mathrm {supp}(\nu )) = \emptyset \), and B is connected. Hence, \(B \subset \mathrm {int}(\mathrm {supp}(\nu ))\), and

$$\begin{aligned} G_k= \bigcup _{\varDelta ^{\prime \prime }_k} B \subset \mathrm {int}(\mathrm {supp}(\nu )). \end{aligned}$$

Note that

$$\begin{aligned} H_k \supset H_{k+1},\quad G_k \subset G_{k+1},\quad \forall k. \end{aligned}$$

By item 1 we have that for every \(y \in \mathrm {int}(\mathrm {supp}(\nu ))\) there exists a partition rectangle B such that \(y\in B \subset \mathrm {int}(\mathrm {supp}(\nu ))\). Hence, \(B \in \varDelta ^{\prime \prime }_k\) for some k, and \(y \in G_k\). Therefore, we obtain that

$$\begin{aligned} \bigcup _k G_k =\mathrm {int}(\mathrm {supp}(\nu )). \end{aligned}$$

Consequently,

$$\begin{aligned} \bigcap _k H_k =\mathbb {R}^d {\setminus } \mathrm {int}(\mathrm {supp}(\nu )) \supset \partial (\mathrm {supp}(\nu )). \end{aligned}$$

Since \(\nu (\mathrm {int}(\mathrm {supp}(\nu )))=1\) we get that

$$\begin{aligned} \nu \left( \bigcup _k G_k \right) =1,\quad \nu \left( \bigcap _k H_k \right) =0. \end{aligned}$$

Denote by \(\varOmega _k^{\prime }\) and \(\varOmega _k^{\prime \prime }\) the families dual to \(\varDelta _k^{\prime }\) and \(\varDelta _k^{\prime \prime }\). Furthermore, denote by

$$\begin{aligned} F_k= \bigcup _{\varOmega _k^{\prime }} A,\quad E_k= \bigcup _{\varOmega _k^{\prime \prime }} A. \end{aligned}$$

By construction, we have that

$$\begin{aligned} F_k \supset F_{k+1},\quad E_k \subset E_{k+1},\quad \forall k. \end{aligned}$$

Moreover,

$$\begin{aligned} \mu (F_k)=\nu (H_k),\quad \mu (E_k)=\nu (G_k). \end{aligned}$$

Denote by

$$\begin{aligned} F_0=\bigcap _k F_k. \end{aligned}$$

Then, we have that \(F_0 \in \mathcal {B}(\mathbb {R}^d)\), and

$$\begin{aligned} \mu (F_0)=\lim \limits _{k\rightarrow \infty } \mu (F_k)=\lim \limits _{k\rightarrow \infty } \nu (H_k)=\nu \left( \bigcap _k H_k \right) =0. \end{aligned}$$

Finally, note that if \(W_x \cap \mathrm {int}(\mathrm {supp}(\nu ))=\emptyset \) then \(x\in F_0\).

3. From items 1, 2 we have that the map \(\hat{t}: \mathrm {int}(\mathrm {supp}(\mu )) {\setminus } F_0 \rightarrow \mathrm {int}(\mathrm {supp}(\nu ))\) given by

$$\begin{aligned} \hat{t}(x)=\hat{r}^{-1}(\hat{s}(x)), \end{aligned}$$

is well defined. Our first task is to show that \(\hat{t}\) is Borel measurable. For that, we need to show that \(\hat{t}^{-1}(G) \in \mathcal {B}(\mathbb {R}^d)\) for any open set \(G \subset \mathbb {R}^d\). Since \(\mathrm {Im}(\hat{t}) \subset \mathrm {int}(\mathrm {supp}(\nu ))\) we have that

$$\begin{aligned} \hat{t}^{-1}(G)= \hat{t}^{-1}\left( G \cap \mathrm {int}(\mathrm {supp}(\nu )) \right) . \end{aligned}$$

Therefore, we may assume that \(G \subset \mathrm {int}(\mathrm {supp}(\nu ))\). Furthermore, denote by

$$\begin{aligned} \varDelta _k^{\prime \prime }=\left\{ B \in \varDelta _k~\text{ s.t. }~B\subset G \right\} ,\quad \varDelta _k^{\prime }=\varDelta _k {\setminus } \varDelta _k^{\prime \prime }. \end{aligned}$$

Next, define

$$\begin{aligned} G_k=\bigcup _{\varDelta ^{\prime \prime }_k} B. \end{aligned}$$

Then we have that

$$\begin{aligned} G_{k}\subset G_{k+1},\quad G_k \subset G,\quad \forall k. \end{aligned}$$

From item 1, we have that for every \(y\in G\) there exists a partition set B such that \(y\in B \subset G\). Hence, we get that

$$\begin{aligned} \bigcup _k G_k=G. \end{aligned}$$

This means that

$$\begin{aligned} \hat{t}^{-1}(G)=\bigcup _k \hat{t}^{-1}(G_k)=\bigcup _k \bigcup _{\varDelta ^{\prime \prime }_k} \hat{t}^{-1}(B). \end{aligned}$$

On the other hand, from Eq. (4) in Theorem 2 we have that

$$\begin{aligned} \hat{t}^{-1}(G)=\bigcup _k \bigcup _{\varOmega ^{\prime \prime }_k} (A \cap \mathrm {int}(\mathrm {supp}(\mu )) {\setminus } F_0), \end{aligned}$$

where \(\varOmega _k^{\prime },\varOmega _k^{\prime \prime }\) are the dual families of \(\varDelta _k^{\prime },\varDelta _k^{\prime \prime }\). Therefore, we have that \(\hat{t}^{-1}(G) \in \mathcal {B}(\mathbb {R}^d)\). Furthermore, denote by

$$\begin{aligned} F_k= \bigcup _{\varOmega ^{\prime \prime }_k} (A {\setminus } F_0),\quad \forall k. \end{aligned}$$

By construction, we have that

$$\begin{aligned} F_k \subset F_{k+1},\quad \forall k, \end{aligned}$$

and hence

$$\begin{aligned} \mu \left( \hat{t}^{-1}(G)\right) = \lim \limits _{k\rightarrow \infty } \mu (F_k). \end{aligned}$$

On the other hand, from the construction and the identities \(\mu (F_0)=0\), \(\mu (\mathrm {int}(\mathrm {supp}(\mu )))=1\), we have that

$$\begin{aligned} \mu (F_k)=\sum _{\varOmega _k^{\prime \prime }} \mu (A)= \sum _{\varDelta _k^{\prime \prime }} \nu (B)=\nu (G_k). \end{aligned}$$

Therefore, we obtain that

$$\begin{aligned} \mu \left( \hat{t}^{-1}(G)\right) = \lim \limits _{k\rightarrow \infty } \mu (F_k)= \lim \limits _{k\rightarrow \infty } \nu (G_k)=\nu (G). \end{aligned}$$

Thus, \(\hat{t}\) is Borel measurable and \(\hat{t}\sharp \mu =\nu \). Finally, from item 1 we have that points in \(\mathrm {Im}(\hat{t})\subset \mathrm {int}(\mathrm {supp}(\nu ))\) eventually get separated. Therefore, by Theorem 2 we obtain that \(\hat{t}\) is half-space-preserving. \(\square \)

Proof (Theorem 4)

Denote by H the union of all the hyperplanes that partition \(\mu \). Then we have that \(\mathcal {L}^d(H)=\mu (H)=0\). We prove that \(\hat{t}\) is continuous on \(\mathrm {Dom}(\hat{t}){\setminus } H\).

Suppose \(x\in \mathrm {Dom}(\hat{t}){\setminus } H\), and \(y=\hat{t}(x)\). Furthermore, denote by \(\{A_k\}\) and \(\{B_k\}\) the rectangles that contain x and y, respectively. We have that

$$\begin{aligned} \bigcap _k A_k=\{x\},\quad \bigcap _k B_k=\{y\}. \end{aligned}$$

Moreover, since \(x\notin H\) we have that \(x\in \mathrm {int}(A_k)\) for all k. Therefore, by construction, we have that \(y\in \mathrm {int}(B_k)\) for all k. Thus, we obtain that

$$\begin{aligned} \bigcap _k \mathrm {int}(A_k)=\{x\},\quad \bigcap _k \mathrm {int}(B_k)=\{y\}, \end{aligned}$$

which yields the continuity. \(\square \)


Cite this article

Nurbekyan, L., Iannantuono, A. & Oberman, A.M. No-Collision Transportation Maps. J Sci Comput 82, 45 (2020). https://doi.org/10.1007/s10915-020-01143-x
