
Structured \((\min ,+)\)-convolution and its applications for the shortest/closest vector and nonlinear knapsack problems

  • Original Paper
  • Optimization Letters

Abstract

In this work we consider the problem of computing the \((\min , +)\)-convolution of two sequences a and b of lengths n and m, respectively, where \(n \ge m\). We assume that a is arbitrary, but \(b_i = f(i)\), where \(f(x) :[0,m) \rightarrow {\mathbb {R}}\) is a function with one of the following properties: f is linear, f is monotone, f is convex, f is concave, f is piece-wise linear, f is a polynomial function of a fixed degree. To the best of our knowledge, the concave, piece-wise linear and polynomial cases had not previously been considered in the literature. We develop truly sub-quadratic algorithms for these cases. We apply our results to the knapsack problem with a separable nonlinear objective function, as well as to the shortest and closest lattice vector problems.
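As a baseline for the results above, a generic \((\min ,+)\)-convolution can be computed in \(O(nm)\) time directly from the definition \(c_k = \min _{i+j=k} (a_i + b_j)\). The following Python sketch is illustrative only; the paper's structured algorithms improve on this bound when \(b_i = f(i)\) for a structured f:

```python
# Naive (min,+)-convolution, computed in O(n * m) time directly from the
# definition c[k] = min over i + j = k of (a[i] + b[j]). Illustrative
# baseline only; the structured algorithms of the paper beat this bound.
def min_plus_convolution(a, b):
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c
```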


Notes

  1. Actually, a more general class of rational functions f(x)/g(x), where \(f,g \in {\mathbb {Z}}^d[x]\), could be considered at the cost of using 2d instead of d in the complexity bound. For the sake of simplicity, this case is not considered. More generally, our approach is applicable to any function f(x) with a fixed number of inflection points.

References

  1. Aggarwal, D., Dadush, D., Regev, O., Stephens-Davidowitz, N.: Solving the shortest vector problem in \(2^n\) time using discrete Gaussian sampling. In: Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 733–742 (2015)

  2. Aggarwal, D., Dadush, D., Stephens-Davidowitz, N.: Solving the closest vector problem in \(2^{n}\) time–the discrete Gaussian strikes again! In: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pp. 563–582. IEEE (2015)

  3. Ajtai, M., Kumar, R., Sivakumar, D.: A sieve algorithm for the shortest lattice vector problem. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pp. 601–610 (2001)

  4. Ajtai, M., Kumar, R., Sivakumar, D.: Sampling short lattice vectors and the closest lattice vector problem. In: Proceedings 17th IEEE Annual Conference on Computational Complexity, pp. 53–57. IEEE (2002)

  5. cp-algorithms.com: Segment tree (2022). https://cp-algorithms.com/data_structures/segment_tree.html

  6. Arvind, V., Joglekar, P.S.: Some sieving algorithms for lattice problems. In: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science. Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2008)

  7. Axiotis, K., Tzamos, C.: Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms. In: Baier, C., Chatzigiannakis, I., Flocchini, P., Leonardi, S. (eds.) 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019), Leibniz International Proceedings in Informatics (LIPIcs), vol. 132, pp. 19:1–19:13. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany (2019). https://doi.org/10.4230/LIPIcs.ICALP.2019.19

  8. Backurs, A., Indyk, P., Schmidt, L.: Better approximations for tree sparsity in nearly-linear time. In: Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2215–2229. SIAM (2017)

  9. Bateni, M., Hajiaghayi, M., Seddighin, S., Stein, C.: Fast algorithms for knapsack via convolution and prediction. In: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1269–1282 (2018)

  10. Bellman, R.: Dynamic programming. Science 153(3731), 34–37 (1966)


  11. Blömer, J., Naewe, S.: Sampling methods for shortest vectors, closest vectors and successive minima. Theoret. Comput. Sci. 410(18), 1648–1665 (2009)


  12. Bremner, D., Chan, T.M., Demaine, E.D., Erickson, J., Hurtado, F., Iacono, J., Langerman, S., Taslakian, P.: Necklaces, convolutions, and \(x+y\). In: European Symposium on Algorithms, pp. 160–171. Springer (2006)

  13. Chan, T.M., Har-Peled, S.: Smallest k-enclosing rectangle revisited. Discrete Comput. Geom. 66(2), 769–791 (2021)


  14. Chan, T.M., Lewenstein, M.: Clustered integer 3SUM via additive combinatorics. In: Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC ’15, pp. 31–40. Association for Computing Machinery, New York (2015). https://doi.org/10.1145/2746539.2746568

  15. Chan, T.M., Williams, R.: Deterministic APSP, orthogonal vectors, and more: quickly derandomizing Razborov-Smolensky. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1246–1255. SIAM (2016)

  16. Chi, S., Duan, R., Xie, T., Zhang, T.: Faster min-plus product for monotone instances (2022)

  17. Cygan, M., Mucha, M., Węgrzycki, K., Włodarczyk, M.: On problems equivalent to \(({\rm min},+)\)-convolution. ACM Trans. Algorithm (TALG) 15(1), 1–25 (2019)


  18. Dadush, D.: Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, ProQuest Dissertations Publishing, Ann Arbor (2012)

  19. Dadush, D., Peikert, C., Vempala, S.: Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, pp. 580–589 (2011). https://doi.org/10.1109/FOCS.2011.31

  20. Eisenbrand, F., Hähnle, N., Niemeier, M.: Covering cubes and the closest vector problem. In: Proceedings of the Twenty-Seventh Annual Symposium on Computational Geometry, pp. 417–423 (2011)

  21. Eisenbrand, F., Weismantel, R.: Proximity results and faster algorithms for integer programming using the Steinitz lemma. ACM Trans. Algorithms (2019). https://doi.org/10.1145/3340322


  22. Fincke, U., Pohst, M.: Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comput. 44(170), 463–471 (1985)


  23. Gribanov, D.V.: An FPTAS for the \(\varDelta \)-Modular Multidimensional Knapsack Problem (2021). https://doi.org/10.1007/978-3-030-77876-7_6

  24. Gribanov, D.V., Malyshev, D.S., Pardalos, P.M., Veselov, S.I.: FPT-algorithms for some problems related to integer programming. J. Comb. Optim. 35, 1128–1146 (2018). https://doi.org/10.1007/s10878-018-0264-z


  25. Gribanov, D.V., Shumilov, I.A., Malyshev, D.S., Pardalos, P.M.: On \(\delta \)-modular integer linear problems in the canonical form and equivalent problems. J. Glob. Optim. (2022). https://doi.org/10.1007/s10898-022-01165-9

  26. Hanrot, G., Pujol, X., Stehlé, D.: Algorithms for the shortest and closest lattice vector problems. In: International Conference on Coding and Cryptology, pp. 159–190. Springer (2011)

  27. Hanrot, G., Stehlé, D.: Improved analysis of Kannan’s shortest lattice vector algorithm. In: Annual International Cryptology Conference, pp. 170–186. Springer (2007)

  28. Helfrich, B.: Algorithms to construct Minkowski reduced and Hermite reduced lattice bases. Theoret. Comput. Sci. 41, 125–139 (1985)


  29. Jansen, K., Rohwedder, L.: On integer programming and convolution. In: 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2018)

  30. Kannan, R.: Minkowski’s convex body theorem and integer programming. Math. Oper. Res. 12(3), 415–440 (1987)


  31. Kellerer, H., Pferschy, U.: Improved dynamic programming in connection with an FPTAS for the knapsack problem. J. Comb. Optim. 8(1), 5–11 (2004)


  32. Kellerer, H., Pferschy, U., Pisinger, D.: Knapsack Problems. Springer Science & Business Media, Berlin (2013)


  33. Korte, B., Vygen, J.: Combinatorial Optimization. Springer, Berlin (2011)


  34. Kulikov, A.S., Mikhailin, I., Mokhov, A., Podolskii, V.: Complexity of Linear Operators. In: P. Lu, G. Zhang (eds.) 30th International Symposium on Algorithms and Computation (ISAAC 2019), Leibniz International Proceedings in Informatics (LIPIcs), vol. 149, pp. 17:1–17:12. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany (2019). https://doi.org/10.4230/LIPIcs.ISAAC.2019.17

  35. Künnemann, M., Paturi, R., Schneider, S.: On the Fine-Grained Complexity of One-Dimensional Dynamic Programming. In: I. Chatzigiannakis, P. Indyk, F. Kuhn, A. Muscholl (eds.) 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), Leibniz International Proceedings in Informatics (LIPIcs), vol. 80, pp. 21:1–21:15. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany (2017). https://doi.org/10.4230/LIPIcs.ICALP.2017.21. http://drops.dagstuhl.de/opus/volltexte/2017/7468

  36. Laber, E.S., Bardales, W., Cicalese, F.: On lower bounds for the maximum consecutive subsums problem and the \(({\rm min},+)\)-convolution. In: 2014 IEEE International Symposium on Information Theory, pp. 1807–1811. IEEE (2014)

  37. Lawler, E.L.: Fast approximation algorithms for knapsack problems. In: 18th Annual Symposium on Foundations of Computer Science (sfcs 1977), pp. 206–213. IEEE (1977)

  38. Liu, M., Wang, X., Xu, G., Zheng, X.: Shortest lattice vectors in the presence of gaps. Cryptology ePrint Archive (2011)

  39. Micciancio, D., Voulgaris, P.: Faster exponential time algorithms for the shortest vector problem. In: Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1468–1480. SIAM (2010)

  40. Micciancio, D., Voulgaris, P.: A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. SIAM J. Comput. 42(3), 1364–1391 (2013)


  41. Micciancio, D., Walter, M.: Fast lattice point enumeration with minimal overhead. In: Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 276–294. SIAM (2014)

  42. Nguyen, P.Q., Vidick, T.: Sieve algorithms for the shortest vector problem are practical. J. Math. Cryptol. 2(2), 181–207 (2008)


  43. Pferschy, U.: Dynamic programming revisited: Improving knapsack algorithms. Computing 63(4), 419–430 (1999). https://doi.org/10.1007/s006070050042


  44. Pohst, M.: On the computation of lattice vectors of minimal length, successive minima and reduced bases with applications. ACM Sigsam Bull. 15(1), 37–44 (1981)


  45. Polak, A., Rohwedder, L., Wegrzycki, K.: Knapsack and subset sum with small items (2021). arXiv:2105.04035v1 [cs.DS]

  46. Pujol, X., Stehlé, D.: Solving the shortest lattice vector problem in time \(2^{2.465n}\). Cryptology ePrint Archive (2009)

  47. Schrijver, A.: Theory of Linear and Integer Programming. Wiley, Chichester (1998)


  48. Sommer, N., Feder, M., Shalvi, O.: Finding the closest lattice point by iterative slicing. SIAM J. Discret. Math. 23(2), 715–731 (2009)


  49. Storjohann, A.: Near optimal algorithms for computing Smith normal forms of integer matrices. In: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, ISSAC ’96, pp. 267–274. Association for Computing Machinery, New York, NY, USA (1996). https://doi.org/10.1145/236869.237084

  50. Taylor, A.B., Hendrickx, J.M., Glineur, F.: Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Math. Program. 161(1), 307–345 (2017)


  51. Williams, R.R.: Faster all-pairs shortest paths via circuit complexity. SIAM J. Comput. 47(5), 1965–1985 (2018)


  52. Yasuda, M.: A survey of solving SVP algorithms and recent strategies for solving the SVP challenge. In: International Symposium on Mathematics, Quantum Theory, and Cryptography, pp. 189–207. Springer, Singapore (2021)

  53. Zhendong, W.: Computing the Smith forms of Integer Matrices and Solving Related Problems. University of Delaware, Newark, DE (2005)



Author information


Corresponding author

Correspondence to D. V. Gribanov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The article was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE). The authors would like to thank A. Vanin and the anonymous reviewers for useful discussions and remarks during preparation of this article.

Appendix


1.1 Proof of Theorem 1

Proof

Description of the data structure: Our new data structure is a standard segment tree T with some additional augmentations. We use the same notation as in Sect. 2.2. We augment each vertex v of T with additional data, represented by a finite set \({\mathscr {G}}_v\) of functions \(g: {\mathcal {D}}_g \cap {\mathbb {Z}}\rightarrow {\mathbb {Z}}\), where each domain \({\mathcal {D}}_g\) is an integer sub-interval of [0, n) and g acts on \({\mathcal {D}}_g\) as a function \(g(x) = A[j] + f(x + t)\), for some \(j,t \in \{{0}, \dots , {n-1}\}\). We maintain the following four invariants, for any \(v \in T\):

  • Invariant 1: the intervals \({\mathcal {D}}_g\), for any \(g \in {\mathscr {G}}_v\), must split [0, n):

    $$\begin{aligned} {[}0,n) = \bigsqcup \limits _{g \in {\mathscr {G}}_v} {\mathcal {D}}_g; \end{aligned}$$
  • Invariant 2: for any \(x \in [0,n)\), there exists a unique function \(g \in {\mathscr {G}}_v\), such that \(x \in {\mathcal {D}}_g\), and

    $$\begin{aligned} {\mathcal {F}}\left( [i_v, j_v); x\right) = g(x); \end{aligned}$$
  • Invariant 3: the functions \(g \in {\mathscr {G}}_v\) are stored in the sorted order with respect to the end-points of their domains \({\mathcal {D}}_g\);

  • Invariant 4: for any \(g \in {\mathscr {G}}_v\), the function g(x) acts on \({\mathcal {D}}_g\) like \(g(x) = A[j] + f(x + t)\), for \(j,t \in \{{0}, \dots , {n-1}\}\);

Description and analysis of the query operation for basic intervals: For the definition of basic intervals, see Sect. 2.2. Assume that a vertex \(v \in T\) is given, and we want to perform the \(query(i_v, j_v, x)\) operation with respect to the basic interval \([i_v, j_v)\). Due to Invariant 2, we just need to find the appropriate function g from the set \({\mathscr {G}}_v\). Due to Invariant 3, the function g can be found by binary search in \(O(\log (n))\) time, because \({\mathscr {G}}_v\) contains at most n functions. Due to Invariant 4, g(x) has the form \(A[j] + f(x + t)\), so it can be computed using a single call to \(E\!V\). The total complexity is \(O(\log (n))\).
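The lookup above can be sketched as follows, assuming a hypothetical representation of \({\mathscr {G}}_v\) as a list of (domain end-point, function) pairs sorted by end-point, as guaranteed by Invariant 3:

```python
import bisect

# Sketch of the basic-interval query. G_v is assumed (hypothetically) to be
# a list of (domain_end, g) pairs sorted by domain end (Invariant 3), whose
# domains [prev_end, domain_end) partition [0, n) (Invariant 1). In the real
# structure the array of end-points would be precomputed once, so a query
# costs O(log n) comparisons plus one function evaluation.
def query_basic(G_v, x):
    ends = [end for end, _ in G_v]
    k = bisect.bisect_right(ends, x)  # first domain whose end-point exceeds x
    _, g = G_v[k]
    return g(x)                       # one call: g(x) = A[j] + f(x + t)
```

For example, with two stored pieces covering [0, 3) and [3, 7), the point x = 3 is dispatched to the second piece.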

Description and analysis of the query operation for general intervals: Assume that an interval [i, j) is given, and we want to perform the \(query(i, j, x)\) operation. Due to Lemma 1, there exist \(m \le 2 \log _2(n)\) vertices \(v_1, v_2, \dots , v_m \in T\), such that [i, j) is partitioned into the basic intervals \([i_{v_k}, j_{v_k})\), for \(k \in \{{1}, \dots , {m}\}\). Let us assume that \(i_{v_1}< i_{v_2}< \dots < i_{v_m}\), and let \(h_k = i_{v_k} - i_{v_1}\). Due to the property (1), we have

$$\begin{aligned}&{\mathcal {F}}\left( [i,j);\, x\right) \\&\quad = \min \left\{ {\mathcal {F}}\left( [i_{v_1},j_{v_1});\, x + h_1\right) , \dots , {\mathcal {F}}\left( [i_{v_m},j_{v_m});\, x + h_m\right) \right\} \\&\quad = \min \limits _{k \in \{{1}, \dots , {m}\}} \left\{ {\mathcal {F}}\left( [i_{v_k},j_{v_k});\, x + h_k\right) \right\} . \end{aligned}$$

Consequently, due to the complexity bound on queries for basic intervals, the complexity of the \(query(i, j, x)\) operation is \(O(\log ^2(n))\).

Description and analysis of the preprocessing:

First of all, let us construct the standard segment tree T, described in Sect. 2.2, for the array A. This takes O(n) time and space. We need to show how to compute \({\mathscr {G}}_v\), for any \(v \in T\), while satisfying all the invariants. The algorithm is recursive: it starts from the leaves and moves upward until it reaches the root of T. Let v be a leaf. Since \(j_v - i_v = 1\), \({\mathcal {F}}\left( [i_v,j_v); x\right) = A[i_v] + f(x)\). Consequently, \({\mathscr {G}}_v\) consists of only one function \(g(x) = A[i_v] + f(x)\), and \({\mathcal {D}}_g = [0, n)\).

Next, we assume that v is not a leaf, and let u and w be the children of v. We will show how the set \({\mathscr {G}}_v\) can be constructed from the sets \({\mathscr {G}}_u\) and \({\mathscr {G}}_w\), based on the formula

$$\begin{aligned} {\mathcal {F}}\left( [i_v,j_v);\, x\right) = \min \left\{ {\mathcal {F}}\left( [i_u,j_u);\, x\right) ,\; {\mathcal {F}}\left( [i_w,j_w);\, x + j_u-i_u\right) \right\} , \end{aligned}$$
(19)

which is a direct application of (1). Let \({\mathcal {P}}_u\) and \({\mathcal {P}}_w\) be the sets of end-points of the intervals representing the domains of the functions inside \({\mathscr {G}}_u\) and \({\mathscr {G}}_w\). We assume that \(0, n \in {\mathcal {P}}_u\) and \(0, n \in {\mathcal {P}}_w\). Clearly, \(\left| {{\mathcal {P}}_u}\right| =\left| {{\mathscr {G}}_u}\right| + 1\) and \(\left| {{\mathcal {P}}_w}\right| = \left| {{\mathscr {G}}_w}\right| + 1\). Due to Invariant 3, we can assume that \({\mathcal {P}}_u\) and \({\mathcal {P}}_w\) are sorted. Next, we merge \({\mathcal {P}}_u\) and \({\mathcal {P}}_w\) into \({\mathcal {P}}_v\), maintaining the same sorting order, and remove the duplicates. The last step can be done in \(O(\left| {{\mathscr {G}}_u}\right| + \left| {{\mathscr {G}}_w}\right| )\)-time, since \({\mathcal {P}}_u\) and \({\mathcal {P}}_w\) are sorted. Since the points 0, n are common to both \({\mathcal {P}}_u\) and \({\mathcal {P}}_w\), we have

$$\begin{aligned} \left| {{\mathcal {P}}_v}\right| \le \left| {{\mathscr {G}}_u}\right| + \left| {{\mathscr {G}}_w}\right| . \end{aligned}$$
(20)
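The merge step described above can be sketched as follows, assuming the end-point sets are given as sorted lists of integers:

```python
from heapq import merge

# Sketch of the end-point merge step: P_u and P_w are sorted lists of
# integer end-points; P_v is their sorted union with duplicates removed,
# built in O(|P_u| + |P_w|) time by a single linear pass.
def merge_endpoints(P_u, P_w):
    P_v = []
    for p in merge(P_u, P_w):
        if not P_v or P_v[-1] != p:
            P_v.append(p)   # skip duplicates, such as the common points 0 and n
    return P_v
```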

Take a pair \(\nu , \tau \) of consecutive points in \({\mathcal {P}}_v\). Due to Invariant 2, there exist unique functions \(g_u \in {\mathscr {G}}_u\) and \(g_w \in {\mathscr {G}}_w\), such that \([\nu , \tau ) \subseteq {\mathcal {D}}_{g_u} \cap {\mathcal {D}}_{g_w}\). Due to the formula (19), for \(x \in [\nu , \tau )\), we have

$$\begin{aligned} {\mathcal {F}}\left( [i_v,j_v);\, x\right) =\min \left\{ g_u(x),\;g_w(x + j_u-i_u) \right\} . \end{aligned}$$

Let \(h(x) = g_u(x) - g_w(x + j_u - i_u)\), defined on \([\nu , \tau )\). Due to Invariant 4, the function h(x) has the form \(f(x+a) - f(x+b) + c\), for some \(a,b \in {\mathbb {Z}}_{\ge 0}\) and \(c \in {\mathbb {Z}}\).

To efficiently precompute \({\mathcal {F}}\left( [i_v,j_v); x\right) \) for \(x \in [\nu , \tau ) \cap {\mathbb {Z}}\), we need to compute a minimal sign partition \({\mathcal {S}}\in {\mathscr {P}}(h, [\nu , \tau ))\). It can be done by a single call to \(S\!P\). Now, for any interval \({\mathcal {I}}\in {\mathcal {S}}\), if \(h(x) \le 0\) on \({\mathcal {I}}\), then \({\mathcal {F}}\left( [i_v,j_v); x\right) = g_u(x)\), and \({\mathcal {F}}\left( [i_v,j_v); x\right) = g_w(x + j_u - i_u)\) in the opposite case \(h(x) \ge 0\). Consequently, for any such interval \({\mathcal {I}}\), we create a new function \(g_{{\mathcal {I}}}\) and put it inside \({\mathscr {G}}_v\) in the sorted order with respect to the end-points of \({\mathcal {I}}\). Hence, the interval \([\nu , \tau )\) will be decomposed into at most p new sub-intervals, and the same number of new functions will be added to \({\mathscr {G}}_v\).

Now, let us estimate the time and space requirements to build the set \({\mathscr {G}}_v\). As shown above, for any pair \(\nu ,\tau \) of consecutive points from \({\mathcal {P}}_v\), we add at most p functions to \({\mathscr {G}}_v\). Therefore, due to (20), we have

$$\begin{aligned} \left| {{\mathscr {G}}_v}\right| \le \left( \left| {{\mathscr {G}}_u}\right| + \left| {{\mathscr {G}}_w}\right| - 1\right) \cdot p. \end{aligned}$$

Denote \(N(m) = \max \left\{ \left| {{\mathscr {G}}_v}\right| :v \in T,\; j_v - i_v =m \right\} \), for \(m \le n\) being a power of 2. Since \(N(1) = 1\), we have

$$\begin{aligned} N(m) \le 2 \cdot N(m/2)\cdot p \le (2p)^{\log _2 (m)} = m^{1 + \log _2 (p)}. \end{aligned}$$

And, since we always work in the interval [0, n),

$$\begin{aligned} N(m) \le \min \{m^{1 + \log _2 (p)},\; n\}. \end{aligned}$$
(21)
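The recurrence and its closed form can be sanity-checked numerically; a small Python sketch, not part of the paper's algorithm:

```python
import math

# Numeric sanity check of the recurrence N(m) <= 2 * p * N(m / 2) with
# N(1) = 1: unrolling log2(m) levels gives N(m) <= (2p)^{log2 m},
# which equals m^{1 + log2 p}.
def unrolled_bound(m, p):
    N = 1
    while m > 1:
        N *= 2 * p
        m //= 2
    return N

# For m = 1024 and p = 3 the unrolled bound equals (2p)^10 = m^{1 + log2 p}.
```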

By analogy with N(m), let us denote by \(t_{node}(m)\) the maximal time to construct \({\mathscr {G}}_v\) (under the assumption that \({\mathscr {G}}_u\) and \({\mathscr {G}}_w\) are already constructed), where \(m = j_v - i_v\). By "time", we mean both the arithmetic and oracle complexities. This definition is well-posed, because the value of m is the same for all vertices of the same level in T. Since the complexity of computing \({\mathscr {G}}_v\) is linear with respect to the resulting size of \({\mathscr {G}}_v\), due to (21),

$$\begin{aligned} t_{node}(m) = O\left( N(m) \right) = O\left( \min \{m^{1 + \log _2 (p)},\; n\} \right) . \end{aligned}$$
(22)

Note additionally that the space requirements to store \({\mathscr {G}}_v\), together with all the information related to v, can be described by the same function \(t_{node}(m)\). Now, let us compute the total time and space complexity to construct the final augmented tree T. It can be expressed by the function

$$\begin{aligned} t(n) = \sum \limits _{k = 0}^{\log _2 (n)} 2^k \cdot t_{node}(n/2^k). \end{aligned}$$

Let \(s = \left\lceil \log _2\left( n^{\frac{1}{1+\log _2 (p)}} \right) \right\rceil \). To estimate the asymptotic behavior of t(n), we split the sum into two parts and bound the terms of each part, using (22):

$$\begin{aligned} t(n)&\lesssim \sum \limits _{k = 0}^{s} 2^k \cdot n + \sum \limits _{k = s+1}^{\log _2 (n)} 2^k \cdot (n/2^k)^{1+\log _2(p)}\\&\lesssim n^{1 + \frac{1}{1 + \log _2(p)}} + n^{1 + \log _2 (p)} \cdot \sum \limits _{k = s+1}^{\log _2 (n)} 2^{- k \cdot \log _2 (p)}. \end{aligned}$$

Estimating the sum at the end of the last formula, we have:

$$\begin{aligned} \sum \limits _{k = s+1}^{\log _2 (n)} 2^{- k \cdot \log _2 (p)}&= \sum \limits _{k = s+1}^{\log _2 (n)} p^{- k} = \frac{p}{p-1} \cdot \left( \frac{1}{p^{1+s}} -\frac{1}{p^{1+\log _2 (n)}} \right) \\&= \frac{1}{p-1} \cdot \left( \frac{1}{p^{s}} - \frac{1}{p^{\log _2 (n)}} \right) \le \frac{1}{p-1} \cdot \left( \frac{1}{n^{\frac{\log _2 (p)}{1+\log _2 (p)}}} - \frac{1}{n^{\log _2 (p)}} \right) . \end{aligned}$$

Finally, the total time and space requirements can be estimated as follows

$$\begin{aligned} t(n)&\lesssim n^{1 + \frac{1}{1 + \log _2 (p)}} + \frac{1}{p} \cdot n^{1 + \log _2 (p) - \frac{\log _2 (p)}{1+\log _2 (p)}} \\&= n^{1 + \frac{1}{1+ \log _2 (p)}} \cdot \left( 1 + \frac{1}{p} \cdot n^{-1 + \log _2 (p)} \right) \\&= O\left( n^{\log _2 (p) + \frac{1}{1 + \log _2 (p)} } \right) . \end{aligned}$$

\(\square \)

1.2 Proof of Lemma 2

Proof

Put \({\mathcal {I}}= \{{0}, \dots , {n-1}\}\). To prove the lemma, we will use the criterion given in [50, Corollary 1]. For given \(g_0, g_1, \dots , g_{n-1}\), it states that a \(C^{1,L}\)-smooth convex function f with \(f'(x_i) = g_i\) exists if and only if the following conditions are satisfied:

$$\begin{aligned} f_i \ge f_j + g_j\cdot (x_i-x_j) + \frac{1}{2L} \cdot \left| {g_i - g_j}\right| ^2,\quad \text {for } i,j \in {\mathcal {I}}. \end{aligned}$$
(23)

We construct \(g_i\) in the following way. We choose \(g_0 < d_1\) and \(g_{n-1} > d_{n-1}\). For \(i \in \{{1}, \dots , {n-2}\}\), if \(d_{i} < d_{i+1}\), we choose \(g_i\) strictly between \(d_{i}\) and \(d_{i+1}\): \(d_{i}< g_i < d_{i+1}\). In the opposite case, when \(d_{i} = d_{i+1}\), we just set \(g_i:= d_i\).
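The construction of the slopes \(g_i\) can be sketched as follows, assuming the finite differences \(d_1 \le \dots \le d_{n-1}\) are given as a 0-indexed list (they are non-decreasing because f is convex at the integer points); the midpoint choice and the \(\pm 1\) offsets are illustrative, as any strict choice works:

```python
# Sketch of the slope construction from the proof of Lemma 2. The input
# list d holds d_1, ..., d_{n-1} (0-indexed); the output holds g_0, ..., g_{n-1}.
def choose_slopes(d):
    n = len(d) + 1
    g = [d[0] - 1.0]                          # g_0 < d_1
    for i in range(1, n - 1):
        if d[i - 1] < d[i]:
            g.append((d[i - 1] + d[i]) / 2)   # d_i < g_i < d_{i+1}
        else:
            g.append(d[i - 1])                # flat stretch: g_i = d_i
    g.append(d[-1] + 1.0)                     # g_{n-1} > d_{n-1}
    return g
```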

Fix \(i,j \in \{{1}, \dots , {n-2}\}\). First, assume that \(d_{k} = d_{l}\), for all k, l between i and j (inclusively). Then, the following equality holds, which follows from the definition of \(d_i\):

$$\begin{aligned} f_i = f_j + g_j \cdot (x_i - x_j) = f_j + g_j \cdot (x_i - x_j) + 0 \cdot \left| {g_i - g_j}\right| ^2. \end{aligned}$$

Now, assume that \(d_k \not = d_{k+1}\), for some k between i and j (inclusively). Then, since \(g_k > d_k\), we have

$$\begin{aligned} f_i > f_j + g_j \cdot (x_i - x_j) \quad \Longrightarrow \quad f_i \ge f_j + g_j \cdot (x_i - x_j) + \varepsilon \cdot \left| {g_i - g_j}\right| ^2, \end{aligned}$$

for any sufficiently small \(\varepsilon > 0\). Finally, if \(i = 0\) or \(j = n-1\), the same inequality holds, because \(g_0 < d_1\) and \(g_{n-1} > d_{n-1}\).

Therefore, since \({\mathcal {I}}\) is finite, we can choose \(\varepsilon \) sufficiently small, such that the following inequality holds, for any \(i,j \in {\mathcal {I}}\):

$$\begin{aligned} f_i \ge f_j + g_j \cdot (x_i - x_j) + \varepsilon \cdot \left| {g_i - g_j}\right| ^2, \quad \text {which satisfies} \ (23) \ \text {with}\ L = 1/(2\varepsilon ). \end{aligned}$$

\(\square \)

1.3 Proof of Lemma 3

Proof

Assume that f is convex. In the opposite case, we can consider the function \(-g(x) = \left( -f(x+a)\right) - b -\left( -f(x)\right) \). Clearly, all the sign partitions of g(x) and \(-g(x)\) are equivalent. Next, we can assume that f is \(C^1\)-smooth. Indeed, if f is not \(C^1\)-smooth, then, due to Lemma 2, there exists a convex \(C^1\)-smooth function h, such that \(h(x) = f(x)\), for all \(x \in [\alpha ,\beta ) \cap {\mathbb {Z}}\). Since any minimal sign partition of g consists of intervals with integer end-points, the sign partitions obtained from h and f are equivalent, and we can use h instead of f.

Additionally, we assume that \(a > 0\), because, in the opposite case, both statements are trivial. Now, we claim that the equality \(g(x) =0\) is possible only for points in some connected interval \([\nu , \tau ] \subseteq [\alpha ,\beta )\). Assume that \(g(\nu ) = 0\) and \(g(\tau ) = 0\), for some \(\nu ,\tau \in [\alpha ,\beta )\) with \(\tau > \nu \). By definition, we have

$$\begin{aligned} f(\nu +a) - f(\nu ) = -b \quad \text {and}\quad f(\tau +a) - f(\tau ) = -b. \end{aligned}$$

Due to \(C^1\)-smoothness of f, we can use the fundamental theorem of calculus:

$$\begin{aligned} \int \limits _{\nu }^{\nu +a} f'(x) dx = -b, \quad \text {and}\quad \int \limits _{\tau }^{\tau +a} f'(x) dx = -b. \end{aligned}$$
(24)

Put \(\delta = \tau - \nu \). Make a change of variables \(x \rightarrow x - \delta \) and \(x \rightarrow x + \delta \) in (24):

$$\begin{aligned} \int \limits _{\tau }^{\tau +a} f'(x - \delta ) dx = -b, \quad \text {and} \quad \int \limits _{\nu }^{\nu +a} f'(x + \delta ) dx = -b. \end{aligned}$$
(25)

Combining (24) and (25), we get

$$\begin{aligned} \int \limits _{\nu }^{\nu +a} \left( f'(x+\delta ) - f'(x)\right) dx = 0, \quad \text {and}\quad \int \limits _{\tau }^{\tau +a} \left( f'(x) - f'(x-\delta ) \right) dx = 0. \end{aligned}$$

Since f is convex, \(f'(x+\delta ) - f'(x) \ge 0\) and \(f'(x) - f'(x-\delta ) \ge 0\), and, consequently,

$$\begin{aligned} \forall x \in [\nu ,\nu +a] \quad f'(x) = {{\,\textrm{const}\,}}, \quad \text {and} \quad \forall x \in [\tau ,\tau +a] \quad f'(x) = {{\,\textrm{const}\,}}. \end{aligned}$$

Moreover, since the two integrals in (24) are equal and \(f'(x)\) is non-decreasing, \(f'(x) = {{\,\textrm{const}\,}}\) on the whole interval \([\nu , \tau ]\). Consequently, \(g'(x) = 0\) and \(g(x) = {{\,\textrm{const}\,}}\), for \(x \in [\nu , \tau ]\). Since \(g(\nu ) = 0\), it holds that \(g(x) = 0\), for \(x \in [\nu , \tau ]\). So, the claim is proved.

Therefore, any given interval \({\mathcal {I}}\subseteq [\alpha ,\beta )\), where g is well-defined, can be partitioned into at most three parts: strict inequalities \(g(x) > 0\) or \(g(x) < 0\) on the left and right sides, and equality \(g(x) = 0\) in the middle. Consequently, any minimal sign partition of g on \({\mathcal {I}}\) consists of at most 2 pieces, which proves the inequality \(p_f \le 2\). To calculate such a partition, we need to find an integer point \(z \in {\mathcal {I}}\) with \(g(z) = 0\). If instead \(g(x) \not = 0\) for all \(x \in {\mathcal {I}}\cap {\mathbb {Z}}\), we need to calculate a point z, such that \(g(z) < 0\) and \(g(z+1) > 0\), or vice versa. Clearly, in both cases we can use the standard dichotomy (binary search) principle, which takes \(O( \log (n) )\) calls to \(E\!V\). \(\square \)
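The dichotomy step can be sketched as a standard binary search; a Python sketch assuming g is non-decreasing over the searched integer range (which holds for convex f and \(a > 0\)):

```python
# Sketch of the dichotomy step from the proof of Lemma 3: for convex f and
# a > 0, the function g(x) = f(x + a) - f(x) + b is non-decreasing over the
# integers, so the leftmost integer with g(x) >= 0 is found by binary search
# with O(log n) evaluations of g (i.e., O(log n) calls to EV).
def first_nonnegative(g, lo, hi):
    """Smallest x in [lo, hi) with g(x) >= 0, or hi if there is none."""
    while lo < hi:
        mid = (lo + hi) // 2
        if g(mid) >= 0:
            hi = mid          # the answer is mid or to its left
        else:
            lo = mid + 1      # g < 0 here, so the answer lies to the right
    return lo
```

For example, for \(f(x) = x^2\), \(a = 2\), \(b = -10\), one gets \(g(x) = 4x - 6\), whose first non-negative integer point is \(x = 2\).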

1.4 Proof of Lemma 4

Proof

First we need to prove the following auxiliary lemma:

Lemma 5

Let \(g_1 = g_2 = \dots = g_k\), for \(k \in \{{1}, \dots , {n}\}\), and let \(x^*\) be an optimal solution of (17). Denote \(S = x_1^* + \dots + x_k^*\). Then, there exists an optimal solution \(z^*\) with the following structure:

  1. (i)

    If \(S \ge 0\), then:

    $$\begin{aligned} z^* = (a+1, a+1, \dots , a+1, a , a, \dots , a)^\top , \quad \text {where}\ a \in {\mathbb {Z}}_{\ge 0}. \end{aligned}$$
    (26)
  2. (ii)

    If \(S < 0\), then:

    $$\begin{aligned} z^* = -(a, a, \dots , a, a+1 , a+1, \dots , a+1)^\top , \quad \text {where}\ a \in {\mathbb {Z}}_{\ge 0}. \end{aligned}$$
    (27)

Proof

Note that the expressions \(g_1 \cdot x_1^* + \dots + g_k \cdot x_k^*\) and \(g_1 \cdot S\) are equivalent in terms of the constraints of (17). Assume that \(S \ge 0\). First of all, we claim that there exists an optimal solution \(z^*\) with the property \(z^*_i \ge 0\), for \(i \in \{{1}, \dots , {k}\}\). Assume that there exists \(j \in \{{1}, \dots , {k}\}\) with \(x^*_j \le -1\); since \(S \ge 0\), there then also exists \(i \in \{{1}, \dots , {k}\}\) with \(x^*_i \ge 1\). Next, we construct a vector \(z^*\), which coincides with \(x^*\) in all the coordinates, except i and j. Put \(z^*_i = x^*_i - 1\) and \(z^*_j =x^*_j + 1\). Due to Property 1, we have \(\sum _{i=1}^k f_i(z^*_i) \le \sum _{i=1}^k f_i(x^*_i)\). This procedure can be repeated until no negative coordinates remain. Consequently, it can be assumed that \(x^*_i \ge 0\), for \(i \in \{{1}, \dots , {k}\}\). Let us consider the following auxiliary optimization problem:

$$\begin{aligned}&\sum _{i=1}^k f_i(x_i) \rightarrow \min \\&{\left\{ \begin{array}{ll} x_1 + \dots + x_k = S\\ x \in {\mathbb {Z}}_{\ge 0}. \end{array}\right. } \end{aligned}$$

Clearly, \(x^*[1,k]\) gives an optimal solution of this problem, and, vice versa, an optimal solution of this auxiliary problem can be used to generate the first k coordinates of \(x^*\). Let us consider the set \({\mathcal {S}}= \{x \in {\mathbb {Z}}^k_{\ge 0} :x_1 + \dots +x_k \le S\}\). Elements of \({\mathcal {S}}\) can be treated as the characteristic vectors of multisets of cardinality at most S. Identifying vectors with multisets, we can see that \({\mathcal {S}}\) forms a matroid, see, for example, [33, Proposition 13.4, Part 13. Matroids]. The vectors \(z \in {\mathcal {S}}\) with \(z_1 + \dots + z_k = S\) are the bases of \({\mathcal {S}}\). Consequently, an optimal solution of the auxiliary problem is exactly a base of \({\mathcal {S}}\) with the minimal possible value of the objective function. Since \({\mathcal {S}}\) is a matroid, such a base can be found by the following greedy algorithm:

  1. Assign \(s:= 0\), \(x:= \textbf{0}^k\), and \(F:= f_1(0) + \dots + f_k(0)\);

  2. While \(s < S\), do the following:

  3.       Choose \(i \in \{{1}, \dots , {k}\}\), such that the value \(f_i(x_i+1) - f_i(x_i)\) is minimal;

  4.       Assign \(F:= F + f_i(x_i+1) - f_i(x_i)\), \(x_i:= x_i + 1\), and \(s:= s+1\);

  5.       Move to step 2;

  6. Return x as a greedy solution and F as f(x);

Due to Properties 4 and 5, there exists a greedy solution \(z^*\) of the form (26). This proves the lemma for the case \(S \ge 0\). The case \(S < 0\) is completely analogous. \(\square \)
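The greedy algorithm above can be sketched with a priority queue; a Python sketch assuming each \(f_i\) is given as a callable that is convex on the non-negative integers:

```python
import heapq

# Heap-based sketch of the greedy algorithm from the proof of Lemma 5:
# repeatedly increment the coordinate whose next unit step
# f_i(x_i + 1) - f_i(x_i) is cheapest. Returns a minimizer of
# sum f_i(x_i) subject to x_1 + ... + x_k = S, x >= 0.
def greedy_min(fs, S):
    k = len(fs)
    x = [0] * k
    F = sum(f(0) for f in fs)
    heap = [(f(1) - f(0), i) for i, f in enumerate(fs)]
    heapq.heapify(heap)
    for _ in range(S):                  # exactly S unit increments
        delta, i = heapq.heappop(heap)
        x[i] += 1
        F += delta
        heapq.heappush(heap, (fs[i](x[i] + 1) - fs[i](x[i]), i))
    return x, F
```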

If \(x^*\) is an optimal solution of (17), then, due to Lemma 5, there exists an optimal solution \(z^*\) of (17), such that the first k components of \(z^*\) look like (26) or (27). By the definition of h, we have \(\sum _{i=1}^k f_i(z^*_i) = h(y)\), where \(y = z_1^* + \dots + z_k^*\). Note that \((y,\, x^*[k+1,n])\) is a feasible solution of (18) with the same value of the objective function. Conversely, let \((y,\, x^*)\) be an optimal solution of (18). Let us construct the vector z as described in the statement of the lemma. Clearly, the vector \(\left( {\begin{array}{c}z\\ x^*\end{array}}\right) \) is a feasible solution of (17) with the same value of the objective function.

Finally, let us explain how to compute \(h(0),h(1), \dots , h(m)\) with O(m) operations. Assume that h(i) has already been computed, and let \(r = i \bmod k\) and \(a = \lfloor i / k \rfloor \). Then, by the definition of h, we have \(h(i + 1) = h(i) + f_{r+1}(a+1) - f_{r+1}(a)\). Hence, we need O(1) operations to compute \(h(i+1)\) and O(m) operations to compute the whole sequence. A similar algorithm works for \(h(0), h(-1), \dots , h(-m)\). The proof of Lemma 4 is finished. \(\square \)
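The O(m) computation of the table of h can be sketched as follows, assuming each \(f_i\) is given as a callable (0-indexed here, while the text is 1-indexed):

```python
# Sketch of the O(m) computation of h(0), ..., h(m) from the proof of
# Lemma 4: each next value adds a single unit step, following
# h(i + 1) = h(i) + f_{r+1}(a + 1) - f_{r+1}(a),
# where r = i mod k and a = floor(i / k).
def table_of_h(fs, m):
    k = len(fs)
    h = [sum(f(0) for f in fs)]        # h(0)
    for i in range(m):
        r, a = i % k, i // k
        h.append(h[i] + fs[r](a + 1) - fs[r](a))
    return h
```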

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gribanov, D.V., Shumilov, I.A. & Malyshev, D.S. Structured \((\min ,+)\)-convolution and its applications for the shortest/closest vector and nonlinear knapsack problems. Optim Lett 18, 73–103 (2024). https://doi.org/10.1007/s11590-023-02017-5

