Tensorisation of vectors and their efficient convolution

Numerische Mathematik

Abstract

In recent papers the tensorisation of vectors has been discussed. In principle, this is the isomorphic representation of an \(\mathbb{R}^n\) vector as a tensor. Black-box tensor approximation methods can be used to reduce the data size of the tensor representation. In particular, if the vector corresponds to a grid function, the resulting data size can become much smaller than \(n\), e.g., \(O(\log n)\ll n\). In this article we discuss the convolution of two vectors that are given via a sparse tensor representation. We want to obtain the result again in the tensor representation. Furthermore, the cost of the convolution algorithm should be related to the data sizes of the operands. Since \(\mathbb{R}^n\) vectors can be regarded as grid values of a function, we also apply the corresponding procedure to univariate functions.
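
The following is a minimal NumPy sketch of the setting just described: the isomorphism between a vector of length \(n = 2^d\) and a tensor of order \(d\) with mode sizes 2, together with a reference convolution obtained by mapping back to the plain vector format. The helper names (tensorise, vectorise, convolve_tensorised) are hypothetical, and the sketch does not implement the paper's efficient algorithm, which operates directly on the compressed (sparse) tensor representation; it only fixes the objects involved and the result such an algorithm is expected to reproduce.

```python
# Minimal sketch (assumed names, not the paper's method): tensorisation of a
# vector of length n = 2^d into a d-fold 2x2x...x2 tensor, and a reference
# convolution computed via the plain vector format.
import numpy as np

def tensorise(v):
    """Reshape a vector of length n = 2^d into an order-d tensor with mode sizes 2.

    The assignment of index bits to tensor modes is a convention; C-order is used here.
    """
    n = v.size
    d = int(np.log2(n))
    assert 2 ** d == n, "length must be a power of two"
    return v.reshape((2,) * d)

def vectorise(t):
    """Inverse map: flatten the 2 x ... x 2 tensor back into a vector of length 2^d."""
    return t.reshape(-1)

def convolve_tensorised(a_t, b_t):
    """Reference convolution of two tensorised vectors.

    The full (acyclic) convolution of two length-n vectors has 2n - 1 entries;
    one zero of padding makes the result of length 2n = 2^(d+1), so it can be
    tensorised again as an order-(d+1) tensor.
    """
    a, b = vectorise(a_t), vectorise(b_t)
    c = np.convolve(a, b)        # length 2n - 1
    c = np.append(c, 0.0)        # pad to length 2n = 2^(d+1)
    return tensorise(c)

# Usage: tensorise two vectors of length 8 = 2^3 and convolve them.
x = tensorise(np.arange(8, dtype=float))
y = tensorise(np.ones(8))
z = convolve_tensorised(x, y)
print(z.shape)                   # (2, 2, 2, 2): order d + 1 = 4
print(vectorise(z)[:15])         # equals np.convolve(np.arange(8), np.ones(8))
```

In contrast to this reference, which costs \(O(n^2)\) directly or \(O(n\log n)\) via the FFT, the goal of the article is a convolution whose cost is related to the compressed data sizes of the operands rather than to \(n\).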

References

  1. Braess D., Hackbusch W.: On the efficient computation of high-dimensional integrals and the approximation by exponential sums. In: DeVore, R., Kunoth, A. (eds) Multiscale, nonlinear and adaptive approximation, pp. 39–74. Springer, Berlin (2009)

    Chapter  Google Scholar 

  2. Espig, M.: Effiziente Bestapproximation mittels Summen von Elementartensoren in hohen Dimensionen. Doctoral thesis, University Leipzig (2008)

  3. Grasedyck, L.: Polynomial approximation in hierarchical Tucker format by vector-tensorization. Submitted (2010)

  4. Hackbusch W.: Convolution of hp-functions on locally refined grids. IMA J. Numer. Anal. 29, 960–985 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  5. Hackbusch, W.: Tensor spaces and numerical tensor calculus. Monograph (in preparation)

  6. Hackbusch W., Kühn S.: A new scheme for the tensor representation. J. Fourier Anal. Appl. 15, 706–722 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  7. Khoromskij, B.N.: O(d log N)-quantics approximation of N-d tensors in high-dimensional numerical modeling. Constr. Approx (2011). doi:10.1007/s00365-011-9131-1

  8. Oseledets I.V.: Approximation of 2d × 2d matrices using tensor decomposition. SIAM J. Matrix Anal. Appl. 31, 2130–2145 (2010)

    Article  MathSciNet  MATH  Google Scholar 

  9. Oseledets I.V., Tyrtyshnikov E.E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31, 3744–3759 (2009)

    Article  MathSciNet  MATH  Google Scholar 

Download references

Author information

Correspondence to Wolfgang Hackbusch.

About this article

Cite this article

Hackbusch, W. Tensorisation of vectors and their efficient convolution. Numer. Math. 119, 465–488 (2011). https://doi.org/10.1007/s00211-011-0393-0
