Abstract
Tensor ring (TR) decomposition is an efficient approach to discovering hidden low-rank patterns in higher-order tensors, and streaming tensors are becoming highly prevalent in real-world applications. In this paper, we investigate how to track TR decompositions of streaming tensors. We first propose an efficient algorithm for this task. Then, building on this algorithm and randomized sketching techniques, we present a randomized streaming TR decomposition. The proposed algorithms make full use of the structure of TR decomposition, and the randomized version accommodates any type of sketching. Theoretical results on the sketch size are provided, together with complexity analyses of the obtained algorithms. We compare our proposals with existing batch methods on both real and synthetic data. Numerical results show that they achieve better computing times while maintaining similar accuracy.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Notes
We use rSTR-U, rSTR-L, and rSTR-K to denote rSTR with uniform sampling, leverage-based sampling, and KSRFT, respectively.
References
Ahmadi-Asl S, Cichocki A, Phan AH, Asante-Mensah MG, Ghazani MM, Tanaka T, Oseledets IV (2020) Randomized algorithms for fast computation of low rank tensor ring model. Mach Learn Sci Technol 2(1):011001. https://doi.org/10.1088/2632-2153/abad87
Ahmadi-Asl S, Caiafa CF, Cichocki A, Phan AH, Tanaka T, Oseledets I, Wang J (2021) Cross tensor approximation methods for compression and dimensionality reduction. IEEE Access 9:150809–150838. https://doi.org/10.1109/ACCESS.2021.3125069
Bader BW, Kolda TG, et al (2021) Tensor Toolbox for MATLAB, Version 3.2.1. https://www.tensortoolbox.org. Accessed 5 Apr 2021
Battaglino C, Ballard G, Kolda TG (2018) A practical randomized CP tensor decomposition. SIAM J Matrix Anal Appl 39(2):876–901. https://doi.org/10.1137/17M1112303
Chachlakis DG, Dhanaraj M, Prater-Bennette A, Markopoulos PP (2021) Dynamic L1-norm Tucker tensor decomposition. IEEE J Sel Topics Signal Process 15(3):587–602. https://doi.org/10.1109/JSTSP.2021.3058846
Drineas P, Kannan R, Mahoney MW (2006) Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM J Comput 36(1):132–157. https://doi.org/10.1137/S0097539704442684
Drineas P, Mahoney MW, Muthukrishnan S, Sarlós T (2011) Faster least squares approximation. Numer Math 117(2):219–249. https://doi.org/10.1007/s00211-010-0331-6
Drineas P, Magdon-Ismail M, Mahoney MW, Woodruff DP (2012) Fast approximation of matrix coherence and statistical leverage. J Mach Learn Res 13(1):3475–3506
Espig M, Naraparaju KK, Schneider J (2012) A note on tensor chain approximation. Comput Visual Sci 15:331–344. https://doi.org/10.1007/s00791-014-0218-7
He Y, Atia GK (2022) Patch tracking-based streaming tensor ring completion for visual data recovery. IEEE Trans Circuits Syst Video Technol 32(12):8312–8326. https://doi.org/10.1109/TCSVT.2022.3190818
Huang Z, Qiu Y, Yu J, Zhou G (2022) Multi-aspect streaming tensor ring completion for dynamic incremental data. IEEE Signal Process Lett 29:2657–2661. https://doi.org/10.1109/LSP.2022.3231469
Jin R, Kolda TG, Ward R (2021) Faster Johnson-Lindenstrauss transforms via Kronecker products. Inf Inference 10(4):1533–1562. https://doi.org/10.1093/imaiai/iaaa028
Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500. https://doi.org/10.1137/07070111X
Kressner D, Vandereycken B, Voorhaar R (2023) Streaming tensor train approximation. SIAM J Sci Comput 45(5):2610–2631. https://doi.org/10.1137/22M1515045
Liu H, Yang LT, Guo Y, Xie X, Ma J (2021) An incremental tensor-train decomposition for cyber-physical-social big data. IEEE Trans Big Data 7(2):341–354. https://doi.org/10.1109/TBDATA.2018.2867485
Malik OA (2022) More efficient sampling for tensor decomposition with worst-case guarantees. In: Proceedings of the 39th international conference on machine learning, vol 162, pp 14887–14917. PMLR, Baltimore, MD, USA
Malik OA, Becker S (2021) A sampling-based method for tensor ring decomposition. In: Proceedings of the 38th international conference on machine learning, vol 139, pp 7400–7411. PMLR, Virtual Event
Ma C, Yang X, Wang H (2018) Randomized online CP decomposition. In: 2018 tenth international conference on advanced computational intelligence (ICACI), pp 414–419. IEEE, Xiamen, China
Mickelin O, Karaman S (2020) On algorithms for and computing with the tensor ring decomposition. Numer Linear Algebra Appl 27(3):2289. https://doi.org/10.1002/nla.2289
Oseledets IV (2011) Tensor-train decomposition. SIAM J Sci Comput 33(5):2295–2317. https://doi.org/10.1137/090752286
Sun J, Tao D, Papadimitriou S, Yu PS, Faloutsos C (2008) Incremental tensor analysis: Theory and applications. ACM Trans Knowl Discov Data 2(3), Article 11. https://doi.org/10.1145/1409620.1409621
Sun Y, Guo Y, Luo C, Tropp J, Udell M (2020) Low-rank Tucker approximation of a tensor from streaming data. SIAM J Math Data Sci 2(4):1123–1150. https://doi.org/10.1137/19M1257718
Sun J, Tao D, Faloutsos C (2006) Beyond streams and graphs: Dynamic tensor analysis. In: Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining, vol KDD ’06, pp 374–383. Association for Computing Machinery, New York, NY, USA
Thanh LT, Abed-Meraim K, Trung NL, Boyer R (2021) Adaptive algorithms for tracking tensor-train decomposition of streaming tensors. In: 2020 28th European signal processing conference (EUSIPCO), pp 995–999. IEEE, Amsterdam, Netherlands
Thanh LT, Abed-Meraim K, Trung NL, Hafiane A (2022) A contemporary and comprehensive survey on streaming tensor decomposition. IEEE Trans Knowl Data Eng 35(11):10897–10921. https://doi.org/10.1109/TKDE.2022.3230874
Woodruff DP (2014) Sketching as a tool for numerical linear algebra. Found Trends Theor Comput Sci 10(1–2):1–157. https://doi.org/10.1561/0400000060
Xiao H, Wang F, Ma F, Gao J (2018) eOTD: An efficient online Tucker decomposition for higher order tensors. In: 2018 IEEE international conference on data mining (ICDM), pp 1326–1331. IEEE, Singapore
Yu Y, Li H (2024a) Practical sketching-based randomized tensor ring decomposition. Numer Linear Algebra Appl 31(4):2548. https://doi.org/10.1002/nla.2548
Yu Y, Li H (2024b) Practical alternating least squares for tensor ring decomposition. Numer Linear Algebra Appl 31(3):2542. https://doi.org/10.1002/nla.2542
Yu J, Zou T, Zhou G (2022) Online subspace learning and imputation by tensor-ring decomposition. Neural Netw 153:314–324. https://doi.org/10.1016/j.neunet.2022.05.023
Yuan L, Cao J, Zhao X, Wu Q, Zhao Q (2018) Higher-dimension tensor completion via low-rank tensor ring decomposition. In: 2018 Asia-pacific signal and information processing association annual summit and conference (APSIPA ASC), pp 1071–1076. IEEE, Honolulu, HI, USA
Yuan L, Li C, Cao J, Zhao Q (2019) Randomized tensor ring decomposition and its application to large-scale data reconstruction. In: ICASSP 2019 - 2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2127–2131. IEEE, Brighton, UK
Zeng C, Ng MK (2021) Incremental CP tensor decomposition by alternating minimization method. SIAM J Matrix Anal Appl 42(2):832–858. https://doi.org/10.1137/20M1319097
Zeng C, Ng MK, Jiang TX (2024) Incremental algorithms for truncated higher-order singular value decompositions. BIT Numer Math 64(1):4. https://doi.org/10.1007/s10543-023-01004-7
Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A (2016) Tensor ring decomposition. arXiv preprint arXiv:1606.05535
Zhou S, Vinh NX, Bailey J, Jia Y, Davidson I (2016) Accelerating online CP decompositions for higher order tensors. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, vol KDD ’16, pp 1375–1384. Association for Computing Machinery, New York, NY, USA
Acknowledgements
The authors would like to thank the editor and the anonymous reviewers for their detailed comments and helpful suggestions, which helped considerably to improve the quality of the paper.
Funding
This work was supported by the National Natural Science Foundation of China (No. 12471349) and the Natural Science Foundation Project of CQ CSTC (No. cstc2019jcyj-msxmX0267).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Proofs
We first state some preliminaries that will be used in the proofs: Lemma 6 is a variant of [Drineas et al. (2011), Lemma 1] for multiple right-hand sides, Lemma 7 is part of [Drineas et al. (2006), Lemma 8], and Lemma 8 is from [Drineas et al. (2011), Theorem 4].
Lemma 6
Let \({{\,\textrm{OPT}\,}}{\mathop {=}\limits ^{{\tiny \textrm{def}}}}\min _{\textbf{X}} \Vert \textbf{A} \textbf{X} - \textbf{Y}\Vert _F\) with \(\textbf{A} \in \mathbb {R}^{I \times R}\) and \(I > R\), let \(\textbf{U} \in \mathbb {R}^{I \times {{\,\textrm{rank}\,}}(\textbf{A})}\) contain the left singular vectors of \(\textbf{A}\), let \(\textbf{U}^\perp \) be an orthonormal matrix whose columns span the space perpendicular to \({{\,\textrm{range}\,}}(\textbf{U})\), and define \(\textbf{Y}^\perp {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\textbf{U}^\perp (\textbf{U}^{\perp })^\intercal \textbf{Y}\). If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) satisfies

$$\sigma ^2_{\min }(\mathbf {\Psi } \textbf{U}) \ge \frac{1}{\sqrt{2}}, \qquad \text {(A1)}$$

$$\Vert (\mathbf {\Psi } \textbf{U})^\intercal \mathbf {\Psi } \textbf{Y}^\perp \Vert _F^2 \le \frac{\varepsilon }{2} {{\,\textrm{OPT}\,}}^2 \qquad \text {(A2)}$$

for some \(\varepsilon \in (0,1)\), then

$$\Vert \textbf{A} \tilde{\textbf{X}} - \textbf{Y}\Vert _F \le (1+\varepsilon ) {{\,\textrm{OPT}\,}},$$

where \(\tilde{\textbf{X}} {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\arg \min _{\textbf{X}} \Vert \mathbf {\Psi } \textbf{A} \textbf{X} - \mathbf {\Psi } \textbf{Y} \Vert _F\).
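To make Lemma 6 concrete, the following minimal numerical sketch (in Python with NumPy; the Gaussian sketch, the problem sizes, and the noise level are illustrative assumptions only, since the lemma applies to any \(\mathbf {\Psi }\) satisfying (A1) and (A2)) compares the residual of the sketched solution \(\tilde{\textbf{X}}\) with \({{\,\textrm{OPT}\,}}\):

```python
import numpy as np

rng = np.random.default_rng(0)
I, R, K, m = 2000, 20, 5, 200  # tall problem; sketch size m << I

A = rng.standard_normal((I, R))
Y = A @ rng.standard_normal((R, K)) + 0.1 * rng.standard_normal((I, K))

# Optimal residual: OPT = min_X ||A X - Y||_F
X_opt, *_ = np.linalg.lstsq(A, Y, rcond=None)
OPT = np.linalg.norm(A @ X_opt - Y)

# A Gaussian sketch, one of many choices satisfying (A1)-(A2) w.h.p.
Psi = rng.standard_normal((m, I)) / np.sqrt(m)
X_tilde, *_ = np.linalg.lstsq(Psi @ A, Psi @ Y, rcond=None)

# The sketched solution is near-optimal in the *unsketched* norm
print(np.linalg.norm(A @ X_tilde - Y) / OPT)  # typically close to 1
```

For this instance the ratio is typically close to 1, matching the \((1+\varepsilon ){{\,\textrm{OPT}\,}}\) guarantee.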
Lemma 7
Let \(\textbf{A}\) and \(\textbf{B}\) be matrices with I rows, and let \(\varvec{q} \in \mathbb {R}^I\) be a probability distribution satisfying

$$q_i \ge \beta \frac{\Vert \textbf{A}_{(i,:)}\Vert _2^2}{\Vert \textbf{A}\Vert _F^2}, \quad i \in [I], \qquad \text {(A3)}$$

for some \(\beta \in (0,1]\). If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) is a sampling matrix with the probability distribution \(\varvec{q}\), then

$$\mathbb {E}\left[ \Vert \textbf{A}^\intercal \mathbf {\Psi }^\intercal \mathbf {\Psi } \textbf{B} - \textbf{A}^\intercal \textbf{B}\Vert _F^2 \right] \le \frac{1}{\beta m} \Vert \textbf{A}\Vert _F^2 \Vert \textbf{B}\Vert _F^2.$$
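As a numerical illustration of Lemma 7 (a sketch assuming NumPy; the sizes are arbitrary, and the row-norm distribution below satisfies (A3) with \(\beta = 1\)), one can check the expectation bound by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
I, m = 1000, 100
A = rng.standard_normal((I, 30))
B = rng.standard_normal((I, 20))

# Row-norm sampling distribution q_i = ||A_(i,:)||_2^2 / ||A||_F^2,
# which satisfies (A3) with beta = 1.
q = np.sum(A**2, axis=1) / np.linalg.norm(A, 'fro')**2

def sampled_product(A, B, q, m, rng):
    """Form A^T Psi^T Psi B: draw m rows i ~ q, rescale by 1/sqrt(m q_i)."""
    idx = rng.choice(len(q), size=m, p=q)
    scale = 1.0 / np.sqrt(m * q[idx])
    return (A[idx] * scale[:, None]).T @ (B[idx] * scale[:, None])

exact = A.T @ B
errs = [np.linalg.norm(sampled_product(A, B, q, m, rng) - exact)**2
        for _ in range(500)]

# Lemma 7 bound with beta = 1: E[err^2] <= ||A||_F^2 ||B||_F^2 / m
bound = np.linalg.norm(A, 'fro')**2 * np.linalg.norm(B, 'fro')**2 / m
print(np.mean(errs), '<=', bound)  # holds up to Monte Carlo noise
```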
Lemma 8
Let \(\textbf{A} \in \mathbb {R}^{I \times R}\) with \(\Vert \textbf{A}\Vert _2 \le 1\), and let \(\varvec{q} \in \mathbb {R}^I\) be a probability distribution satisfying

$$q_i \ge \beta \frac{\Vert \textbf{A}_{(i,:)}\Vert _2^2}{\Vert \textbf{A}\Vert _F^2}, \quad i \in [I], \qquad \text {(A4)}$$

for some \(\beta \in (0,1]\). If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) is a sampling matrix with the probability distribution \(\varvec{q}\), \(\varepsilon \in (0,1)\) is an accuracy parameter, \(\Vert \textbf{A}\Vert _F^2 \ge \frac{1}{24}\), and

$$m \ge \frac{96 \Vert \textbf{A}\Vert _F^2}{\beta \varepsilon ^2} \ln \left( \frac{96 \Vert \textbf{A}\Vert _F^2}{\beta \varepsilon ^2 \sqrt{\delta }} \right) ,$$

then, with a probability of at least \(1-\delta \),

$$\Vert \textbf{A}^\intercal \mathbf {\Psi }^\intercal \mathbf {\Psi } \textbf{A} - \textbf{A}^\intercal \textbf{A}\Vert _2 \le \varepsilon .$$
1.1 Proof of Theorem 2
We first state a theorem similar to [Malik and Becker (2021), Theorem 7], i.e., the theoretical guarantee of uniform sampling for TR-ALS.
Theorem 9
Let \(\mathbf {\Psi }_{n}\) be a uniform sampling matrix defined as in (9), and
If
with \(\varepsilon \in (0,1)\), \(\delta \in (0,1)\), and \(\gamma >1\), then the following inequality holds with a probability of at least \(1-\delta \):
Proof
Let \(\textbf{U} \in \mathbb {R}^{J_n \times {{\,\textrm{rank}\,}}(\textbf{G}_{[2]}^{\ne n})}\) contain the left singular vectors of \(\textbf{G}_{[2]}^{\ne n}\) and \({{\,\textrm{rank}\,}}(\textbf{G}_{[2]}^{\ne n}) = R_{n} R_{n+1}\). Then, there is a \(\gamma >1\) such that

$$\max _{i \in [J_n]} \Vert \textbf{U}_{(i,:)}\Vert _2^2 \le \gamma \frac{R_{n} R_{n+1}}{J_n}.$$
Note that \(\Vert \textbf{U}\Vert _F = \sqrt{R_{n} R_{n+1}}\). Thus, setting \(\beta = \frac{1}{\gamma }\), we have

$$q_i = \frac{1}{J_n} \ge \beta \frac{\Vert \textbf{U}_{(i,:)}\Vert _2^2}{\Vert \textbf{U}\Vert _F^2}, \quad i \in [J_n].$$
That is, the uniform probability distribution \(\varvec{q}\) on \([J_n]\) satisfies (A4). Moreover, it is easy to see that \(\Vert \textbf{U}\Vert _2 = 1 \le 1\), \(\Vert \textbf{U}\Vert _F^2 = R_{n} R_{n+1}>\frac{1}{24}\), and, by the assumption on \(m\), the sample-size condition of Lemma 8 holds.
Thus, noting that \(\mathbf {\Psi }_{n}\) is a sampling matrix with the probability distribution \(\varvec{q}\), applying Lemma 8 implies that, with a probability of at least \(1-\delta _1\),

$$\Vert \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} \textbf{U} - \textbf{U}^\intercal \textbf{U}\Vert _2 \le \varepsilon .$$
On the other hand, note that for all \(i \in [R_{n} R_{n+1}]\),

$$|1 - \sigma _i^2(\mathbf {\Psi }_{n} \textbf{U})| \le \Vert \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} \textbf{U} - \textbf{U}^\intercal \textbf{U}\Vert _2 \le \varepsilon .$$
Thus, choosing \(\varepsilon = 1-1/\sqrt{2}\) gives that \(\sigma ^2_{\min } (\mathbf {\Psi }_{n} \textbf{U}) \ge \frac{1}{\sqrt{2}}\), therefore (A1) is satisfied.
Next, we check (A2). Recall that \((\textbf{X}_{[n]}^\intercal )^\perp {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\textbf{U}^\perp (\textbf{U}^{\perp })^\intercal \textbf{X}_{[n]}^\intercal \). Hence, \(\textbf{U}^\intercal (\textbf{X}_{[n]}^\intercal )^\perp =0\) and

$$\Vert (\mathbf {\Psi }_{n} \textbf{U})^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 = \Vert \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp - \textbf{U}^\intercal (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2.$$
Thus, noting (A3) and (A4), applying Lemma 7, we get

$$\mathbb {E}\left[ \Vert \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp - \textbf{U}^\intercal (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 \right] \le \frac{1}{\beta m} \Vert \textbf{U}\Vert _F^2 \Vert (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 = \frac{R_{n} R_{n+1}}{\beta m} {{\,\textrm{OPT}\,}}^2,$$

where \({{\,\textrm{OPT}\,}}= \min _{\textbf{G}_{n(2)}} \left\| \textbf{G}_{[2]}^{\ne n} \textbf{G}_{n(2)}^\intercal - \textbf{X}_{[n]}^\intercal \right\| _F\). Markov's inequality now implies that, with probability at least \(1-\delta _2\),

$$\Vert (\mathbf {\Psi }_{n} \textbf{U})^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 \le \frac{R_{n} R_{n+1}}{\delta _2 \beta m} {{\,\textrm{OPT}\,}}^2.$$
Setting \(m \ge \frac{2R_{n} R_{n+1}}{\delta _2 \beta \varepsilon }\) and using the value of \(\beta \) specified above, we have that (A2) is indeed satisfied.
Finally, using Lemma 6 concludes the proof of the theorem. \(\square \)
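The key step above, namely that uniform sampling preserves \(\sigma ^2_{\min }(\mathbf {\Psi }_{n} \textbf{U}) \ge \frac{1}{\sqrt{2}}\) when the leverage scores of \(\textbf{U}\) are incoherent, can be probed numerically. Below is a minimal sketch (assuming NumPy; \(J_n\), the ranks, the sample size, and the random orthonormal \(\textbf{U}\) standing in for the left singular vectors of \(\textbf{G}_{[2]}^{\ne n}\) are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
J, RR = 5000, 16   # J_n rows, R_n * R_{n+1} columns
m = 400            # number of uniformly sampled rows

# A random orthonormal U is incoherent with high probability; it stands
# in for the left singular vectors of the subchain unfolding.
U, _ = np.linalg.qr(rng.standard_normal((J, RR)))

# Coherence parameter: gamma = J_n * max_i ||U_(i,:)||_2^2 / (R_n R_{n+1})
gamma = J * np.max(np.sum(U**2, axis=1)) / RR
print('gamma =', gamma)  # close to 1 means nearly uniform leverage scores

# Uniform sampling with replacement, rescaled by 1/sqrt(m q_i) with q_i = 1/J
idx = rng.choice(J, size=m)
PsiU = U[idx] * np.sqrt(J / m)

sigma_min_sq = np.linalg.svd(PsiU, compute_uv=False)[-1] ** 2
print(sigma_min_sq >= 1 / np.sqrt(2))  # condition (A1); holds w.h.p.
```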
Proof of Theorem 2
For the temporal mode N, if
according to Theorem 9, at each time step we can obtain a corresponding upper bound on the error between the newly arriving tensor and its decomposition as follows
To obtain an upper bound on the error for all current time steps, let \({\tilde{\varepsilon }} = \min \{{\tilde{\varepsilon }}_1, {\tilde{\varepsilon }}_2, \cdots \}\), then the following holds with a probability of at least \(1-\delta \):
for
For the non-temporal mode n, if
we have
Thus, setting \(\varepsilon = \min \{{\tilde{\varepsilon }}, \varepsilon '_1, \varepsilon '_2, \cdots , \varepsilon '_{N-1}\}\), the proof can be completed. \(\square \)
Along the same lines, the proofs of Theorems 4 and 5 can be completed by using [Malik and Becker (2021), Theorem 7] and [Yu and Li (2024a), Theorem 5], respectively.
Specific algorithms based on different sketching techniques
Keywords
- Tensor ring decomposition
- Streaming tensor
- Randomized algorithm
- Alternating least squares
- Kronecker sub-sampled randomized Fourier transform
- Uniform sampling
- Importance sampling
- Leverage score