Tracking tensor ring decompositions of streaming tensors

Computational and Applied Mathematics

Abstract

Tensor ring (TR) decomposition is an efficient approach to discovering the hidden low-rank patterns in higher-order tensors, and streaming tensors are becoming highly prevalent in real-world applications. In this paper, we investigate how to track TR decompositions of streaming tensors. We first propose an efficient algorithm. Then, based on this algorithm and randomized sketching techniques, we present a randomized streaming TR decomposition. The proposed algorithms make full use of the structure of TR decomposition, and the randomized version allows any sketching type. Theoretical results on sketch sizes are provided, together with complexity analyses of the obtained algorithms. We compare our proposals with existing batch methods using both real and synthetic data. Numerical results show that they perform better in computing time while maintaining similar accuracy.


Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Notes

  1. We use rSTR-U, rSTR-L and rSTR-K to denote rSTR with uniform sampling, leverage-based sampling and KSRFT, respectively.

  2. https://cave.cs.columbia.edu/repository/COIL-20.

  3. https://github.com/qbzhao/BRTF/tree/master/videos.

References

  • Ahmadi-Asl S, Cichocki A, Phan AH, Asante-Mensah MG, Ghazani MM, Tanaka T, Oseledets IV (2020) Randomized algorithms for fast computation of low rank tensor ring model. Mach Learn Sci Technol 2(1):011001. https://doi.org/10.1088/2632-2153/abad87

  • Ahmadi-Asl S, Caiafa CF, Cichocki A, Phan AH, Tanaka T, Oseledets I, Wang J (2021) Cross tensor approximation methods for compression and dimensionality reduction. IEEE Access 9:150809–150838. https://doi.org/10.1109/ACCESS.2021.3125069

  • Bader BW, Kolda TG, et al (2021) Tensor Toolbox for MATLAB, version 3.2.1. https://www.tensortoolbox.org. Accessed 2021/04/05

  • Battaglino C, Ballard G, Kolda TG (2018) A practical randomized CP tensor decomposition. SIAM J Matrix Anal Appl 39(2):876–901. https://doi.org/10.1137/17M1112303

  • Chachlakis DG, Dhanaraj M, Prater-Bennette A, Markopoulos PP (2021) Dynamic L1-norm Tucker tensor decomposition. IEEE J Sel Topics Signal Process 15(3):587–602. https://doi.org/10.1109/JSTSP.2021.3058846

  • Drineas P, Kannan R, Mahoney MW (2006) Fast Monte Carlo algorithms for matrices I: approximating matrix multiplication. SIAM J Comput 36(1):132–157. https://doi.org/10.1137/S0097539704442684

  • Drineas P, Mahoney MW, Muthukrishnan S, Sarlós T (2011) Faster least squares approximation. Numer Math 117(2):219–249. https://doi.org/10.1007/s00211-010-0331-6

  • Drineas P, Magdon-Ismail M, Mahoney MW, Woodruff DP (2012) Fast approximation of matrix coherence and statistical leverage. J Mach Learn Res 13(1):3475–3506

  • Espig M, Naraparaju KK, Schneider J (2012) A note on tensor chain approximation. Comput Visual Sci 15:331–344. https://doi.org/10.1007/s00791-014-0218-7

  • He Y, Atia GK (2022) Patch tracking-based streaming tensor ring completion for visual data recovery. IEEE Trans Circuits Syst Video Technol 32(12):8312–8326. https://doi.org/10.1109/TCSVT.2022.3190818

  • Huang Z, Qiu Y, Yu J, Zhou G (2022) Multi-aspect streaming tensor ring completion for dynamic incremental data. IEEE Signal Process Lett 29:2657–2661. https://doi.org/10.1109/LSP.2022.3231469

  • Jin R, Kolda TG, Ward R (2021) Faster Johnson–Lindenstrauss transforms via Kronecker products. Inf Inference 10(4):1533–1562. https://doi.org/10.1093/imaiai/iaaa028

  • Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500. https://doi.org/10.1137/07070111X

  • Kressner D, Vandereycken B, Voorhaar R (2023) Streaming tensor train approximation. SIAM J Sci Comput 45(5):2610–2631. https://doi.org/10.1137/22M1515045

  • Liu H, Yang LT, Guo Y, Xie X, Ma J (2021) An incremental tensor-train decomposition for cyber-physical-social big data. IEEE Trans Big Data 7(2):341–354. https://doi.org/10.1109/TBDATA.2018.2867485

  • Malik OA (2022) More efficient sampling for tensor decomposition with worst-case guarantees. In: Proceedings of the 39th international conference on machine learning, vol 162, pp 14887–14917. PMLR, Virtual Event

  • Malik OA, Becker S (2021) A sampling-based method for tensor ring decomposition. In: Proceedings of the 38th international conference on machine learning, vol 139, pp 7400–7411. PMLR, Virtual Event

  • Ma C, Yang X, Wang H (2018) Randomized online CP decomposition. In: 2018 tenth international conference on advanced computational intelligence (ICACI), pp 414–419. IEEE, Xiamen, China

  • Mickelin O, Karaman S (2020) On algorithms for and computing with the tensor ring decomposition. Numer Linear Algebra Appl 27(3):2289. https://doi.org/10.1002/nla.2289

  • Oseledets IV (2011) Tensor-train decomposition. SIAM J Sci Comput 33(5):2295–2317. https://doi.org/10.1137/090752286

  • Sun J, Tao D, Papadimitriou S, Yu PS, Faloutsos C (2008) Incremental tensor analysis: theory and applications. ACM Trans Knowl Discov Data 2(3). https://doi.org/10.1145/1409620.1409621

  • Sun Y, Guo Y, Luo C, Tropp J, Udell M (2020) Low-rank Tucker approximation of a tensor from streaming data. SIAM J Math Data Sci 2(4):1123–1150. https://doi.org/10.1137/19M1257718

  • Sun J, Tao D, Faloutsos C (2006) Beyond streams and graphs: dynamic tensor analysis. In: Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining (KDD '06), pp 374–383. Association for Computing Machinery, New York, NY, USA

  • Thanh LT, Abed-Meraim K, Trung NL, Boyer R (2021) Adaptive algorithms for tracking tensor-train decomposition of streaming tensors. In: 2020 28th European signal processing conference (EUSIPCO), pp 995–999. IEEE, Amsterdam, Netherlands

  • Thanh LT, Abed-Meraim K, Trung NL, Hafiane A (2022) A contemporary and comprehensive survey on streaming tensor decomposition. IEEE Trans Knowl Data Eng 35(11):10897–10921. https://doi.org/10.1109/TKDE.2022.3230874

  • Woodruff DP (2014) Sketching as a tool for numerical linear algebra. Found Trends Theor Comput Sci 10(1–2):1–157. https://doi.org/10.1561/0400000060

  • Xiao H, Wang F, Ma F, Gao J (2018) eOTD: an efficient online Tucker decomposition for higher order tensors. In: 2018 IEEE international conference on data mining (ICDM), pp 1326–1331

  • Yu Y, Li H (2024a) Practical sketching-based randomized tensor ring decomposition. Numer Linear Algebra Appl 31(4):2548. https://doi.org/10.1002/nla.2548

  • Yu Y, Li H (2024b) Practical alternating least squares for tensor ring decomposition. Numer Linear Algebra Appl 31(3):2542. https://doi.org/10.1002/nla.2542

  • Yu J, Zou T, Zhou G (2022) Online subspace learning and imputation by tensor-ring decomposition. Neural Netw 153:314–324. https://doi.org/10.1016/j.neunet.2022.05.023

  • Yuan L, Cao J, Zhao X, Wu Q, Zhao Q (2018) Higher-dimension tensor completion via low-rank tensor ring decomposition. In: 2018 Asia-Pacific signal and information processing association annual summit and conference (APSIPA ASC), pp 1071–1076. IEEE, Honolulu, HI, USA

  • Yuan L, Li C, Cao J, Zhao Q (2019) Randomized tensor ring decomposition and its application to large-scale data reconstruction. In: ICASSP 2019 - 2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2127–2131. IEEE, Brighton, UK

  • Zeng C, Ng MK (2021) Incremental CP tensor decomposition by alternating minimization method. SIAM J Matrix Anal Appl 42(2):832–858. https://doi.org/10.1137/20M1319097

  • Zeng C, Ng MK, Jiang TX (2024) Incremental algorithms for truncated higher-order singular value decompositions. BIT Numer Math 64(1):4. https://doi.org/10.1007/s10543-023-01004-7

  • Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A (2016) Tensor ring decomposition. arXiv preprint arXiv:1606.05535

  • Zhou S, Vinh NX, Bailey J, Jia Y, Davidson I (2016) Accelerating online CP decompositions for higher order tensors. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (KDD '16), pp 1375–1384. Association for Computing Machinery, New York, NY, USA


Acknowledgements

The authors would like to thank the editor and the anonymous reviewers for their detailed comments and helpful suggestions, which helped considerably to improve the quality of the paper.

Funding

This work was supported by the National Natural Science Foundation of China (No. 12471349) and the Natural Science Foundation Project of CQ CSTC (No. cstc2019jcyj-msxmX0267).

Author information


Corresponding author

Correspondence to Hanyu Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proofs

We first state some preliminaries that will be used in the proofs: Lemma 6 is a variant of [Drineas et al. (2011), Lemma 1] for multiple right-hand sides, Lemma 7 is part of [Drineas et al. (2006), Lemma 8], and Lemma 8 is from [Drineas et al. (2011), Theorem 4].

Lemma 6

Let \({{\,\textrm{OPT}\,}}{\mathop {=}\limits ^{{\tiny \textrm{def}}}}\min _{\textbf{X}} \Vert \textbf{A} \textbf{X} - \textbf{Y}\Vert _F\) with \(\textbf{A} \in \mathbb {R}^{I \times R}\) and \(I > R\), let \(\textbf{U} \in \mathbb {R}^{I \times {{\,\textrm{rank}\,}}(\textbf{A})}\) contain the left singular vectors of \(\textbf{A}\), let \(\textbf{U}^\perp \) be an orthonormal matrix whose columns span the space perpendicular to \({{\,\textrm{range}\,}}(\textbf{U})\) and define \(\textbf{Y}^\perp {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\textbf{U}^\perp (\textbf{U}^{\perp })^\intercal \textbf{Y}\). If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) satisfies

$$\begin{aligned} \sigma ^2_{\min } (\mathbf {\Psi } \textbf{U}) \ge \frac{1}{\sqrt{2}}, \end{aligned}$$
(A1)
$$\begin{aligned} \Vert \textbf{U}^\intercal \mathbf {\Psi }^\intercal \mathbf {\Psi } \textbf{Y}^\perp \Vert _F^2 \le \frac{\varepsilon }{2} {{\,\textrm{OPT}\,}}^2, \end{aligned}$$
(A2)

for some \(\varepsilon \in (0,1)\), then

$$\begin{aligned} \Vert \textbf{A} \tilde{\textbf{X}} - \textbf{Y} \Vert _F \le (1+\varepsilon ){{\,\textrm{OPT}\,}}, \end{aligned}$$

where \(\tilde{\textbf{X}} {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\arg \min _{\textbf{X}} \Vert \mathbf {\Psi } \textbf{A} \textbf{X} - \mathbf {\Psi } \textbf{Y} \Vert _F\).
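
Lemma 6 is the standard sketch-and-solve guarantee for overdetermined least squares. The following minimal numerical sketch (ours, not from the paper) illustrates it; the Gaussian choice of \(\mathbf {\Psi }\), the problem sizes, and the noise level are illustrative assumptions only, whereas the paper's results concern sampling-based sketches.

```python
import numpy as np

# Sketch-and-solve for min_X ||A X - Y||_F: solve the small sketched
# problem and compare its residual on the FULL problem with OPT.
rng = np.random.default_rng(0)
I, R, K, m = 2000, 20, 5, 200  # hypothetical sizes

A = rng.standard_normal((I, R))
Y = A @ rng.standard_normal((R, K)) + 0.1 * rng.standard_normal((I, K))

X_exact, *_ = np.linalg.lstsq(A, Y, rcond=None)
OPT = np.linalg.norm(A @ X_exact - Y)

Psi = rng.standard_normal((m, I)) / np.sqrt(m)  # illustrative sketch
X_tilde, *_ = np.linalg.lstsq(Psi @ A, Psi @ Y, rcond=None)
res = np.linalg.norm(A @ X_tilde - Y)

# ratio is the (1 + eps) factor of Lemma 6; typically close to 1
print(f"OPT = {OPT:.4f}, sketched residual = {res:.4f}, ratio = {res/OPT:.4f}")
```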

Lemma 7

Let \(\textbf{A}\) and \(\textbf{B}\) be matrices with I rows, and let \(\varvec{q} \in \mathbb {R}^I\) be a probability distribution satisfying

$$\begin{aligned} \varvec{q}(i) \ge \beta \frac{\Vert \textbf{A}(i,:)\Vert ^2_2}{\Vert \textbf{A} \Vert _F^2} ~~\text {for all}~~ i \in [I] ~~\text {and some}~~ \beta \in (0, 1]. \end{aligned}$$

If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) is a sampling matrix with the probability distribution \(\varvec{q}\), then

$$\begin{aligned} \mathbb {E} \Vert \textbf{A}^\intercal \textbf{B} - \textbf{A}^\intercal \mathbf {\Psi }^\intercal \mathbf {\Psi } \textbf{B} \Vert _F^2 \le \frac{1}{\beta m} \Vert \textbf{A}\Vert _F^2 \Vert \textbf{B}\Vert _F^2. \end{aligned}$$
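
Lemma 7 is the classical Monte Carlo approximate matrix multiplication bound. As a quick illustration (our own, with hypothetical sizes), we sample rows with probabilities proportional to the squared row norms of \(\textbf{A}\) (so \(\beta = 1\)) and rescale by \(1/\sqrt{m\varvec{q}(i)}\), which we assume is the rescaling built into the sampling matrix \(\mathbf {\Psi }\):

```python
import numpy as np

# Approximate A^T B by sampling m of the I rows of A and B.
rng = np.random.default_rng(1)
I, m = 5000, 500
A = rng.standard_normal((I, 30))
B = rng.standard_normal((I, 40))

q = np.sum(A**2, axis=1) / np.linalg.norm(A, "fro")**2  # row-norm scores
idx = rng.choice(I, size=m, p=q)
scale = 1.0 / np.sqrt(m * q[idx])
SA = A[idx] * scale[:, None]   # Psi A
SB = B[idx] * scale[:, None]   # Psi B

# One draw of the squared error versus the bound on its expectation
# (1/(beta*m)) ||A||_F^2 ||B||_F^2 with beta = 1.
err = np.linalg.norm(A.T @ B - SA.T @ SB, "fro")**2
bound = np.linalg.norm(A, "fro")**2 * np.linalg.norm(B, "fro")**2 / m
print(f"squared error = {err:.2f}, expectation bound = {bound:.2f}")
```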

Lemma 8

Let \(\textbf{A} \in \mathbb {R}^{I \times R}\) with \(\Vert \textbf{A}\Vert _2 \le 1\), and let \(\varvec{q} \in \mathbb {R}^I\) be a probability distribution satisfying

$$\begin{aligned} \varvec{q}(i) \ge \beta \frac{\Vert \textbf{A}(i,:)\Vert ^2_2}{\Vert \textbf{A} \Vert _F^2} ~~\text {for all}~~ i \in [I] ~~\text {and some}~~ \beta \in (0, 1]. \end{aligned}$$

If \(\mathbf {\Psi } \in \mathbb {R}^{m \times I}\) is a sampling matrix with the probability distribution \(\varvec{q}\), \(\varepsilon \in (0,1)\) is an accuracy parameter, \(\Vert \textbf{A}\Vert _F^2 \ge \frac{1}{24}\), and

$$\begin{aligned} m \ge \frac{96 \Vert \textbf{A}\Vert _F^2}{\beta \varepsilon ^2}\ln \left( \frac{96 \Vert \textbf{A}\Vert _F^2}{\beta \varepsilon ^2 \sqrt{\delta }} \right) , \end{aligned}$$

then, with a probability of at least \(1-\delta \),

$$\begin{aligned} \Vert \textbf{A}^\intercal \textbf{A} - \textbf{A}^\intercal \mathbf {\Psi }^\intercal \mathbf {\Psi } \textbf{A} \Vert _2 \le \varepsilon . \end{aligned}$$
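
To get a feel for the sample size in Lemma 8, consider the illustrative (hypothetical) setting \(\Vert \textbf{A}\Vert _F^2 = 25\), \(\beta = 1\), \(\varepsilon = 0.5\), and \(\delta = 0.01\). Then

$$\begin{aligned} m \ge \frac{96 \cdot 25}{0.25}\ln \left( \frac{96 \cdot 25}{0.25 \cdot 0.1} \right) = 9600 \ln (96000) \approx 1.1 \times 10^{5}, \end{aligned}$$

so the required sketch size grows only logarithmically in \(1/\sqrt{\delta }\) but quadratically in \(1/\varepsilon \).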

1.1 Proof of Theorem 2

We first state a theorem similar to [Malik and Becker (2021), Theorem 7], i.e., the theoretical guarantee of uniform sampling for TR-ALS.

Theorem 9

Let \(\mathbf {\Psi }_{n}\) be a uniform sampling matrix defined as in (9), and

$$\begin{aligned} \tilde{\textbf{G}}_{n(2)} {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\arg \min _{\textbf{G}_{n(2)}} \left\| \textbf{X}_{[n]} \mathbf {\Psi }_{n}^\intercal - \textbf{G}_{n(2)}(\mathbf {\Psi }_{n}\textbf{G}_{[2]}^{\ne n})^\intercal \right\| _F. \end{aligned}$$

If

$$\begin{aligned} m \ge \left( \frac{2 \gamma R_{n} R_{n+1}}{\varepsilon } \right) \max \left[ \frac{48}{\varepsilon }\ln \left( \frac{96 \gamma R_{n} R_{n+1}}{ \varepsilon ^2 \sqrt{\delta }} \right) , \frac{1}{\delta } \right] , \end{aligned}$$

with \(\varepsilon \in (0,1)\), \(\delta \in (0,1)\), and \(\gamma >1\), then the following inequality holds with a probability of at least \(1-\delta \):

$$\begin{aligned} \left\| \textbf{X}_{[n]} - \tilde{\textbf{G}}_{n(2)}(\textbf{G}_{[2]}^{\ne n})^\intercal \right\| _F \le (1+\varepsilon ) \min _{\textbf{G}_{n(2)}} \left\| \textbf{X}_{[n]} - \textbf{G}_{n(2)}(\textbf{G}_{[2]}^{\ne n})^\intercal \right\| _F. \end{aligned}$$

Proof

Let \(\textbf{U} \in \mathbb {R}^{J_n \times {{\,\textrm{rank}\,}}(\textbf{G}_{[2]}^{\ne n})}\) contain the left singular vectors of \(\textbf{G}_{[2]}^{\ne n}\), where \({{\,\textrm{rank}\,}}(\textbf{G}_{[2]}^{\ne n}) = R_{n} R_{n+1}\). Then, there is a \(\gamma >1\) such that

$$\begin{aligned} \Vert \textbf{U}(i,:)\Vert _2^2 \le \frac{\gamma R_{n} R_{n+1}}{J_n} ~~\text {for all}~~ i \in \left[ J_n \right] . \end{aligned}$$
(A3)

Note that \(\Vert \textbf{U}\Vert _F = \sqrt{R_{n} R_{n+1}}\). Thus, setting \(\beta = \frac{1}{\gamma }\), we have

$$\begin{aligned} \frac{1}{J_n} \ge \beta \frac{\Vert \textbf{U}(i,:)\Vert _2^2}{\Vert \textbf{U}\Vert _F^2}. \end{aligned}$$
(A4)

That is, the uniform probability distribution \(\varvec{q}\) on \([J_n]\) satisfies (A4). Moreover, it is easy to see that \(\Vert \textbf{U}\Vert _2 = 1\), \(\Vert \textbf{U}\Vert _F^2 = R_{n} R_{n+1}>\frac{1}{24}\), and

$$\begin{aligned} m \ge \frac{96R_{n} R_{n+1}}{\beta \varepsilon ^2}\ln \left( \frac{96 R_{n} R_{n+1}}{\beta \varepsilon ^2 \sqrt{\delta _1}} \right) . \end{aligned}$$

Thus, noting that \(\mathbf {\Psi }_{n}\) is a sampling matrix with the probability distribution \(\varvec{q}\), applying Lemma 8 implies that, with a probability of at least \(1-\delta _1\),

$$\begin{aligned} \Vert \textbf{U}^\intercal \textbf{U} - \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} \textbf{U} \Vert _2 \le \varepsilon . \end{aligned}$$

On the other hand, note that for all \(i \in [R_{n} R_{n+1}]\),

$$\begin{aligned} |1-\sigma ^2_{i}(\mathbf {\Psi }_{n}\textbf{U})|&= |\sigma _{i}(\textbf{U}^\intercal \textbf{U}) - \sigma _{i}(\textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} \textbf{U})| \\&\le \Vert \textbf{U}^\intercal \textbf{U} - \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} \textbf{U} \Vert _2. \end{aligned}$$

Thus, choosing \(\varepsilon = 1-1/\sqrt{2}\) gives \(\sigma ^2_{\min } (\mathbf {\Psi }_{n} \textbf{U}) \ge \frac{1}{\sqrt{2}}\); therefore, (A1) is satisfied.

Next, we check (A2). Recall that \((\textbf{X}_{[n]}^\intercal )^\perp {\mathop {=}\limits ^{{\tiny \textrm{def}}}}\textbf{U}^\perp (\textbf{U}^{\perp })^\intercal \textbf{X}_{[n]}^\intercal \). Hence, \(\textbf{U}^\intercal (\textbf{X}_{[n]}^\intercal )^\perp =0\) and

$$\begin{aligned} \Vert (\mathbf {\Psi }_{n}\textbf{U})^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 = \Vert \textbf{U}^\intercal \mathbf {\Psi }_{n}^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp - \textbf{U}^\intercal (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2. \end{aligned}$$

Thus, noting (A3) and (A4), applying Lemma 7, we get

$$\begin{aligned} \mathbb {E} \left[ \Vert (\mathbf {\Psi }_{n}\textbf{U})^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 \right] \le \frac{1}{\beta m} \Vert \textbf{U} \Vert _F^2 \Vert (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 = \frac{R_{n} R_{n+1} {{\,\textrm{OPT}\,}}^2}{\beta m}, \end{aligned}$$

where \({{\,\textrm{OPT}\,}}= \min _{\textbf{G}_{n(2)}} \left\| \textbf{G}_{[2]}^{\ne n} \textbf{G}_{n(2)}^\intercal - \textbf{X}_{[n]}^\intercal \right\| _F\). Markov’s inequality now implies that, with probability at least \(1-\delta _2\),

$$\begin{aligned} \Vert (\mathbf {\Psi }_{n}\textbf{U})^\intercal \mathbf {\Psi }_{n} (\textbf{X}_{[n]}^\intercal )^\perp \Vert _F^2 \le \frac{ R_{n} R_{n+1} {{\,\textrm{OPT}\,}}^2}{\delta _2 \beta m}. \end{aligned}$$

Setting \(m \ge \frac{2R_{n} R_{n+1}}{\delta _2 \beta \varepsilon }\) and using the value of \(\beta \) specified above, we have that (A2) is indeed satisfied.
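
Explicitly, substituting this choice of \(m\) into the Markov bound gives

$$\begin{aligned} \frac{R_{n} R_{n+1} {{\,\textrm{OPT}\,}}^2}{\delta _2 \beta m} \le \frac{R_{n} R_{n+1} {{\,\textrm{OPT}\,}}^2}{\delta _2 \beta } \cdot \frac{\delta _2 \beta \varepsilon }{2 R_{n} R_{n+1}} = \frac{\varepsilon }{2} {{\,\textrm{OPT}\,}}^2. \end{aligned}$$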

Finally, using Lemma 6, together with a union bound over the two failure events, concludes the proof of the theorem. \(\square \)
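
In computational terms, the sampled subproblem analyzed in Theorem 9 is a small least-squares solve over uniformly sampled rows. The following minimal sketch (ours, with hypothetical shapes and function names) shows this core update:

```python
import numpy as np

def sampled_core_update(X_n, G_neq, m, rng):
    """Solve the uniformly sampled LS problem of Theorem 9 for one core.

    X_n   : mode-n unfolding X_[n], shape (I_n, J_n)
    G_neq : subchain unfolding G_[2]^{!=n}, shape (J_n, R_n * R_{n+1})
    """
    J_n = G_neq.shape[0]
    idx = rng.integers(0, J_n, size=m)  # uniform sampling, q(i) = 1/J_n
    # With uniform probabilities the usual 1/sqrt(m*q_i) rescaling is a
    # constant on both sides and cancels in the LS solution, so we omit it.
    G_s = G_neq[idx]       # Psi_n G_[2]^{!=n}
    X_s = X_n[:, idx]      # X_[n] Psi_n^T
    sol, *_ = np.linalg.lstsq(G_s, X_s.T, rcond=None)
    return sol.T           # estimate of G_{n(2)}, shape (I_n, R_n * R_{n+1})

rng = np.random.default_rng(0)
X_n = rng.standard_normal((50, 10000))
G_neq = rng.standard_normal((10000, 16))  # R_n * R_{n+1} = 16
print(sampled_core_update(X_n, G_neq, m=800, rng=rng).shape)  # (50, 16)
```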

Proof of Theorem 2

For the temporal mode N, if

$$\begin{aligned} {\tilde{m}}_1&\ge \left( \frac{2 \gamma R_{N} R_{1}}{{\tilde{\varepsilon }}_1} \right) \max \left[ \frac{48}{{\tilde{\varepsilon }}_1}\ln \left( \frac{96 \gamma R_{N} R_{1}}{ {\tilde{\varepsilon }}_1^2 \sqrt{\delta }} \right) , \frac{1}{\delta } \right] ,\\ {\tilde{m}}_2&\ge \left( \frac{2 \gamma R_{N} R_{1}}{{\tilde{\varepsilon }}_2} \right) \max \left[ \frac{48}{{\tilde{\varepsilon }}_2}\ln \left( \frac{96 \gamma R_{N} R_{1}}{ {\tilde{\varepsilon }}_2^2 \sqrt{\delta }} \right) , \frac{1}{\delta } \right] ,\\&\cdots \end{aligned}$$

then, according to Theorem 9, at each time step we can obtain a corresponding upper error bound between the newly arriving tensor and its decomposition as follows:

$$\begin{aligned} \text {The 1st time step:}&\left\| \textbf{X}_{[N]}^{old} - \tilde{\textbf{G}}_{N(2)}^{old}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F \le (1+{\tilde{\varepsilon }}_1) \min _{\textbf{G}_{N(2)}^{old}} \left\| \textbf{X}_{[N]}^{old} - \textbf{G}_{N(2)}^{old}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F,\\ \text {The 2nd time step:}&\left\| \textbf{X}_{[N]}^{new} - \tilde{\textbf{G}}_{N(2)}^{new}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F \le (1+{\tilde{\varepsilon }}_2) \min _{\textbf{G}_{N(2)}^{new}} \left\| \textbf{X}_{[N]}^{new} - \textbf{G}_{N(2)}^{new}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F,\\&\cdots \end{aligned}$$

To obtain an upper bound on the error for all current time steps, let \({\tilde{\varepsilon }} = \min \{{\tilde{\varepsilon }}_1, {\tilde{\varepsilon }}_2, \cdots \}\). Since the required sketch size grows as the accuracy parameter decreases, a single \({\tilde{m}}\) chosen for the smallest parameter \({\tilde{\varepsilon }}\) meets the requirement of Theorem 9 at every time step; hence, the following holds with a probability of at least \(1-\delta \):

$$\begin{aligned} \left\| \textbf{X}_{[N]} - \tilde{\textbf{G}}_{N(2)}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F \le (1+{\tilde{\varepsilon }}) \min _{\textbf{G}_{N(2)}} \left\| \textbf{X}_{[N]} - \textbf{G}_{N(2)}(\textbf{G}_{[2]}^{\ne N})^\intercal \right\| _F, \end{aligned}$$

for

$$\begin{aligned} {\tilde{m}} \ge \left( \frac{2 \gamma R_{N} R_{1}}{{\tilde{\varepsilon }}} \right) \max \left[ \frac{48}{{\tilde{\varepsilon }}}\ln \left( \frac{96 \gamma R_{N} R_{1}}{ {\tilde{\varepsilon }}^2 \sqrt{\delta }} \right) , \frac{1}{\delta } \right] . \end{aligned}$$

For the non-temporal mode n, if

$$\begin{aligned} m'_n \ge \left( \frac{2 \gamma R_{n} R_{n+1}}{\varepsilon '_n} \right) \max \left[ \frac{48}{\varepsilon '_n}\ln \left( \frac{96 \gamma R_{n} R_{n+1}}{ {\varepsilon '}_n^2 \sqrt{\delta }} \right) , \frac{1}{\delta } \right] , \end{aligned}$$

we have

$$\begin{aligned} \left\| \textbf{X}_{[n]} - \tilde{\textbf{G}}_{n(2)}(\textbf{G}_{[2]}^{\ne n})^\intercal \right\| _F \le (1+\varepsilon '_n) \min _{\textbf{G}_{n(2)}} \left\| \textbf{X}_{[n]} - \textbf{G}_{n(2)}(\textbf{G}_{[2]}^{\ne n})^\intercal \right\| _F. \end{aligned}$$

Thus, setting \(\varepsilon = \min \{{\tilde{\varepsilon }}, \varepsilon '_1, \varepsilon '_2, \cdots , \varepsilon '_{N-1}\}\) completes the proof. \(\square \)

Along the same lines, the proofs of Theorems 4 and 5 can be completed by using [Malik and Becker (2021), Theorem 7] and [Yu and Li (2024a), Theorem 5], respectively.

Specific algorithms based on different sketching techniques

[Algorithm figures g–m, giving the specific rSTR algorithms for each sketching type, appear at this point in the published article.]

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yu, Y., Li, H. Tracking tensor ring decompositions of streaming tensors. Comp. Appl. Math. 44, 60 (2025). https://doi.org/10.1007/s40314-024-03019-4

