DOI: 10.1145/3666000.3669715 · ISSAC Conference Proceedings · Research article

Computing Krylov iterates in the time of matrix multiplication

Published: 16 July 2024

Abstract

Krylov methods rely on iterated matrix-vector products A^k u_j for an n × n matrix A and vectors u_1, …, u_m. The space spanned by all iterates A^k u_j admits a particular basis, the maximal Krylov basis, which consists of iterates of the first vector u_1, Au_1, A^2 u_1, …, until reaching linear dependency, then iterating similarly the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm, whose complexity bound O(n^ω log(n)) comes from repeated squarings of A and logarithmically many Gaussian eliminations. Here ω > 2 is a feasible exponent for matrix multiplication over the base field.
We present an algorithm computing the maximal Krylov basis in O(n^ω log log(n)) field operations when m ∈ O(n), and even O(n^ω) as soon as m ∈ O(n / log(n)^c) for some fixed real c > 0. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in O(n^ω (log log(n))^2), and therefore matrix exponentiation A^k can be performed in the latter complexity if log(k) ∈ O(n^{ω−1−ε}) for some fixed ε > 0.
A key idea for these improvements is to rely on fast algorithms for m × m polynomial matrices of average degree n/m, involving high-order lifting and minimal kernel bases.
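For intuition, the maximal Krylov basis described above can be built by the straightforward iteration that the paper's algorithm accelerates. The sketch below works over the rationals and uses dense row reduction to detect linear dependency, so its cost is cubic rather than the paper's O(n^ω log log(n)); the helper names are ours, not from the paper.

```python
from fractions import Fraction

def _try_extend(rref, v):
    """Reduce v against the echelon rows kept in rref; if a nonzero
    remainder survives, record it and report v as independent."""
    v = list(v)
    for pivot, row in rref:
        if v[pivot] != 0:
            c = v[pivot] / row[pivot]
            v = [a - c * b for a, b in zip(v, row)]
    for j, a in enumerate(v):
        if a != 0:
            rref.append((j, v))  # first nonzero entry becomes the pivot
            return True
    return False

def maximal_krylov_basis(A, vectors):
    """Naive maximal Krylov basis: iterate each u_j under A until its
    image becomes linearly dependent, then move to the next vector."""
    n = len(A)
    basis, rref = [], []
    for u in vectors:
        v = [Fraction(x) for x in u]
        while len(basis) < n and _try_extend(rref, v):
            basis.append(list(v))
            # Next iterate: v <- A v
            v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return basis
```

For example, with A = diag(2, 3) and u_1 = (1, 1), the iterates (1, 1) and (2, 3) already span the whole space, so the second input vector contributes nothing.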

References

[1]
J. Alman, R. Duan, V. V. Williams, Y. Xu, Z. Xu, and R. Zhou. 2024. More Asymmetry Yields Faster Matrix Multiplication. arXiv:2404.16349. https://arxiv.org/abs/2404.16349
[2]
B. Beckermann, G. Labahn, and G. Villard. 2006. Normal forms for general polynomial matrices. J. Symbolic Comput. 41, 6 (2006), 708–737. https://doi.org/10.1016/j.jsc.2006.02.001
[3]
R. Duan, H. Wu, and R. Zhou. 2023. Faster Matrix Multiplication via Asymmetric Hashing. In Proceedings 64th IEEE Symposium on Foundations of Computer Science (FOCS). 2129–2138. https://doi.org/10.1109/FOCS57990.2023.00130
[4]
F. R. Gantmacher. 1960. The Theory of Matrices. Volume one. Chelsea Publishing Company. https://bookstore.ams.org/chelgantset
[5]
J. von zur Gathen and J. Gerhard. 1999. Modern computer algebra. Third edition 2013. Cambridge University Press. https://doi.org/10.1017/CBO9781139856065
[6]
M. Giesbrecht. 1995. Nearly Optimal Algorithms for Canonical Matrix Forms. SIAM J. Comput. 24, 5 (1995), 948–969. https://doi.org/10.1137/S0097539793252687
[7]
S. Gupta, S. Sarkar, A. Storjohann, and J. Valeriote. 2012. Triangular x-basis decompositions and derandomization of linear algebra algorithms over K[x]. J. Symbolic Comput. 47, 4 (2012), 422–453. https://doi.org/10.1016/j.jsc.2011.09.006
[8]
N. Jacobson. 2009. Basic Algebra I (2nd ed., W. H. Freeman, 1985). Dover Publications Inc. https://store.doverpublications.com/0486471896.html
[9]
C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. 2017. Computing minimal interpolation bases. J. Symbolic Comput. 83 (2017), 272–314. https://doi.org/10.1016/j.jsc.2016.11.015
[10]
T. Kailath. 1980. Linear Systems. Prentice-Hall.
[11]
R.E. Kalman. 1963. Mathematical Description of Linear Dynamical Systems. Journal of the Society for Industrial and Applied Mathematics, Series A: Control 1, 2 (1963), 152–192. https://doi.org/10.1137/0301010
[12]
W. Keller-Gehrig. 1985. Fast algorithms for the characteristic polynomial. Theoretical Computer Science 36 (1985), 309–317. https://doi.org/10.1016/0304-3975(85)90049-0
[13]
G. Labahn, V. Neiger, and W. Zhou. 2017. Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix. J. Complexity 42 (2017), 44–71. https://doi.org/10.1016/j.jco.2017.03.003
[14]
S. Lang. 2002. Algebra (revised third edition). Springer-Verlag New York Inc. https://doi.org/10.1007/978-1-4613-0041-0
[15]
R. T. Moenck and J. H. Carter. 1979. Approximate algorithms to derive exact solutions to systems of linear equations. In Proceedings International Symposium on Symbolic and Algebraic Manipulation (EUROSAM) (LNCS 72). 65–73. https://doi.org/10.1007/3-540-09519-5_60
[16]
V. Neiger and C. Pernet. 2021. Deterministic computation of the characteristic polynomial in the time of matrix multiplication. J. Complexity 67 (2021), 101572. https://doi.org/10.1016/j.jco.2021.101572
[17]
V. Neiger and T. X. Vu. 2017. Computing canonical bases of modules of univariate relations. In Proceedings ISSAC 2017 (Kaiserslautern, Germany). ACM, 357–364. https://doi.org/10.1145/3087604.3087656
[18]
C. Pernet and A. Storjohann. 2007. Faster Algorithms for the Characteristic Polynomial. In Proceedings ISSAC 2007. ACM, 307–314. https://doi.org/10.1145/1277548.1277590
[19]
C. Pernet and A. Storjohann. 2007. Frobenius form in expected matrix multiplication time over sufficiently large fields. Technical Report. https://cs.uwaterloo.ca/~astorjoh/cpoly.pdf
[20]
A. Storjohann. 2000. Algorithms for Matrix Canonical Forms. Ph. D. Dissertation. Institut für Wissenschaftliches Rechnen, ETH-Zentrum, Zurich, Switzerland. https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/145127/eth-24018-02.pdf
[21]
A. Storjohann. 2001. Deterministic computation of the Frobenius form. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 368–377. https://doi.org/10.1109/SFCS.2001.959911
[22]
A. Storjohann. 2003. High-order lifting and integrality certification. J. Symbolic Comput. 36, 3-4 (2003), 613–648. https://doi.org/10.1016/S0747-7171(03)00097-X
[23]
G. Villard. 1997. Fast Parallel Algorithms for Matrix Reduction to Normal Forms. Appl. Algebra Eng. Commun. Comput. 8, 6 (1997), 511–537. https://doi.org/10.1007/s002000050089
[24]
V. V. Williams, Y. Xu, Z. Xu, and R. Zhou. 2024. New Bounds for Matrix Multiplication: from Alpha to Omega. In Proceedings ACM-SIAM Symposium on Discrete Algorithms (SODA). 3792–3835. https://doi.org/10.1137/1.9781611977912.134
[25]
W. Zhou and G. Labahn. 2013. Computing Column Bases of Polynomial Matrices. In Proceedings ISSAC 2013. ACM, 379–386. https://doi.org/10.1145/2465506.2465947
[26]
W. Zhou, G. Labahn, and A. Storjohann. 2012. Computing Minimal Nullspace Bases. In Proceedings ISSAC 2012 (Grenoble, France). ACM, 366–373. https://doi.org/10.1145/2442829.2442881
[27]
W. Zhou, G. Labahn, and A. Storjohann. 2015. A deterministic algorithm for inverting a polynomial matrix. J. Complexity 31, 2 (2015), 162–173. https://doi.org/10.1016/j.jco.2014.09.004


Published In

ISSAC '24: Proceedings of the 2024 International Symposium on Symbolic and Algebraic Computation
July 2024, 470 pages
ISBN: 9798400706967
DOI: 10.1145/3666000
Publisher: Association for Computing Machinery, New York, NY, United States

Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Author Tags

  1. Frobenius normal form
  2. Krylov iteration
  3. Polynomial linear algebra

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • ANR

Conference

ISSAC '24

Acceptance Rates

Overall Acceptance Rate 395 of 838 submissions, 47%

