Abstract
Dynamical low-rank integrators for matrix differential equations have recently attracted a lot of attention and have proven to be very efficient in various applications. In this paper, we propose a novel strategy for choosing the rank of the projector-splitting integrator of Lubich and Oseledets adaptively. It is based on a combination of error estimators for the local time-discretization error and for the low-rank error, with the aim of balancing both. This ensures that the convergence of the underlying time integrator is preserved. The adaptive algorithm works for projector-splitting integrator methods for first-order matrix differential equations and also for dynamical low-rank integrators for second-order equations, which use the projector-splitting integrator in their substeps. Numerical experiments illustrate the performance of the new integrators.
1 Introduction
Dynamical low-rank integrators [17, 19] have been designed for the approximation of large, time-dependent matrices which are solutions to first-order matrix differential equations
$$\begin{aligned} A'(t) = F\big (t, A(t)\big ), \quad A(0) = A_0 \in \mathbb {C}^{m\times n}, \end{aligned}$$(1)
whose solutions can be well approximated by low-rank matrices. The projector-splitting integrator introduced in [19] has particularly favorable properties. It is robust in the presence of small singular values, which appear in the case of over-approximation, i.e., when the approximation rank is chosen larger than the rank of the solution A(t) of (1). A variant of this method adapted to strongly dissipative problems was presented in [5]. Another variant for stiff first-order matrix differential equations was introduced in [24].
Recently, a novel dynamical low-rank integrator for second-order matrix differential equations
$$\begin{aligned} A''(t) = F\big (t, A(t)\big ), \quad A(0) = A_0, \quad A'(0) = B_0, \end{aligned}$$(2)
has been constructed in [12]. It is based on a Strang splitting of the equivalent first-order formulation and the projector-splitting integrator [19]. This integrator is also robust in the case of over-approximation and shows second-order convergence in time. In particular, with a few amendments it is an effective method for semilinear second-order matrix differential equations, see Sect. 2.3.
In applications, rank-adaptivity turns out to be essential for the efficiency of the algorithms. For first-order equations, in [4] a rank-adaptive variant of the unconventional integrator [5] was proposed. However, the approach from [4] is not applicable to the projector-splitting integrator [19]. In [8], rank-adaptivity for tensor methods for high-dimensional PDEs was based on a functional tensor train series expansion. For the special case of finite-dimensional parametrized Hamiltonian systems modeling non-dissipative phenomena, a rank-adaptive structure-preserving reduced basis method was introduced in [11]. Very recently, in [10] a Predictor–Corrector strategy for adaptivity was proposed, and in [31] the authors developed a rank-adaptive splitting method for the extended Fisher–Kolmogorov equation.
In the present paper, we discuss a general strategy for selecting the rank adaptively in the projector-splitting integrator. Increasing or decreasing the rank from one time step to the next was already proposed in [19] and quite recently in [13]. Our main contribution is a strategy for which the time step of the underlying splitting method, i.e., the Lie-Trotter splitting for first-order and the Strang splitting for second-order problems, is the only input parameter. We determine the rank such that the error of the dynamical low-rank approximation does not spoil the order of the underlying splitting method applied to the full matrix differential equation. This is achieved by propagating one additional singular value which is used for accepting or rejecting the current time step and for selecting the rank in the next step. The decision is based on an estimator of the global time-discretization error. This adaptivity control is also applicable to the dynamical low-rank integrators for stiff problems. The new dynamical low-rank integrator for (2) uses the projector-splitting integrator in the substeps of the Strang splitting, which allows to control the rank adaptively also for second-order equations. Moreover, it can be readily combined with the integrator from [5].
The paper is organized as follows: In Sect. 2, we briefly recall the projector-splitting integrator introduced in [19] and the LRLF scheme from [12]. Additionally, we sketch variants of both methods for (stiff) semilinear first-order and second-order differential equations, respectively. Section 3 is devoted to rank-adaptivity. In Sect. 4, numerical experiments illustrate the performance of the new schemes.
Throughout this paper, \(m,n\), and \(r\) are natural numbers, where w.l.o.g. \(m\ge n\gg r\). If \(n> m\), we consider the equivalent differential equation for the transpose. By \(\mathscr {M}_r\) we denote the manifold of complex \(m\times n\) matrices with rank \(r\),
$$\begin{aligned} \mathscr {M}_r = \big \{ Y \in \mathbb {C}^{m\times n} ~|~ {{\,\textrm{rank}\,}}(Y) = r \big \}. \end{aligned}$$
The Stiefel manifold of \(m\times r\) unitary matrices is denoted by
$$\begin{aligned} \mathscr {V}_{m,r} = \big \{ U \in \mathbb {C}^{m\times r} ~|~ U^H U = I_r \big \}, \end{aligned}$$
where \(I_r\) is the identity matrix of dimension \(r\) and \(U^H\) is the conjugate transpose of U.
The singular value decomposition of a matrix \(Y \in \mathbb {C}^{m\times n}\) is given by
$$\begin{aligned} Y = U \Sigma V^H, \quad U \in \mathscr {V}_{m,m}, \quad V \in \mathscr {V}_{n,n}, \quad \Sigma = {{\,\textrm{diag}\,}}(\sigma _1, \ldots , \sigma _n) \in \mathbb {R}^{m\times n}, \end{aligned}$$
where \(\sigma _1 \ge \cdots \ge \sigma _n\ge 0\) are its singular values. It is well known that for \(r< n\), the rank-\(r\) best-approximation to Y w.r.t. the Frobenius norm is
$$\begin{aligned} {\widehat{Y}} = U {\widetilde{\Sigma }} V^H, \end{aligned}$$
where \({\widetilde{\Sigma }} = {{\,\textrm{diag}\,}}(\sigma _1, \ldots , \sigma _r, 0, \ldots , 0)\) and
$$\begin{aligned} \Vert Y - {\widehat{Y}} \Vert ^2 = \sigma _{r+1}^2 + \cdots + \sigma _n^2. \end{aligned}$$
The Frobenius norm is denoted by \(\Vert \cdot \Vert \), and the Frobenius inner product by \(\langle \cdot , \cdot \rangle \). The symbol \(\bullet \) denotes the entrywise or Hadamard product of matrices. For a given step size \(\tau \) we use the notation \(t_k = k \tau \) for any k with \(2 k \in \mathbb {N}_0\).
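The best-approximation property above can be checked numerically. The following sketch (variable names are illustrative, not taken from the paper's code) truncates an SVD and verifies that the Frobenius error equals the tail of the singular values:

```python
import numpy as np

# Rank-r best approximation of Y via a truncated SVD (Eckart-Young).
rng = np.random.default_rng(0)
m, n, r = 8, 6, 3
Y = rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(Y, full_matrices=False)   # s sorted descending
Y_best = U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]     # rank-r truncation

# Frobenius error equals sqrt(sigma_{r+1}^2 + ... + sigma_n^2).
err = np.linalg.norm(Y - Y_best)
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 2)))
```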
2 Dynamical low-rank integrators with fixed rank
In this section we review various dynamical low-rank integrators for first- and second-order matrix differential equations.
2.1 First-order differential equations
In the dynamical low-rank approximation of the solution to first-order matrix differential equations (1), the approximation \({{\widehat{A}}}\approx A\) is determined as solution of the projected differential equation
$$\begin{aligned} {{\widehat{A}}}'(t) = P\big ({{\widehat{A}}}(t)\big ) F\big (t, {{\widehat{A}}}(t)\big ), \quad {{\widehat{A}}}(0) = {{\widehat{A}}}_0 \in \mathscr {M}_r. \end{aligned}$$(3)
Here, \(P\big ({{\widehat{A}}}(t)\big )\) denotes the orthogonal projector onto the tangent space of the low-rank manifold \(\mathscr {M}_r\) at \({{\widehat{A}}}(t) \in \mathscr {M}_r\), i.e.,
$$\begin{aligned} P\big ({{\widehat{A}}}\big ) Z = Z {{\widehat{V}}}{{\widehat{V}}}^H - {{\widehat{U}}}{{\widehat{U}}}^H Z {{\widehat{V}}}{{\widehat{V}}}^H + {{\widehat{U}}}{{\widehat{U}}}^H Z, \end{aligned}$$
cf. [17, Lemma 4.1], where \({{\widehat{A}}}(t) \in \mathscr {M}_r\) is decomposed in a non-unique fashion resembling the singular value decomposition into
$$\begin{aligned} {{\widehat{A}}}(t) = {{\widehat{U}}}(t) {{\widehat{S}}}(t) {{\widehat{V}}}(t)^H, \quad {{\widehat{U}}}(t) \in \mathscr {V}_{m,r}, \quad {{\widehat{V}}}(t) \in \mathscr {V}_{n,r}, \quad {{\widehat{S}}}(t) \in \mathbb {C}^{r\times r}. \end{aligned}$$
For the initial value \({{\widehat{A}}}_0\), typically the rank-\(r\) best-approximation to A(0) computed by a truncated SVD is used.
The dynamical low-rank integrator developed in [19], also called the projector-splitting integrator, is constructed by performing a Lie-Trotter splitting on the right-hand side of (3) and solving the three subproblems on the low-rank manifold. This approach yields an efficient time-stepping algorithm for computing the desired low-rank approximations. One time-step of the projector-splitting integrator is given in Algorithm 1.
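For orientation, one Lie-Trotter (KSL) step can be sketched as follows. This is a simplified illustration in which each of the three substeps is advanced by a single explicit Euler update; Algorithm 1 solves the substeps exactly or with an inner integrator, so this sketch is not the authors' implementation:

```python
import numpy as np

def psi_step(U, S, Vh, F, t, tau):
    """One projector-splitting (KSL) step for A' = F(t, A), sketched with
    a single explicit Euler update per substep. The factors satisfy
    A_hat = U @ S @ Vh, where U and Vh.T have orthonormal columns."""
    # K-step: K = U S evolves as K' = F(t, K Vh) Vh^H; then re-orthonormalize.
    K = U @ S + tau * F(t, (U @ S) @ Vh) @ Vh.conj().T
    U1, S_hat = np.linalg.qr(K)
    # S-step: the middle substep runs "backward", S' = -U1^H F(...) Vh^H.
    S_tilde = S_hat - tau * U1.conj().T @ F(t, U1 @ S_hat @ Vh) @ Vh.conj().T
    # L-step: L = (S Vh)^H evolves as L' = F(t, U1 L^H)^H U1.
    L = (S_tilde @ Vh).conj().T
    L = L + tau * F(t, U1 @ L.conj().T).conj().T @ U1
    V1, R = np.linalg.qr(L)
    return U1, R.conj().T, V1.conj().T   # new factors U1, S1, Vh1
```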
2.2 Second-order differential equations
For the second-order problem (2), a novel dynamical low-rank integrator named LRLF (low-rank leapfrog) scheme was presented in [12, Section 3]. Given \({r_A},{r_B}\in \mathbb {N}\) and approximations \({{\widehat{A}}}_k\approx A(t_k)\) of rank \({r_A}\) and \({{\widehat{B}}}_{k-\frac{1}{2}}\approx A'(t_{k-\frac{1}{2}})\) of rank \({r_B}\), it computes approximations \({{\widehat{A}}}_{k+1}\in \mathscr {M}_{r_A}\) and \({{\widehat{B}}}_{k+ \frac{1}{2}}\in \mathscr {M}_{r_B}\) with \({{\widehat{A}}}_{k+1}\approx A(t_{k+1})\) and \({{\widehat{B}}}_{k+ \frac{1}{2}}\approx A'(t_{k+ \frac{1}{2}})\), respectively. This integrator is based on the first-order formulation of (2),
combined with a Strang splitting. The subproblems are first-order matrix differential equations, (5a, 5b)
which can be solved exactly. The low-rank matrices \({{\widehat{A}}}_{k+1}\) and \({{\widehat{B}}}_{k+ \frac{1}{2}}\) are obtained by approximating the solutions of (5a) by application of Algorithm 1 to
This leads to the dynamical low-rank integrator LRLF shown in Algorithm 2.
2.3 Semilinear problems
A fixed-rank dynamical low-rank integrator for the stiff semilinear first-order problem
$$\begin{aligned} A'(t) = L_1 A(t) + A(t) L_2^H + f\big (t, A(t)\big ), \quad A(0) = A_0, \end{aligned}$$(6)
where the norms of \(L_1\in \mathbb {C}^{m\times m}\) and \(L_2\in \mathbb {C}^{n\times n}\) are large and f is a Lipschitz continuous function with moderate Lipschitz constant, was introduced in [24]. It is based on the subproblems (7a, 7b)
$$\begin{aligned} A'(t)&= L_1 A(t) + A(t) L_2^H, \\ A'(t)&= f\big (t, A(t)\big ). \end{aligned}$$
The solution to (7a) is given by
$$\begin{aligned} A(t) = \textrm{e}^{t L_1} A_0 \textrm{e}^{t L_2^H}. \end{aligned}$$(8)
Note that the rank of the initial value is preserved for all times [24, Section 3.2]. In contrast, the rank of the solution to the nonlinear subproblem (7b) may vary in time.
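The rank preservation can be illustrated numerically: the exact flow acts separately on the two outer factors, so the inner factor and hence the rank are untouched. In the following sketch, \(L_1\) and \(L_2\) are taken diagonal so that the matrix exponential is entrywise; all names are illustrative:

```python
import numpy as np

# The flow of the linear subproblem maps U0 S0 V0^H to
# (e^{tau L1} U0) S0 (e^{tau L2} V0)^H, preserving the rank.
rng = np.random.default_rng(1)
m, n, r, tau = 20, 15, 3, 0.1
d1, d2 = rng.standard_normal(m), rng.standard_normal(n)
E1, E2 = np.diag(np.exp(tau * d1)), np.diag(np.exp(tau * d2))  # e^{tau L_i}

U0, _ = np.linalg.qr(rng.standard_normal((m, r)))
V0, _ = np.linalg.qr(rng.standard_normal((n, r)))
S0 = np.diag(rng.uniform(1.0, 2.0, r))

A_tau = E1 @ (U0 @ S0 @ V0.T) @ E2.T    # full flow e^{tau L1} A0 e^{tau L2^H}
A_fac = (E1 @ U0) @ S0 @ (E2 @ V0).T    # the same flow applied to the factors

assert np.allclose(A_tau, A_fac)
assert np.linalg.matrix_rank(A_fac) == r
```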
In [24], a low-rank approximation has been computed by applying a Lie-Trotter splitting to (6) and solving the subproblems (7) by the projector-splitting integrator in Algorithm 1. This method is called PSI-stiff in the following.
For semilinear second-order equations of the form
with Hermitian, positive semidefinite matrices \(\Omega _1\in \mathbb {C}^{m\times m}\), \(\Omega _2\in \mathbb {C}^{n\times n}\) and f again Lipschitz continuous, a dynamical low-rank integrator named LRLF-semi was proposed in [12, Section 5]. It is based on the equivalent first-order formulation of the second-order problem (9), where the right-hand side is split into
The weights \(\omega _{i}\ge 0\), \(i=1,2,3\), can be chosen arbitrarily such that \(\omega _1^2 + \omega _2^2+\omega _3^2= 1\). A natural choice is \(\omega _i^2 = 1/3\). The linear subproblems can be solved exactly. Low-rank approximations to these solutions are obtained by application of the projector-splitting integrator. The nonlinear subproblem is solved approximately with a variant of the LRLF scheme, cf. [12, Algorithm 3]. Denoting the numerical flows of the linear subproblems by \(\phi ^{\Omega _1}_\tau \) and \(\phi ^{\Omega _2}_\tau \), and the numerical flow of the nonlinear subproblem as \(\phi ^\mathscr {S}_\tau \), respectively, one step of the LRLF-semi scheme reads
3 Rank adaptivity
In many applications, an appropriate rank for computing a low-rank approximation to the exact solution of (2) or (1) is not known a priori and it may also vary with time. If the rank is chosen too small, the low-rank approximation lacks accuracy. Conversely, if the rank is chosen too large, the algorithm becomes inefficient.
In the following, we develop rank-adaptive variants of the projector-splitting integrators PSI and PSI-stiff for first-order problems and for the LRLF and the LRLF-semi schemes for second-order problems.
3.1 Selecting the rank
We first discuss the projector-splitting integrator for the first-order problem (1). The general idea of our rank-adaptive strategy is to approximate the exact solution of (1) by a low-rank solution of rank \(r_k\) in the \(k\)th time step, but to propagate a solution of rank \(r_k+1\). The additional information is used as an indicator whether to accept or reject the current time step, and for selecting the rank \(r_{k+1}\) for the next time step.
Given \({{\widehat{A}}}_k= {{\widehat{U}}}_k{{\widehat{S}}}_k{{\widehat{V}}}^H_k\in \mathscr {M}_{r_k+1}\), a single step of Algorithm 1 yields the approximation \({{\widehat{A}}}_{k+1}= {{\widehat{U}}}_{k+1}{{\widehat{S}}}_{k+1}{{\widehat{V}}}^H_{k+1}\in \mathscr {M}_{r_k+1}\). We then compute the singular value decomposition of \({{\widehat{S}}}_{k+1}\),
$$\begin{aligned} {{\widehat{S}}}_{k+1} = P_{k+1} \Sigma _{k+1} Q_{k+1}^H, \quad \Sigma _{k+1} = {{\,\textrm{diag}\,}}\big ({{\widehat{\sigma }}}_1, \ldots , {{\widehat{\sigma }}}_{r_k+1}\big ), \end{aligned}$$
with \(P_{k+1},Q_{k+1}\in \mathscr {V}_{r_k+1,r_k+1}\), and \({{\widehat{\sigma }}}_1 \ge \cdots \ge {{\widehat{\sigma }}}_{r_k+1} \ge 0\) so that
$$\begin{aligned} {{\widehat{A}}}_{k+1} = \big ({{\widehat{U}}}_{k+1} P_{k+1}\big ) \Sigma _{k+1} \big ({{\widehat{V}}}_{k+1} Q_{k+1}\big )^H \end{aligned}$$
is the singular value decomposition of \({{\widehat{A}}}_{k+1}\). Given a tolerance \(\texttt{tol}\), we determine \(r_k\) such that
$$\begin{aligned} {{\widehat{\sigma }}}_{r_k} \ge \texttt{tol}> {{\widehat{\sigma }}}_{r_k+1} \end{aligned}$$(11)
by distinguishing three cases:
1. Augmentation case: If \({{\widehat{\sigma }}}_{r_k+1} \ge \texttt{tol}\), the step is rejected and recalculated with rank \(r_k+ 2\). The ranks of the initial values \({{\widehat{U}}}_k, {{\widehat{S}}}_k, {{\widehat{V}}}_k\) of the current integration step are increased by adding a zero entry to \({{\widehat{S}}}_k\),
$$\begin{aligned} S^*= \begin{bmatrix} {{\widehat{S}}}_k & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb {C}^{(r_k+2) \times (r_k+2)}. \end{aligned}$$
This choice has been motivated by [19, Section 5.2]. The matrices \({{\widehat{U}}}_k\) and \({{\widehat{V}}}_k\) are augmented by unit vectors \(u \in \mathbb {C}^m\) and \(v \in \mathbb {C}^n\) such that
$$\begin{aligned} U^*= \begin{bmatrix} {{\widehat{U}}}_k&u \end{bmatrix} \in \mathscr {V}_{m, r_k+2}, \quad V^*= \begin{bmatrix} {{\widehat{V}}}_k&v \end{bmatrix} \in \mathscr {V}_{n, r_k+2}. \end{aligned}$$
Numerical tests indicate that choosing u and v as random vectors and orthonormalizing them against \({{\widehat{U}}}_k\) and \({{\widehat{V}}}_k\) is reliable and robust, but other choices are also possible. Clearly, \(U^*S^*(V^*)^H = {{\widehat{U}}}_k{{\widehat{S}}}_k{{\widehat{V}}}_k^H = {{\widehat{A}}}_k\), thus the initial value of the current integration step has not changed. However, the numerical approximation has been enabled to evolve to rank \(r_k+2\). The step is recomputed with the new initial values \(U^*, S^*, V^*\), and it is again checked whether the new smallest singular value is sufficiently small for accepting the step. This procedure is repeated until (11) is satisfied and the step is finally accepted, see Algorithm 3 for details.
2. Reduction case: If \({{\widehat{\sigma }}}_{r_k} < \texttt{tol}\), this indicates that a sufficiently accurate approximation is available with a smaller rank. The step is accepted, but the rank for the next step is set to
$$\begin{aligned} r_{k+1}= \max \left\{ {{\,\textrm{argmin}\,}}\{ j ~|~{{\widehat{\sigma }}}_{j+1} < \texttt{tol}\}, r_k-2 \right\} , \end{aligned}$$
i.e., the rank is reduced by either 1 or 2. Thus, the rank may only decay slowly; sudden rank drops are prohibited. For the initial values in the next time step we use
$$\begin{aligned} {\widetilde{S}} = {\widetilde{I}}^T \Sigma _{k+1}{\widetilde{I}}, \quad {\widetilde{U}} = ({{\widehat{U}}}_{k+1}P_{k+1}) {\widetilde{I}}, \quad {\widetilde{V}} = ({{\widehat{V}}}_{k+1}Q_{k+1}) {\widetilde{I}}, \end{aligned}$$
where \({\widetilde{I}} = \begin{bmatrix} I_{r_{k+1}+1} \\ 0 \end{bmatrix} \in \mathbb {C}^{(r_k+1) \times (r_{k+1}+1)}\). To prevent rank oscillations, rank reduction is prohibited within the first 10 steps after an augmentation step.
3. Persistent case: If \({{\widehat{\sigma }}}_{r_k} \ge \texttt{tol}>{{\widehat{\sigma }}}_{r_k+1}\), the time step is accepted and the same rank \(r_{k+1}= r_k\) is used for the next one.
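The three cases can be condensed into a small decision routine. The following sketch uses illustrative names (the published code in [25] may be organized differently); it takes the singular values of the propagated rank-\((r_k+1)\) approximation and returns the accept/reject decision together with the rank to use:

```python
import numpy as np

def select_rank(sigma, r, tol):
    """Three-case rank selection sketch. sigma holds the r+1 singular
    values of the propagated approximation, sorted descending.
    Returns ("reject", r + 2) in the augmentation case, otherwise
    ("accept", next_rank)."""
    if sigma[r] >= tol:                 # sigma_{r+1} still significant
        return "reject", r + 2          # augmentation: redo the step
    if sigma[r - 1] < tol:              # sigma_r already negligible
        # smallest j with sigma_{j+1} < tol, but reduce by at most 2
        j = int(np.argmax(sigma < tol))
        return "accept", max(j, r - 2)  # reduction case
    return "accept", r                  # persistent case
```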
3.2 Choice of tolerance
It remains to find a suitable tolerance parameter \(\texttt{tol}\). The global error of our integrator is a combination of a time-discretization error and a low-rank approximation error. We suggest choosing the rank such that the low-rank error is of about the same size as the time-discretization error, but does not exceed the latter. If the time-discretization error is large, the low-rank error is allowed to be large as well, and hence the approximation rank may be chosen small. If the time-discretization error is small, then the approximation rank needs to be sufficiently large for an equally small low-rank error. By this procedure, we aim to ensure that the rank-adaptivity does not impair the convergence order of the integrator.
To balance the low-rank error with the time-discretization error, we need to approximate the global time-discretization error. This comprises an estimate of the local error and a simple model for the global error. While the error analyses of the projector-splitting integrator [16] and of the LRLF scheme [12, Theorem 6] show exponential error growth w.r.t. time, numerical experiments indicate that this is a far too pessimistic bound, and that piecewise linear growth in time is a more realistic model. Since the estimation of the local error induces some computational overhead, we reuse each estimate for M steps before recomputing it. To be more precise, let \(e_\ell \) be an approximation to the local error at time \(t_{\ell M + 1}\). Since we assume that the local error is constant for the next M time steps, the global error at time \(t_{\ell M + j}\) is approximated by \(E_\ell + j e_\ell \), \(j=1,\ldots ,M\), where \(E_{\ell }\) is defined recursively by
$$\begin{aligned} E_0 = 0, \qquad E_{\ell } = E_{\ell -1} + M e_{\ell -1}, \quad \ell \ge 1. \end{aligned}$$
This simple technique worked very well in numerous numerical simulations. Note that the linear model is a conservative choice in the sense that if the error growth is faster than linear (e.g., quadratic or even exponential), then we underestimate the global error which enforces the integrator to use a larger rank. We thus still compute a numerical solution where the low-rank approximation does not impair the time integration error.
The tolerance threshold \(\texttt{tol}\) is then determined heuristically by the following steps:
1. Estimation of the local error (every M steps): Starting from an approximation \({{\widehat{A}}}_{\ell M} \approx A(t_{\ell M})\), we compute an approximation \({{\widehat{A}}}_{\ell M+1}\) to \(A(t_{\ell M+1})\) by performing one integration step with step size \(\tau \) and rank r. Additionally, we perform two time steps with step size \(\frac{\tau }{2}\) and the same rank r, starting again from the initial value \({{\widehat{A}}}_{\ell M}\). By this, we obtain an alternative approximation \(\breve{A}_{\ell M+1} \approx A(t_{\ell M+1})\). Assuming that the method converges with order \(p \in \mathbb {N}\) in time, we apply Richardson extrapolation [9, Section II.4] to estimate the local error \(e_{\ell }\) at \(t_{\ell M+1}\) as
$$\begin{aligned} e_{\ell } {:}{=}\frac{2^p}{2^p -1} \Vert {{\widehat{A}}}_{\ell M +1} - \breve{A}_{\ell M +1} \Vert , \end{aligned}$$
cf. [7, Section 5]. The global error is modeled as
$$\begin{aligned} \Vert A(t_{{\ell M}+j}) - {{\widehat{A}}}_{{\ell M}+j} \Vert \approx E_{\ell } + j e_{\ell }, \quad j = 1,2,\dots ,M. \end{aligned}$$(13)
2. Estimation of the low-rank error: If \(\sigma _1 \ge \cdots \ge \sigma _n\ge 0\) are the singular values of the exact solution \(A(t_{k+1})\), then the rank-\(r_{k+1}\) best-approximation \({{\widehat{A}}}^\text {best}_{k+1}\) to \(A(t_{k+1})\) fulfills
$$\begin{aligned} \frac{\Vert A(t_{k+1}) - {{\widehat{A}}}^\text {best}_{k+1}\Vert ^2}{\Vert A(t_{k+1}) \Vert ^2}&= \frac{ \sigma _{r_{k+1}+1}^2 + \cdots + \sigma _n^2}{\sigma _1^2 + \cdots + \sigma _n^2} \le \frac{(n- r_{k+1}) \sigma _{r_{k+1}+1}^2}{\Vert {{\widehat{A}}}^\text {best}_{k+1}\Vert ^2} , \end{aligned}$$
so that
$$\begin{aligned} \Vert A(t_{k+1}) - {{\widehat{A}}}^\text {best}_{k+1}\Vert \le \sigma _{r_{k+1}+1} \frac{\Vert A(t_{k+1}) \Vert }{\Vert {{\widehat{A}}}^\text {best}_{k+1}\Vert } \sqrt{n-r_{k+1}} \approx {{\widehat{\sigma }}}_{r_{k+1}+1} \sqrt{n-r_{k+1}}. \end{aligned}$$(14)
3. Tolerance threshold: The parameter \(\texttt{tol}_k\) is set by equating the right-hand sides of (13) and (14) for \(k= \ell M + j\), \(j=0,\dots ,M-1\). This yields the condition
$$\begin{aligned} {{\widehat{\sigma }}}_{r_{k+1}+1} \le \frac{ E_{\ell } + j e_{\ell }}{ \sqrt{n- r_{k+1}} } {=}{:}\texttt{tol}_k, \quad k= \ell M + j. \end{aligned}$$(15)
4. Initial rank: An obvious choice for the initial rank for the integration is \(r_0={{\,\textrm{rank}\,}}(A_0)\). However, if the rank of \(A_0\) is very small, this may not necessarily hold for the rank of the exact solution A(t), even for small t. On the other hand, if the rank of \(A_0\) is large, this choice is also questionable. In our implementation, we first perform \(\nu \) integration steps (with \(\nu \) small, e.g., \(\nu = 5\)) with an initial rank \(r_1\) given by the user (say \(r_1=5\)). Rank reduction is disabled in this phase. Then let \(r^*\) denote the number of singular values of \({\widehat{A}}_\nu \) greater than or equal to \(\texttt{tol}_\nu \) defined in (15). If \(r^*< r_1\), we continue the integration with \(r_{\nu +1} = r^*\). Otherwise, we multiply \(r_1\) by 2 and rerun the initialization process, until \(r^*< r_1\) holds.
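Steps 1 and 3 above amount to two one-line formulas. The following sketch (illustrative names, not the authors' code) computes the Richardson estimate of the local error and the resulting threshold \(\texttt{tol}_k\) from (15):

```python
import numpy as np

def local_error_richardson(A_tau, A_half, p):
    """Richardson extrapolation estimate (order p) of the local error from
    one step of size tau (A_tau) and two steps of size tau/2 (A_half)."""
    return 2**p / (2**p - 1) * np.linalg.norm(A_tau - A_half)

def tol_threshold(E_l, e_l, j, n, r_next):
    """Threshold (15): the modeled global error E_l + j*e_l, divided by
    the low-rank error factor sqrt(n - r_next)."""
    return (E_l + j * e_l) / np.sqrt(n - r_next)
```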
3.3 Rank-adaptive algorithms
The rank-adaptive version of the projector-splitting integrator Algorithm 1 is called RAPSI (rank-adaptive projector-splitting integrator) in the following. A single step of the RAPSI scheme is given in Algorithm 4.
The rank-adaptive version of the LRLF scheme is derived by replacing the PSI routines by the RAPSI routines. We name this new integrator rank-adaptive LRLF (RALRLF) method. It is presented in Algorithm 5.
The rank-adaptive counterpart of the PSI-stiff scheme is named RAPSI-stiff. Since the linear subproblem preserves the rank, rank-adaptivity is only applied in the integration of the nonlinear subproblem (7b).
For semilinear second-order matrix differential equations of the form (9), we equip the integrator LRLF-semi with the adaptivity schemes described above. For the sake of efficiency, rank changes are only implemented in the integration of the nonlinear subproblem, even though the linear subproblems are in general not rank-preserving. Only in the case of rank augmentation in the integration of the nonlinear subproblem, the affected substeps of [12, Algorithm 3] are recomputed. This adaptive integrator is named RALRLF-semi.
4 Numerical experiments
We now report on numerical experiments for matrix differential equations resulting from space discretizations of PDEs on a rectangular domain
For simplicity, we use a uniform mesh with n grid points in x- and m grid points in y-direction, i.e.,
Errors of the low-rank solutions are measured w.r.t. numerically computed reference solutions. Details are given in the respective subsections. Since we are only interested in the time discretization error, reference solutions and low-rank solutions are computed on the same spatial grid. The computation of the tolerance threshold as explained in Sect. 3.2 is performed with \(M = 100\), i.e., every 100 steps we perform four additional steps with step size \(\frac{\tau }{2}\), so that we increase the computational effort by \(4\%\). Choosing a smaller value of M may sometimes be advantageous to reduce the global error.
All algorithms have been implemented in Python and were run on a computer with an Intel(R) Core(TM) i7-7820X @ 3.60GHz CPU and 128 GB RAM. The codes are available from [25].
4.1 Nonlinear fractional Ginzburg–Landau equation
The fractional Ginzburg–Landau equation describes a variety of physical phenomena, cf. [22, 23, 28]. Here, we consider the problem in two space dimensions [29] and with homogeneous Dirichlet boundary conditions. Discretization in space by the second-order fractional centered difference method [3] yields the stiff semilinear matrix differential equation
Here, \(D_x\) and \(D_y\) are symmetric Toeplitz matrices [18] with first rows
respectively. Further, \(\textrm{i}= \sqrt{-1}\), \(\nu ,\kappa >0\), \(\eta , \xi , \gamma \in \mathbb {R}\), \(1< \alpha ,\beta < 2\) denote given parameters, and
where \(\varGamma (\cdot )\) denotes the Gamma-function.
In [29], (18) was solved with the linearized second-order backward differential scheme (LBDF2). A fixed-rank dynamical low-rank integrator for (18) was proposed in [32], based on considerations from [24]. We compute low-rank solutions of (18) with PSI-stiff and RAPSI-stiff. The solution to the linear subproblem of (18), which is of form (8), is computed with the Krylov subspace method proposed in [18].
An efficient implementation of products \(F({{\widehat{A}}}) E\) of the right-hand side F(A) in (18) with a skinny matrix E is crucial. For the linear part, this is achieved by computing the matrix products in
successively from the right to the left. The implementation of the cubic nonlinear part of F is more involved, see “Appendix A.2”.
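The right-to-left evaluation can be sketched for a generic term of the form \(L_1 {{\widehat{A}}} E\) with \({{\widehat{A}}}\) in factored form; every intermediate product stays skinny, so the full \(m\times n\) matrix is never formed (sizes and names are illustrative):

```python
import numpy as np

# A_hat = U S Vh is kept in factored form throughout.
rng = np.random.default_rng(2)
m, n, r, q = 200, 150, 5, 5
L1 = rng.standard_normal((m, m))
U = np.linalg.qr(rng.standard_normal((m, r)))[0]
S = np.diag(rng.uniform(1.0, 2.0, r))
Vh = np.linalg.qr(rng.standard_normal((n, r)))[0].T
E = rng.standard_normal((n, q))

# Right to left: every intermediate is r-by-q or m-by-q (skinny).
cheap = L1 @ (U @ (S @ (Vh @ E)))
# Naive: forms the full m-by-n matrix A_hat before multiplying with E.
dense = (L1 @ (U @ S @ Vh)) @ E

assert np.allclose(cheap, dense)
```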
In our first experiment, we use the same parameter values as in [32], namely \(L_x = L_y = 10\), \(m = n = 512\), \(\nu = \eta = \kappa = \xi = \gamma = 1\), \(T=1\), \(\alpha = 1.2\), \(\beta = 1.9\), and the initial value
The (full-rank) reference solution is computed with LBDF2 on the same spatial grid, with time step size \(\tau = 10^{-4}\). Figure 1 shows the relative global errors
between the reference solution A and the respective low-rank solutions \({{\widehat{A}}}\) at \(t= T\) for different step sizes \(\tau \). Convergence order one is observed for the PSI-stiff scheme. For large step sizes, the approximations computed with the RAPSI-stiff method exhibit large errors. This is explained by the behavior of the singular values, cf. Fig. 1 (right picture). The time-discretization error is overestimated in this experiment, so that the tolerance threshold becomes so large that the second largest singular value is discarded. The induced low-rank error is then of magnitude \(10^{-1}\). If the parameter M is reduced to 10, this unfortunate rank reduction vanishes. However, reducing M increases the workload for the updates of \(\texttt{tol}\), while \(M=100\) worked well in all other experiments and also in this experiment for smaller step sizes.
Fractional Ginzburg–Landau equation, first experiment. The left picture shows the relative global error at \(T=1\) for \((\alpha ,\beta ) = (1.2, 1.9)\), where the fixed-rank approximation (yellow) is computed with \(r=5\). The rank-adaptive approximation was computed with \(M=100\) (orange) and \(M=10\) (blue). The trajectories of the ten largest singular values of the reference solution (gray), the singular values of RAPSI-stiff (\(M=10\)) for \(\tau = 1\cdot 10^{-3}\) (blue), and the computed tolerance threshold (red, dashed) are displayed on the right (color figure online)
Fractional Ginzburg–Landau equation, second experiment. The left picture shows the relative global error at \(T=1\) for \((\alpha ,\beta ) = (1.2, 1.9)\), where the fixed-rank approximation is computed with \(r=8\). The trajectories of the ten largest singular values of the reference solution (gray), the singular values of RAPSI-stiff for \(\tau = 10^{-3}\) (orange), and the computed tolerance threshold (red, dashed) are displayed on the right (color figure online)
For our second example, we choose the second parameter set from [32], \(L_x = L_y = 8\), \(n = m = 512\), \(\nu = \kappa = 1\), \(\eta = 0.5\), \(\xi = -5\), \(\gamma =3\), \(\alpha = 1.2\), \(\beta = 1.9\), and the initial values
for \(i=1,\ldots ,m-1\), \(j=1, \ldots , n-1\). The relative global errors at \(T=1\) are displayed in Fig. 2. In contrast to the previous experiment, order one is observed for both integrators. Now the error curves for the fixed-rank integrator and its rank-adaptive variant align almost perfectly, and the singular values follow nicely the trajectories of the singular values of the reference solution.
Similar results for both experiments were obtained for the parameter values \((\alpha , \beta ) = (1.5,1.5),\, (1.7,1.3)\), and (1.9, 1.2), cf. [25].
4.2 Nonlinear fractional Schrödinger equation
The nonlinear fractional Schrödinger equation [30] is a special case of the fractional Ginzburg–Landau equation (18) with \(\nu = \kappa = \gamma = 0\). For the limit \(\alpha , \beta \rightarrow 2\) it becomes the classical Schrödinger equation.
In our experiment, the reference solution to the problem was again computed with the LBDF2 method, using the step size \(\tau = 2 \cdot 10^{-5}\). Figure 3 shows the results for the parameter values from [30], \(L_x = L_y = 10\), \(n = m = 512\), \(\eta = 1\), \(\xi = -2\), \(T=0.2\), \(\alpha = 1.2\), \(\beta = 1.9\), and the initial value
Fractional Schrödinger equation. The left picture shows the relative global error at \(T=0.2\) for \((\alpha ,\beta ) = (1.2, 1.9)\), where the fixed-rank approximation is computed with \(r=5\). The trajectories of the ten largest singular values of the reference solution (gray), the singular values of RAPSI-stiff (\(M=100\)) for \(\tau = 4\cdot 10^{-4}\) (orange), and the computed tolerance threshold (red, dashed) are displayed on the right (color figure online)
Again, the relative global error curves match almost perfectly for both low-rank methods, and are also clearly indicating convergence of order one. The results for other choices of \(\alpha \) and \(\beta \) are available in [25].
4.3 Laser-plasma interaction
As an example for second-order problems, we consider a reduced model of laser-plasma interaction from [14, 15, 26]. It is given by a wave equation with space-dependent cubic nonlinearity on a bounded, rectangular domain \(\Omega \) given in (16) with periodic boundary conditions. After space discretization according to [26, Section 4.1.3], we obtain the matrix differential equation
with initial values \(A(0) = A_0\) and \(A'(0) = B_0\) given by
where \(x_j,y_i\) are defined in (17) and \(i=1,\ldots ,m,~j = 1,\ldots ,n\). The discrete Laplacian \(\mathscr {L}\) acts on A(t) via
where \(D_x \in \mathbb {R}^{n \times n}\) denotes the symmetric Toeplitz matrix with first row
and
\(\mathscr {F}_{m}\) denotes the discrete Fourier transformation operator for m Fourier modes and \(\mathscr {F}^{-1}_{m}\) its inverse. Hence we use fourth order finite differences with n equidistant grid points in transversal direction and a pseudospectral method with m equidistant grid points in longitudinal direction.
Equation (19) describes the propagation of a laser pulse with wavelength \(\lambda _0\) in the direction of the positive y-axis through vacuum and through a strongly localized plasma barrier. The plasma is located between \(y = 50 \pi \) and \(y= 300\pi \) and has constant density 0.3. The localization is modeled by the matrix \({\widetilde{\chi }} \in \mathbb {R}^{m \times n}\) with entries
The interaction between the pulse and the plasma is modeled by a cubic nonlinearity. As in [15] we use the parameters \(\lambda _0=\pi \), \(l_0=10 \pi \), \(w_0=100\pi \), \(L_x = 300 \pi \), and \(L_y = 600 \pi \).
Our numerical experiments were carried out for \(t \in [0, 600 \pi ]\) with \(n = 1024\) and \(m = 8192\) discretization points in transversal and longitudinal direction, respectively. The reference solution was computed with the Gautschi-type method from [26], with step size \(\tau _0 = L_y/(80 m)\). For the low-rank solutions we used the step sizes \(\tau = 2^k \tau _0\), \(k=2,\ldots ,6\). The algorithms LRLF and LRLF-semi were performed with fixed ranks \({r_A}= {r_B}=4\).
In [12] it was observed that choosing the weights \(\omega _i\) in LRLF-semi according to the direction of motion can improve the approximation significantly. Since the laser pulse moves mainly in longitudinal direction, we therefore used the weights
in (10) for both LRLF-semi and RALRLF-semi.
Laser-plasma interaction. Relative global error between reference solution and low-rank approximations at \(T=600 \pi \) (left), and absolute error in the maximal intensity for \(\tau = 4 \tau _0 = L_y/(20m)\) (right). The fixed-rank methods were computed with \({r_A}={r_B}=4\), the rank-adaptive methods with \(M=100\)
Laser-plasma interaction. Trajectories of the ten largest singular values of the reference solution (gray) together with the trajectories of the singular values of the rank-adaptive low-rank integrators for \(\tau = 4 \tau _0\) (left: RALRLF, right: RALRLF-semi), and the respective computed tolerance thresholds (red, dashed) (color figure online)
The left picture in Fig. 4 shows the relative global error at \(T= 600 \pi \) between the reference solution and the different low-rank integrators. Second-order convergence is observed in all cases, and the integrators designed for semilinear problems yield better approximations than the other methods. The accuracy of the rank-adaptive schemes is comparable to those of the fixed-rank integrators, showing nicely that the heuristics works well for this example, cf. Fig. 5.
In physics, the maximal intensity \(\max _{i,j} | A_{ij}(t) |^2\) of the propagating pulse over time is sometimes of higher interest than A itself. In the right picture of Fig. 4, the absolute error between the maximal intensity of the numerical pulse computed with the Gautschi-type integrator and the maximal intensity of the approximations obtained by the low-rank integrators is displayed.
4.4 Sine-Gordon equation
In our last experiment, we consider the two-dimensional sine-Gordon equation on the domain \(\Omega \) from (16) with homogeneous Neumann boundary conditions [1]. Using second-order finite differences on the grid (17) with \(L_x = L_y = 7\) and \(n = m = 1001\) points in the x- and y-directions, we obtain the semi-discretized matrix differential equation
Here, \({{\,\mathrm{\underline{\sin }}\,}}(A)\) denotes the entrywise evaluation of the sine function. The matrix D is given by
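A common construction of such a matrix D is the standard second-order central-difference stencil with homogeneous Neumann conditions imposed via ghost points; the following sketch is illustrative only, and the exact scaling of the boundary rows used in the paper may differ.

```python
import numpy as np

def neumann_laplacian_1d(n, h):
    """Second-order central-difference approximation of d^2/dx^2 with
    homogeneous Neumann boundary conditions on an n-point grid with
    spacing h, using the ghost-point reflections u_{-1} = u_1 and
    u_n = u_{n-2}."""
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, 1] = 2.0     # Neumann condition at the left boundary
    D[-1, -2] = 2.0   # Neumann condition at the right boundary
    return D / h**2
```

Constant grid functions lie in the kernel of this matrix, consistent with the Neumann conditions; in the two-dimensional semi-discretization, D acts on the rows and its transpose on the columns of A.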
The storage-economical evaluation of the products \({{\,\mathrm{\underline{\sin }}\,}}({{\widehat{A}}})E\) and \({{\,\mathrm{\underline{\sin }}\,}}({{\widehat{A}}})^H E\) is presented in “Appendix A”. There is no preferred direction of propagation, so that we used the weights
in (10) for the LRLF-semi and RALRLF-semi methods.
First we consider the initial values
and
\(i,j=0,\ldots ,m\), of a line soliton in an inhomogeneous medium [2, Section 3.1.3]. The reference solution is computed by the leapfrog scheme on the same spatial grid with time step size \(\tau = 2.5 \cdot 10^{-5}\). Figure 6 shows the relative global errors between the low-rank approximations and the reference solution. Convergence of order two is observed for all methods. The fixed-rank integrators are slightly more accurate than their rank-adaptive counterparts, probably because they use a higher rank.
Sine-Gordon equation, first setting. Relative global error at \(T=9\) between reference solution and low-rank approximations (left), and rank evolution of the solutions computed with the rank-adaptive integrators over time (right) for \(\tau = 10^{-4}\). For the fixed-rank integrators, we used \({r_A}= {r_B}=20\), for the rank-adaptive methods \(M=100\) (color figure online)
Sine-Gordon equation, second setting. Relative global error at \(T=11\) between reference solution and low-rank approximations (left), and rank evolution of the solutions computed with the rank-adaptive integrators over time (right) for \(\tau = 10^{-4}\). For the fixed-rank integrators, we used \({r_A}= {r_B}=50\), for the rank-adaptive methods \(M=100\) (color figure online)
In a second setting, we consider the symmetric perturbation of a static line soliton [2, Section 3.1.2] with \(\Phi _{ij}= 1\), \((B_0)_{ij} = 0\), and
Figure 7 shows a similar behavior of the methods as in the first setting. For the \(\textsc {RALRLF}\) scheme, however, the initial rank is rather large and drops significantly after a few steps. This is caused by the routine for determining an appropriate initial rank. As explained in Sect. 3.2, the initial guess \(r_1=5\) is doubled repeatedly until the criterion for continuing the integration beyond the first \(\nu \) steps is satisfied. In this experiment, an initial rank of about 23 is adequate. Hence, the guesses 5, 10, and 20 are rejected, \(r_1 = 40\) is accepted, and rank reduction applies in the subsequent integration steps.
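The doubling heuristic just described can be sketched as follows; the function `accept` is a placeholder for the problem-dependent acceptance criterion of Sect. 3.2, and the cap `r_max` is an assumption added here for safety.

```python
def choose_initial_rank(accept, r0=5, r_max=100):
    """Sketch of the initial-rank heuristic of Sect. 3.2: the guess r0
    is doubled repeatedly until the criterion for continuing the
    integration beyond the first few steps is satisfied, or until the
    maximal rank r_max is reached."""
    r = r0
    while not accept(r) and r < r_max:
        r = min(2 * r, r_max)
    return r
```

With a criterion that is first met at rank 23, this reproduces the behavior reported above: 5, 10, and 20 are rejected and 40 is accepted.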
5 Conclusion and outlook
In the present paper, we developed rank-adaptive schemes for dynamical low-rank integrators for first- and second-order matrix differential equations. The performance of these schemes has been illustrated by numerical experiments.
Both the projector-splitting integrator and the unconventional robust integrator have been successfully adapted to first-order tensor differential equations, cf. [6, 20, 21] and references therein. We are confident that the strategy for the adaptive algorithm for matrix differential equations also works for the tensor case. This is part of ongoing research and will be reported in the future.
References
Bratsos, A.G.: An explicit numerical scheme for the sine-Gordon equation in \(2+1\) dimensions. Appl. Numer. Anal. Comput. Math. 2(2), 189–211 (2005). https://doi.org/10.1002/anac.200410035
Bratsos, A.G.: The solution of the two-dimensional sine-Gordon equation using the method of lines. J. Comput. Appl. Math. 206(1), 251–277 (2007). https://doi.org/10.1016/j.cam.2006.07.002
Çelik, C., Duman, M.: Crank-Nicolson method for the fractional diffusion equation with the Riesz fractional derivative. J. Comput. Phys. 231(4), 1743–1750 (2012). https://doi.org/10.1016/j.jcp.2011.11.008
Ceruti, G., Kusch, J., Lubich, C.: A rank-adaptive robust integrator for dynamical low-rank approximation. BIT 62, 1149–1174 (2022). https://doi.org/10.1007/s10543-021-00907-7
Ceruti, G., Lubich, C.: An unconventional robust integrator for dynamical low-rank approximation. BIT 62, 23–44 (2022). https://doi.org/10.1007/s10543-021-00873-0
Ceruti, G., Lubich, C., Walach, H.: Time integration of tree tensor networks. SIAM J. Numer. Anal. 59(1), 289–313 (2021). https://doi.org/10.1137/20M1321838
Constantinescu, E.M.: Generalizing global error estimation for ordinary differential equations by using coupled time-stepping methods. J. Comput. Appl. Math. 332, 140–158 (2018). https://doi.org/10.1016/j.cam.2017.05.012
Dektor, A., Rodgers, A., Venturi, D.: Rank-adaptive tensor methods for high-dimensional nonlinear PDEs. J. Sci. Comput. 88(2), 27 (2021). https://doi.org/10.1007/s10915-021-01539-3
Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems. Springer Series in Computational Mathematics, vol. 8, 2nd edn. Springer, Berlin (1993). https://doi.org/10.1007/978-3-540-78862-1
Hauck, C., Schnake, S.: A Predictor-Corrector Strategy for Adaptivity in Dynamical Low-Rank Approximations. https://arxiv.org/abs/2209.00550 (2022)
Hesthaven, J.S., Pagliantini, C., Ripamonti, N.: Rank-adaptive structure-preserving model order reduction of Hamiltonian systems. ESAIM Math. Model. Numer. Anal. 56(2), 617–650 (2022). https://doi.org/10.1051/m2an/2022013
Hochbruck, M., Neher, M., Schrammer, S.: Dynamical low-rank integrators for second-order matrix differential equations. CRC 1173 Preprint 2022/12, Karlsruhe Institute of Technology. https://doi.org/10.5445/IR/1000143003. https://www.waves.kit.edu/downloads/CRC1173_Preprint_2022-12.pdf (2022)
Hu, J., Wang, Y.: An adaptive dynamical low rank method for the nonlinear Boltzmann equation. J. Sci. Comput. 92(2), 24 (2022). https://doi.org/10.1007/s10915-022-01934-4
Karle, C., Schweitzer, J., Hochbruck, M., Laedke, E.W., Spatschek, K.H.: Numerical solution of nonlinear wave equations in stratified dispersive media. J. Comput. Phys. 216(1), 138–152 (2006). https://doi.org/10.1016/j.jcp.2005.11.024
Karle, C., Schweitzer, J., Hochbruck, M., Spatschek, K.H.: A parallel implementation of a two-dimensional fluid laser-plasma integrator for stratified plasma-vacuum systems. J. Comput. Phys. 227(16), 7701–7719 (2008). https://doi.org/10.1016/j.jcp.2008.04.024
Kieri, E., Lubich, C., Walach, H.: Discretized dynamical low-rank approximation in the presence of small singular values. SIAM J. Numer. Anal. 54(2), 1020–1038 (2016). https://doi.org/10.1137/15M1026791
Koch, O., Lubich, C.: Dynamical low-rank approximation. SIAM J. Matrix Anal. Appl. 29(2), 434–454 (2007). https://doi.org/10.1137/050639703
Lee, S.T., Pang, H.K., Sun, H.W.: Shift-invert Arnoldi approximation to the Toeplitz matrix exponential. SIAM J. Sci. Comput. 32(2), 774–792 (2010). https://doi.org/10.1137/090758064
Lubich, C., Oseledets, I.V.: A projector-splitting integrator for dynamical low-rank approximation. BIT 54(1), 171–188 (2014). https://doi.org/10.1007/s10543-013-0454-0
Lubich, C., Oseledets, I.V., Vandereycken, B.: Time integration of tensor trains. SIAM J. Numer. Anal. 53(2), 917–941 (2015). https://doi.org/10.1137/140976546
Lubich, C., Vandereycken, B., Walach, H.: Time integration of rank-constrained Tucker tensors. SIAM J. Numer. Anal. 56(3), 1273–1290 (2018). https://doi.org/10.1137/17M1146889
Milovanov, A., Rasmussen, J.J.: Fractional generalization of the Ginzburg–Landau equation: an unconventional approach to critical phenomena in complex media. Phys. Lett. A 337(1), 75–80 (2005). https://doi.org/10.1016/j.physleta.2005.01.047
Mvogo, A., Tambue, A., Ben-Bolie, G.H., Kofané, T.C.: Localized numerical impulse solutions in diffuse neural networks modeled by the complex fractional Ginzburg-Landau equation. Commun. Nonlinear Sci. Numer. Simul. 39, 396–410 (2016). https://doi.org/10.1016/j.cnsns.2016.03.008
Ostermann, A., Piazzola, C., Walach, H.: Convergence of a low-rank Lie–Trotter splitting for stiff matrix differential equations. SIAM J. Numer. Anal. 57(4), 1947–1966 (2019). https://doi.org/10.1137/18M1177901
Schrammer, S.: Codes for numerical experiments (2022). https://doi.org/10.5445/IR/1000153539
Schweitzer, J.: Numerical Simulation of Relativistic Laser–Plasma Interaction. PhD thesis, Heinrich Heine University Düsseldorf. https://docserv.uni-duesseldorf.de/servlets/DocumentServlet?id=8401 (2008)
Styan, G.P.H.: Hadamard products and multivariate statistical analysis. Linear Algebra Appl. 6, 217–240 (1973). https://doi.org/10.1016/0024-3795(73)90023-2
Tarasov, V.: Psi-series solution of fractional Ginzburg–Landau equation. J. Phys. A 39, 8395–8407 (2006)
Zhang, Q., Lin, X., Pan, K., Ren, Y.: Linearized ADI schemes for two-dimensional space-fractional nonlinear Ginzburg–Landau equation. Comput. Math. Appl. 80(5), 1201–1220 (2020). https://doi.org/10.1016/j.camwa.2020.05.027
Zhao, X., Sun, Z., Hao, Z.: A fourth-order compact ADI scheme for two-dimensional nonlinear space fractional Schrödinger equation. SIAM J. Sci. Comput. 36(6), A2865–A2886 (2014). https://doi.org/10.1137/140961560
Zhao, Y.L., Gu, X.M.: An adaptive low-rank splitting approach for the extended Fisher–Kolmogorov equation. https://arxiv.org/abs/2210.15900 (2022)
Zhao, Y.L., Ostermann, A., Gu, X.M.: A low-rank Lie–Trotter splitting approach for nonlinear fractional complex Ginzburg–Landau equations. J. Comput. Phys. 446, 12 (2021). https://doi.org/10.1016/j.jcp.2021.110652
Acknowledgements
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 258734477—SFB 1173. We thank the anonymous referees for their helpful comments on an earlier version of this paper.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Communicated by Antonella Zanna Munthe-Kaas.
A Evaluation of entrywise functions for low-rank matrices
Let f be a (nonlinear) function that acts entrywise on matrices, let \( E \in \mathbb {C}^{n\times r}\) be an arbitrary matrix, and let \({{\widehat{A}}}= {{\widehat{U}}}{{\widehat{S}}}{{\widehat{V}}}^H \in \mathscr {M}_r\) be a low-rank matrix with factors \({{\widehat{U}}},{{\widehat{S}}},{{\widehat{V}}}\) given in (4). For the efficiency of low-rank integrators for nonlinear problems, it is crucial to evaluate the product
using the factors \({{\widehat{U}}}\), \({{\widehat{S}}}\), and \({{\widehat{V}}}\) instead of the full matrix \({{\widehat{A}}}\). In general, however, the entrywise computation of all components \({{\widehat{A}}}_{ij}\) of \({{\widehat{A}}}\) cannot be avoided. Nevertheless, it is not necessary to store the full matrix \({{\widehat{A}}}\in \mathbb {C}^{m\times n}\); it suffices to compute the rows of \({{\widehat{A}}}\) successively.
1.1 A.1 General functions
The ith row of \({{\widehat{A}}}\) is given by
where \(\widetilde{U}= {{\widehat{U}}}{{\widehat{S}}}\). The matrix products \((e_i \widetilde{U}) {{\widehat{V}}}^H\) are carried out with complexity \(\mathscr {O}(nr)\). We now evaluate f entrywise at \(e_i {{\widehat{A}}}\) and multiply the result with E, which gives the ith row of the product (21). This sequence of operations can be performed for each \(i = 1, \ldots ,m\) independently and thus in parallel. This allows for a fast computation, even for large \(m\). Similarly, the evaluation of
is realized by computing the jth row via
This approach is suitable for any function f acting entrywise on its argument, e.g., trigonometric functions. For monomials, the successive computation of the entries of \({{\widehat{A}}}\) can be avoided, which speeds up the computation further.
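A minimal NumPy sketch of this row-wise procedure (the function name is illustrative, and the sequential loop stands in for the parallel execution described above):

```python
import numpy as np

def rowwise_apply(f, U, S, V, E):
    """Compute f(A_hat) @ E for A_hat = U @ S @ V^H without ever
    storing A_hat: each row of A_hat is formed from the factors at
    cost O(n r), evaluated entrywise by f, and multiplied with E."""
    Ut = U @ S                        # m x r
    out = np.empty((Ut.shape[0], E.shape[1]),
                   dtype=np.result_type(Ut, V, E))
    for i in range(Ut.shape[0]):      # rows are independent -> parallelizable
        row = Ut[i, :] @ V.conj().T   # i-th row of A_hat, length n
        out[i, :] = f(row) @ E        # i-th row of f(A_hat) @ E
    return out
```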
1.2 A.2 Monomials
Consider a (complex) monomial of degree \(p \in \mathbb {N}\), \(f(z) = z^p\) for \(z \in \mathbb {C}\). The entrywise evaluation of f at \({{\widehat{A}}}\in \mathscr {M}_r\) is defined as \(\big (f({{\widehat{A}}})\big )_{ij} = {{\widehat{A}}}^p_{ij}\).
We now show how to compute the product (21) without computing or storing the elements of \({{\widehat{A}}}\) explicitly. Let \(\widetilde{U}= {{\widehat{U}}}{{\widehat{S}}}\in \mathbb {C}^{m\times r}\). Then we have
where \(\widetilde{U}_j\) is the jth column of \(\widetilde{U}\) and \({{\widehat{V}}}_j\) the jth column of \({{\widehat{V}}}\), respectively. The Hadamard product is distributive and satisfies
for \(1 \le j,k \le r\), cf. [27, Section 2]. Hence, for \(E \in \mathbb {C}^{n\times r}\), we have
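This rank-one Hadamard identity, \((u_1 v_1^H) \bullet (u_2 v_2^H) = (u_1 \bullet u_2)(v_1 \bullet v_2)^H\), is easy to verify numerically:

```python
import numpy as np

# Check (u1 v1^H) .* (u2 v2^H) = (u1 .* u2)(v1 .* v2)^H
# entrywise, for random complex vectors.
rng = np.random.default_rng(1)
u1, u2 = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
v1, v2 = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
lhs = np.outer(u1, v1.conj()) * np.outer(u2, v2.conj())
rhs = np.outer(u1 * u2, (v1 * v2).conj())
assert np.allclose(lhs, rhs)
```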
For the special case of the cubic nonlinearity which appears in the examples in Sects. 4.1, 4.2, and 4.3,
where \({\overline{{{\widehat{A}}}}}\) denotes the complex conjugate of \({{\widehat{A}}}\), (23) reads
The computational cost is further reduced by exploiting the symmetry in j and \(\ell \),
The product \(({{\widehat{A}}}\bullet {\overline{{{\widehat{A}}}}} \bullet {{\widehat{A}}})^H E\) with \(E \in \mathbb {C}^{m\times r}\) can be computed analogously.
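A NumPy sketch of this expansion for the cubic term; for clarity, the symmetry reduction in j and \(\ell \) mentioned above is omitted, so all \(r^3\) triples are summed.

```python
import numpy as np

def cubic_product(U, S, V, E):
    """Compute (A .* conj(A) .* A) @ E from the factors of
    A = U S V^H by expanding A into its r rank-one terms and applying
    the rank-one Hadamard identity to each index triple (j, k, l)."""
    Ut = U @ S
    r = Ut.shape[1]
    out = np.zeros((Ut.shape[0], E.shape[1]), dtype=complex)
    for j in range(r):
        for k in range(r):
            for l in range(r):
                u = Ut[:, j] * Ut[:, k].conj() * Ut[:, l]  # length m
                v = V[:, j] * V[:, k].conj() * V[:, l]     # length n
                out += np.outer(u, v.conj() @ E)           # u (v^H E)
    return out
```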
Additional simplifications apply to the product \(({\widetilde{\chi }} \bullet {{\widehat{A}}}\bullet {\overline{{{\widehat{A}}}}} \bullet {{\widehat{A}}}) E\) with \({\widetilde{\chi }} \in \mathbb {C}^{m\times n}\) given in (20). It satisfies
where \(0_n\), \(\mathbb {1}_n\) are the vectors of length n filled with zeros and ones, respectively, and \(0_{n\times p}\) and \(\mathbb {1}_{n \times p}\) the matrices of dimension \(n \times p\) with all entries being zeros and ones, respectively. From (22) and (25), we obtain
where \(\vartheta (\widetilde{U}_j)\) denotes the restriction of \(\widetilde{U}_j\) to its \(\eta \)th to \(\xi \)th entries. Here, we made use of the following property of the Hadamard product, cf. [27, Section 2]: If \(A \in \mathbb {C}^{m\times n}\), \(x \in \mathbb {R}^m\), and \(y \in \mathbb {R}^n\), then
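The property referred to is presumably the standard identity \((x y^T) \bullet A = D_x A \, D_y\) with the diagonal matrices \(D_x = \textrm{diag}(x)\) and \(D_y = \textrm{diag}(y)\); a quick numerical check:

```python
import numpy as np

# Check (x y^T) .* A = diag(x) @ A @ diag(y) for a random complex
# matrix A and real vectors x, y.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
x, y = rng.standard_normal(4), rng.standard_normal(3)
assert np.allclose(np.outer(x, y) * A, np.diag(x) @ A @ np.diag(y))
```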
Hence, it suffices to compute and sum up the small matrices \(\vartheta (\widetilde{U}_j) ({{\widehat{V}}}^H_jE)\). Likewise, using (24), we have
The implementation of \(({\widetilde{\chi }} \bullet {{\widehat{A}}}\bullet \overline{\!{{\widehat{A}}}} \bullet {{\widehat{A}}})^H E\), \( E \in \mathbb {C}^{m\times r}\), is realized in the same manner.
Hochbruck, M., Neher, M., Schrammer, S.: Rank-adaptive dynamical low-rank integrators for first-order and second-order matrix differential equations. BIT Numer. Math. 63, 9 (2023). https://doi.org/10.1007/s10543-023-00942-6