Published by De Gruyter, May 31, 2017

Spectral Analysis, Properties and Nonsingular Preconditioners for Singular Saddle Point Problems

Na Huang, Chang-Feng Ma and Jun Zou

Abstract

We first derive some explicit bounds on the spectra of generalized non-symmetric singular or nonsingular saddle point matrices. Then we propose two new nonsingular preconditioners for solving generalized singular saddle point problems, and show that GMRES determines a solution without breakdown when applied to the resulting preconditioned systems with any initial guess. Furthermore, the detailed spectral properties of the preconditioned systems are analyzed. The nonsingular preconditioners are also applied to solve the singular finite element saddle point systems arising from the discretization of the Stokes problems to test their performance.

MSC 2010: 65F10; 65F50

1 Introduction

In this work we shall mainly study the spectral behavior of, and preconditioners for, generalized singular or nonsingular saddle point problems of the form

(1.1) $\begin{pmatrix} A & B^T \\ -B & C \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix},$

where $A \in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, $C \in \mathbb{R}^{m\times m}$ is a symmetric positive semi-definite matrix, and $B$ is a general matrix in $\mathbb{R}^{m\times n}$, with $m \le n$ (possibly $m \ll n$); $f \in \mathbb{R}^n$ and $g \in \mathbb{R}^m$ are two given vectors.

Systems of the form (1.1) arise from a variety of scientific and engineering applications, such as computational fluid dynamics, optimal control problems, constrained optimizations, weighted least-squares formulations, mixed or hybrid finite element approximations of partial differential equations, electronic networks and computer graphics, boundary element discretization based on domain decomposition, and so on; see, e.g., [14, 11, 29, 20, 23, 21, 10, 22, 6] and the references therein.

Although the linear system (1.1) can be easily transformed into an equivalent symmetric linear system, the non-symmetric coefficient matrix in (1.1) is semi-positive real and positive semi-stable (see [5] for more details). So the use of the non-symmetric form (1.1) can be advantageous with certain Krylov subspace methods, like restarted GMRES [5], and this has motivated many studies on the numerical solution of generalized saddle point systems of the form (1.1); see, e.g., [18, 7, 8, 2, 5, 3, 31, 32, 36]. This will also be the central focus of the current work: we shall mainly study the spectral properties of the generalized non-symmetric saddle point system (1.1), especially for the case that it is singular, propose some effective nonsingular preconditioners for the singular systems, and analyze the spectral behavior of the preconditioned systems.

When the generalized saddle point systems of the form (1.1) are large and sparse, iterative methods appear more attractive than direct methods. However, most existing iterative methods, including Krylov subspace methods, converge quite slowly, or even fail to converge, if they are not effectively preconditioned. Therefore, great attention has been paid in the last decade to preconditioned iterative methods and the analysis of their convergence behavior; see, e.g., [16, 4, 30, 17, 25, 26, 27]. Many preconditioners have been proposed and studied for saddle point systems of the form (1.1), such as symmetric indefinite preconditioners [7, 8], inexact constraint preconditioners [8, 28, 3, 36], block triangular preconditioners [13, 34, 21, 33], shift-splitting preconditioners [5, 15, 35] and primal-based penalty preconditioners [19]. In particular, when the saddle point system (1.1) is generated by a special discretization, e.g., finite element or boundary element methods based on non-overlapping domain decomposition, effective preconditioners can be constructed by making full use of the special structure of the discretization [14, 9]. But we emphasize that most of the above studies were done for the cases when the saddle point systems (1.1) are nonsingular, and few analytical results on preconditioning effects are available for the cases when the saddle point systems (1.1) are singular, which have wide applications in many areas as mentioned earlier. This will be the central task of our current work, and several new analysis tools and techniques are introduced for this purpose.

As is widely known, an effective and efficient preconditioner is expected to yield clustered eigenvalues for the resulting preconditioned system. To see such clustering behavior, one needs careful and sharp estimates of the spectral bounds for the preconditioned system. For non-symmetric generalized saddle point matrices of the form

(1.2) $\mathcal{A} = \begin{pmatrix} A & B^T \\ -B & C \end{pmatrix}$

in (1.1), there are already quite a few investigations available in the literature; see [7, 8, 32, 34, 2, 3, 24]. Interesting estimates were established in [7] on spectral bounds of the complex eigenvalues of $\mathcal{A}$, and further improved in [32] from the previous lower bound $(\rho_1+\mu_1)/2$ to the lower bound $\max\{(\rho_1+\mu_1)/2,\ \mu_1-\sqrt{\delta_m}\}$, where $\rho_1$ and $\mu_1$ are the minimum eigenvalues of $A$ and $C$, respectively, and $\delta_m$ is the maximum eigenvalue of $BB^T$. But the matrix $C$ is often singular or even vanishes in many applications, in which case $\mu_1 = 0$ and the improved estimates of [32] reduce to the original ones in [7]. The first results of this work improve the estimates in [32] for the important cases when the matrix $C$ is singular or zero. With the help of those improved results, we derive some explicit and sharp bounds on the spectra of the generalized non-symmetric singular saddle point matrices of the form (1.1). Then we propose new nonsingular preconditioners for solving the generalized singular saddle point problems, and establish the spectral bounds of the preconditioned systems.

This work is organized as follows. We first present some improved estimates of the spectra for the generalized singular or nonsingular saddle point matrix $\mathcal{A}$ in Section 2, and then study more specific spectral properties of singular matrices $\mathcal{A}$ in Section 3. We propose a new nonsingular preconditioner for the singular saddle point system (1.1) in Section 4, and derive the spectral bounds of the preconditioned systems in Section 5. Finally, in Section 6, the nonsingular preconditioners are applied to solve the singular finite element saddle point systems arising from the discretization of the Stokes problems to demonstrate their effectiveness and efficiency.

Notation.

We introduce some notation that will be used in the subsequent analysis. For any matrix $H \in \mathbb{R}^{l\times l}$, we shall often write its inverse, transpose, minimum and maximum eigenvalues, rank and null space as $H^{-1}$, $H^T$, $\lambda_{\min}[H]$, $\lambda_{\max}[H]$, $\mathrm{rank}[H]$ and $\mathrm{null}[H]$, respectively. Moreover, we write $H_1 \sim H_2$ if $H_1$ is similar to $H_2$ (i.e., there exists an invertible matrix $T \in \mathbb{R}^{l\times l}$ such that $H_1 = TH_2T^{-1}$), and $H > 0$ (resp. $H \ge 0$) if $H$ is symmetric positive definite (resp. positive semi-definite). In addition, we use $I_l$ to denote the identity matrix of order $l$, and $\mathrm{Re}(\lambda)$ and $\mathrm{Im}(\lambda)$ the real and imaginary parts of any $\lambda \in \mathbb{C}$. And we shall denote the eigenvalues of the matrices $A$, $C$, $BB^T$ and $S$ respectively by $\rho_1 \le \rho_2 \le \dots \le \rho_n$, $\mu_1 \le \mu_2 \le \dots \le \mu_m$, $\delta_1 \le \delta_2 \le \dots \le \delta_m$, and $\zeta_1 \le \zeta_2 \le \dots \le \zeta_m$, where $S = C + BA^{-1}B^T$ is the Schur complement matrix.

2 Estimates of Complex Eigenvalues of Singular or Nonsingular Saddle Point Matrices

In this section, we improve the following estimates from [32] on the eigenvalues of the matrix $\mathcal{A}$, which will be important to our subsequent studies of the spectral distributions of $\mathcal{A}$ and its preconditioned system.

Theorem 2.1 ([32]).

Let $\mathcal{A}$ be the saddle point matrix of form (1.2), where $A$ is symmetric positive definite and $C$ is symmetric positive semi-definite. Then any complex eigenvalue $\lambda$ of $\mathcal{A}$ with $\mathrm{Im}(\lambda) \ne 0$ satisfies the following estimates:

(2.1) $\max\left\{\tfrac{1}{2}(\rho_1+\mu_1),\ \mu_1-\sqrt{\delta_m}\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}(\rho_n+\mu_m),\ \mu_m+\sqrt{\delta_m}\right\},$
(2.2) $|\mathrm{Im}(\lambda)| \le \begin{cases}\sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\mu_m)^2}{4}} & \text{if } \mu_m < \tfrac{1}{2}(\rho_1+\mu_1),\\[1ex] \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\mu_1)^2}{4}} & \text{if } \mu_1 > \tfrac{1}{2}(\rho_n+\mu_m),\\[1ex] \sqrt{\delta_m} & \text{otherwise}.\end{cases}$

It is often encountered in many applications that the matrix $C$ is singular. In this case, Theorem 2.1 implies that the lower bound of the real part of any complex eigenvalue of $\mathcal{A}$ reduces to $(\rho_1+\mu_1)/2$, which deteriorates to the same result as in [7]. In the next theorem we present some estimates that improve the ones in Theorem 2.1 for the important case that the matrix $C$ is singular. There are no existing analytical tools that apply to this singular case. Instead, we choose to consider the eigensystem of the saddle point equation (1.1) directly: we solve the variable $x$ in terms of the variable $y$ and substitute it into the second equation of the eigensystem. Then we separate the real and imaginary parts of the quadratic form of the resulting equation, and achieve the desired estimates by using the important relation between the real and imaginary parts.

Theorem 2.2.

Let $\mathcal{A}$ be the saddle point matrix of form (1.2), where $A$ is symmetric positive definite and $C$ is symmetric positive semi-definite. Then any complex eigenvalue $\lambda$ of $\mathcal{A}$ with $\mathrm{Im}(\lambda) \ne 0$ satisfies the following estimates:

$\max\left\{\tfrac{1}{2}(\rho_1+\mu_1),\ \rho_1-\sqrt{\delta_m}\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}(\rho_n+\mu_m),\ \rho_n+\sqrt{\delta_m}\right\},$
$|\mathrm{Im}(\lambda)| \le \begin{cases}\sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\rho_n)^2}{4}} & \text{if } \rho_n < \tfrac{1}{2}(\rho_1+\mu_1),\\[1ex] \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\rho_1)^2}{4}} & \text{if } \rho_1 > \tfrac{1}{2}(\rho_n+\mu_m),\\[1ex] \sqrt{\delta_m} & \text{otherwise}.\end{cases}$

Proof.

Let $\lambda = a+b\mathrm{i}$ with $a \in \mathbb{R}$, $0 \ne b \in \mathbb{R}$ and $\mathrm{i} = \sqrt{-1}$, and let $z = (x^T, y^T)^T$ be an eigenvector corresponding to $\lambda$. Then $\mathcal{A}z = \lambda z$, or in other words,

(2.3) $Ax+B^Ty = \lambda x, \qquad -Bx+Cy = \lambda y.$

Clearly, $y \notin \mathrm{null}[B^T]$. Otherwise, (2.3) shows $Ax = \lambda x$, so $\lambda$ is an eigenvalue of $A$ if $x \ne 0$; and if $x = 0$, then $y \ne 0$ and the second equality in (2.3) leads to $Cy = \lambda y$. Therefore $\lambda \in \mathbb{R}$ whenever $y \in \mathrm{null}[B^T]$, which contradicts the fact that $b \ne 0$. By the first equality in (2.3), we have

$x = (\lambda I-A)^{-1}B^Ty.$

Together with the second equality in (2.3), this implies

(2.4) $-B(\lambda I-A)^{-1}B^Ty + Cy = \lambda y.$

Let $A = PDP^T$ be the eigenvalue decomposition of $A$, where $P \in \mathbb{R}^{n\times n}$ is an orthogonal matrix and $D = \mathrm{diag}(\rho_1,\rho_2,\dots,\rho_n)$. Then (2.4) results in

$-BP(\lambda I-D)^{-1}P^TB^Ty + Cy = \lambda y.$

Multiplying the above equality from the left by $y^*$, the conjugate transpose of $y$, leads to

(2.5) $-y^*BP(\lambda I-D)^{-1}P^TB^Ty + y^*Cy = \lambda\,y^*y.$

For any $1 \le j \le n$, we have

$(\lambda-\rho_j)^{-1} = \frac{1}{a+b\mathrm{i}-\rho_j} = \frac{a-\rho_j-b\mathrm{i}}{(a-\rho_j)^2+b^2}.$

This leads to

$(\lambda I-D)^{-1} = \mathrm{diag}\left(\frac{a-\rho_1-b\mathrm{i}}{(a-\rho_1)^2+b^2},\dots,\frac{a-\rho_n-b\mathrm{i}}{(a-\rho_n)^2+b^2}\right).$

Substituting this into (2.5) and separating the real and imaginary parts (the imaginary part being divided by $b \ne 0$) leads to

$-y^*BP\,\mathrm{diag}\left(\frac{a-\rho_1}{(a-\rho_1)^2+b^2},\dots,\frac{a-\rho_n}{(a-\rho_n)^2+b^2}\right)P^TB^Ty + y^*Cy = a\,y^*y$

and

(2.6) $y^*BP\,\mathrm{diag}\left(\frac{1}{(a-\rho_1)^2+b^2},\dots,\frac{1}{(a-\rho_n)^2+b^2}\right)P^TB^Ty = y^*y.$

Then we have

$a\,y^*y \le (\rho_n-a)\,y^*BP\,\mathrm{diag}\left(\frac{1}{(a-\rho_1)^2+b^2},\dots,\frac{1}{(a-\rho_n)^2+b^2}\right)P^TB^Ty + y^*Cy = (\rho_n-a)\,y^*y + y^*Cy,$

which leads to

(2.7) $a \le \frac{1}{2}\left(\rho_n+\frac{y^*Cy}{y^*y}\right) \le \frac{1}{2}(\rho_n+\mu_m).$

Similarly, we deduce

(2.8) $a \ge \frac{1}{2}(\rho_1+\mu_1).$

On the other hand, from (2.6), we have

$\min_{1\le j\le n}\{(a-\rho_j)^2\} < \frac{y^*BB^Ty}{y^*y} \le \delta_m,$

which implies

$\rho_1-\sqrt{\delta_m} < a < \rho_n+\sqrt{\delta_m}.$

The combination with (2.7)–(2.8) concludes the proof of the first result.

Next, we show the upper bounds for $|\mathrm{Im}(\lambda)|$. It follows from (2.6) that

$\min_{1\le j\le n}\{(a-\rho_j)^2\} + b^2 \le \frac{y^*BB^Ty}{y^*y} \le \delta_m,$

which implies

(2.9) $|b| \le \sqrt{\delta_m-\min_{1\le j\le n}\{(a-\rho_j)^2\}} \le \sqrt{\delta_m}.$

If $\frac{1}{2}(\rho_1+\mu_1) > \rho_n$, then, for any $1 \le j \le n$, the first result implies

$a-\rho_j \ge \tfrac{1}{2}(\rho_1+\mu_1)-\rho_j \ge \tfrac{1}{2}(\rho_1+\mu_1)-\rho_n > 0.$

Then

$\min_{1\le j\le n}(a-\rho_j)^2 \ge \left(\tfrac{1}{2}(\rho_1+\mu_1)-\rho_n\right)^2 = \frac{(\rho_1+\mu_1-2\rho_n)^2}{4}.$

Along with (2.9), this shows that

(2.10) $|b| \le \sqrt{\delta_m-\frac{(\rho_1+\mu_1-2\rho_n)^2}{4}}.$

If $\frac{1}{2}(\rho_n+\mu_m) < \rho_1$, then, for any $1 \le j \le n$, the first result implies

$a-\rho_j \le \tfrac{1}{2}(\rho_n+\mu_m)-\rho_j \le \tfrac{1}{2}(\rho_n+\mu_m)-\rho_1 < 0.$

Then

$\min_{1\le j\le n}\{(a-\rho_j)^2\} \ge \left(\tfrac{1}{2}(\rho_n+\mu_m)-\rho_1\right)^2 = \frac{(\rho_n+\mu_m-2\rho_1)^2}{4}.$

Along with (2.9), this shows that

(2.11) $|b| \le \sqrt{\delta_m-\frac{(\rho_n+\mu_m-2\rho_1)^2}{4}}.$

The combination of (2.9), (2.10), and (2.11) concludes the proof of the second result. ∎

Since $A$ is symmetric positive definite, we have $\rho_1 > 0$. Theorem 2.2 then shows that our new estimates improve the previous estimates in Theorem 2.1 for the cases that $C$ is singular, provided $\rho_1-\sqrt{\delta_m} > 0$. The combination of Theorem 2.2 and Theorem 2.1 leads to the following estimates.

Theorem 2.3.

Under the same settings and conditions as in Theorem 2.2, any complex eigenvalue $\lambda$ of $\mathcal{A}$ with $\mathrm{Im}(\lambda) \ne 0$ satisfies the following estimates:

(2.12) $\max\left\{\tfrac{1}{2}(\rho_1+\mu_1),\ \rho_1-\sqrt{\delta_m},\ \mu_1-\sqrt{\delta_m}\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}(\rho_n+\mu_m),\ \rho_n+\sqrt{\delta_m},\ \mu_m+\sqrt{\delta_m}\right\},$
(2.13) $|\mathrm{Im}(\lambda)| \le \begin{cases}\sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\min\{\rho_n,\mu_m\})^2}{4}} & \text{if } \min\{\rho_n,\mu_m\} < \tfrac{1}{2}(\rho_1+\mu_1),\\[1ex] \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\max\{\rho_1,\mu_1\})^2}{4}} & \text{if } \max\{\rho_1,\mu_1\} > \tfrac{1}{2}(\rho_n+\mu_m),\\[1ex] \sqrt{\delta_m} & \text{otherwise}.\end{cases}$

Proof.

The estimates of the real part of the eigenvalues of $\mathcal{A}$ follow directly from Theorems 2.1 and 2.2. The upper bound of $|\mathrm{Im}(\lambda)|$ follows by considering four different cases.

Case I: $\max\{\rho_n,\mu_m\} < \frac{1}{2}(\rho_1+\mu_1)$. Then $\rho_n < \frac{1}{2}(\rho_1+\mu_1)$ and $\mu_m < \frac{1}{2}(\rho_1+\mu_1)$. It follows from Theorem 2.2 and Theorem 2.1 that

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\rho_n)^2}{4}} \quad\text{and}\quad |\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\mu_m)^2}{4}}.$

This shows that

$|\mathrm{Im}(\lambda)| \le \min\left\{\sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\rho_n)^2}{4}},\ \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\mu_m)^2}{4}}\right\} = \sqrt{\delta_m-\tfrac{\max\{(\rho_1+\mu_1-2\rho_n)^2,\,(\rho_1+\mu_1-2\mu_m)^2\}}{4}} = \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\min\{\rho_n,\mu_m\})^2}{4}}.$

Case II: $\min\{\rho_n,\mu_m\} < \frac{1}{2}(\rho_1+\mu_1) \le \max\{\rho_n,\mu_m\}$. If $\rho_n > \mu_m$, then $\mu_m < \frac{1}{2}(\rho_1+\mu_1)$, and Theorem 2.1 implies

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\mu_m)^2}{4}} = \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\min\{\rho_n,\mu_m\})^2}{4}}.$

If $\rho_n \le \mu_m$, then $\rho_n < \frac{1}{2}(\rho_1+\mu_1)$, and Theorem 2.2 implies

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\rho_n)^2}{4}} = \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\min\{\rho_n,\mu_m\})^2}{4}}.$

The combination of Case I and Case II leads to

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_1+\mu_1-2\min\{\rho_n,\mu_m\})^2}{4}} \quad\text{if } \min\{\rho_n,\mu_m\} < \tfrac{1}{2}(\rho_1+\mu_1).$

Case III: $\min\{\rho_1,\mu_1\} > \frac{1}{2}(\rho_n+\mu_m)$. Then $\rho_1 > \frac{1}{2}(\rho_n+\mu_m)$ and $\mu_1 > \frac{1}{2}(\rho_n+\mu_m)$. Theorem 2.2 and Theorem 2.1 imply

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\rho_1)^2}{4}} \quad\text{and}\quad |\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\mu_1)^2}{4}}.$

This can be rewritten as

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\max\{\rho_1,\mu_1\})^2}{4}}.$

Case IV: $\max\{\rho_1,\mu_1\} > \frac{1}{2}(\rho_n+\mu_m) \ge \min\{\rho_1,\mu_1\}$. By an analogous argument, we can prove the same result as in Case III. The combination with Case III leads to

$|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\max\{\rho_1,\mu_1\})^2}{4}} \quad\text{if } \max\{\rho_1,\mu_1\} > \tfrac{1}{2}(\rho_n+\mu_m).$

Collecting the results of the four cases above, we complete the proof. ∎

Remark 2.4.

In many cases, the estimates in (2.12) and (2.13) of Theorem 2.3 are sharper than the estimates in (2.1) and (2.2) of Theorem 2.1, respectively. Consider the following example of a non-symmetric saddle point matrix:

$\mathcal{A} = \begin{pmatrix} 6 & 0 & 0 & 1 & 0 \\ 0 & 6 & 0 & 0 & 1 \\ 0 & 0 & 6 & 0 & 0 \\ -1 & 0 & 0 & 5 & 0 \\ 0 & -1 & 0 & 0 & 0 \end{pmatrix}.$

In this example, we have $m = 2$ and $n = 3$, and we can compute that $\rho_1 = \rho_n = 6$, $\mu_1 = 0$, $\mu_m = 5$ and $\delta_m = 1$. Using MATLAB, we find all the eigenvalues of $\mathcal{A}$:

$5.5000+0.8660\mathrm{i}, \quad 5.5000-0.8660\mathrm{i}, \quad 0.1716, \quad 5.8284, \quad 6.0000.$

But the bounds (2.1)–(2.2) give the following estimates for an eigenvalue $\lambda$ of $\mathcal{A}$ with $\mathrm{Im}(\lambda) \ne 0$:

$3 \le \mathrm{Re}(\lambda) \le 5.5, \qquad |\mathrm{Im}(\lambda)| \le 1,$

while the bounds (2.12)–(2.13) provide clearly sharper estimates:

$5 \le \mathrm{Re}(\lambda) \le 5.5, \qquad |\mathrm{Im}(\lambda)| \le 0.8660.$
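This comparison is easy to reproduce. The following MATLAB snippet (a small sketch of ours, with the spectral quantities of this example hard-coded) evaluates the bounds (2.12)–(2.13), where the second case of (2.13) applies since $\max\{\rho_1,\mu_1\} = 6 > 5.5$:

```matlab
% Reproducing Remark 2.4: exact eigenvalues vs. the bounds (2.12)-(2.13).
A5 = [ 6  0  0  1  0;
       0  6  0  0  1;
       0  0  6  0  0;
      -1  0  0  5  0;
       0 -1  0  0  0];
lam = eig(A5)                                           % the five eigenvalues
rho1 = 6; rhon = 6; mu1 = 0; mum = 5; deltam = 1;       % spectra of A, C, B*B'
reLo = max([(rho1+mu1)/2, rho1-sqrt(deltam), mu1-sqrt(deltam)]);  % = 5
reUp = min([(rhon+mum)/2, rhon+sqrt(deltam), mum+sqrt(deltam)]);  % = 5.5
imUp = sqrt(deltam - (rhon+mum-2*max(rho1,mu1))^2/4);             % = 0.8660
```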

3 Spectral Behavior of Singular Saddle Point Matrices

In recent years, several studies have been carried out to analyze the spectral properties of the saddle point matrix $\mathcal{A}$ of form (1.2) when $\mathcal{A}$ is nonsingular; see, e.g., [7, 8, 1]. But it is often the case in applications that the matrix $\mathcal{A}$ is singular, and to the best of our knowledge, not much effort has been made in the literature to study this important singular case. This is the major focus of this section. We first present some estimates of the rank of the singular saddle point matrix $\mathcal{A}$, which help to determine the multiplicity of the eigenvalue 0. Then we establish some important estimates of both the real and imaginary parts of all eigenvalues of the singular matrix $\mathcal{A}$. These estimates are the essential tools for our analysis in Sections 4 and 5 on the spectral behavior of the preconditioned systems for singular saddle point matrices with nonsingular preconditioners.

Theorem 3.1.

Let $\mathcal{A}$ be a saddle point matrix of form (1.2), where $A$ is symmetric positive definite, $C$ is symmetric positive semi-definite, and $\mathrm{rank}[B] = p$ and $\mathrm{rank}[C] = r$. Then the rank of the matrix $\mathcal{A}$ can be estimated by

$n+\max\{r,p\} \le \mathrm{rank}[\mathcal{A}] \le n+\min\{m,\ p+r\}.$

Proof.

A direct calculation verifies the identity

$\mathcal{A} = \begin{pmatrix} A & B^T \\ -B & C \end{pmatrix} = \begin{pmatrix} I_n & 0 \\ -BA^{-1} & I_m \end{pmatrix}\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}\begin{pmatrix} I_n & A^{-1}B^T \\ 0 & I_m \end{pmatrix}.$

This shows that

(3.1) $\mathrm{rank}[\mathcal{A}] = \mathrm{rank}\left[\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}\right] = \mathrm{rank}[A]+\mathrm{rank}[S] = n+\mathrm{rank}[S].$

Since $S \ge C \ge 0$ and $S \ge BA^{-1}B^T \ge 0$, we have

$\mathrm{rank}[S] \ge \max\{\mathrm{rank}[C],\ \mathrm{rank}[BA^{-1}B^T]\} = \max\{r,p\}.$

Since $A$ is symmetric positive definite, it follows that

$\mathrm{rank}[S] = \mathrm{rank}[C+BA^{-1}B^T] \le \mathrm{rank}[C]+\mathrm{rank}[BA^{-1}B^T] = r+p,$

while trivially $\mathrm{rank}[S] \le m$.

This and (3.1) complete the proof. ∎
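The identity (3.1) and the resulting rank bounds are easy to check numerically. The following MATLAB sketch (with hypothetical sizes $n = 6$, $m = 4$ of our choosing) builds a rank-deficient instance and verifies them:

```matlab
% Randomized check of Theorem 3.1 on a small instance (hypothetical data).
n = 6; m = 4;
M = randn(n); A = M*M' + eye(n);             % symmetric positive definite A
B = randn(m, n); B(m, :) = 0;  p = rank(B);  % rank-deficient B
v = randn(m, 1); C = v*v';     r = rank(C);  % rank-one PSD C
Amat = [A, B'; -B, C];
S = C + B*(A\B');                            % Schur complement
assert(rank(Amat) == n + rank(S));           % identity (3.1)
assert(n + max(r, p) <= rank(Amat) && rank(Amat) <= n + min(m, p + r));
```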

By Theorem 3.1, we know that $\mathrm{rank}[\mathcal{A}] = n+m$ if $\mathrm{rank}[C] = m$ or $\mathrm{rank}[B] = m$. This implies immediately that $\mathrm{rank}[C] < m$, $\mu_1 = 0$ and $\mathrm{rank}[B] < m$ if $\mathcal{A}$ is singular. This leads to the following estimate of the multiplicity of the eigenvalue 0 of $\mathcal{A}$.

Theorem 3.2.

Under the same settings for the matrices $\mathcal{A}$, $A$ and $C$ as in Theorem 3.1, except that $\mathcal{A}$ is singular, the multiplicity of the eigenvalue 0 of $\mathcal{A}$ can be estimated by

$\max\{1,\ m-p-r\} \le \dim\mathrm{null}[\mathcal{A}] \le m-\max\{r,p\}.$

For the real eigenvalues of the singular saddle point matrix $\mathcal{A}$, we can get the following results by an argument similar to those used in [1, Proposition 2.2] and [31, Theorem 2.1].

Theorem 3.3.

Let $\mathcal{A}$, $A$ and $C$ be matrices as in Theorem 3.1, and $S$ the Schur complement $S = C+BA^{-1}B^T$, with $0 < q = \mathrm{rank}[S] < m$. Then any non-zero real eigenvalue $\lambda$ of $\mathcal{A}$ can be estimated by

$\min\{\rho_1,\ \zeta_{m-q+1}\} \le \lambda \le \max\{\rho_n,\ \mu_m\}.$

The following results are direct consequences of Theorems 3.2 and 3.3.

Corollary 3.4.

Let $\mathcal{A}$ be the singular saddle point matrix of form (1.2), where $A$ is symmetric positive definite and $C$ is symmetric positive semi-definite. Assume that $\mathrm{rank}[B] = p$, $\mathrm{rank}[C] = r$ and $0 < q = \mathrm{rank}[S]$. Then we have the following estimates for the eigenvalues $\lambda$ of $\mathcal{A}$:

  1. $\mathcal{A}$ has at least $\max\{1,\ m-p-r\}$ and at most $m-\max\{r,p\}$ zero eigenvalues.

  2. If $\lambda > 0$, then it satisfies the bounds

    $\min\{\rho_1,\ \zeta_{m-q+1}\} \le \lambda \le \max\{\rho_n,\ \mu_m\}.$

  3. If $\mathrm{Im}(\lambda) \ne 0$, then

    $\max\left\{\tfrac{1}{2}\rho_1,\ \rho_1-\sqrt{\delta_m}\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}(\rho_n+\mu_m),\ \rho_n+\sqrt{\delta_m},\ \mu_m+\sqrt{\delta_m}\right\},$

    $|\mathrm{Im}(\lambda)| \le \begin{cases}\sqrt{\delta_m-\tfrac{(\rho_1-2\mu_m)^2}{4}} & \text{if } 2\mu_m < \rho_1,\\[1ex] \sqrt{\delta_m-\tfrac{(\rho_n+\mu_m-2\rho_1)^2}{4}} & \text{if } 2\rho_1 > \rho_n+\mu_m,\\[1ex] \sqrt{\delta_m} & \text{otherwise}.\end{cases}$

If $C = 0$, then Corollary 3.4 simplifies considerably, as stated in the following corollary.

Corollary 3.5.

Under the same assumptions as in Corollary 3.4, except that $C = 0$, the following estimates for the eigenvalues $\lambda$ of $\mathcal{A}$ hold:

  1. $\mathcal{A}$ has $m-p$ zero eigenvalues.

  2. If $\lambda > 0$ and we set $\delta_{m+1} = +\infty$, then

    $\min\left\{\rho_1,\ \frac{\delta_{m-p+1}}{\rho_n}\right\} \le \lambda \le \rho_n.$

  3. If $\mathrm{Im}(\lambda) \ne 0$, then

    $|\mathrm{Im}(\lambda)| \le \sqrt{\delta_m-\tfrac{\rho_1^2}{4}} \quad\text{and}\quad \max\left\{\tfrac{1}{2}\rho_1,\ \rho_1-\sqrt{\delta_m}\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}\rho_n,\ \sqrt{\delta_m}\right\}.$

4 Nonsingular Preconditioners for Singular Saddle Point Matrices

Intensive efforts have been devoted to the study of preconditioners for nonsingular saddle point systems of form (1.1); see, e.g., [7, 8, 28, 3, 13, 34, 21, 33, 5, 35, 19]. On the other hand, few results are available on the analysis of preconditioners for singular saddle point systems; we are only aware of the results in [36], which considered the simple case with $C = 0$ in (1.1). Moreover, only singular preconditioners were analyzed in [36] for the singular saddle point system (1.1), which are not always reliable, as the preconditioned systems may not be equivalent to the original system (1.1) if singular preconditioners are used. In this section, we present and study some nonsingular preconditioners $\mathcal{K}$ for the saddle point matrix $\mathcal{A}$ in (1.1) when $\mathcal{A}$ is singular. We shall consider preconditioners of the general form

(4.1) $\mathcal{K} = \begin{pmatrix} P_A & B^T \\ -B & P_C \end{pmatrix},$

where $P_A$ and $P_C$ are symmetric positive definite approximations of $A$ and $C$, respectively. Then we shall solve the following preconditioned system:

(4.2) $\mathcal{K}^{-1}\mathcal{A}z = \mathcal{K}^{-1}b,$

or

(4.3) $\mathcal{A}\mathcal{K}^{-1}\hat z = b, \qquad \hat z = \mathcal{K}z,$

where $z = (x^T, y^T)^T$ and $b = (f^T, -g^T)^T$. Since

$\mathcal{K}^{-1} = \begin{pmatrix} I_n & -P_A^{-1}B^T(P_C+BP_A^{-1}B^T)^{-1} \\ 0 & (P_C+BP_A^{-1}B^T)^{-1} \end{pmatrix}\begin{pmatrix} P_A^{-1} & 0 \\ BP_A^{-1} & I_m \end{pmatrix},$

we only need to solve the two subsystems $P_Ax = r_x$ and $(P_C+BP_A^{-1}B^T)y = r_y$ when applying the GMRES method coupled with the preconditioner $\mathcal{K}$ to the saddle point problem (1.1). In addition, we can easily see that $B$ is not of full row rank when $\mathcal{A}$ is singular, so it is natural to require a symmetric positive definite $P_C$ in (4.1). By Theorem 3.1, we know that $\mathcal{K}$ is nonsingular. Some simple choices of $P_A$ and $P_C$ are, e.g., $P_A = \mathrm{diag}(A)$ and $P_C = \alpha I$ with a positive parameter $\alpha$.
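For completeness, here is a minimal MATLAB sketch (names and structure are ours, not from the paper) of applying $\mathcal{K}^{-1}$ to a residual through exactly these two subsystems:

```matlab
% Apply z = K \ r via one P_A-solve and one (P_C + B*P_A^{-1}*B')-solve.
% In practice W would be formed (or factorized) once and then reused.
function z = apply_Kinv(r, PA, PC, B)
    n = size(PA, 1);
    rx = r(1:n); ry = r(n+1:end);
    W  = PC + B*(PA\B');         % m x m symmetric positive definite block
    tx = PA\rx;                  % first subsystem:  P_A x = r_x
    y  = W\(ry + B*tx);          % second subsystem for the y-block
    x  = tx - PA\(B'*y);         % back-substitution for the x-block
    z  = [x; y];
end
```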

Preconditioners of the form (4.1) were considered in [8] for the case when $\mathcal{A}$ is nonsingular, and $P_C$ was taken to be the special matrix $P_C = BP_A^{-1}B^T-P_S$, where $P_S$ is a symmetric positive definite approximation of the Schur complement $S$. But we would emphasize that the singular case considered here is much more technical than the nonsingular one. One main technical difficulty in solving a singular system by GMRES is that the iteration process may break down at some step. Some conditions were derived in [12] under which the GMRES iterates converge safely to a solution of a singular system. For our subsequent study of the convergence of the GMRES method for solving the preconditioned system (4.2), we introduce the following convergence result from [12, Theorem 2.4].

Theorem 4.1 ([12]).

GMRES determines a least-squares solution of the singular linear system $Az = b$ without breakdown for all $b$ and all initial guesses $z_0$ if and only if $\mathrm{null}[A] = \mathrm{null}[A^T]$. Furthermore, if $Az = b$ is consistent and $z_0 \in \mathcal{R}(A)$, where $\mathcal{R}(A)$ denotes the range of $A$, then the least-squares solution is the pseudoinverse solution, i.e., the minimizer in the Euclidean norm $\|\cdot\|_2$.

Using the above convergence theory, we can derive the following result when GMRES is applied to the singular saddle point problem (1.1).

Theorem 4.2.

Let $\mathcal{K}$ be the preconditioner defined in (4.1) and let the saddle point problem (1.1) be consistent. If $\mathrm{null}[B^T]\cap\mathrm{null}[C]$ is an invariant subspace of $P_C$, then GMRES applied to the preconditioned system (4.2) with any initial guess $z_0$ determines a solution of the saddle point problem (1.1) without breakdown. Furthermore, if $z_0 \in \mathcal{R}(\mathcal{K}^{-1}\mathcal{A})$, then the GMRES solution is the pseudoinverse solution.

Proof.

Firstly, we prove that $\mathrm{null}[\mathcal{K}^{-1}\mathcal{A}] = \mathrm{null}[\mathcal{A}^T\mathcal{K}^{-T}]$. Since the preconditioner $\mathcal{K}$ is nonsingular, we have $\mathrm{null}[\mathcal{K}^{-1}\mathcal{A}] = \mathrm{null}[\mathcal{A}]$. For any $(x^T, y^T)^T \in \mathrm{null}[\mathcal{A}]$, it holds that

(4.4) $Ax+B^Ty = 0, \qquad -Bx+Cy = 0.$

The first equality shows $x = -A^{-1}B^Ty$. Together with the second equality in (4.4), this implies $y^TBA^{-1}B^Ty + y^TCy = 0$. Since both matrices $BA^{-1}B^T$ and $C$ are symmetric positive semi-definite, we have $B^Ty = 0$ and $Cy = 0$. Since $x = -A^{-1}B^Ty = 0$, the null space of $\mathcal{A}$ can be expressed as

(4.5) $\mathrm{null}[\mathcal{A}] = \left\{(0^T, y^T)^T :\ y \in \mathrm{null}[B^T]\cap\mathrm{null}[C]\right\}.$

On the other hand, for any $z = (x^T, y^T)^T \in \mathrm{null}[\mathcal{A}^T\mathcal{K}^{-T}]$, it follows that $\mathcal{A}^T\mathcal{K}^{-T}z = 0$, i.e., $\mathcal{K}^{-T}z \in \mathrm{null}[\mathcal{A}^T]$. Using the same argument as that for deriving $\mathrm{null}[\mathcal{A}]$ in (4.5), we deduce

$\mathrm{null}[\mathcal{A}^T] = \left\{(0^T, y^T)^T :\ y \in \mathrm{null}[B^T]\cap\mathrm{null}[C]\right\}.$

Then there exists a vector $\hat y \in \mathrm{null}[B^T]\cap\mathrm{null}[C]$ such that $\mathcal{K}^{-T}z = (0^T, \hat y^T)^T$; in other words, $z = \mathcal{K}^T(0^T, \hat y^T)^T$. Noticing the form (4.1) and the condition that $\mathrm{null}[B^T]\cap\mathrm{null}[C]$ is an invariant subspace of $P_C$, we deduce

$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} P_A & -B^T \\ B & P_C \end{pmatrix}\begin{pmatrix} 0 \\ \hat y \end{pmatrix} = \begin{pmatrix} -B^T\hat y \\ P_C\hat y \end{pmatrix} = \begin{pmatrix} 0 \\ P_C\hat y \end{pmatrix} \in \mathrm{null}[\mathcal{A}].$

This implies $\mathrm{null}[\mathcal{A}^T\mathcal{K}^{-T}] \subseteq \mathrm{null}[\mathcal{A}]$. Since $\mathrm{rank}[\mathcal{A}^T\mathcal{K}^{-T}] = \mathrm{rank}[\mathcal{A}]$, we derive

$\mathrm{null}[\mathcal{A}^T\mathcal{K}^{-T}] = \mathrm{null}[\mathcal{A}] = \mathrm{null}[\mathcal{K}^{-1}\mathcal{A}].$

Together with Theorem 4.1 and the fact that (1.1) is consistent, this yields that GMRES determines a solution $z_* \in \mathbb{R}^{n+m}$ of the preconditioned system (4.2) without breakdown for all $b$ and $z_0$. It is easy to see that $z_*$ is also a solution of the saddle point problem (1.1). Furthermore, if the initial vector $z_0 \in \mathcal{R}(\mathcal{K}^{-1}\mathcal{A})$, then $z_*$ is the pseudoinverse solution, which completes the proof of Theorem 4.2. ∎

Remark 4.3.

The condition that $\mathrm{null}[B^T]\cap\mathrm{null}[C]$ is an invariant subspace of $P_C$ is not restrictive. Actually, if we set $P_C$ to be a scalar matrix, i.e., $P_C = \alpha I_m$ with $\alpha > 0$, then this condition holds automatically.

Similarly to Theorem 4.2, we have the following convergence result for GMRES applied to the preconditioned system (4.3).

Theorem 4.4.

Let $\mathcal{K}$ be the preconditioner defined in (4.1). If $\mathrm{null}[B^T]\cap\mathrm{null}[C]$ is an invariant subspace of $P_C$, then GMRES applied to the preconditioned system (4.3) with any initial guess $z_0$ determines a solution of the saddle point problem (1.1) without breakdown.

Proof.

Using a similar argument as for Theorem 4.2, we can prove $\mathrm{null}[\mathcal{A}\mathcal{K}^{-1}] = \mathrm{null}[\mathcal{K}^{-T}\mathcal{A}^T]$. Then we know from Theorem 4.1 that GMRES determines a least-squares solution $\hat z_* \in \mathbb{R}^{n+m}$ of the preconditioned system (4.3) without breakdown for all $b$ and $z_0$. Since

$0 = (\mathcal{A}\mathcal{K}^{-1})^T(b-\mathcal{A}\mathcal{K}^{-1}\hat z_*) = \mathcal{K}^{-T}\mathcal{A}^T(b-\mathcal{A}\mathcal{K}^{-1}\hat z_*),$

we see $\mathcal{A}^T(b-\mathcal{A}\mathcal{K}^{-1}\hat z_*) = 0$. This implies that $z_* = \mathcal{K}^{-1}\hat z_*$ is a least-squares solution of the original system (1.1). ∎

Remark 4.5.

Unlike Theorem 4.2, Theorem 4.4 can also be applied to the case where the system (1.1) is inconsistent.

Although Theorems 4.2 and 4.4 guarantee the convergence of GMRES applied to the preconditioned systems (4.2) and (4.3), respectively, the convergence rate depends strongly on the spectral properties of the preconditioned matrix. It will be our central task in the next section to study these spectral properties. For these studies, we remark that the general case $C \ne 0$ considered here is much more technical than the simple case $C = 0$ studied in [36], and the analysis of [36] clearly does not apply to the current case with $C \ne 0$. Moreover, unlike the results in [36], which gave only estimates of the real parts of the eigenvalues of the preconditioned system in terms of the spectrum of the preconditioned matrix for $A$ in (1.1), we shall derive lower and upper bounds for both the real and complex eigenvalues of the preconditioned matrix.

5 Spectral Estimates of the Preconditioned Matrix

There are two major analysis techniques that enable the study of the spectral behavior of the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ in (4.3). First, we work out two effective similarity transformations, which transform the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ into a matrix, see (5.2) below, that has exactly the same block form as the original saddle point matrix $\mathcal{A}$ in (1.2). Then we can apply the results of Section 3 to derive the lower and upper bounds of the real and complex eigenvalues of the preconditioned system $\mathcal{A}\mathcal{K}^{-1}$. But this application is not straightforward. We thus have to establish the spectral estimates for each of the matrices $QQ^T$, $H$ and the Schur complement $W := H+QA_p^{-1}Q^T$ (see (5.2)–(5.4) below) associated with the new saddle point matrix in (5.2), and several delicate new techniques are introduced here.

In the remainder of this section, we investigate the spectral properties of $\mathcal{A}\mathcal{K}^{-1}$ by using the results developed in Section 3. To this end, we first define the matrix

$\mathcal{P} = \begin{pmatrix} P_A^{1/2} & 0 \\ 0 & P_C^{1/2} \end{pmatrix}.$

We can directly verify that $\mathcal{A}\mathcal{K}^{-1}$ is similar to $(\mathcal{P}^{-1}\mathcal{A}\mathcal{P}^{-1})(\mathcal{P}\mathcal{K}^{-1}\mathcal{P})$, i.e., to

$\begin{pmatrix} P_A^{-1/2}AP_A^{-1/2} & P_A^{-1/2}B^TP_C^{-1/2} \\ -P_C^{-1/2}BP_A^{-1/2} & P_C^{-1/2}CP_C^{-1/2} \end{pmatrix}\begin{pmatrix} I_n & P_A^{-1/2}B^TP_C^{-1/2} \\ -P_C^{-1/2}BP_A^{-1/2} & I_m \end{pmatrix}^{-1}.$

With the definitions

(5.1) $A_p = P_A^{-1/2}AP_A^{-1/2}, \qquad C_p = P_C^{-1/2}CP_C^{-1/2}, \qquad R = P_C^{-1/2}BP_A^{-1/2},$

the above observation is recast as

$\mathcal{A}\mathcal{K}^{-1} \sim \begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix}\begin{pmatrix} I_n & R^T \\ -R & I_m \end{pmatrix}^{-1}.$

This and the factorization

$\begin{pmatrix} I_n & R^T \\ -R & I_m \end{pmatrix} = \begin{pmatrix} I_n & 0 \\ -R & I_m+RR^T \end{pmatrix}\begin{pmatrix} I_n & R^T \\ 0 & I_m \end{pmatrix}$

lead to

$\mathcal{A}\mathcal{K}^{-1} \sim \begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix}\begin{pmatrix} I_n & -R^T \\ 0 & I_m \end{pmatrix}\begin{pmatrix} I_n & 0 \\ (I_m+RR^T)^{-1}R & (I_m+RR^T)^{-1} \end{pmatrix}$
$\sim \begin{pmatrix} I_n & 0 \\ (I_m+RR^T)^{-1}R & (I_m+RR^T)^{-1} \end{pmatrix}\begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix}\begin{pmatrix} I_n & -R^T \\ 0 & I_m \end{pmatrix}$
$= \begin{pmatrix} A_p & (I_n-A_p)R^T \\ -(I_m+RR^T)^{-1}R(I_n-A_p) & (I_m+RR^T)^{-1}(2RR^T-RA_pR^T+C_p) \end{pmatrix}$
(5.2) $\sim \begin{pmatrix} A_p & Q^T \\ -Q & H \end{pmatrix} =: \mathcal{B},$

where Q and H are given by

(5.3) $Q = (I_m+RR^T)^{-1/2}R(I_n-A_p),$
(5.4) $H = (I_m+RR^T)^{-1/2}(2RR^T-RA_pR^T+C_p)(I_m+RR^T)^{-1/2}.$

Since $\mathcal{A}\mathcal{K}^{-1}$ and $\mathcal{B}$ have the same eigenvalues, it suffices to estimate the eigenvalues of $\mathcal{B}$. As $\mathcal{B}$ has the same structure as the original saddle point matrix $\mathcal{A}$ in (1.2), we can apply the results developed in Section 3 to investigate the spectral properties of $\mathcal{B}$.
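The whole chain of similarity transformations can be sanity-checked numerically. The following MATLAB sketch (hypothetical random data of our choosing) builds $\mathcal{B}$ from (5.1)–(5.4) and compares its spectrum with that of $\mathcal{A}\mathcal{K}^{-1}$:

```matlab
% Numerical check of the similarity (5.2): eig(A*K^{-1}) = eig(B).
n = 5; m = 3;
M = randn(n); A = M*M' + eye(n); B = randn(m, n);
w = randn(m, 1); C = w*w';                       % PSD (2,2) block
PA = diag(diag(A)); PC = eye(m);                 % simple SPD choices
K    = [PA, B'; -B, PC];
Amat = [A,  B'; -B, C];
Ap = PA^(-1/2)*A*PA^(-1/2);  Cp = PC^(-1/2)*C*PC^(-1/2);
R  = PC^(-1/2)*B*PA^(-1/2);
T  = sqrtm(eye(m) + R*R');
Q  = T\(R*(eye(n) - Ap));                        % (5.3)
H  = T\(2*(R*R') - R*Ap*R' + Cp)/T;              % (5.4)
Bmat = [Ap, Q'; -Q, H];
norm(sort(eig(Amat/K)) - sort(eig(Bmat)))        % ~ 0 up to round-off
```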

To apply these results, we write the eigenvalues of $A_p$, $C_p$, $RR^T$ and $QQ^T$, respectively, as

$0 < \hat\rho_1 \le \hat\rho_2 \le \dots \le \hat\rho_n \quad\text{and}\quad 0 \le \hat\mu_1 \le \hat\mu_2 \le \dots \le \hat\mu_m,$
$0 \le \hat\zeta_1 \le \hat\zeta_2 \le \dots \le \hat\zeta_m \quad\text{and}\quad 0 \le \hat\delta_1 \le \hat\delta_2 \le \dots \le \hat\delta_m.$

Next, we derive the spectral estimates for each of the matrices $QQ^T$, $H$ and the Schur complement $W$, so that we can apply the spectral bounds of Section 3 to the new saddle point matrix $\mathcal{B}$ in (5.2).

Without loss of generality, we assume that $\hat\rho_n < 1$; if $\hat\rho_n \ge 1$, a proper scaling of $P_A$ brings us back to the case $\hat\rho_n < 1$. Under this assumption, $I_n-A_p$ is symmetric positive definite. Since $R(2I_n-A_p)R^T \ge 0$ and $C_p \ge 0$, we have

$(I_m+RR^T)^{1/2}H(I_m+RR^T)^{1/2} \ge C_p \ge 0$

and

$(I_m+RR^T)^{1/2}H(I_m+RR^T)^{1/2} \ge R(2I_n-A_p)R^T \ge 0.$

This shows

$\mathrm{rank}[H] \ge \max\{\mathrm{rank}[R(2I_n-A_p)R^T],\ \mathrm{rank}[C_p]\} = \max\{\mathrm{rank}[B],\ \mathrm{rank}[C]\}.$

Hence $H \ne 0$ if $B \ne 0$ or $C \ne 0$.

It is easy to verify that

(5.5) $\mathrm{rank}[Q] = \mathrm{rank}[P_C^{-1/2}BP_A^{-1/2}] = \mathrm{rank}[B].$

The combination of (5.1), (5.3) and (5.4) leads to

$W = (I_m+RR^T)^{-1/2}\left[2RR^T-RA_pR^T+C_p+R(I_n-A_p)A_p^{-1}(I_n-A_p)R^T\right](I_m+RR^T)^{-1/2} = (I_m+RR^T)^{-1/2}(C_p+RA_p^{-1}R^T)(I_m+RR^T)^{-1/2}$

and

$C_p+RA_p^{-1}R^T = P_C^{-1/2}(C+BA^{-1}B^T)P_C^{-1/2} = P_C^{-1/2}SP_C^{-1/2}.$

This implies

$\mathrm{rank}[W] = \mathrm{rank}[C_p+RA_p^{-1}R^T] = \mathrm{rank}[S].$

Together with (5.5), we are ready to establish the first major result in this section.

Lemma 5.1.

Let $\mathcal{A}$ be the singular saddle point matrix of form (1.2), where $A$ is symmetric positive definite and $C$ is symmetric positive semi-definite, and let $Q$ and $H$ be the two matrices given in (5.3) and (5.4), respectively. Assume that $\mathrm{rank}[B] = p$, $\mathrm{rank}[C] = r$ and $0 < q = \mathrm{rank}[S]$. Then $\mathrm{rank}[Q] = p$ and $\max\{p,r\} \le \mathrm{rank}[W] = q \le p+r$, and the following estimates hold:

$0 < \frac{\hat\zeta_{m-p+1}(1-\hat\rho_n)^2}{1+\hat\zeta_{m-p+1}} \le \hat\delta_{m-p+1} \le \hat\delta_m \le \frac{\hat\zeta_m(1-\hat\rho_1)^2}{1+\hat\zeta_m},$

$\lambda_{\max}[H] \le \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\hat\mu_m,\ \max\{(2-\hat\rho_1),\hat\mu_m\}\right\},$

$\lambda_{m-q+1}[W] \ge \frac{1}{1+\hat\zeta_m}\,\lambda_{m-q+1}[C_p+RA_p^{-1}R^T] > 0.$

In particular, if $C = 0$, then $\mathrm{rank}[W] = p$ and

$\lambda_{m-p+1}[W] \ge \frac{\hat\zeta_{m-p+1}}{\hat\rho_n(1+\hat\zeta_{m-p+1})}.$

Proof.

It is easy to verify that the eigenvalues of $(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}$ are

$\frac{\hat\zeta_j}{1+\hat\zeta_j}, \qquad j = 1,2,\dots,m.$

Since the function $\frac{x}{1+x}$ is monotonically increasing for $x \ge 0$, we have

(5.6) $\lambda_{\max}\left[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}\right] = \frac{\hat\zeta_m}{1+\hat\zeta_m}.$

This, $\hat\rho_n < 1$ and (5.3) lead to

(5.7) $QQ^T = (I_m+RR^T)^{-1/2}R(I_n-A_p)^2R^T(I_m+RR^T)^{-1/2} \le (1-\hat\rho_1)^2(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} \le \frac{\hat\zeta_m(1-\hat\rho_1)^2}{1+\hat\zeta_m}\,I_m.$

Therefore,

$\hat\delta_m \le \frac{\hat\zeta_m(1-\hat\rho_1)^2}{1+\hat\zeta_m}.$

On the other hand, the first equality in (5.7) implies

$QQ^T \ge (1-\hat\rho_n)^2(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} \ge 0.$

This gives an estimate of the smallest positive eigenvalue of $QQ^T$, namely

$\hat\delta_{m-p+1} \ge (1-\hat\rho_n)^2\,\lambda_{m-p+1}\left[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}\right].$

Together with the fact that

$\lambda_{m-p+1}\left[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}\right] = \frac{\hat\zeta_{m-p+1}}{1+\hat\zeta_{m-p+1}} > 0,$

this leads to

$\hat\delta_{m-p+1} \ge \frac{\hat\zeta_{m-p+1}(1-\hat\rho_n)^2}{1+\hat\zeta_{m-p+1}}.$

Next we prove the second result. From (5.4) and (5.6) we obtain

$H = (I_m+RR^T)^{-1/2}R(2I_n-A_p)R^T(I_m+RR^T)^{-1/2} + (I_m+RR^T)^{-1/2}C_p(I_m+RR^T)^{-1/2} \le (2-\hat\rho_1)(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} + \hat\mu_m(I_m+RR^T)^{-1} \le \left[\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\hat\mu_m\right]I_m$

and

$H \le \max\{(2-\hat\rho_1),\hat\mu_m\}\left[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} + (I_m+RR^T)^{-1}\right] = \max\{(2-\hat\rho_1),\hat\mu_m\}\,I_m.$

This implies

$\lambda_{\max}[H] \le \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\hat\mu_m,\ \max\{(2-\hat\rho_1),\hat\mu_m\}\right\}.$

For any matrix $L \in \mathbb{R}^{m\times m}$, we have $\mathrm{rank}[LL^T] = \mathrm{rank}[L^TL]$, and $LL^T$ and $L^TL$ have the same eigenvalues. This leads to

$\lambda_{m-q+1}[W] = \lambda_{m-q+1}\left[(I_m+RR^T)^{-1/2}(C_p+RA_p^{-1}R^T)(I_m+RR^T)^{-1/2}\right] = \lambda_{m-q+1}\left[(C_p+RA_p^{-1}R^T)^{1/2}(I_m+RR^T)^{-1}(C_p+RA_p^{-1}R^T)^{1/2}\right] \ge \frac{1}{1+\hat\zeta_m}\,\lambda_{m-q+1}[C_p+RA_p^{-1}R^T],$

which gives our desired estimate.

In particular, if $C = 0$, then $C_p = 0$ leads to

$W = (I_m+RR^T)^{-1/2}RA_p^{-1}R^T(I_m+RR^T)^{-1/2} \ge \frac{1}{\hat\rho_n}(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}.$

Therefore,

$\lambda_{m-p+1}[W] \ge \frac{1}{\hat\rho_n}\,\lambda_{m-p+1}\left[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}\right] = \frac{\hat\zeta_{m-p+1}}{\hat\rho_n(1+\hat\zeta_{m-p+1})},$

which completes the proof. ∎

Introduce the two constants

$\xi_m = \left[\frac{\hat\zeta_m(1-\hat\rho_1)^2}{1+\hat\zeta_m}\right]^{1/2} \quad\text{and}\quad \vartheta_m = \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\hat\mu_m,\ \max\{(2-\hat\rho_1),\hat\mu_m\}\right\}.$

The factorization

$\mathcal{B} = \begin{pmatrix} I_n & 0 \\ -QA_p^{-1} & I_m \end{pmatrix}\begin{pmatrix} A_p & 0 \\ 0 & W \end{pmatrix}\begin{pmatrix} I_n & A_p^{-1}Q^T \\ 0 & I_m \end{pmatrix}$

leads to the following estimates directly from Corollary 3.4 and Lemma 5.1.

Theorem 5.2.

Let $\mathcal{A}$ and $\mathcal{K}$ be the two saddle point matrices of the forms (1.2) and (4.1), respectively, where $A$ is symmetric positive definite and $C$ is symmetric positive semi-definite, with $\mathrm{rank}[B] = p$ and $\mathrm{rank}[C] = r$. Then any eigenvalue $\lambda$ of the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ fulfills the following estimates:

  1. $\mathcal{A}\mathcal{K}^{-1}$ has at least $\max\{1,\ m-p-r\}$ and at most $m-\max\{p,r\}$ zero eigenvalues.

  2. If $\lambda > 0$ and $\hat\eta$ is the minimum non-zero eigenvalue of $C_p+RA_p^{-1}R^T$, then

    $\min\left\{\hat\rho_1,\ \frac{\hat\eta}{1+\hat\zeta_m}\right\} \le \lambda \le \max\{\hat\rho_n,\ \vartheta_m\}.$

  3. If $\mathrm{Im}(\lambda) \ne 0$, then

    $\max\left\{\tfrac{\hat\rho_1}{2},\ \hat\rho_1-\xi_m\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{\hat\rho_n+\vartheta_m}{2},\ \hat\rho_n+\xi_m,\ \vartheta_m+\xi_m\right\},$

    $|\mathrm{Im}(\lambda)| \le \begin{cases}\left(\xi_m^2-\tfrac{(\hat\rho_1-2\vartheta_m)^2}{4}\right)^{1/2} & \text{if } 2\vartheta_m < \hat\rho_1,\\[1ex] \left(\xi_m^2-\tfrac{(\hat\rho_n+\vartheta_m-2\hat\rho_1)^2}{4}\right)^{1/2} & \text{if } 2\hat\rho_1 > \hat\rho_n+\vartheta_m,\\[1ex] \xi_m & \text{otherwise}.\end{cases}$
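A quick MATLAB experiment (our own toy data, with heuristic sizes and a crude zero threshold) illustrates item 1: the count of zero eigenvalues of $\mathcal{A}\mathcal{K}^{-1}$ indeed falls between the two bounds:

```matlab
% Toy check of the zero-eigenvalue count in Theorem 5.2 (hypothetical data).
n = 8; m = 5;
M = randn(n); A = M*M' + eye(n);
B = randn(m, n); B(4:5, :) = 0;   p = rank(B);    % p = 3
v = [randn(2,1); zeros(3,1)]; C = v*v'; r = 1;    % rank-one C
PA = diag(diag(A)); PC = eye(m);
K    = [PA, B'; -B, PC];
Amat = [A,  B'; -B, C];
nz = sum(abs(eig(Amat/K)) < 1e-10);               % numerically zero eigenvalues
fprintf('zero eigenvalues: %d, bounds: [%d, %d]\n', ...
        nz, max(1, m - p - r), m - max(p, r));
```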

The two simple inequalities

$\vartheta_m = \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\hat\mu_m,\ \max\{(2-\hat\rho_1),\hat\mu_m\}\right\} \le \min\{(2-\hat\rho_1)+\hat\mu_m,\ \max\{(2-\hat\rho_1),\hat\mu_m\}\} = \max\{(2-\hat\rho_1),\hat\mu_m\}$

and

$2-\hat\rho_1 > \hat\rho_n,$

together with the observation that $\xi_m \le 1-\hat\rho_1$, simplify the estimates of Theorem 5.2 as follows.

Corollary 5.3.

Under the conditions of Theorem 5.2 on the singular saddle point matrix $\mathcal{A}$ and its nonsingular preconditioner $\mathcal{K}$, any eigenvalue $\lambda$ of the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ fulfills the following estimates:

  1. If $\lambda > 0$ and $\hat\eta$ is the minimum non-zero eigenvalue of $C_p+RA_p^{-1}R^T$, then

    $\min\left\{\hat\rho_1,\ \frac{\hat\eta}{1+\hat\zeta_m}\right\} \le \lambda \le \max\{2-\hat\rho_1,\ \hat\mu_m\}.$

  2. If $\mathrm{Im}(\lambda) \ne 0$, then $|\mathrm{Im}(\lambda)| \le 1-\hat\rho_1$ and

    $\max\left\{\tfrac{1}{2}\hat\rho_1,\ 2\hat\rho_1-1\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}\left(\hat\rho_n+\max\{(2-\hat\rho_1),\hat\mu_m\}\right),\ 1+\hat\rho_n-\hat\rho_1\right\}.$

Corollary 5.3 shows that all eigenvalues of $\mathcal{A}\mathcal{K}^{-1}$ are located in a square with side length 2 if $P_C$ is chosen such that $\hat\mu_m \le 2$.

Moreover, if $C = 0$, then $C_p = 0$, and accordingly $\hat\mu_m = 0$ and $r = 0$. In this case,

$\vartheta_m = \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m},\ \max\{(2-\hat\rho_1),0\}\right\} = \min\left\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m},\ (2-\hat\rho_1)\right\} = \frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}.$

An application of this expression for $\vartheta_m$ in Theorem 5.2 verifies the following corollary.

Corollary 5.4.

Under the same conditions of Theorem 5.2 on the singular saddle point matrix $\mathcal{A}$ and its nonsingular preconditioner $\mathcal{K}$, except that we now assume $C = 0$, any eigenvalue $\lambda$ of the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ fulfills the following estimates:

  1. $\mathcal{A}\mathcal{K}^{-1}$ has $m-p$ zero eigenvalues.

  2. If $\lambda > 0$, then

    $\min\left\{\hat\rho_1,\ \frac{\hat\zeta_{m-p+1}}{\hat\rho_n(1+\hat\zeta_{m-p+1})}\right\} \le \lambda \le \max\left\{\hat\rho_n,\ \frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\right\}.$

  3. If $\mathrm{Im}(\lambda) \ne 0$, then

    $\max\left\{\tfrac{1}{2}\hat\rho_1,\ \hat\rho_1-\xi_m\right\} \le \mathrm{Re}(\lambda) \le \min\left\{\tfrac{1}{2}\left(\hat\rho_n+\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\right),\ \hat\rho_n+\xi_m,\ \frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\xi_m\right\},$

    $2\,|\mathrm{Im}(\lambda)| \le \begin{cases}\sqrt{4\xi_m^2-\left(\hat\rho_1-\frac{2\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\right)^2} & \text{if } \hat\rho_1 > \frac{4\hat\zeta_m}{1+3\hat\zeta_m},\\[1ex] \sqrt{4\xi_m^2-\left(\hat\rho_n+\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}-2\hat\rho_1\right)^2} & \text{if } \frac{2\hat\rho_1-\hat\rho_n}{2-\hat\rho_1} > \frac{\hat\zeta_m}{1+\hat\zeta_m},\\[1ex] 2\xi_m & \text{otherwise}.\end{cases}$

It can be seen from Corollary 5.4 that the smaller the maximum eigenvalue $\hat\zeta_m$ of $RR^T$ is, the more clustered the eigenvalues of the preconditioned matrix $\mathcal{A}\mathcal{K}^{-1}$ are. When $C = 0$, we can choose $P_C = \alpha I_m$ with $\alpha > 0$; then $RR^T = \frac{1}{\alpha}BP_A^{-1}B^T$, which shows that $\hat\zeta_m$ will be small for sufficiently large $\alpha$. On the other hand, we can infer from Corollary 5.3 that the eigenvalues of $\mathcal{A}\mathcal{K}^{-1}$ remain clustered even for large $\hat\zeta_m$.
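This dependence on $\alpha$ is easy to see numerically. The following MATLAB sketch (our own toy data, hypothetical sizes) computes $\hat\zeta_m = \lambda_{\max}(BP_A^{-1}B^T)/\alpha$ for several values of $\alpha$:

```matlab
% With C = 0 and P_C = alpha*I_m we have RR' = (1/alpha)*B*PA^{-1}*B',
% so zeta_m scales like 1/alpha (toy data).
n = 50; m = 20;
M = randn(n); A = M*M' + n*eye(n); B = randn(m, n);
PA = diag(diag(A));
G = B*(PA\B'); G = (G + G')/2;               % symmetrize for a clean eig
base = max(eig(G));                           % lambda_max(B*PA^{-1}*B')
for alpha = [0.1, 1, 10, 100]
    fprintf('alpha = %6.1f, zeta_m = %.3e\n', alpha, base/alpha);
end
```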

6 Numerical Experiments

In this section, we present a few numerical examples to test the performance of some nonsingular preconditioners in combination with the preconditioned restarted GMRES(50) method for solving singular saddle point problems of form (1.1). All experiments were run in MATLAB R2013b on a PC with an Intel(R) Core(TM) i5-5257U CPU @ 2.70 GHz and 8.00 GB of RAM. For each example, we report the number of iterations, the CPU time and the relative residual, denoted by "Iter", "CPU" and "Res", respectively. Let $z_k$ be the $k$-th approximate solution of the saddle point problem (1.1); then "Res" is defined by the relative residual

$\mathrm{Res} := \frac{\|b-\mathcal{A}z_k\|_2}{\|b\|_2}.$

The initial guess $z_0$ is always set to $0$ and the iteration is terminated when $\mathrm{Res} \le 10^{-6}$.

Let $L = \mathrm{ichol}(A)$ be the incomplete Cholesky factorization of the matrix $A$ with droptol = 1e-02, and set $P_A = LL^T$. Then it is easy to see that

$\mathcal{K} = \begin{pmatrix} L & 0 \\ -BL^{-T} & I_m \end{pmatrix}\begin{pmatrix} L^T & L^{-1}B^T \\ 0 & P_C+BL^{-T}L^{-1}B^T \end{pmatrix}.$

Since $L$ is a sparse lower triangular matrix, in each step we only need to solve some triangular linear systems. In our computations, we compare the performance of the restarted GMRES(50) method, denoted by GMRES, and the preconditioned restarted GMRES(50) method coupled with the nonsingular preconditioner $\mathcal{K}$ defined in (4.1) with $P_C = I_m/10$, $I_m$ and $10I_m$, denoted by $I_m/10$, $I_m$ and $10I_m$, respectively.
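As an illustration, the following self-contained MATLAB sketch (our own toy system, not one of the paper's test matrices) shows this setup: $P_A = LL^T$ from ichol with the same drop tolerance, $P_C = I_m$, and restarted GMRES(50):

```matlab
% Self-contained sketch of the solver setup on a toy singular system.
n = 100; m = 40;
A = gallery('tridiag', n, -1, 4, -1);            % sparse SPD (1,1) block
B = sprandn(m, n, 0.1); B(m, :) = 0;             % rank-deficient B
Amat = [A, B'; -B, sparse(m, m)];                % singular system, C = 0
opts.type = 'ict'; opts.droptol = 1e-2;
L = ichol(A, opts);                              % P_A = L*L'
K = [L*L', B'; -B, speye(m)];                    % preconditioner (4.1)
b = Amat*ones(n + m, 1);                         % consistent right-hand side
[z, flag, relres, iter] = gmres(Amat, b, 50, 1e-6, 200, K);
```

Passing $\mathcal{K}$ as a single matrix makes gmres solve $\mathcal{K}w = r$ by sparse elimination in every iteration; in practice one would instead exploit the factorized form of $\mathcal{K}$ above, which needs only triangular solves and one solve with $P_C+BL^{-T}L^{-1}B^T$.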

Example 6.1.

This example arises from the discretization of the Stokes problem (see [37, Example 4.1]), where the block matrices in the saddle point system (1.1) take the form

$A = \begin{pmatrix} I\otimes T+T\otimes I & 0 \\ 0 & I\otimes T+T\otimes I \end{pmatrix} \in \mathbb{R}^{2l^2\times 2l^2},$
$B = (E,\ b_1,\ b_2)^T \in \mathbb{R}^{(l^2+2)\times 2l^2}, \qquad C = 0 \in \mathbb{R}^{(l^2+2)\times(l^2+2)},$

where the matrices $T$ and $E$ and the vectors $b_1$ and $b_2$ are given by

$T = \frac{1}{h^2}\,\mathrm{tridiag}(-1,2,-1) \in \mathbb{R}^{l\times l}, \qquad E = \begin{pmatrix} I\otimes F \\ F\otimes I \end{pmatrix} \in \mathbb{R}^{2l^2\times l^2},$
$b_1 = E\begin{pmatrix} e \\ 0 \end{pmatrix}, \qquad b_2 = E\begin{pmatrix} 0 \\ e \end{pmatrix}, \qquad e = (1,1,\dots,1)^T \in \mathbb{R}^{l^2/2},$

with $F = \frac{1}{h}\,\mathrm{tridiag}(-1,1,0) \in \mathbb{R}^{l\times l}$ and $h = 1/(l+1)$, so that $n = 2l^2$, $m = l^2+2$ and $n+m = 3l^2+2$. As the matrix $B$ is not of full row rank, the saddle point matrix $\mathcal{A}$ in (1.2) is singular.
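For reference, these blocks can be assembled in a few lines of MATLAB with Kronecker products (a sketch reflecting our reading of the formulas above; $l$ must be even so that $e$ has length $l^2/2$):

```matlab
% Assembly sketch for Example 6.1 (our reading of [37, Example 4.1]).
l = 20; h = 1/(l + 1);
T = (1/h^2)*gallery('tridiag', l, -1, 2, -1);
F = (1/h)*spdiags([-ones(l,1), ones(l,1)], [-1, 0], l, l);  % tridiag(-1,1,0)
I = speye(l);
A = blkdiag(kron(I, T) + kron(T, I), kron(I, T) + kron(T, I));
E = [kron(I, F); kron(F, I)];
e = ones(l^2/2, 1);  z = zeros(l^2/2, 1);
b1 = E*[e; z];  b2 = E*[z; e];
B = [E, b1, b2]';                     % (l^2 + 2) x 2*l^2, rank deficient
C = sparse(l^2 + 2, l^2 + 2);
```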

In this example, we choose the right-hand-side vector of the saddle point problem (1.1) so that its exact solution is the vector $(1,1,\dots,1)^T \in \mathbb{R}^{n+m}$. The numerical results for Example 6.1 are listed in Table 1.

Table 1

Numerical results for Example 6.1.

| l (n+m)     |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|-------------|------|------------|------------|------------|------------|
| 20 (1202)   | Iter | 20         | 21         | 14         | 350        |
|             | CPU  | 0.1781     | 0.1722     | 0.1370     | 0.2252     |
|             | Res  | 8.0573e-07 | 4.1426e-07 | 5.8910e-07 | 9.2265e-07 |
| 40 (4802)   | Iter | 23         | 15         | 18         | 1207       |
|             | CPU  | 1.0090     | 0.6463     | 0.7744     | 1.5174     |
|             | Res  | 5.6112e-07 | 5.1562e-07 | 3.7556e-07 | 9.9657e-07 |
| 60 (10802)  | Iter | 15         | 18         | 22         | 1593       |
|             | CPU  | 1.5797     | 1.8747     | 2.3842     | 3.9278     |
|             | Res  | 9.7470e-07 | 5.4979e-07 | 5.2269e-07 | 9.9772e-07 |
| 80 (19202)  | Iter | 17         | 21         | 26         | 1857       |
|             | CPU  | 4.1997     | 4.6923     | 6.0107     | 7.8732     |
|             | Res  | 6.3100e-07 | 5.3369e-07 | 5.1136e-07 | 9.8980e-07 |
| 100 (30002) | Iter | 19         | 24         | 29         | 2993       |
|             | CPU  | 8.6294     | 11.2886    | 12.5240    | 19.2063    |
|             | Res  | 5.8478e-07 | 5.1840e-07 | 5.6628e-07 | 9.8048e-07 |

Example 6.2 ([20]).

Consider the following Stokes system:

(6.1) $-\Delta u + \nabla p = f$ in $\Omega$,
(6.2) $\nabla\cdot u = 0$ in $\Omega$,

where $\Omega$ is the domain $(-1,1)\times(0,1)$ in $\mathbb{R}^2$, the vector field $u$ represents the velocity in $\Omega$, $p$ represents the pressure, and the scalar $\nu > 0$ is the kinematic viscosity. A Dirichlet no-flow condition is applied on the side and bottom boundaries. A non-zero horizontal velocity is imposed on the lid, namely $u_x = 1$ on $[-1,1]\times\{1\}$.

This example models the flow in a square cavity with the lid moving from left to right. By the setting of the non-zero horizontal velocity, the test problem is a leaky two-dimensional lid-driven cavity problem in the square domain. We discretize the Stokes system (6.1)–(6.2) by the $Q_2$–$P_1$ and $Q_2$–$Q_1$ finite elements, respectively, on uniform grids, and apply the IFISS software package developed by Elman, Ramage and Silvester [20] to generate the discretized linear systems for meshes of size $16\times 16$, $32\times 32$, $64\times 64$ and $128\times 128$. The resulting numerical results are listed in Tables 2 and 3.

Table 2

Numerical results for Example 6.2 discretized by the Q2-P1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (770)      | Iter | 24         | 37         | 40         | 179        |
|                  | CPU  | 0.1420     | 0.2212     | 0.2571     | 0.0723     |
|                  | Res  | 7.0355e-07 | 4.9405e-07 | 9.6511e-07 | 9.9986e-07 |
| 32×32 (2946)     | Iter | 43         | 58         | 62         | 585        |
|                  | CPU  | 0.3137     | 0.7925     | 0.8570     | 0.5876     |
|                  | Res  | 5.9034e-07 | 9.8726e-07 | 9.6535e-07 | 9.9612e-07 |
| 64×64 (11522)    | Iter | 78         | 97         | 89         | 1037       |
|                  | CPU  | 2.3931     | 2.8961     | 2.8781     | 3.2114     |
|                  | Res  | 9.3026e-07 | 7.0296e-07 | 9.8980e-07 | 9.9468e-07 |
| 128×128 (45570)  | Iter | 142        | 140        | 118        | 2034       |
|                  | CPU  | 20.5783    | 18.1707    | 11.2010    | 31.9458    |
|                  | Res  | 9.2068e-07 | 9.4374e-07 | 8.6544e-07 | 9.9401e-07 |
Table 3

Numerical results for Example 6.2 discretized by the Q2-Q1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (659)      | Iter | 48         | 72         | 96         | 693        |
|                  | CPU  | 0.2635     | 0.3822     | 0.5350     | 0.3049     |
|                  | Res  | 8.8569e-07 | 8.3216e-07 | 7.7988e-07 | 9.8736e-07 |
| 32×32 (2467)     | Iter | 100        | 139        | 136        | 1795       |
|                  | CPU  | 1.0532     | 1.3801     | 1.3109     | 1.1852     |
|                  | Res  | 8.1105e-07 | 9.8396e-07 | 9.8363e-07 | 9.6033e-07 |
| 64×64 (9539)     | Iter | 198        | 314        | 249        | 5808       |
|                  | CPU  | 4.1702     | 6.1091     | 4.8490     | 15.7482    |
|                  | Res  | 8.8052e-07 | 9.9544e-07 | 9.4223e-07 | 9.9879e-07 |
| 128×128 (37507)  | Iter | 524        | 624        | 418        | 12158      |
|                  | CPU  | 24.7693    | 27.2117    | 20.4702    | 141.0548   |
|                  | Res  | 9.4923e-07 | 9.9520e-07 | 9.8991e-07 | 9.9941e-07 |

Example 6.3 ([20]).

This example considers the Stokes system (6.1)–(6.2) as in Example 6.2, but sets the non-zero horizontal velocity on the top of the domain as $u_x = 1$ on $(-1,1)\times\{1\}$. The test problem describes a watertight cavity.

In this example, we again discretize the Stokes system (6.1)–(6.2) by the $Q_2$–$P_1$ and $Q_2$–$Q_1$ finite elements, respectively, on uniform grids, and apply the IFISS software package [20] to generate the discretized linear systems for meshes of size $16\times 16$, $32\times 32$, $64\times 64$ and $128\times 128$. The resulting numerical results are listed in Tables 4 and 5.

Table 4

Numerical results for Example 6.3 discretized by the Q2-P1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (770)      | Iter | 24         | 37         | 41         | 193        |
|                  | CPU  | 0.0489     | 0.0517     | 0.0748     | 0.0771     |
|                  | Res  | 7.8316e-07 | 6.4971e-07 | 8.4952e-07 | 9.6807e-07 |
| 32×32 (2946)     | Iter | 43         | 59         | 63         | 570        |
|                  | CPU  | 0.4314     | 0.6433     | 0.9103     | 0.7155     |
|                  | Res  | 6.4033e-07 | 7.7504e-07 | 9.1291e-07 | 9.8859e-07 |
| 64×64 (11522)    | Iter | 78         | 98         | 90         | 1107       |
|                  | CPU  | 2.0102     | 2.8103     | 2.7096     | 3.8480     |
|                  | Res  | 9.2947e-07 | 8.6487e-07 | 9.1595e-07 | 9.9945e-07 |
| 128×128 (45570)  | Iter | 151        | 164        | 133        | 2158       |
|                  | CPU  | 12.1252    | 14.7963    | 11.3396    | 31.9458    |
|                  | Res  | 9.8236e-07 | 9.6033e-07 | 9.2670e-07 | 9.9916e-07 |
Table 5

Numerical results for Example 6.3 discretized by the Q2-Q1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (659)      | Iter | 49         | 75         | 96         | 743        |
|                  | CPU  | 0.2875     | 0.4124     | 0.5275     | 0.2514     |
|                  | Res  | 8.6921e-07 | 8.8594e-07 | 8.8411e-07 | 9.9926e-07 |
| 32×32 (2467)     | Iter | 102        | 171        | 142        | 1874       |
|                  | CPU  | 1.1216     | 1.3716     | 1.2804     | 1.9340     |
|                  | Res  | 9.9141e-07 | 9.3515e-07 | 9.5301e-07 | 9.9352e-07 |
| 64×64 (9539)     | Iter | 236        | 300        | 291        | 6430       |
|                  | CPU  | 4.3323     | 6.1985     | 4.9305     | 20.1298    |
|                  | Res  | 9.2762e-07 | 9.9989e-07 | 9.8999e-07 | 9.9953e-07 |
| 128×128 (37507)  | Iter | 569        | 765        | 534        | 14523      |
|                  | CPU  | 24.3530    | 27.4990    | 24.6415    | 169.2451   |
|                  | Res  | 9.8758e-07 | 9.8506e-07 | 9.9415e-07 | 9.9986e-07 |

Example 6.4 ([20]).

This example again considers the Stokes system (6.1)–(6.2) as in Example 6.2, but with the non-zero horizontal velocity on the top of the domain given by $u_x = 1-x^4$ on $[-1,1]\times\{1\}$. This test problem is a regularized cavity.

In this example, we again discretize the Stokes system (6.1)–(6.2) by the $Q_2$–$P_1$ and $Q_2$–$Q_1$ finite elements, respectively, on uniform grids, and apply the IFISS software package [20] to generate the discretized linear systems for meshes of size $16\times 16$, $32\times 32$, $64\times 64$ and $128\times 128$. The resulting numerical results are listed in Tables 6 and 7.

Table 6

Numerical results for Example 6.4 discretized by the Q2-P1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (770)      | Iter | 24         | 25         | 25         | 187        |
|                  | CPU  | 0.1614     | 0.2213     | 0.2457     | 0.1714     |
|                  | Res  | 7.3059e-07 | 4.2618e-07 | 6.8107e-07 | 9.7381e-07 |
| 32×32 (2946)     | Iter | 42         | 55         | 62         | 552        |
|                  | CPU  | 0.3696     | 0.8029     | 0.9376     | 1.1737     |
|                  | Res  | 9.7092e-07 | 8.7564e-07 | 8.8934e-07 | 9.9121e-07 |
| 64×64 (11522)    | Iter | 82         | 89         | 82         | 1013       |
|                  | CPU  | 2.5530     | 2.6513     | 2.6511     | 5.0921     |
|                  | Res  | 6.9653e-07 | 9.1581e-07 | 7.4481e-07 | 9.9341e-07 |
| 128×128 (45570)  | Iter | 132        | 127        | 100        | 1789       |
|                  | CPU  | 12.1416    | 12.4067    | 10.8780    | 24.1341    |
|                  | Res  | 9.5510e-07 | 9.7394e-07 | 7.9339e-07 | 9.9902e-07 |
Table 7

Numerical results for Example 6.4 discretized by the Q2-Q1 finite elements.

| Grids (DOF)      |      | I_m/10     | I_m        | 10 I_m     | GMRES      |
|------------------|------|------------|------------|------------|------------|
| 16×16 (659)      | Iter | 48         | 72         | 96         | 544        |
|                  | CPU  | 0.3087     | 0.3934     | 0.5806     | 0.4424     |
|                  | Res  | 9.0198e-07 | 8.0978e-07 | 7.6102e-07 | 9.6448e-07 |
| 32×32 (2467)     | Iter | 99         | 129        | 136        | 1979       |
|                  | CPU  | 1.1070     | 1.5786     | 1.8956     | 2.2722     |
|                  | Res  | 9.2079e-07 | 9.9279e-07 | 9.2309e-07 | 9.8958e-07 |
| 64×64 (9539)     | Iter | 192        | 261        | 190        | 4969       |
|                  | CPU  | 6.7747     | 8.1721     | 7.5189     | 13.4392    |
|                  | Res  | 9.1734e-07 | 9.9986e-07 | 9.9663e-07 | 9.9895e-07 |
| 128×128 (37507)  | Iter | 461        | 440        | 168        | 9095       |
|                  | CPU  | 47.9848    | 39.6510    | 26.7795    | 104.4916   |
|                  | Res  | 9.9336e-07 | 9.6920e-07 | 9.8466e-07 | 9.9989e-07 |

We may observe from the numerical results shown in Tables 1–7 that our new preconditioner $\mathcal{K}$ in (4.1) is effective and stable. In our computations, we set the symmetric positive definite matrix $P_C$ to a scalar matrix, i.e., $P_C = \alpha I_m$ with $\alpha = 1/10$, $1$, $10$. Comparing the numerical results of the three cases, we find that the performance of the preconditioner $\mathcal{K}$ is not sensitive to the change of the parameter $\alpha$, or even of the matrix $P_C$. Furthermore, as the dimension increases, the advantage in CPU time of the preconditioned GMRES(50) coupled with $\mathcal{K}$ becomes increasingly prominent.

Award Identifier / Grant number: 11071041

Award Identifier / Grant number: N_CUHK437/16

Funding statement: The work of the first author was supported by National Postdoctoral Program for Innovative Talents (Grant No. BX201600182), China Postdoctoral Science Foundation (Grant No. 2016M600141) and partially by a Direct Grant for Research from CUHK. The work of the second author was supported by National Natural Science Foundation of China (Grant No. 11071041), Fujian Natural Science Foundation (Grant No. 2016J01005). The work of the third author was substantially supported by Hong Kong RGC grant (project 14306814) and National Natural Science Foundation (NSFC)/Research Grants Council (RGC) Joint Research Scheme 2016/17 (project N_CUHK437/16).

References

[1] O. Axelsson, Unified analysis of preconditioning methods for saddle point matrices, Numer. Linear Algebra Appl. 22 (2015), 233–253. DOI: 10.1002/nla.1947.

[2] O. Axelsson and M. G. Neytcheva, Eigenvalue estimates for preconditioned saddle point matrices, Numer. Linear Algebra Appl. 13 (2006), 339–360. DOI: 10.1007/978-3-540-24588-9_1.

[3] Z.-Z. Bai, M. K. Ng and Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices, SIAM J. Matrix Anal. Appl. 31 (2009), 410–433. DOI: 10.1137/080720243.

[4] R. E. Bank, B. D. Welfert and H. Yserentant, A class of iterative methods for solving saddle point problems, Numer. Math. 56 (1990), 645–666. DOI: 10.1007/BF01405194.

[5] M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004), 20–41. DOI: 10.1137/S0895479802417106.

[6] M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005), 1–137. DOI: 10.1017/S0962492904000212.

[7] M. Benzi and V. Simoncini, On the eigenvalues of a class of saddle point matrices, Numer. Math. 103 (2006), 173–196. DOI: 10.1007/s00211-006-0679-9.

[8] L. Bergamaschi, On eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices, Numer. Linear Algebra Appl. 19 (2012), 754–772. DOI: 10.1002/nla.806.

[9] S. Beuchler, T. Eibner and U. Langer, Primal and dual interface concentrated iterative substructuring methods, SIAM J. Numer. Anal. 46 (2008), 2818–2842. DOI: 10.1137/070691723.

[10] A. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996. DOI: 10.1137/1.9781611971484.

[11] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991. DOI: 10.1007/978-1-4612-3172-1.

[12] P. N. Brown and H. F. Walker, GMRES on (nearly) singular systems, SIAM J. Matrix Anal. Appl. 18 (1997), 37–51. DOI: 10.1137/S0895479894262339.

[13] Z.-H. Cao, Positive stable block triangular preconditioners for symmetric saddle point problems, Appl. Numer. Math. 57 (2007), 899–910. DOI: 10.1016/j.apnum.2006.08.001.

[14] C. Carstensen, M. Kuhn and U. Langer, Fast parallel solvers for symmetric boundary element domain decomposition equations, Numer. Math. 79 (1998), 321–347. DOI: 10.1007/s002110050342.

[15] C.-R. Chen and C.-F. Ma, A generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett. 43 (2015), 49–55. DOI: 10.1016/j.aml.2014.12.001.

[16] J. Chen, Y. Xu and J. Zou, An adaptive edge element method and its convergence for a saddle-point problem from magnetostatics, Numer. Methods Partial Differential Equations 28 (2012), 1643–1666. DOI: 10.1002/num.20697.

[17] X. Chen, On preconditioned Uzawa methods and SOR methods for saddle-point problems, J. Comput. Appl. Math. 100 (1998), 207–224. DOI: 10.1016/S0377-0427(98)00197-6.

[18] P. Ciarlet, Jr., J. Huang and J. Zou, Some observations on generalized saddle-point problems, SIAM J. Matrix Anal. Appl. 25 (2003), 224–236. DOI: 10.1137/S0895479802410827.

[19] C. R. Dohrmann and R. B. Lehoucq, A primal-based penalty preconditioner for elliptic saddle point systems, SIAM J. Numer. Anal. 44 (2006), 270–282. DOI: 10.1137/040619016.

[20] H. C. Elman, A. Ramage and D. J. Silvester, Algorithm 866: IFISS, a MATLAB toolbox for modelling incompressible flow, ACM Trans. Math. Software 33 (2007), 1–18. DOI: 10.1145/1236463.1236469.

[21] H. C. Elman, D. J. Silvester and A. J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier–Stokes equations, Numer. Math. 90 (2002), 665–688. DOI: 10.1007/s002110100300.

[22] V. Girault and P. A. Raviart, Finite Element Approximation of the Navier–Stokes Equations, Springer, New York, 1979. DOI: 10.1007/BFb0063447.

[23] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, 1984. DOI: 10.1007/978-3-662-12613-4.

[24] N. I. M. Gould and V. Simoncini, Spectral analysis of saddle point matrices with indefinite leading blocks, SIAM J. Matrix Anal. Appl. 31 (2009), 1152–1171. DOI: 10.1137/080733413.

[25] Q. Hu and J. Zou, An iterative method with variable relaxation parameters for saddle-point problems, SIAM J. Matrix Anal. Appl. 23 (2001), 317–338. DOI: 10.1137/S0895479899364064.

[26] Q. Hu and J. Zou, Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems, Numer. Math. 93 (2002), 333–359. DOI: 10.1007/s002110100386.

[27] Q. Hu and J. Zou, Nonlinear inexact Uzawa algorithms for linear and nonlinear saddle-point problems, SIAM J. Optim. 16 (2006), 798–825. DOI: 10.1137/S1052623403428683.

[28] C. Keller, N. I. M. Gould and A. J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000), 1300–1317. DOI: 10.1137/S0895479899351805.

[29] I. Perugia and V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numer. Linear Algebra Appl. 7 (2000), 585–616. DOI: 10.1002/1099-1506(200010/12)7:7/8<585::AID-NLA214>3.0.CO;2-F.

[30] T. Rusten and R. Winther, A preconditioned iterative method for saddle point problems, SIAM J. Matrix Anal. Appl. 13 (1992), 887–904. DOI: 10.1137/0613054.

[31] S.-Q. Shen, T.-Z. Huang and J. Yu, Eigenvalue estimates for preconditioned nonsymmetric saddle point matrices, SIAM J. Matrix Anal. Appl. 31 (2010), 2453–2476. DOI: 10.1137/100783509.

[32] S.-Q. Shen, L. Jian, W.-D. Bao and T.-Z. Huang, On the eigenvalue distribution of preconditioned nonsymmetric saddle point matrices, Numer. Linear Algebra Appl. 21 (2014), 557–568. DOI: 10.1002/nla.1911.

[33] D. Silvester, H. Elman, D. Kay and A. Wathen, Efficient preconditioning of the linearized Navier–Stokes equations for incompressible flow, J. Comput. Appl. Math. 128 (2001), 261–279. DOI: 10.1016/B978-0-444-50616-0.50011-7.

[34] D. J. Silvester and A. J. Wathen, Fast iterative solution of stabilised Stokes systems, Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994), 1352–1367. DOI: 10.1137/0731070.

[35] J.-L. Zhang, C.-Q. Gu and K. Zhang, A relaxed positive-definite and skew-Hermitian splitting preconditioner for saddle point problems, Appl. Math. Comput. 249 (2014), 468–479. DOI: 10.1016/j.amc.2014.10.059.

[36] N.-M. Zhang and P. Shen, Constraint preconditioners for solving singular saddle point problems, J. Comput. Appl. Math. 238 (2013), 116–125. DOI: 10.1016/j.cam.2012.08.025.

[37] B. Zheng, Z.-Z. Bai and X. Yang, On semi-convergence of parameterized Uzawa methods for singular saddle point problems, Linear Algebra Appl. 431 (2009), 808–817. DOI: 10.1016/j.laa.2009.03.033.

Received: 2017-3-30
Revised: 2017-4-1
Accepted: 2017-4-4
Published Online: 2017-5-31
Published in Print: 2018-4-1

© 2018 Walter de Gruyter GmbH, Berlin/Boston
