Publicly available. Published by De Gruyter, January 4, 2024

Numerical Approximation of Gaussian Random Fields on Closed Surfaces

  • Andrea Bonito, Diane Guignard and Wenyu Lei

Abstract

We consider the numerical approximation of Gaussian random fields on closed surfaces defined as the solution to a fractional stochastic partial differential equation (SPDE) with additive white noise. The SPDE involves two parameters controlling the smoothness and the correlation length of the Gaussian random field. The proposed numerical method relies on the Balakrishnan integral representation of the solution and does not require the approximation of eigenpairs. Rather, it consists of a sinc quadrature coupled with a standard surface finite element method. We provide a complete error analysis of the method and illustrate its performance in several numerical experiments.

1 Introduction

Random fields appear in many applications. A typical example is when uncertainty is incorporated in mathematical models. The uncertainty in the parameters reflects an intrinsic variability of the system or our inability to adequately characterize all the data, for instance due to measurement errors. In a probability setting, the uncertainty is modeled by random variables or more generally random fields. A concrete example in the context of partial differential equations (PDEs) is Darcy’s flow, where the permeability is modeled as a log-normal random field [23, 56]. Many methods have been developed to solve stochastic partial differential equations (SPDEs) or to compute statistics of their solutions. The most commonly used are Monte Carlo-type methods [36, 25], the stochastic Galerkin method [35, 7] and the stochastic collocation method [62, 5]. Common to all these methods is the use of samples from random fields.

The standard approach to evaluate a random field is to use a truncated series expansion, e.g., a Karhunen–Loève (KL) [48, 49] or Fourier expansion, so that the field is described by only a finite number of random variables (this process is usually referred to as the finite-dimensional noise assumption in the random PDEs literature).

The truncated KL expansion is widely used in practice as it is the one minimizing the mean square error. In specific cases, the analytic expression of the KL expansion is known, e.g., for random fields with separable exponential covariance function on hyperrectangles (the eigenfunctions being (products of) sine and cosine functions) [45, 50] or for isotropic Gaussian random fields (GRFs) on the sphere, where the eigenfunctions are the spherical harmonics [52, 44] and are known analytically. However, in general, the computation of KL expansions requires approximating eigenfunctions of an elliptic operator. We refer to [58] for an efficient algorithm (based on discontinuous finite elements) that can be used to approximate the truncated KL expansion of a random field with prescribed mean field and covariance function defined on a Euclidean domain. In the case of GRFs on compact Riemannian manifolds, a numerical method combining a Galerkin approximation of the Laplace–Beltrami operator with a truncated Chebyshev series approximation of the square root of a covariance matrix is introduced in [43].

As an alternative to KL expansions, multilevel representations have been considered for efficient simulation of random fields. We refer for instance to [8] for (modified) spherical needlet-based representations of GRFs on the sphere or [37] for wavelet-based representations of GRFs indexed by Euclidean domains, manifolds, or graphs.

In this work, we are interested in GRFs which belong to the so-called (Whittle–)Matérn class, see (2.9). They are characterized as the solution to a fractional PDE with additive white noise of the form

(1.1) (κ² I − Δ_γ)^s ũ = w̃,

where κ > 0, s > (n−1)/4, and γ is a closed hypersurface in ℝⁿ (n = 2, 3).

This link between GRFs and fractional SPDEs was already observed in [60, 61] for stationary fields on ℝⁿ and later extended in [47] to more general fields, including GRFs on bounded domains. Numerical solvers for such fractional SPDEs can then be used to approximate random fields, see for instance [12, 11, 26] for GRFs defined in Euclidean domains, [40, 24] for GRFs on general manifolds, and [14] for GRFs on graphs. A list of recent applications of the SPDE approach to the approximation of random fields can be found in [46].

We propose a numerical method for approximating the solution to (1.1) when (n−1)/4 < s < 1; we refer to Remark 2.1 for extensions to s ≥ 1. The method relies on the integral representation

(1.2) (κ² I − Δ_γ)^{−s} w̃ = (sin(πs)/π) ∫₀^∞ μ^{−s} ((μ + κ²) I − Δ_γ)^{−1} w̃ dμ.

The improper integral is approximated by a sinc quadrature and a continuous linear finite element method on an approximate surface Γ is put forward to approximate the integrand at each sinc quadrature point.
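To make the two-step strategy concrete, the following sketch applies a sinc quadrature to the scalar analogue of (1.2), λ^{−s} = (sin(πs)/π) ∫₀^∞ μ^{−s}(μ + λ)^{−1} dμ, after the substitution μ = e^y. The spacing k and truncation parameter N below are illustrative choices, not the values analyzed in the paper.

```python
import math

def sinc_frac_power(lam, s, k=0.25, N=300):
    """Sinc quadrature for lam**(-s) applied to the Balakrishnan
    integral after the change of variables mu = exp(y):
    lam**(-s) = sin(pi*s)/pi * int_R exp((1-s)*y) / (exp(y) + lam) dy,
    approximated by k * sum over the nodes y_l = l*k, |l| <= N."""
    total = sum(
        math.exp((1.0 - s) * l * k) / (math.exp(l * k) + lam)
        for l in range(-N, N + 1)
    )
    return math.sin(math.pi * s) / math.pi * k * total

# The quadrature error decays exponentially with respect to 1/k:
print(abs(sinc_frac_power(2.0, 0.7) - 2.0 ** -0.7))  # well below 1e-6
```

In the full method, the scalar (μ + λ)^{−1} is replaced by the resolvent (μI + L)^{−1} acting on the (projected) white noise, so each quadrature node costs one shifted elliptic solve on the surface.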

Such a two-step numerical strategy (sinc quadrature coupled with a finite element method) was originally proposed in [21] for Euclidean domains and κ = 0. We refer to [22, 19, 2] for several extensions and notably to [18] when the domain is a closed hypersurface.

All the above-mentioned works on fractional PDEs concern deterministic data. Although the approximation of the solution to (1.1) considered in this work is similar to the one introduced in [18, Section 3], realizations of the stochastic right-hand side w̃ are not almost surely in L²(γ). Hence, following [38], we replace w̃ by a projection onto a conforming finite element space defined on γ. The resulting approximation is a Gaussian random vector with zero mean and a covariance matrix given by a weighted finite element mass matrix. Alternatively, in [13], which considers the Euclidean setting, w̃ is approximated by an expansion with respect to the discrete eigenfunctions of the conforming finite element approximation of the operator (κ² I − Δ_γ). Using a change of basis, the resulting approximation to w̃ can be expressed in terms of the finite element basis functions and can then be used in practice, since both are equal in the mean square sense. This property is critical in our analysis of the error in the mean square norm.

We now comment on the main novelties of this work.

  1. Compared to [13, 38], we improve on the convergence of the sinc quadrature by deriving an exponential convergence rate with respect to the quadrature spacing which, in particular, does not depend on the finite element mesh size; see Theorems 5.2 and 6.1 as well as the numerical validation in Remark 5.3. This implies that the mesh size and the quadrature spacing can be chosen independently.

  2. The abstract analysis proposed in [38] considers a finite element method defined on the exact surface and provides strong error estimates. Instead, we design and analyze a parametric finite element method on approximate surfaces by taking into account the geometric error. Note that setting the finite element method on approximate (polygonal) surfaces facilitates the construction of the discrete system. In [40], the surface is restricted to a sphere, but strong and mean square norm error estimates are derived for parametric finite element methods on approximations of the sphere. In that case, the approximation of the white noise strongly relies on the knowledge of the eigenpairs for the sphere. Our analysis accounts for the intricate influence of the geometric error in the approximation of the eigenpairs for general C³ surfaces.

  3. Borrowing ideas from [13], we estimate the discrepancy between the mean square norm of the exact solution and that of the proposed approximate solution. Our analysis relies on the asymptotic behavior of the eigenvalues of the Laplace–Beltrami operator (Weyl’s law) as well as the ability of the finite element method to approximate the eigenvalues of (κ² I − Δ_γ). However, the analysis in [13] relies on an assumption on the approximation of individual eigenvalues (Assumption 2.6 in [13])

    (1.3) 0 < Λ̃_j^γ − λ̃_j ≤ C λ̃_j^r h^q, q > 1, r > 0,

    where Λ̃_j^γ is the approximation of the jth eigenvalue λ̃_j (see (3.3) and (4.9)). To derive optimal convergence rates in the mean square norm, it appears that one needs q = r = 2. With this choice of parameters, the constant C in (1.3) may not be uniform in j; see, e.g., [32, Theorem 47.10]. In this work, we do not rely on such an assumption but rather derive a weaker but cluster-robust eigenvalue error estimate (Lemma 4.1) instrumental for the optimal convergence rates in the mean square norm provided by Theorem 6.1. It is worth pointing out that the cluster-robust estimate of Lemma 4.1 is used in [43, Appendix C] to guarantee (1.3) with a uniform constant C when q = r = 2.

Organization of the Paper.

We start by recalling in Section 2 known results about random fields, including properties of their KL expansions. In Section 3, we define our model problem (2.13) in Sobolev spaces and introduce the Dunford–Taylor integral representation for the solution. The numerical method is introduced in Section 4. The strong and mean square norm error estimates are discussed in Sections 5 and 6, respectively. Theorem 5.2 (strong convergence) and Theorem 6.1 (convergence in the mean square norm) are the main results of this work. Several numerical experiments illustrating the efficiency of the method and the influence of s and κ on resulting GRFs are provided in Section 7.

2 Random Fields

We start this section by recalling the notion of random fields. A natural approach to represent a random field is to consider its Karhunen–Loève expansion formally described in Section 2.1, see [45] and references therein. However, this representation is computationally challenging as it requires (an approximation of) the eigenpairs of the associated covariance operator, see (2.2) below. Alternatively, Matérn Gaussian random fields defined in Section 2.1.2 and considered in this work are solutions to an SPDE with Gaussian white noise. This representation is critical for the efficient numerical method analyzed in this work.

2.1 Random Fields in Euclidean Domains

Let D ⊂ ℝⁿ, n = 1, 2, 3, be a bounded domain and let (Ω, ℱ, ℙ) be a complete probability space, where Ω is the set of outcomes, ℱ ⊂ 2^Ω is a σ-algebra of events, and ℙ : ℱ → [0, 1] is a probability measure. Let u : D × Ω → ℝ be a real-valued random field, namely u(𝐱, ·) is a real-valued random variable for each 𝐱 ∈ D. We suppose that u(·, ω) ∈ L²(D) ℙ-a.e. in Ω and u(𝐱, ·) ∈ L²(Ω) a.e. in D. With a slight abuse of notation,[1] we shall often consider u as a square integrable mapping u : Ω → L²(D). This means that u ∈ L²(Ω; L²(D)), where for any Hilbert space H on D equipped with the norm ‖·‖_H, the Bochner space L²(Ω; H) is defined by

(2.1) L²(Ω; H) := { v : Ω → H : v is strongly measurable and ‖v‖_{L²(Ω;H)} < ∞ }

with

‖v‖²_{L²(Ω;H)} := 𝔼[‖v‖²_H] := ∫_Ω ‖v(ω)‖²_H dℙ(ω) < ∞.

We introduce ū : D → ℝ and cov_u : D × D → ℝ, the mean and covariance functions of u, defined respectively by

ū(𝐱) := 𝔼[u(𝐱, ·)] for a.e. 𝐱 ∈ D,

and

(2.2) cov_u(𝐱, 𝐱′) := 𝔼[(u(𝐱, ·) − ū(𝐱))(u(𝐱′, ·) − ū(𝐱′))] for a.e. 𝐱, 𝐱′ ∈ D.

We say that u is weakly stationary if the mean field ū is constant and the covariance function depends only on the difference 𝐱 − 𝐱′, namely cov_u(𝐱, 𝐱′) = c(𝐱 − 𝐱′) for some function c : D → ℝ. If in addition cov_u(𝐱, 𝐱′) depends only on the Euclidean distance |𝐱 − 𝐱′|, then u is said to be isotropic; see [1].

To motivate the representation of a random field by its Karhunen–Loève expansion, see [45] and references therein, we suppose that u is such that cov_u in (2.2) is continuous on D̄ × D̄. This is for instance the case when u is a Gaussian random field with Matérn covariance, see Section 2.1.2 below, and D is convex; see for instance [56, Lemma 11]. Then u can be represented by its Karhunen–Loève expansion

(2.3) u(𝐱, ω) = ū(𝐱) + Σ_{i=1}^∞ √λ_i ξ_i(ω) φ_i(𝐱),

where the sum converges in L²(Ω; L²(D)). In (2.3), {(λ_i, φ_i)}_{i=1}^∞ with

λ₁ ≥ λ₂ ≥ ⋯ ≥ 0 and ∫_D φ_i(𝐱) φ_j(𝐱) d𝐱 = δ_ij

are the eigenpairs of the (compact, self-adjoint, positive semi-definite) Hilbert–Schmidt integral operator C_u : L²(D) → L²(D) defined for v ∈ L²(D) by

(C_u v)(𝐱) := ∫_D cov_u(𝐱, 𝐱′) v(𝐱′) d𝐱′, 𝐱 ∈ D.

That is, the eigenpair (λ_i, φ_i), i ∈ ℕ, satisfies the following eigenvalue problem (a Fredholm integral equation of the second kind):

(2.4) ∫_D cov_u(𝐱, 𝐱′) φ_i(𝐱′) d𝐱′ = λ_i φ_i(𝐱), 𝐱 ∈ D.

Moreover, the random variables {ξ_i}_{i=1}^∞, which are given by

(2.5) ξ_i(ω) := (1/√λ_i) ∫_D (u(𝐱, ω) − ū(𝐱)) φ_i(𝐱) d𝐱,

have zero mean, unit variance, and are pairwise uncorrelated:

𝔼[ξ_i] = 0 and 𝔼[ξ_i ξ_j] = δ_ij.

In other words, they are orthonormal in L²(Ω).

2.1.1 Properties of the KL Expansion

The KL series in (2.3) converges not only in L²(Ω; L²(D)) but also in L²(Ω) pointwise in 𝐱 ∈ D. Indeed, since cov_u is symmetric, continuous, and positive semi-definite, thanks to Mercer’s theorem [55] we have

(2.6) cov_u(𝐱, 𝐱′) = Σ_{i=1}^∞ λ_i φ_i(𝐱) φ_i(𝐱′),

where the convergence is absolute and uniform on D × D, and thus

Var[u(𝐱, ·)] = cov_u(𝐱, 𝐱) = Σ_{i=1}^∞ λ_i φ_i(𝐱)².

Now for any N ∈ ℕ, let

(2.7) u_N(𝐱, ω) := ū(𝐱) + Σ_{i=1}^N √λ_i ξ_i(ω) φ_i(𝐱)

denote the truncated KL expansion with N terms, which satisfies the following relation for any 𝐱 ∈ D:

Var[u_N(𝐱, ·)] = 𝔼[(u_N(𝐱, ·) − ū(𝐱))²] = Σ_{i,j=1}^N √(λ_i λ_j) φ_i(𝐱) φ_j(𝐱) 𝔼[ξ_i ξ_j] = Σ_{i=1}^N λ_i φ_i(𝐱)²,

where we used 𝔼[ξ_i ξ_j] = δ_ij.

Then for any 𝐱 ∈ D, thanks to Fubini’s theorem, relation (2.6), and the fact that {φ_i}_{i=1}^∞ forms an orthonormal basis of L²(D), we infer that

𝔼[(u(𝐱, ·) − u_N(𝐱, ·))²] = 𝔼[(u(𝐱, ·) − ū(𝐱) + ū(𝐱) − u_N(𝐱, ·))²]
= Var(u(𝐱, ·)) − Var(u_N(𝐱, ·))
(2.8) = Σ_{i=N+1}^∞ λ_i φ_i(𝐱)² → 0 as N → ∞

uniformly in 𝐱 ∈ D, from which we deduce that (2.3) holds for all 𝐱 ∈ D.

Using the fact that ‖φ_i‖_{L²(D)} = 1 for all i ∈ ℕ together with Fubini’s theorem, we easily deduce from (2.8) the relation

‖u − u_N‖²_{L²(Ω;L²(D))} = Σ_{i=N+1}^∞ λ_i.

Therefore, the mean square error between the random field u and its truncated KL expansion u N is controlled by the decay of the eigenvalues, and the latter depends on the smoothness of the covariance function. Estimates on the decay of the eigenvalues can be found for instance in [34].
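A discrete analogue makes the identity above tangible: for a covariance matrix C with eigenpairs (λ_i, φ_i), the mean square error of the rank-N truncation is exactly the tail sum of the discarded eigenvalues. The grid, correlation length, and truncation level below are illustrative.

```python
import numpy as np

# Covariance matrix of an exponential kernel (cf. (2.10)) on a 1D grid;
# sigma = 1 and the grid weight is left out since it only rescales.
n, lc, N = 200, 0.2, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]            # non-increasing eigenvalues

C_N = (phi[:, :N] * lam[:N]) @ phi[:, :N].T   # covariance of the rank-N truncation
# E||u - u_N||^2 = trace(C - C_N) = sum of the discarded eigenvalues
print(np.isclose(np.trace(C - C_N), lam[N:].sum()))  # True
```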

The truncated KL expansion (2.7) is the one minimizing the mean square error in the sense that (cf. [35])

{(√λ_i ξ_i, φ_i)}_{i=1}^N = argmin_{ {(ζ_i, ψ_i)}_{i=1}^N, ∫_D ψ_i ψ_j = δ_ij } 𝔼[ ∫_D ( u(𝐱, ·) − ū(𝐱) − Σ_{i=1}^N ψ_i(𝐱) ζ_i(·) )² d𝐱 ].

Finally, note that the KL representation (2.3) of u only requires the knowledge of the mean field and the covariance function, i.e. of the first two moments of u. From a modeling point of view, random fields can be generated by prescribing functions ū and cov_u and building the representation (2.3) or (2.7). However, the difficulty lies in the fact that not every function cov_u is admissible; in particular, it needs to be positive semi-definite. We will consider in the next section a specific family of admissible covariance functions, see (2.9) below, and we refer to [29] for other classes of functions.

2.1.2 Gaussian Random Fields

It is well known that if u is a GRF,[2] then u is fully determined by its mean and covariance functions [50]. Moreover, the KL expansion of a GRF is given by (2.3), where the ξ_i are pairwise independent and follow a standard normal distribution, i.e. ξ_i ~ i.i.d. 𝒩(0, 1).

In particular, we say that a GRF belongs to the Matérn family [53] if the covariance function reads

(2.9) cov_u(𝐱, 𝐱′) = (σ² / (2^{ν−1} Γ(ν))) (κ|𝐱 − 𝐱′|)^ν K_ν(κ|𝐱 − 𝐱′|),

where σ² is the marginal variance, K_ν is the modified Bessel function of the second kind of order ν, Γ is the gamma function, and κ and ν are positive parameters controlling the spatial correlation range and the smoothness of cov_u, respectively. Specifically, the underlying random field u is ⌈ν⌉ − 1 times mean square differentiable. The covariance function (2.9) is usually re-parameterized using, for instance, κ = √(2ν)/l_c with l_c the correlation length. Processes with covariance function as in (2.9) have been used in many applications such as machine learning [57] and spatial statistics [59], in particular in geostatistics [29]. Note that for ν = 1/2, the covariance function reduces to the exponential

(2.10) cov_u(𝐱, 𝐱′) = σ² exp(−|𝐱 − 𝐱′| / l_c),

while in the limit ν → ∞ we get the squared exponential

(2.11) cov_u(𝐱, 𝐱′) = σ² exp(−|𝐱 − 𝐱′|² / (2 l_c²)).

When D is convex, the covariance function (2.10) is only Lipschitz continuous and the realizations of the underlying random field u are Hölder continuous with exponent α < 1/2, namely u(·, ω) ∈ C^{0,α}(D̄) ℙ-almost surely. Instead, we have cov_u ∈ C^∞(D̄ × D̄) and u(·, ω) ∈ C^∞(D̄) ℙ-almost surely when the covariance function is given by (2.11), see [56].
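The two limits above are easy to check numerically. The sketch below evaluates (2.9) with the re-parameterization κ = √(2ν)/l_c (scipy's `kv` is the modified Bessel function K_ν); the grid and the value ν = 50 standing in for ν → ∞ are illustrative choices.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(r, nu, lc, sigma2=1.0):
    """Matern covariance (2.9) as a function of the distance r > 0,
    with the re-parameterization kappa = sqrt(2*nu)/lc."""
    z = np.sqrt(2.0 * nu) / lc * np.asarray(r, dtype=float)
    return sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)

r, lc = np.linspace(0.01, 1.0, 50), 0.3
# nu = 1/2 recovers the exponential covariance (2.10) exactly ...
assert np.allclose(matern_cov(r, 0.5, lc), np.exp(-r / lc))
# ... and increasing nu approaches the squared exponential (2.11)
gauss = np.exp(-r**2 / (2.0 * lc**2))
print(np.max(np.abs(matern_cov(r, 50.0, lc) - gauss)))
```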

It was observed in [60, 61] that the stationary (mean zero) solution u : ℝⁿ × Ω → ℝ to the SPDE

(2.12) (κ² I − Δ)^s u = w in ℝⁿ, ℙ-almost surely,

has the covariance function (2.9) with smoothness parameter ν = 2s − n/2 and marginal variance

σ² = Γ(ν) Γ(ν + n/2)^{−1} (4π)^{−n/2} κ^{−2ν}.

Here, I denotes the identity operator, Δ is the Laplacian operator, and w is Gaussian white noise with unit variance. That is, w is a (so-called generalized) Gaussian random field satisfying

𝔼[(w, φ)_{L²(ℝⁿ)}] = 0 and Cov((w, φ)_{L²(ℝⁿ)}, (w, ψ)_{L²(ℝⁿ)}) = (φ, ψ)_{L²(ℝⁿ)}

for all φ, ψ ∈ L²(ℝⁿ), where (·, ·)_{L²(ℝⁿ)} stands for the L² inner product in ℝⁿ, see for instance [46].

Instead of using a (truncated) KL expansion to simulate a GRF with covariance function as in (2.9), we can thus solve the SPDE (2.12). Not only does this latter approach not require the knowledge of the eigenfunctions of the covariance operator, but it also permits several generalizations such as random fields on a bounded domain or non-stationary random fields; we refer to [47] for the case 2s ∈ ℕ and to [13, 11] for the general case s > n/4.

Remark 2.1.

In the case s ≥ 1, namely s = m + s̄ with m ∈ ℕ and s̄ ∈ [0, 1), the solution to (2.12) can be computed recursively as described in [40]: first solve the m non-fractional SPDEs L u_i = u_{i−1} for i = 1, …, m, where L := (κ² I − Δ) and u₀ = w, and then solve the fractional SPDE L^{s̄} u = u_m.
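A finite-dimensional sketch of this recursion, with a small periodic finite-difference matrix standing in for κ²I − Δ and the fractional solve done spectrally for verification; all sizes and parameter values are illustrative.

```python
import numpy as np

# Remark 2.1 in matrix form: for s = m + sbar, apply m plain solves
# with L, then one fractional solve L^(-sbar); the result must equal
# the direct spectral computation of L^(-s) w.
n, kappa, s = 40, 1.5, 1.7
m, sbar = int(np.floor(s)), s - np.floor(s)

A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = -1.0               # periodic stencil: a discrete closed curve
L = kappa**2 * np.eye(n) + A

lam, Q = np.linalg.eigh(L)
frac_inv = lambda p: (Q * lam**(-p)) @ Q.T   # spectral L^(-p)

rng = np.random.default_rng(0)
w = rng.standard_normal(n)

u = w
for _ in range(m):                        # the m non-fractional solves L u_i = u_{i-1}
    u = np.linalg.solve(L, u)
u = frac_inv(sbar) @ u                    # the final fractional solve

print(np.allclose(u, frac_inv(s) @ w))    # True
```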

2.2 Random Fields on Surfaces

The advantage of the SPDE approach is that it can be straightforwardly generalized to random fields on surfaces [24], while this is not the case when starting from the mean field and covariance function. For instance, replacing the Euclidean distance |𝐱 − 𝐱′| in (2.9) by the geodesic distance between 𝐱 and 𝐱′ does not yield an admissible covariance function as it fails to be positive semi-definite, see, e.g., [33].

In this work, we are thus concerned with the numerical approximation of the following SPDE: find ũ defined on the closed hypersurface γ ⊂ ℝⁿ (n = 2, 3) such that 𝔼[ũ] = 0 almost everywhere on γ and

(2.13) (κ² I − Δ_γ)^s ũ = w̃ on γ, ℙ-almost surely,

where I denotes the identity operator, Δ_γ is the Laplace–Beltrami operator, κ is a positive constant, and w̃ denotes Gaussian white noise with unit variance. The meaning of the fractional power s > (n−1)/4 of the elliptic operator L := κ² I − Δ_γ will be made precise in the next section. In statistics, the above equation models GRFs on closed surfaces [47]. As above, s indicates the smoothness of the corresponding covariance function while κ controls the correlation range [13].

In what follows, we shall restrict the fractional power to the range s ∈ ((n−1)/4, 1), and we mention that the recursive strategy described in Remark 2.1 can be used when s ≥ 1. We also write a ≲ b when a ≤ C b, where C is a positive constant which does not depend on a, b, or the discretization parameters, and a ≂ b if a ≲ b and b ≲ a. Quantities defined on γ are denoted with a tilde, discrete functions in space are denoted with capital letters, and all equations involving random variables or random fields are to be understood in the ℙ-almost surely sense.

3 Dunford–Taylor Formulation of (2.13)

The central SPDE (2.13) involves fractional powers of differential operators on surfaces. In this section, we make these notions precise and justify an integral representation of the solution to (2.13). The latter is critical for the design and analysis of the numerical algorithm proposed in Section 4.

3.1 Differential Operators on Surfaces

We assume that γ is a closed and compact hypersurface in ℝⁿ (n = 2, 3) of class C³. The signed distance function to γ is denoted d : ℝⁿ → ℝ. The regularity of γ translates into the regularity of d, that is d ∈ C³(𝒩), where 𝒩 is a tubular neighborhood of γ of width depending on the curvatures of γ, see [27]. Note that ∇d|_γ is normal to γ and thus ∇d is an extension to 𝒩 of the normal to the surface γ. With this at hand, one can define the orthogonal projection operator 𝐏 : 𝒩 → γ by

(3.1) 𝐏(𝐱) := 𝐱 − d(𝐱) ∇d(𝐱), 𝐱 ∈ 𝒩,

which is instrumental to define normal extensions of quantities defined on γ. More precisely, for ṽ : γ → ℝ, the extension is given by ṽ ∘ 𝐏. By construction, these extensions are constant in the normal direction ∇d.

The surface gradient (or tangential gradient) on γ is the tangential component of the regular gradient defined on 𝒩 , i.e.

∇_γ ṽ := (I − ∇d ⊗ ∇d) ∇v|_γ : γ → ℝⁿ, v := ṽ ∘ 𝐏.

The components of the surface gradient ∇_γ are denoted D_i, i = 1, …, n, so that when applied to a vector-valued function 𝐯̃ = (ṽ₁, …, ṽₙ) : γ → ℝⁿ, the surface gradient reads

∇_γ 𝐯̃ := (D_j ṽ_i)_{i,j=1}^n.

Furthermore, the surface divergence and Laplace–Beltrami operator are given for 𝐯̃ : γ → ℝⁿ and ṽ : γ → ℝ by, respectively,

div_γ 𝐯̃ := trace(∇_γ 𝐯̃), Δ_γ ṽ := div_γ(∇_γ ṽ).

The functional space L²(γ) stands for the space of measurable and square integrable functions on γ. It is a Hilbert space with scalar product and norm

(ṽ, w̃)_γ := ∫_γ ṽ w̃, ‖ṽ‖²_{L²(γ)} := (ṽ, ṽ)_γ.

We also define the Sobolev spaces

H¹(γ) := { ṽ ∈ L²(γ) : |∇_γ ṽ| ∈ L²(γ) }

endowed with the norm

‖ṽ‖²_{H¹(γ)} := ‖ṽ‖²_{L²(γ)} + ‖ |∇_γ ṽ| ‖²_{L²(γ)}

and

H²(γ) := { ṽ ∈ H¹(γ) : D_i D_j ṽ ∈ L²(γ), i, j = 1, …, n }

with

‖ṽ‖²_{H²(γ)} := ‖ṽ‖²_{H¹(γ)} + Σ_{i,j=1}^n ‖D_i D_j ṽ‖²_{L²(γ)}.

Their dual spaces are denoted H^{−1}(γ) and H^{−2}(γ), respectively, and ⟨·, ·⟩_{H^l(γ)} stands for the duality product, l = 1, 2.

3.2 Solution Operator

We are now in a position to define a shifted Laplace–Beltrami problem associated with the SPDE (2.13), which formally corresponds to taking s = 1. Given κ > 0 and f̃ ∈ H^{−1}(γ), we are interested in ũ ∈ H¹(γ) satisfying

κ² ũ − Δ_γ ũ = f̃

in a weak sense. More precisely, we seek ũ ∈ H¹(γ) satisfying

(3.2) a_γ(ũ, ṽ) := κ² ∫_γ ũ ṽ + ∫_γ ∇_γ ũ · ∇_γ ṽ = ⟨f̃, ṽ⟩_{H¹(γ)} for all ṽ ∈ H¹(γ).

The Lax–Milgram theory implies that the above problem admits a unique solution and we denote by

T : H^{−1}(γ) → H¹(γ)

the solution operator T f̃ := ũ. Because γ is of class C³, T is an isomorphism from L²(γ) to H²(γ); see [31, Theorem 3.3] and [17, Lemma 3]. Hence, its inverse L := (κ² I − Δ_γ) = T^{−1} is defined on H²(γ).

3.3 Fractional Powers of Elliptic Operators

The operator T : H^{−1}(γ) → H¹(γ) is compact thanks to the compact embedding of H¹(γ) in H^{−1}(γ). Thus, the Fredholm alternative guarantees the existence of an L²(γ)-orthonormal basis of eigenfunctions {ψ̃_j}_{j=1}^∞ of its inverse L with non-decreasing positive eigenvalues

κ² = λ̃₁ < λ̃₂ ≤ λ̃₃ ≤ ⋯.

That is, the eigenpair (λ̃_j, ψ̃_j) ∈ ℝ₊ × H¹(γ), j = 1, 2, …, satisfies

(3.3) a_γ(ψ̃_j, ṽ) = λ̃_j ∫_γ ψ̃_j ṽ for all ṽ ∈ H¹(γ)

with the eigenfunctions chosen so that

(ψ̃_j, ψ̃_i)_γ = δ_ij for all i, j = 1, 2, ….

In order to define fractional powers of the shifted Laplace–Beltrami operator L = κ² I − Δ_γ, we shall define the so-called dotted spaces as follows. The dotted space Ḣ^r(γ) for r ≥ 0 is the set of functions ṽ ∈ L²(γ) such that

(3.4) ‖ṽ‖²_{Ḣ^r(γ)} := Σ_{j=1}^∞ λ̃_j^r |(ṽ, ψ̃_j)_γ|² < ∞.

For negative indices, we define

Ḣ^{−r}(γ) := { Σ_{j=1}^∞ c_j ⟨ψ̃_j, ·⟩_{Ḣ^r(γ)} : {c_j}_{j=1}^∞ ⊂ ℝ, Σ_{j=1}^∞ |c_j|² λ̃_j^{−r} < ∞ }, r > 0.

Here we have identified the duality product between Ḣ^{−r}(γ) and Ḣ^r(γ) with the L²(γ) scalar product and set

⟨ Σ_{j=1}^∞ c_j ψ̃_j, Σ_{j=1}^∞ d_j ψ̃_j ⟩_{Ḣ^r(γ)} := Σ_{j=1}^∞ c_j d_j for Σ_{j=1}^∞ d_j ψ̃_j ∈ Ḣ^r(γ).

For r ∈ [0, 1], the dotted spaces Ḣ^{4r−2}(γ) are equivalent to the interpolation spaces H^{4r−2}(γ) := (H^{−2}(γ), H²(γ))_{r,2} obtained by the real interpolation method, see [21]. To simplify the presentation, we will frequently use this equivalence, typically to estimate the H^r(γ) norm of a function by the Ḣ^r(γ) norm defined in (3.4). Moreover, we will simply write ⟨·, ·⟩ for the duality pairing in what follows.

The fractional powers L^β, β ∈ (−1, 1), of L are defined on Ḣ^r(γ) using the eigenpairs of L. For

ṽ = Σ_{j=1}^∞ v_j ⟨ψ̃_j, ·⟩ ∈ Ḣ^r(γ)

we define

(3.5) L^β ṽ := Σ_{j=1}^∞ λ̃_j^β v_j ⟨ψ̃_j, ·⟩.

We recall that we have identified the duality product between Ḣ^{−r}(γ) and Ḣ^r(γ) with the L²(γ) scalar product, so that when r ≥ 0 we have

(3.6) L^β ṽ = Σ_{j=1}^∞ λ̃_j^β v_j ψ̃_j with v_j := (ṽ, ψ̃_j)_γ.

Invoking the Lax–Milgram theory again, we see that L^β : Ḣ^r(γ) → Ḣ^{r−2β}(γ) is an isomorphism and, in view of the equivalence of spaces mentioned above,

(3.7) L^β : H^r(γ) → H^{r−2β}(γ) is an isomorphism

for β ∈ (−1, 1) provided that

max{−2, 2β − 2} ≤ r ≤ min{2, 2β + 2}.
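In finite dimensions, the definition (3.5) becomes a statement about eigendecompositions of symmetric positive definite matrices, and the semigroup property L^a L^b = L^{a+b} underlying the isomorphism (3.7) can be checked directly. The matrix below is a generic SPD stand-in, not a discretization of L.

```python
import numpy as np

# Spectral powers of an SPD matrix: L^b scales each eigencoefficient
# by lam_j**b, exactly as in (3.5), so exponents add.
rng = np.random.default_rng(1)
n, kappa = 20, 1.2
B = rng.standard_normal((n, n))
L = kappa**2 * np.eye(n) + B @ B.T        # SPD, smallest eigenvalue >= kappa^2

lam, Q = np.linalg.eigh(L)
Lpow = lambda b: (Q * lam**b) @ Q.T       # L^b via the eigendecomposition

v = rng.standard_normal(n)
assert np.allclose(Lpow(0.5) @ (Lpow(0.5) @ v), L @ v)   # L^(1/2) L^(1/2) = L
assert np.allclose(Lpow(-0.3) @ (Lpow(0.3) @ v), v)      # L^(-b) inverts L^b
```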

3.4 Regularity of the White Noise w̃ and the Solution ũ

We have established mapping properties of the fractional powers of L in the previous section. To justify the SPDE (2.13), it remains to determine the regularity of the white noise w ~ .

Recall that (Ω, ℱ, ℙ) is a complete probability space. We briefly sketch the proof of [13, Proposition 2.3], which guarantees that for r > (n−1)/2, w̃ satisfies

(3.8) ‖w̃‖²_{L²(Ω;H^{−r}(γ))} = 𝔼[‖w̃‖²_{H^{−r}(γ)}] < ∞,

i.e. w̃ ∈ L²(Ω; H^{−r}(γ)). Consider the formal KL expansion of w̃ with respect to the orthonormal eigenbasis {ψ̃_j}_{j=1}^∞ and write

(3.9) w̃ = Σ_{j=1}^∞ ξ_j ⟨ψ̃_j, ·⟩,

where the ξ_j are pairwise independent real-valued standard normally distributed random variables, i.e. ξ_j ~ i.i.d. 𝒩(0, 1). Weyl’s law (see, e.g., [39]) characterizes the asymptotic behavior of the eigenvalues:

(3.10) λ̃_j ≂ j^{2/(n−1)} for j = 1, 2, ….

Hence, for r > (n−1)/2 there holds

(3.11) ‖w̃‖²_{L²(Ω;H^{−r}(γ))} ≲ Σ_{j=1}^∞ j^{−2r/(n−1)} ≲ r / (r − (n−1)/2),

which guarantees that w̃ ∈ L²(Ω; H^{−r}(γ)) as anticipated.
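The convergence claim in (3.11) is elementary to verify numerically for n = 3 (so that 2r/(n−1) = r): the series converges precisely when r > (n−1)/2 = 1. The truncation levels below are illustrative.

```python
import math

def partial_sum(p, N):
    """Partial sum of sum_j j**(-p), the series appearing in (3.11)."""
    return sum(j ** (-p) for j in range(1, N + 1))

# r = 3/2 > 1: the partial sums settle near zeta(3/2) = 2.612...
print(partial_sum(1.5, 10**5))
# r = 1 (the borderline case): the sum grows like log N and diverges
print(partial_sum(1.0, 10**5) - partial_sum(1.0, 10**4))  # close to log(10)
```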

We can now link this regularity to the regularity of the solution to the SPDE (2.13) thanks to the isomorphism property (3.7) of L^β. Indeed, for β ∈ (0, 1) and (n−1)/2 < r ≤ 2, we have L^{−β} w̃ ∈ H^{2β−r}(γ) ℙ-almost surely. Moreover, (3.8) implies

‖L^{−β} w̃‖_{L²(Ω;H^{2β−r}(γ))} ≲ ‖w̃‖_{L²(Ω;H^{−r}(γ))} < ∞

so that

(3.12) L^{−β} w̃ ∈ L²(Ω; H^{2β−r}(γ))

and in particular, for s ∈ (0, 1) there exists a unique solution ũ = L^{−s} w̃ to (2.13) in L²(Ω; H^{2s−r}(γ)).

We end this subsection by recalling our convention: expressions relating functions from Ω to a Banach space are always understood ℙ-almost surely.

3.5 Dunford–Taylor Integral Representation

The regularity (3.12) implies that, restricting s ∈ ((n−1)/4, 1), the solution ũ = L^{−s} w̃ to our model problem (2.13) belongs to L²(Ω; H^{2s−r}(γ)) ⊂ L²(Ω; L²(γ)) for any r ∈ ((n−1)/2, 2s). Under these assumptions, we can proceed as in [21] and resort to a Dunford–Taylor integral representation of ũ to design a numerical algorithm for its approximation. The next proposition justifies the integral representation (1.2) by extending [21, Theorem 2.1].

Proposition 3.1.

Let w̃ be a Gaussian white noise. For (n−1)/4 < s < 1, the expression

(3.13) I(w̃) := (sin(πs)/π) ∫₀^∞ μ^{−s} (μI + L)^{−1} w̃ dμ

is in L²(Ω; L²(γ)) and coincides with L^{−s} w̃, i.e.

L^{−s} w̃ = (sin(πs)/π) ∫₀^∞ μ^{−s} (μI + L)^{−1} w̃ dμ in L²(Ω; L²(γ)).

Proof.

We first show that the mapping

(3.14) Ω ∋ ζ ↦ ∫₀^∞ μ^{−s} (μI + L)^{−1} w̃(·, ζ) dμ ∈ L²(γ)

is measurable. Recall that (3.8) establishes that w̃ ∈ L²(Ω; H^{−r}(γ)) for r = (n−1)/2 + ε, ε > 0, and so r/2 < s < 1 for ε sufficiently small. In particular, the mapping Ω ∋ ζ ↦ w̃(·, ζ) ∈ H^{−r}(γ) is measurable. Consequently, the measurability of (3.14) follows from the continuity (or boundedness) of the linear mapping

(3.15) H^{−r}(γ) ∋ ṽ ↦ ∫₀^∞ μ^{−s} (μI + L)^{−1} ṽ dμ ∈ L²(γ)

discussed now.

Let ṽ ∈ H^{−r}(γ) and set f̃ := L^{−r/2} ṽ, which belongs to L²(γ) thanks to (3.7). We compute

‖(μI + L)^{−1} ṽ‖²_{L²(γ)} = ‖(μI + L)^{−1} L^{r/2} f̃‖²_{L²(γ)} = Σ_{j=1}^∞ | λ̃_j^{r/2} / (μ + λ̃_j) |² |f̃_j|²,

where f̃_j := (f̃, ψ̃_j)_γ. We estimate the coefficients

λ̃_j^{r/2} / (μ + λ̃_j)

for μ ≤ 1 and μ > 1 separately. When μ ≤ 1, we recall that r/2 < s < 1 and that the eigenvalues are non-decreasingly ordered to deduce that

(3.16) λ̃_j^{r/2} / (μ + λ̃_j) ≲ 1,

where the hidden constant depends only on λ̃₁ = κ². Instead, when μ > 1, Young’s inequality allows us to estimate

(3.17) λ̃_j^{r/2} / (μ + λ̃_j) ≲ μ^{r/2 − 1}.

Taking into account these two cases and using the fact that ‖f̃‖_{L²(γ)} ≲ ‖ṽ‖_{H^{−r}(γ)}, we find that

(3.18) ‖(μI + L)^{−1} ṽ‖_{L²(γ)} ≲ ‖ṽ‖_{H^{−r}(γ)} { 1 when μ ≤ 1, μ^{r/2 − 1} when μ > 1 }.

Estimating the integral of this quantity separately for 0 < μ ≤ 1 and μ > 1, we deduce that

(3.19) ‖ ∫₀^∞ μ^{−s} (μI + L)^{−1} ṽ dμ ‖_{L²(γ)} ≤ ∫₀^∞ μ^{−s} ‖(μI + L)^{−1} ṽ‖_{L²(γ)} dμ ≲ ‖ṽ‖_{H^{−r}(γ)}

and the continuity of the mapping (3.15) follows. In turn, this proves the measurability of the mapping (3.14). The estimate

(3.20) ‖I(w̃)‖_{L²(Ω;L²(γ))} ≲ ‖w̃‖_{L²(Ω;H^{−r}(γ))}

is a direct consequence of (3.19).

We now show that the Dunford–Taylor integral I(w̃) coincides with the spectral definition (3.5) of L^{−s} w̃, namely L^{−s} w̃ = Σ_{j=1}^∞ λ̃_j^{−s} ξ_j ψ̃_j with w̃ = Σ_{j=1}^∞ ξ_j ⟨ψ̃_j, ·⟩. We only provide a sketch of the proof since the arguments follow those in [41, 9], see also [21, 22]. We start by noting that

(3.21) λ̃_j^{−s} = (sin(πs)/π) ∫₀^∞ μ^{−s} (μ + λ̃_j)^{−1} dμ,

so that for the partial sum w̃_N := Σ_{j=1}^N ξ_j ⟨ψ̃_j, ·⟩ we have L^{−s} w̃_N = I(w̃_N). As a consequence, we have

‖L^{−s} w̃ − I(w̃)‖_{L²(Ω;L²(γ))} ≤ ‖L^{−s} w̃ − L^{−s} w̃_N‖_{L²(Ω;L²(γ))} + ‖I(w̃_N − w̃)‖_{L²(Ω;L²(γ))} ≲ ‖w̃ − w̃_N‖_{L²(Ω;H^{−r}(γ))},

where we used the mapping properties (3.7) of L^{−s}, r/2 < s, and estimate (3.20) to justify the last inequality. To estimate the difference between w̃ and w̃_N, we resort to Weyl’s law (3.10) to obtain

‖w̃ − w̃_N‖²_{L²(Ω;H^{−r}(γ))} ≲ Σ_{j=N+1}^∞ j^{−2r/(n−1)} → 0 as N → ∞.

Combining the last two inequalities, we conclude that

‖L^{−s} w̃ − I(w̃)‖_{L²(Ω;L²(γ))} = 0

as desired. ∎

4 Numerical Scheme

We propose a numerical scheme based on the integral representation

(4.1) ũ = L^{−s} w̃ = (sin(πs)/π) ∫₀^∞ μ^{−s} (μI + L)^{−1} w̃ dμ

of the solution to the SPDE (2.13).

The numerical approximation of ũ consists of two steps. Following [38], we approximate the white noise by its projection onto a finite element space. Then the integral representation (4.1) is approximated using a sinc quadrature combined with a standard finite element method for the approximation of the integrand at each quadrature point. We refer to [21, 22] for the original development and to [18] for its extension to the fractional Laplace–Beltrami problem on surfaces. Note, however, that the results in [21, 22, 18] do not apply in the present stochastic context.

We start with the construction of the finite element space.

4.1 Finite Element Methods on Discrete Surfaces

There exist several finite element methods for problems defined on hypersurfaces, see for instance [17] and the references therein. In this work, we consider parametric finite elements [30] which are defined on an approximated surface Γ with the help of the projection 𝐏 ( 𝐱 ) = 𝐱 - d ( 𝐱 ) ∇ d ( 𝐱 ) mapping Γ to γ, the latter assumed to be C 3 . Recall that d is the signed distance function and is of class C 3 , see Section 3.1. This property guarantees that the geometric inconsistency arising from replacing γ by Γ does not affect the convergence rate of the finite element method. For deterministic right-hand sides, this is the subject of [18, Theorem 4.2]. The numerical study in [18] also shows that using a generic continuous piecewise C 2 lift [54, 16, 15] does not affect the convergence rate, even though such a lifting operator only provides first-order convergence of the approximated surface.

Let Γ be an ( n - 1 ) -dimensional polyhedral surface whose vertices lie on γ (see [28] for more general assumptions). The analysis provided below holds for a subdivision made of triangles or quadrilaterals. The former is possibly more standard, while the latter is the setting used in the numerical illustrations in Section 7 below. To incorporate both settings simultaneously, we denote by 𝒯 a triangular or quadrilateral subdivision of Γ. Given a subdivision with triangles (resp. quadrilaterals), let τ ^ be the unit triangle (resp. square) and denote by 𝒫 the set of linear (resp. bi-linear) polynomials defined on τ ^ .

For each τ ∈ 𝒯 , we set F τ : τ ^ → τ , F τ ∈ [ 𝒫 ] n - 1 , to be the map from the reference element to the physical element. We let c J := c J ( 𝒯 ) be such that

c J - 1 | 𝐱 | ≤ | D F τ 𝐱 | ≤ c J | 𝐱 |  for all  𝐱 ∈ τ ^  and all  τ ∈ 𝒯 .

Furthermore, we denote by h τ the diameter of τ ∈ 𝒯 and by c q := c q ( 𝒯 ) the quasi-uniformity constant satisfying

h := max τ ∈ 𝒯 h τ = c q min τ ∈ 𝒯 h τ .

The maximum number of elements sharing the same vertex is

(4.2) c v := c v ( 𝒯 ) := max 𝐯  vertex of  𝒯 # { τ 𝒯 : 𝐯  is a vertex of  τ } .

The constants appearing in the analysis below (but hidden in “ ≲ ”) may depend on c J , c q , and c v , while the dependency on the mesh size h will be explicitly given. We further assume that the geometry of γ is sufficiently well approximated by Γ, i.e. h is small enough, so that Γ ⊂ 𝒩 and thus the lift 𝐏 defined in (3.1) restricted to Γ is a bijection. To simplify the notation in what follows, we introduce the operator P : L 1 ( γ ) → L 1 ( Γ ) and its inverse P - 1 : L 1 ( Γ ) → L 1 ( γ ) defined as follows. For v ~ : γ → ℝ we set ( P v ~ ) ( 𝐱 ) := ( v ~ ∘ 𝐏 ) ( 𝐱 ) , 𝐱 ∈ Γ , while for v : Γ → ℝ we set ( P - 1 v ) ( 𝐱 ) := ( v ∘ 𝐏 - 1 ) ( 𝐱 ) , 𝐱 ∈ γ .

Remark 4.1.

It is possible to construct sequences of subdivisions with uniform constants c J , c q , and c v . We refer to [20] for uniform refinements and to [16, 15] for local refinements by adaptive algorithms. A typical possibility is to start with an initial polyhedral surface Γ ¯ 0 and consider the sequence { Γ i := ℐ i 𝐏 ( Γ ¯ i ) } i = 1 ∞ , where { Γ ¯ i } consists of uniform refinements and ℐ i is the corresponding nodal interpolant.

The finite element space associated with 𝒯 is denoted 𝕍 ( 𝒯 ) and reads

𝕍 ( 𝒯 ) := { v ∈ H 1 ( Γ ) : v | τ ∘ F τ ∈ 𝒫  for all  τ ∈ 𝒯 }

Its dimension will be denoted by N := dim ( 𝕍 ( 𝒯 ) ) .

We can now define the discrete operator L 𝒯 := T 𝒯 - 1 : 𝕍 ( 𝒯 ) → 𝕍 ( 𝒯 ) as the discrete counterpart of L. Here T 𝒯 : 𝕍 ( 𝒯 ) → 𝕍 ( 𝒯 ) is defined by T 𝒯 F := U , where U is the unique solution to

(4.3) A Γ ( U , V ) := κ 2 ∫ Γ U V + ∫ Γ ∇ Γ U ⋅ ∇ Γ V = ∫ Γ F V  for all  V ∈ 𝕍 ( 𝒯 ) ;

compare with (3.2).

We also introduce σ : Γ → ℝ + to be the ratio between the area elements of γ and Γ associated with the parametrization 𝐏 , so that for v ~ ∈ L 1 ( γ ) we have

(4.4) ∫ γ v ~ = ∫ Γ σ P v ~ .

The area ratio σ satisfies

(4.5) ‖ σ ‖ L ∞ ( Γ ) ≲ 1 , ‖ 1 - σ ‖ L ∞ ( Γ ) ≲ h 2 and ‖ 1 - σ - 1 ‖ L ∞ ( Γ ) ≲ h 2 ,

see for instance [17].

4.2 Lifted Finite Element Method on γ

At several instances, it will be useful to compare quantities defined on γ with their finite element approximation on the geometrically consistent lifted finite element space

𝕍 ~ ( 𝒯 ) := P - 1 𝕍 ( 𝒯 ) := { P - 1 V : V ∈ 𝕍 ( 𝒯 ) } .

Of course, the dimension of 𝕍 ~ ( 𝒯 ) is dim ( 𝕍 ~ ( 𝒯 ) ) = dim ( 𝕍 ( 𝒯 ) ) = N .

The operator L ~ 𝒯 : 𝕍 ~ ( 𝒯 ) → 𝕍 ~ ( 𝒯 ) is defined as L 𝒯 , but on the exact surface γ. That is, we set

L ~ 𝒯 := T ~ 𝒯 - 1 : 𝕍 ~ ( 𝒯 ) → 𝕍 ~ ( 𝒯 ) ,

where T ~ 𝒯 : 𝕍 ~ ( 𝒯 ) → 𝕍 ~ ( 𝒯 ) is defined by T ~ 𝒯 F ~ := U ~ with U ~ uniquely solving

(4.6) κ 2 ∫ γ U ~ V ~ + ∫ γ ∇ γ U ~ ⋅ ∇ γ V ~ = ∫ γ F ~ V ~  for all  V ~ ∈ 𝕍 ~ ( 𝒯 ) .

We denote by 𝚷 ~ : L 2 ( γ ) → 𝕍 ~ ( 𝒯 ) the L 2 ( γ ) projection onto 𝕍 ~ ( 𝒯 ) . It is defined for v ~ ∈ L 2 ( γ ) by the relations

(4.7) ∫ γ ( 𝚷 ~ v ~ - v ~ ) V ~ = 0 for all  V ~ ∈ 𝕍 ~ ( 𝒯 ) .

The Bramble–Hilbert lemma guarantees the approximation property

(4.8) ‖ ( I - 𝚷 ~ ) v ~ ‖ L 2 ( γ ) + h ‖ ( I - 𝚷 ~ ) v ~ ‖ H 1 ( γ ) ≲ h t ‖ v ~ ‖ H t ( γ )

for all v ~ ∈ H t ( γ ) with t ∈ [ 0 , 2 ] .
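The approximation property (4.8) can be observed in a simple flat analogue. The sketch below (a 1D stand-in on the interval (0,1), assuming NumPy; it is not the surface implementation of the paper) computes the piecewise-linear L² projection of a smooth function and checks the O(h²) decay of the L² error, i.e. (4.8) with t = 2:

```python
import numpy as np

def l2_projection_error(m):
    """L2(0,1) error of the piecewise-linear L2 projection of sin(pi x)
    on a uniform mesh with m elements (a flat analogue of (4.8))."""
    h = 1.0 / m
    nodes = np.linspace(0.0, 1.0, m + 1)
    f = lambda x: np.sin(np.pi * x)
    q, w = np.polynomial.legendre.leggauss(4)   # 4-point Gauss rule on [-1,1]

    # tridiagonal mass matrix and load vector of the hat-function basis
    M = np.zeros((m + 1, m + 1))
    b = np.zeros(m + 1)
    for i in range(m):
        M[i, i] += h / 3; M[i + 1, i + 1] += h / 3
        M[i, i + 1] += h / 6; M[i + 1, i] += h / 6
        x = nodes[i] + (q + 1) / 2 * h          # quadrature points on element i
        wx = w * h / 2
        b[i] += np.sum(wx * f(x) * (nodes[i + 1] - x) / h)
        b[i + 1] += np.sum(wx * f(x) * (x - nodes[i]) / h)

    c = np.linalg.solve(M, b)                   # projection coefficients

    # L2 error, element by element, with the same quadrature
    err2 = 0.0
    for i in range(m):
        x = nodes[i] + (q + 1) / 2 * h
        Pf = c[i] * (nodes[i + 1] - x) / h + c[i + 1] * (x - nodes[i]) / h
        err2 += np.sum(w * h / 2 * (f(x) - Pf) ** 2)
    return np.sqrt(err2)

# halving h divides the L2 error by roughly 4, i.e. second-order convergence
ratio = l2_projection_error(16) / l2_projection_error(32)
assert 3.5 < ratio < 4.5
```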

We now turn our attention to the eigenpairs of L ~ 𝒯 . We define { ( Λ ~ j γ , Ψ ~ j γ ) } j = 1 N ⊂ ℝ + * × 𝕍 ~ ( 𝒯 ) on the exact surface γ as satisfying

(4.9) κ 2 ∫ γ Ψ ~ j γ V ~ + ∫ γ ∇ γ Ψ ~ j γ ⋅ ∇ γ V ~ = Λ ~ j γ ∫ γ Ψ ~ j γ V ~ for all  V ~ ∈ 𝕍 ~ ( 𝒯 ) .

The eigenfunctions are chosen to be L 2 ( γ ) orthonormal, i.e.

(4.10) ∫ γ Ψ ~ i γ Ψ ~ j γ = δ i j , i , j = 1 , … , N .

Error estimates for the eigenvalue approximations | λ ~ j - Λ ~ j γ | , j = 1 , … , N , are needed in the proof of the mean square norm error estimate derived in Section 6 below. We refer to [6, 10] for standard error estimates on Euclidean domains, which typically require the mesh size to be sufficiently small, where how small depends on the eigenvalue being approximated. Moreover, the constants involved in these error estimates also depend on the magnitude of the eigenvalue being approximated. We refer to [42] for cluster robust error estimates. As a corollary, we obtain the following lemma; the provided estimate deteriorates for large eigenvalues but is nonetheless sufficient for our analysis.

Lemma 4.1.

For j = 1 , , N there holds

(4.11) 0 ≤ Λ ~ j γ - λ ~ j ≲ λ ~ j Λ ~ j γ h 2 ,

where the hidden constant does not depend on j nor on h.

Proof.

The desired estimate follows from [42, Theorem 3.1] which guarantees that for j = 1 , , N , there holds

(4.12) 0 ≤ Λ ~ j γ - λ ~ j ≤ Λ ~ j γ sup v ~ ∈ span { ψ ~ l : l ∈ { 1 , … , j } } , ‖ v ~ ‖ H 1 ( γ ) = 1 ‖ ( I - 𝚷 ~ ) v ~ ‖ H 1 ( γ ) 2 .

Recall that 𝚷 ~ is the L 2 ( γ ) projection onto 𝕍 ~ ( 𝒯 ) , the lifted finite element space, see (4.7). It remains to estimate the right-hand side using the approximation property (4.8) of 𝚷 ~ to get

‖ ( I - 𝚷 ~ ) v ~ ‖ H 1 ( γ ) 2 ≲ h 2 ‖ v ~ ‖ H 2 ( γ ) 2 ≲ λ ~ j h 2 ,

where for the last inequality we used the fact that, with v l := ∫ γ v ~ ψ ~ l ,

‖ v ~ ‖ H 2 ( γ ) ≲ ‖ L v ~ ‖ L 2 ( γ ) = ( ∑ l = 1 j λ ~ l 2 v l 2 ) 1 / 2 ≤ λ ~ j 1 / 2 ‖ v ~ ‖ H 1 ( γ ) .

Inserting the above two inequalities into (4.12) yields (4.11). ∎

4.3 Approximations of the White Noise w ~

In [40], a surface finite element method is proposed to approximate (2.13) on the sphere using a truncated KL expansion. For more general surfaces, we follow [38] to construct a finite-dimensional approximation of the white noise w ~ in the conforming finite element space 𝕍 ~ ( 𝒯 ) , namely we approximate w ~ = ∑ j = 1 ∞ ξ j ψ ~ j by

(4.13) W ~ := ∑ j = 1 ∞ ξ j 𝚷 ~ ψ ~ j .

To show that W ~ L 2 ( Ω ; L 2 ( γ ) ) with

‖ W ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 = N = dim ( 𝕍 ~ ( 𝒯 ) ) ,

we follow the proof of [38, Lemma 2.4]. Invoking Fubini’s theorem and using the orthonormality of the discrete eigenfunctions { Ψ ~ j γ } j = 1 N defined in (4.9), we compute

(4.14)

‖ W ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 = 𝔼 [ ∑ j = 1 ∞ ξ j 2 ‖ 𝚷 ~ ψ ~ j ‖ L 2 ( γ ) 2 ] = ∑ j = 1 ∞ ‖ 𝚷 ~ ψ ~ j ‖ L 2 ( γ ) 2 = ∑ j = 1 ∞ ∑ i = 1 N ( 𝚷 ~ ψ ~ j , Ψ ~ i γ ) γ 2 = ∑ i = 1 N ∑ j = 1 ∞ ( Ψ ~ i γ , ψ ~ j ) γ 2 = ∑ i = 1 N ‖ Ψ ~ i γ ‖ L 2 ( γ ) 2 = N .

We can now use the map P : L 1 ( γ ) → L 1 ( Γ ) to define the approximation to w ~ on the discrete surface Γ, namely

(4.15) W := σ P W ~ ∈ L 2 ( Ω ; L 2 ( Γ ) ) ,

where the multiplicative factor σ is introduced to avoid geometric inconsistencies when mapping back to γ; see (4.4).

4.4 Computing the Approximate White Noise W

In this subsection, we discuss how to compute the approximation W = σ P W ~ of the white noise w ~ . We emphasize that it involves the L 2 ( γ ) projection of the continuous eigenfunctions ψ ~ j defined in (3.3), see (4.13).

The construction of the approximate solution U k given in (4.23) below requires the finite element approximation U l satisfying (4.24). In turn, the latter requires the evaluation of

(4.16) α i := ∫ Γ W Φ i = ∫ Γ σ ( P W ~ ) Φ i , i = 1 , … , N ,

where { Φ j } j = 1 N denotes the Lagrange nodal basis functions of 𝕍 ( 𝒯 ) .

Using the mapping property (4.4) and the definition (4.13) of W ~ together with (4.7), we write

α i = ∫ γ W ~ ( P - 1 Φ i ) = ∑ k = 1 ∞ ξ k ∫ γ ψ ~ k ( P - 1 Φ i ) ,

where we recall that ξ k i.i.d. 𝒩 ( 0 , 1 ) . This implies that 𝔼 [ α i ] = 0 , i = 1 , , N , and

(4.17)

𝔼 [ α i α j ] = ∑ k = 1 ∞ ( ∫ γ ψ ~ k ( P - 1 Φ i ) ) ( ∫ γ ψ ~ k ( P - 1 Φ j ) ) = ∫ γ ( P - 1 Φ i ) ( P - 1 Φ j ) = ∫ Γ σ Φ i Φ j = : M i j , i , j = 1 , … , N ,

where we used the orthonormality of the eigenfunctions { ψ ~ j } j = 1 ∞ . Hence, the vector 𝜶 := ( α 1 , … , α N ) T satisfies 𝜶 ∼ 𝒩 ( 𝟎 , M ) , where M := ( M i j ) i , j = 1 N is the σ-weighted mass matrix associated to the Lagrange finite element basis { Φ j } j = 1 N . As a consequence, if G denotes a Cholesky factor of M, i.e. G G T = M , the above reasoning indicates that G 𝒛 with 𝒛 ∼ 𝒩 ( 𝟎 , I N × N ) follows the same distribution as 𝜶 and could thus be used in place of 𝜶 .
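In matrix form, the sampling strategy just described reads: assemble the (σ-weighted) mass matrix M, factor M = G Gᵀ, and use G 𝒛 with 𝒛 ∼ 𝒩(𝟎, I) in place of 𝜶. The toy sketch below (assuming NumPy; a 1D periodic mass matrix stands in for the surface matrix M, so the geometry is purely illustrative) confirms that the sample covariance of G 𝒛 matches M:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for the sigma-weighted mass matrix: N periodic hat functions on a
# uniform 1D mesh with h = 1/N (diagonal 2h/3, off-diagonals h/6)
N = 16
h = 1.0 / N
M = np.zeros((N, N))
for i in range(N):
    M[i, i] = 2 * h / 3
    M[i, (i + 1) % N] = h / 6
    M[i, (i - 1) % N] = h / 6

G = np.linalg.cholesky(M)                     # G G^T = M
samples = G @ rng.standard_normal((N, 100_000))
cov = samples @ samples.T / samples.shape[1]  # empirical covariance of alpha
```

The empirical covariance agrees with M up to Monte Carlo error, so G 𝒛 is indeed distributed as 𝒩(𝟎, M), exactly as required for the data vector 𝜶.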

For the analysis below, we shall need a different representation of W ~ based on the discrete eigenfunctions { Ψ ~ j γ } j = 1 N defined on the exact surface γ, see Section 4.2. Namely, we set

(4.18) W ~ Ψ := ∑ j = 1 N ξ j Ψ ~ j γ ,

where ξ j ∼ i.i.d. 𝒩 ( 0 , 1 ) ; compare with (4.13). The corresponding random data vector is given by

𝜶 Ψ := ( α i Ψ ) i = 1 N := ( ∫ Γ σ ( P W ~ Ψ ) Φ i ) i = 1 N .

In fact, 𝜶 Ψ follows the same distribution as 𝜶 . To verify this, let R ∈ ℝ N × N be the matrix with entries

R i j := ∫ Γ σ ( P Ψ ~ i γ ) Φ j ,

so that α i Ψ = ( R T 𝝃 ) i for i = 1 , … , N , where 𝝃 := ( ξ 1 , … , ξ N ) T . Recalling that M is the σ-weighted mass matrix with entries M i j = ∫ Γ σ Φ i Φ j , using the orthonormality property (4.10) we infer that M = R T R . Thus,

𝔼 [ 𝜶 Ψ ( 𝜶 Ψ ) T ] = 𝔼 [ R T 𝝃 𝝃 T R ] = M .

This shows that 𝜶 Ψ , 𝜶 𝒩 ( 𝟎 , M ) .

Thanks to the eigenvalue approximation estimate (Lemma 4.1), we deduce estimates for ‖ L ~ 𝒯 - r / 2 W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) when r ∈ ( ( n - 1 ) / 2 , 2 s ) ; compare with the estimates (3.11) for w ~ . This is the object of the following lemma.

Lemma 4.2.

For r ∈ ( ( n - 1 ) / 2 , 2 s ) , we have

(4.19) ‖ L ~ 𝒯 - r / 2 W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ r / ( r - ( n - 1 ) / 2 ) .

Proof.

Using the expansion in the eigenfunction basis { Ψ ~ i γ } i = 1 N of L ~ 𝒯 (discrete operator defined on the lifted space 𝕍 ~ ( 𝒯 ) , see (4.6)), we find that

‖ L ~ 𝒯 - r / 2 W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 = ∑ j = 1 N ( Λ ~ j γ ) - r = ∑ j = 1 N [ ( Λ ~ j γ ) - r - λ ~ j - r ] + ∑ j = 1 N λ ~ j - r .

The approximation estimates for the eigenvalues (Lemma 4.1) yield

| ( Λ ~ j γ ) - r - λ ~ j - r | ≲ r λ ~ j - r - 1 | Λ ~ j γ - λ ~ j | ≲ h 2 λ ~ j - r Λ ~ j γ ≲ λ ~ j - r ,

where for the last inequality we used the fact that Λ ~ j γ ≤ Λ ~ N γ ≲ h - 2 . Therefore, we obtain

‖ L ~ 𝒯 - r / 2 W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ ∑ j = 1 N λ ~ j - r .

It remains to take advantage of the decay of the eigenvalues (3.10), see also formula (3.11), to derive the desired estimate. ∎

4.5 Finite Element Approximation for L - s

The numerical scheme advocated here for approximating (4.1) follows the original work in [19]. Although the scheme is conceptually the same, its analysis must be extended, see Section 5.

With the change of variable y = ln ( μ ) , so that μ = e y , we rewrite (4.1) as

u ~ = L - s w ~ = sin ( π s ) / π ∫ - ∞ ∞ e ( 1 - s ) y ( e y I + L ) - 1 w ~ 𝑑 y .

The improper integral in y is approximated using a sinc quadrature with spacing k > 0 , i.e.

(4.20) u ~ ≈ u ~ k := 𝒬 k - s ( L ) w ~ := k sin ( π s ) / π ∑ l = - 𝙼 𝙽 e ( 1 - s ) y l ( e y l I + L ) - 1 w ~ ,

where

(4.21) 𝙽 := ⌈ 2 π 2 / ( ( s - ( n - 1 ) / 4 ) k 2 ) ⌉ , 𝙼 := ⌈ π 2 / ( ( 1 - s ) k 2 ) ⌉

and y l := k l , l = - 𝙼 , , 𝙽 . This particular choice of 𝙽 and 𝙼 is motivated by the analysis provided in Section 5.
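For a single eigenvalue λ, the quadrature (4.20)-(4.21) reduces to a scalar sum approximating λ^(-s). The sketch below (assuming NumPy; the parameters n = 3 and k = 0.6 match the later numerical experiments, but the code itself is only an illustration of the scalar mechanism, not the paper's deal.II implementation):

```python
import numpy as np

def sinc_quadrature(lam, s, k=0.6, n=3):
    """Scalar sinc quadrature Q_k^{-s}(lam) approximating lam**(-s),
    with the quadrature bounds N and M chosen as in (4.21)."""
    Nq = int(np.ceil(2 * np.pi**2 / ((s - (n - 1) / 4) * k**2)))
    Mq = int(np.ceil(np.pi**2 / ((1 - s) * k**2)))
    y = k * np.arange(-Mq, Nq + 1)          # quadrature nodes y_l = k*l
    return k * np.sin(np.pi * s) / np.pi * np.sum(
        np.exp((1 - s) * y) / (np.exp(y) + lam))

# the quadrature is exponentially accurate in 1/k, uniformly in lam
for lam in (1.0, 10.0, 1e4):
    assert abs(sinc_quadrature(lam, 0.75) - lam ** (-0.75)) < 1e-5
```

Note the restriction s > (n-1)/4 from the analysis: it makes the exponent in the definition of 𝙽 positive.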

The sinc quadrature approximation is stable as indicated by the following lemma.

Lemma 4.3.

For all v ~ ∈ L 2 ( Ω ; H - r ( γ ) ) , there holds

(4.22) ‖ 𝒬 k - s ( L ) v ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) ≲ ‖ v ~ ‖ L 2 ( Ω ; H - r ( γ ) ) .

In the interest of being concise, we do not provide a proof of this result, since estimate (4.22) follows from arguments similar to those leading to (3.20) (with a finite sum instead of the integral).

The numerical approximation of (4.20) is based on [18]. We recall that, to guarantee the optimal convergence rate for the deterministic fractional PDE ( - Δ γ ) s u ~ = f ~ , the right-hand side used in the numerical algorithm is chosen to be σ P f ~ . In the present context, this suggests the use of σ P W ~ ∈ L 2 ( Ω ; L 2 ( Γ ) ) as introduced in Section 4.3, where W ~ is defined in (4.13). Consequently, the approximation U k ∈ L 2 ( Ω ; 𝕍 ( 𝒯 ) ) to u ~ is defined as

(4.23) U k := 𝒬 k - s ( L 𝒯 ) ( σ P W ~ ) := k sin ( π s ) / π ∑ l = - 𝙼 𝙽 e ( 1 - s ) y l U l ,

where U l := ( e y l I + L 𝒯 ) - 1 ( σ P W ~ ) ∈ L 2 ( Ω ; 𝕍 ( 𝒯 ) ) approximates u ~ l := u ~ l ( w ~ ) := ( e y l I + L ) - 1 w ~ , namely it satisfies

(4.24) ( e y l + κ 2 ) ∫ Γ U l V + ∫ Γ ∇ Γ U l ⋅ ∇ Γ V = ∫ Γ σ ( P W ~ ) V  for all  V ∈ 𝕍 ( 𝒯 ) .

In other words, recalling the discussion in Section 4.4 for the computation of the approximate white noise, we have U l ( 𝐱 , ω ) = ∑ j = 1 N u j l ( ω ) Φ j ( 𝐱 ) , where the random vector 𝐮 l = ( u j l ) j = 1 N : Ω → ℝ N solves the linear system

(4.25) A l 𝐮 l = G 𝐳 .

Here one has 𝒛 ∼ 𝒩 ( 𝟎 , I N × N ) , the matrix G is a Cholesky factor of the weighted mass matrix M defined in (4.17), and the matrix A l = ( a i j l ) i , j = 1 N ∈ ℝ N × N is given by

a i j l := ( e y l + κ 2 ) ∫ Γ Φ j Φ i + ∫ Γ ∇ Γ Φ j ⋅ ∇ Γ Φ i , i , j = 1 , … , N .

Regarding the finite element error, we recall that [18, Theorem 4.2] guarantees that for f ~ ∈ L 2 ( γ ) with vanishing mean value and L = - Δ γ , there holds

‖ P 𝒬 k - s ( L ) f ~ - 𝒬 k - s ( L 𝒯 ) ( σ P f ~ ) ‖ L 2 ( Γ ) ≲ h 2 s ln ( h - 1 ) ‖ f ~ ‖ L 2 ( γ )

provided h < e - 1 . This result readily extends to our current setting where the operator L is κ 2 I - Δ γ and the right-hand side has a stochastic component. More precisely, we have

(4.26) ‖ P 𝒬 k - s ( L ) W ~ - 𝒬 k - s ( L 𝒯 ) ( σ P W ~ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) ≲ h 2 s ln ( h - 1 ) ‖ W ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) ,

where now the hidden constant depends on κ. A similar estimate holds for the finite element method defined on the lifted finite element space 𝕍 ~ ( 𝒯 ) (see Section 4.2), namely for the error between 𝒬 k - s ( L ) W ~ and 𝒬 k - s ( L ~ 𝒯 ) W ~ .

4.6 Lifted Finite Element Method on γ

As mentioned above, the approximation W ~ Ψ of the white noise defined in (4.18) will be needed for the analysis of the method. More precisely, the mean square norm error estimate derived in Section 6 below involves the approximations 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ and 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) obtained with the finite element methods defined on 𝕍 ~ ( 𝒯 ) and 𝕍 ( 𝒯 ) , respectively. In this subsection we state some important results for these various approximations.

First, recalling that the random data vectors 𝜶 and 𝜶 Ψ based on W ~ and W ~ Ψ , respectively, follow the same Gaussian distribution, we have

(4.27) ‖ U k ‖ L 2 ( Ω ; L 2 ( Γ ) ) = ‖ 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) .

Moreover, using the expansion in the eigenfunction basis { Ψ ~ i γ } i = 1 N of L ~ 𝒯 , the mean square norm of 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ is given by

(4.28) ‖ 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 = ∑ j = 1 N 𝒬 k - s ( Λ ~ j γ ) 2 .

Finally, we can show that the strong mean square error between P 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ and 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) is controlled by the geometric error. The analysis follows from the finite element error analysis in [18, Section 4] and relies on the geometric error estimate for the weak formulation as well as regularity properties for the discrete operators, see Appendix A for the complete proof.

Lemma 4.4.

There holds

‖ P 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ - 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) ≲ C ( h ) h 2 ,

where C ( h ) ≲ 1 when n = 2 and C ( h ) ≲ ln ( h - 1 ) when n = 3 , provided that h < e - 1 .

5 Strong Error Estimate

In this section, we provide a mean square error estimate for the numerical approximation U k in (4.23) to the solution u ~ of the model problem (2.13). We start with the convergence of the sinc quadrature scheme for the continuous operator, i.e. the discrepancy between u ~ = L - s w ~ in (4.1) and u ~ k = 𝒬 k - s ( L ) w ~ expressed in (4.20). Then we address the finite element error between u ~ k and U ~ k = P - 1 U k on γ. We note that by considering the sinc quadrature error first, our final estimate for the error u ~ - U ~ k improves the results of [13, 38].

5.1 Analysis of the Sinc Approximation

The following proposition establishes the exponential convergence of the sinc quadrature approximation u ~ k = 𝒬 k - s ( L ) w ~ to u ~ in the strong mean square norm L 2 ( Ω ; L 2 ( γ ) ) .

Proposition 5.1.

Assume that ( n - 1 ) / 4 < s < 1 and set r = ( n - 1 ) / 4 + s so that w ~ ∈ L 2 ( Ω ; H - r ( γ ) ) . Let u ~ be the solution to (2.13) and let u ~ k = 𝒬 k - s ( L ) w ~ be defined by (4.20) with a sinc quadrature step k > 0 . There holds

‖ u ~ - u ~ k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≲ e - π 2 / k ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) .

Proof.

We only sketch a proof of this result since it follows the same arguments used in [19, Theorem 3.2], except that the data w ~ belongs to L 2 ( Ω ; H - r ( γ ) ) rather than L 2 ( γ ) .

Note that the specific choice of r is made so that not only w ~ ∈ L 2 ( Ω ; H - r ( γ ) ) but also u ~ ∈ L 2 ( Ω ; L 2 ( γ ) ) , see Section 3.4. In particular, v ~ := L - r / 2 w ~ ∈ L 2 ( Ω ; L 2 ( γ ) ) satisfies w ~ = L r / 2 v ~ with ‖ v ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) = ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) . Hence, using the eigenfunction expansion of v ~ , it suffices to investigate the quadrature error in the scalar case, namely

e ( λ ~ ) := | ∫ - ∞ ∞ g λ ~ ( y ) 𝑑 y - k ∑ l = - 𝙼 𝙽 g λ ~ ( l k ) |

for all λ ~ ≥ λ ~ 1 . Here, the integrand

g λ ~ ( z ) := exp ( ( 1 - s ) z ) ( exp ( z ) + λ ~ ) - 1 λ ~ r / 2

arises from the relation

e ( 1 - s ) y ( e y I + L ) - 1 L r / 2 ψ ~ j = g λ ~ j ( y ) ψ ~ j .

It is analytic in the complex strip { z ∈ ℂ : | 𝔪 z | ≤ π / 2 } and exhibits the decay property (cf. [19, Lemma 3.1])

(5.1) | g λ ~ ( z ) | ≲ { exp ( - ( s - r / 2 ) 𝔢 ( z ) ) for  𝔢 ( z ) > 0 , exp ( ( 1 - s ) 𝔢 ( z ) ) for  𝔢 ( z ) ≤ 0 .

Hence, the fundamental theorem for the sinc approximation ([51, Theorem 2.20]) ensures the error estimate

(5.2)

| e ( λ ~ ) | ≲ sinh ( π 2 / ( 2 k ) ) - 1 e - π 2 / ( 2 k ) + e - ( s - r / 2 ) 𝙽 k + e - ( 1 - s ) 𝙼 k ≲ e - π 2 / k + e - [ s - ( n - 1 ) / 4 ] 𝙽 k / 2 + e - ( 1 - s ) 𝙼 k ,

where we used that r = ( n - 1 ) / 4 + s . The desired estimate follows with the choice of 𝙼 and 𝙽 in (4.21), which ensures that the three exponents on the right-hand side above are balanced:

e - [ s - ( n - 1 ) / 4 ] 𝙽 k / 2 ≤ e - π 2 / k and e - ( 1 - s ) 𝙼 k ≤ e - π 2 / k .

Hence, in view of (3.8) for a fixed r,

‖ u ~ - u ~ k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≤ ( max λ ~ ≥ λ ~ 1 | e ( λ ~ ) | ) ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) ≲ e - π 2 / k ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) .

The proof is complete. ∎

Remark 5.1.

Notice that the result of Proposition 5.1 holds for any choice of

r ∈ ( ( n - 1 ) / 2 , 2 s ) ,

which guarantees that both w ~ ∈ L 2 ( Ω ; H - r ( γ ) ) and u ~ ∈ L 2 ( Ω ; L 2 ( γ ) ) , see Section 3.4. Furthermore, recall that according to (3.11) we have ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) → + ∞ as r ↓ ( n - 1 ) / 2 . This means that the error estimate provided by Proposition 5.1 deteriorates as s ↓ ( n - 1 ) / 4 .

5.2 Strong Error Estimate

The strong convergence of the finite element algorithm is discussed in this subsection. This result and the mean square norm convergence discussed in the next section are the main results of this work.

Theorem 5.2 (Strong Convergence).

Assume that ( n - 1 ) / 4 < s < 1 . Let u ~ be the solution to (2.13) and let U k be the discrete approximation defined in (4.23). Then there holds

‖ u ~ - P - 1 U k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≲ ln ( h - 1 ) h 2 s - ( n - 1 ) / 2 + e - π 2 / k

provided h < e - 1 .

Proof.

We split the error into the sinc quadrature error, the white noise approximation error, and the finite element error

(5.3) ‖ u ~ - P - 1 U k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≤ ‖ u ~ - u ~ k ‖ L 2 ( Ω ; L 2 ( γ ) ) + ‖ u ~ k - u ~ k N ‖ L 2 ( Ω ; L 2 ( γ ) ) + ‖ u ~ k N - P - 1 U k ‖ L 2 ( Ω ; L 2 ( γ ) ) ,

where u ~ k N := 𝒬 k - s ( L ) W ~ with W ~ defined in (4.13).

The quadrature error is already estimated in Proposition 5.1 and reads

‖ u ~ - u ~ k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≲ e - π 2 / k ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) , r = ( n - 1 ) / 4 + s .

For the white noise approximation error, we take advantage of the eigenfunction expansion and the stability of the sinc quadrature (Lemma 4.3) to infer that

‖ u ~ k - u ~ k N ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 = ‖ 𝒬 k - s ( L ) ( w ~ - W ~ ) ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ ‖ L - s ( w ~ - W ~ ) ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ ∑ j = 1 ∞ λ ~ j - 2 s ‖ ( I - 𝚷 ~ ) ψ ~ j ‖ L 2 ( γ ) 2 .

Let 0 < ϵ < 2 s - ( n - 1 ) / 2 . The approximation property (4.8) of 𝚷 ~ together with the property

‖ ψ ~ j ‖ H t ( γ ) ≲ ‖ L t / 2 ψ ~ j ‖ L 2 ( γ ) = λ ~ j t / 2 ‖ ψ ~ j ‖ L 2 ( γ ) = λ ~ j t / 2 ,

both with t = 2 s - ( n - 1 ) / 2 - ϵ ∈ ( 0 , 2 ) , yield

‖ u ~ k - u ~ k N ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ h 4 s - ( n - 1 ) - 2 ϵ ∑ j = 1 ∞ λ ~ j - ( n - 1 ) / 2 - ϵ .

Since h < e - 1 , we choose ϵ = ( 2 s - ( n - 1 ) / 2 ) / ln ( h - 1 ) ∈ ( 0 , 2 s - ( n - 1 ) / 2 ) and deduce, using Weyl’s law (3.10), that

(5.4)

‖ u ~ k - u ~ k N ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 ≲ h 4 s - ( n - 1 ) - 2 ϵ ∑ j = 1 ∞ j - 1 - 2 ϵ / ( n - 1 ) ≲ ϵ - 1 h 4 s - ( n - 1 ) - 2 ϵ ≲ ln ( h - 1 ) h 4 s - ( n - 1 ) .

It remains to estimate the finite element error. Note that in view of the change of variable formula (4.4) and the estimates on the area ratio (4.5), we have

(5.5)

‖ u ~ k N - P - 1 U k ‖ L 2 ( Ω ; L 2 ( γ ) ) = ‖ σ 1 / 2 ( P u ~ k N - U k ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) ≲ ‖ P u ~ k N - U k ‖ L 2 ( Ω ; L 2 ( Γ ) ) = ‖ P 𝒬 k - s ( L ) W ~ - 𝒬 k - s ( L 𝒯 ) ( σ P W ~ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) .

Whence, (4.26) together with (4.14) yield

‖ u ~ k N - P - 1 U k ‖ L 2 ( Ω ; L 2 ( γ ) ) ≲ h 2 s ln ( h - 1 ) N 1 / 2 ≲ ln ( h - 1 ) h 2 s - ( n - 1 ) / 2 .

The desired estimate follows upon gathering the three error estimates. ∎

Remark 5.2.

Note that from Theorem 5.2, we can deduce the following stability of the sinc quadrature applied to L 𝒯 :

(5.6) ‖ 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) ≲ ‖ u ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) + ln ( h - 1 ) h 2 s - ( n - 1 ) / 2 + e - π 2 / k

and thanks to Lemma 4.4, the same holds for L ~ 𝒯 :

(5.7) ‖ P 𝒬 k - s ( L ~ 𝒯 ) ( W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) ≲ ‖ u ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) + ln ( h - 1 ) h 2 s - ( n - 1 ) / 2 + e - π 2 / k .

Remark 5.3.

In view of Theorem 5.2, we have improved the strong error estimates from [13, Theorem 2.10] and [38, Proposition 2.2] by showing that the error due to the sinc quadrature is independent of the mesh size h. This can be verified numerically. According to the proof of Proposition 5.1, it suffices to check the error max λ ~ ≥ λ ~ 1 | e ( λ ~ ) | . In Figure 1, we report the error | e ( λ ~ ) | for λ ~ ∈ [ 2 , 10 7 ] , when the fractional power is set to s = 0.75 and the sinc quadrature spacing to k = 0.6 . We observe that the error decays as the eigenvalue increases. Therefore, max λ ~ ≥ 2 | e ( λ ~ ) | = | e ( 2 ) | and the sinc quadrature error is independent of the largest eigenvalue.[3]
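This observation can be reproduced directly: the exact value of the integral of g_λ is available in closed form, ∫ g_λ = ( π / sin ( π s ) ) λ^( r/2 − s ), so e ( λ ) is computable to machine precision. A minimal sketch (assuming NumPy; s = 0.75, k = 0.6, n = 3 as in the figure — an illustration, not the paper's implementation):

```python
import numpy as np

s, k, n = 0.75, 0.6, 3
r = (n - 1) / 4 + s                       # r as in Proposition 5.1
Nq = int(np.ceil(2 * np.pi**2 / ((s - (n - 1) / 4) * k**2)))
Mq = int(np.ceil(np.pi**2 / ((1 - s) * k**2)))
y = k * np.arange(-Mq, Nq + 1)

def sinc_err(lam):
    # e(lam): exact integral of g_lam minus its sinc quadrature, in absolute value
    exact = np.pi / np.sin(np.pi * s) * lam ** (r / 2 - s)
    approx = k * np.sum(np.exp((1 - s) * y) / (np.exp(y) + lam)) * lam ** (r / 2)
    return abs(exact - approx)

errs = [sinc_err(lam) for lam in np.logspace(np.log10(2.0), 7, 6)]
```

The computed values stay at the predicted O(e^(−π²/k)) level and decrease as λ grows, consistent with the behavior reported in Figure 1.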

Figure 1

Log-log plot of sinc approximation error | e ( λ ~ ) | for λ ~ [ 2 , 10 7 ] .

6 Mean Square Norm Error Estimate

In this section, we provide an error bound between the mean square norm of the exact solution u ~ and that of its finite element approximation U k . The main result is the following theorem.

Theorem 6.1 (Convergence in the Mean Square Norm).

Assume that ( n - 1 ) / 4 < s < 1 . Let u ~ be the solution to (2.13) and let U k be the approximation defined by (4.23). Then there holds

| ‖ u ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 - ‖ U k ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | ≲ h 4 s - ( n - 1 ) + C ( h ) h 2 + e - π 2 / k ,

where C ( h ) is the constant that appears in Lemma 4.4.

Proof.

Using the triangle inequality, we split the target error into the following five error terms:

| ‖ u ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 - ‖ U k ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | ≤ ∑ i = 1 5 E i ,

where, thanks to (4.28), we have

E 1 := | ‖ u ~ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 - ‖ u ~ k ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 | = | ∑ j = 1 ∞ ( λ ~ j - 2 s - 𝒬 k - s ( λ ~ j ) 2 ) | ,
E 2 := ∑ j = N + 1 ∞ 𝒬 k - s ( λ ~ j ) 2 ,
E 3 := | ∑ j = 1 N ( 𝒬 k - s ( λ ~ j ) 2 - 𝒬 k - s ( Λ ~ j γ ) 2 ) | ,
E 4 := | ‖ 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 - ‖ 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | ,
E 5 := | ‖ 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 - ‖ U k ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | .

Note that (4.27) guarantees that E 5 = 0 so we only need to estimate each error E i , i = 1 , , 4 .

Step 1. For E 1 , the sinc quadrature error formula (5.2) together with (4.21) directly implies that for r = ( n - 1 ) / 4 + s we have

| λ ~ j - s - 𝒬 k - s ( λ ~ j ) | ≤ | e ( λ ~ j ) | λ ~ j - r / 2 ≲ e - π 2 / k λ ~ j - r / 2

so that

| λ ~ j - 2 s - 𝒬 k - s ( λ ~ j ) 2 | ≲ e - π 2 / k λ ~ j - r / 2 ( λ ~ j - s + 𝒬 k - s ( λ ~ j ) ) ≲ e - π 2 / k λ ~ j - r / 2 - s ,

where we used

(6.1) 𝒬 k - s ( λ ~ j ) ≲ λ ~ j - s

to justify the last inequality. Consequently, we find that

E 1 ≲ e - π 2 / k ∑ j = 1 ∞ λ ~ j - r / 2 - s ≲ e - π 2 / k ‖ w ~ ‖ L 2 ( Ω ; H - r ( γ ) ) 2 ≲ e - π 2 / k .

Step 2. We invoke again the stability of the sinc quadrature (6.1) together with Weyl’s law (3.10) to estimate E 2 :

E 2 ≲ ∑ j = N + 1 ∞ λ ~ j - 2 s ≲ ∑ j = N + 1 ∞ j - 4 s / ( n - 1 ) ≲ N 1 - 4 s / ( n - 1 ) ≲ h 4 s - ( n - 1 ) .

Step 3. For E 3 , the stability of the sinc quadrature (6.1) along with the error estimates on the approximation of the eigenvalues (Lemma 4.1) yield

𝒬 k - s ( λ ~ j ) - 𝒬 k - s ( Λ ~ j γ ) = k sin ( π s ) / π ∑ l = - 𝙼 𝙽 e ( 1 - s ) y l ( 1 / ( e y l + λ ~ j ) - 1 / ( e y l + Λ ~ j γ ) )
= k sin ( π s ) / π ∑ l = - 𝙼 𝙽 e ( 1 - s ) y l ( Λ ~ j γ - λ ~ j ) / ( ( e y l + λ ~ j ) ( e y l + Λ ~ j γ ) )
≤ ( ( Λ ~ j γ - λ ~ j ) / Λ ~ j γ ) k sin ( π s ) / π ∑ l = - 𝙼 𝙽 e ( 1 - s ) y l / ( e y l + λ ~ j )
= ( ( Λ ~ j γ - λ ~ j ) / Λ ~ j γ ) 𝒬 k - s ( λ ~ j ) ≲ λ ~ j 1 - s h 2 .

Since κ 2 ≤ λ ~ j ≤ Λ ~ j γ thanks to Lemma 4.1, we deduce that

E 3 ≤ ∑ j = 1 N | 𝒬 k - s ( λ ~ j ) - 𝒬 k - s ( Λ ~ j γ ) | | 𝒬 k - s ( λ ~ j ) + 𝒬 k - s ( Λ ~ j γ ) | ≲ ∑ j = 1 N | 𝒬 k - s ( λ ~ j ) - 𝒬 k - s ( Λ ~ j γ ) | λ ~ j - s ≲ h 2 ∑ j = 1 N λ ~ j 1 - 2 s ≲ h 2 ∑ j = 1 N j ( 2 - 4 s ) / ( n - 1 ) ≲ h 2 N 1 + ( 2 - 4 s ) / ( n - 1 ) ≲ h 4 s - ( n - 1 ) .

Step 4. We now focus on E 4 . Using the triangle inequality and Lemma 4.4 as well as Remark 5.2, we have

E 4 ≤ | ‖ 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 - ‖ P 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | + | ‖ P 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 - ‖ 𝒬 k - s ( L 𝒯 ) ( σ P W ~ Ψ ) ‖ L 2 ( Ω ; L 2 ( Γ ) ) 2 | ≲ ‖ 1 - σ ‖ L ∞ ( Γ ) ‖ 𝒬 k - s ( L ~ 𝒯 ) W ~ Ψ ‖ L 2 ( Ω ; L 2 ( γ ) ) 2 + C ( h ) h 2 ,

where C ( h ) is as in Lemma 4.4 and where for the second term on the right-hand side we used (5.6) and (5.7). The geometric consistency (4.5) together with the stability estimate (5.7) imply

E 4 ≲ h 2 + C ( h ) h 2 ≲ C ( h ) h 2 .

The proof is complete by combining the estimates for E i with i = 1 , … , 5 . ∎

Remark 6.1.

We note that when n = 2 , the rate of convergence is limited to O ( h 2 ) even when s > 3 / 4 . This O ( h 2 ) limitation is due to the geometric approximation of the area, i.e. the estimate ‖ 1 - σ ‖ L ∞ ( Γ ) ≲ h 2 used to derive the bound for E 4 in the proof of Theorem 6.1. We anticipate that the optimal order of convergence O ( h 4 s - ( n - 1 ) ) can be obtained upon using higher order surface approximations. In fact, when γ is of class C p + 1 and polynomials of degree p are used to define the approximation Γ of γ, one has ‖ 1 - σ ‖ L ∞ ( Γ ) ≲ h p + 1 (see, e.g., [28]).

7 Numerical Illustration

In this section, we perform several numerical experiments to illustrate the proposed numerical method. In particular, we compute the strong and mean square norm errors, and we illustrate the effect of the parameters s and κ by plotting one realization of the approximated random field U k and by evaluating the covariance cov U k ( 𝐱 , 𝐱 ′ ) at some points 𝐱 , 𝐱 ′ ∈ Γ . Recall that U k is a linear combination of the { U l } l = - 𝙼 𝙽 , see (4.23), and that a realization U l ( ⋅ , ω ) ∈ 𝕍 ( 𝒯 ) of U l is obtained by solving the linear system (4.25) for a realization 𝐳 ( ω ) of 𝐳 ∼ 𝒩 ( 𝟎 , I N × N ) .

For the numerical error analysis, we focus on the error due to the finite element discretization. Therefore, from now on, we set k = 0.6 for the quadrature spacing and choose 𝙽 and 𝙼 according to (4.21), thus yielding (up to a multiplicative constant) a sinc quadrature error of order e - π 2 / k ≈ 7.1803 × 10 - 8 . The numerical implementation is based on the deal.ii library [3] and the visualization is done with ParaView [4].

7.1 Gaussian Matérn Random Field on the Unit Sphere

In this first example, we let γ = 𝕊 2 denote the unit sphere in ℝ 3 parametrized using the spherical coordinates ( θ , φ ) ∈ [ 0 , π ] × [ 0 , 2 π ) , where θ and φ are the elevation angle (latitude) and azimuth angle (longitude), respectively. Using the spherical harmonic functions, see for instance [52, 44, 40], the formal KL expansion (3.9) of the white noise can be written

(7.1) w ~ ( 𝐱 , ω ) = ∑ l = 0 ∞ [ ξ l , 0 1 ( ω ) q l , 0 ( θ ) + √ 2 ∑ m = 1 l q l , m ( θ ) ( ξ l , m 1 ( ω ) cos ( m φ ) + ξ l , m 2 ( ω ) sin ( m φ ) ) ] ,

and the solution to the SPDE (2.13) reads

(7.2) u ~ ( 𝐱 , ω ) = ∑ l = 0 ∞ ( κ 2 + l ( l + 1 ) ) - s [ ξ l , 0 1 ( ω ) q l , 0 ( θ ) + √ 2 ∑ m = 1 l q l , m ( θ ) ( ξ l , m 1 ( ω ) cos ( m φ ) + ξ l , m 2 ( ω ) sin ( m φ ) ) ] ,

where the κ 2 + l ( l + 1 ) are the eigenvalues of the operator L (multiplicity 2 l + 1 ). Here

ξ l , m i ∼ i.i.d. 𝒩 ( 0 , 1 ) , l ∈ ℕ 0 := ℕ ∪ { 0 } , m = 0 , … , l , i = 1 , 2 ,

and 𝐱 = ( sin ( θ ) cos ( φ ) , sin ( θ ) sin ( φ ) , cos ( θ ) ) ∈ γ . Moreover, for l ∈ ℕ 0 and m = 0 , … , l ,

q l , m ( θ ) := ( ( 2 l + 1 ) / ( 4 π ) ⋅ ( l - m ) ! / ( l + m ) ! ) 1 / 2 P l , m ( cos ( θ ) )

with P_{l,m}(μ) the Legendre functions

P_{l,m}(\mu) := (-1)^{m}\,(1-\mu^{2})^{m/2}\,\frac{\mathrm{d}^{m}}{\mathrm{d}\mu^{m}} P_{l}(\mu), \qquad \mu \in [-1,1],

associated to the Legendre polynomials

P_{l}(\mu) := \frac{1}{2^{l}\, l!}\,\frac{\mathrm{d}^{l}}{\mathrm{d}\mu^{l}}(\mu^{2}-1)^{l}, \qquad \mu \in [-1,1].
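As a small illustration (a Python/SciPy sketch, not part of the paper's implementation), the normalization of the q_{l,m} can be verified numerically: orthonormality of the spherical harmonics implies ∫₀^π q_{l,m}(θ)² sin θ dθ = 1/(2π) for every admissible pair (l, m). SciPy's lpmv uses the same Condon–Shortley phase (−1)^m as the definition of P_{l,m} above.

```python
import math
import numpy as np
from scipy.special import lpmv

def q(l, m, theta):
    # q_{l,m}(theta) as defined above; scipy.special.lpmv(m, l, x) evaluates
    # the associated Legendre function P_{l,m}(x), Condon-Shortley phase included.
    norm = math.sqrt((2 * l + 1) / (4 * math.pi)
                     * math.factorial(l - m) / math.factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta))

# Check int_0^pi q_{l,m}(theta)^2 sin(theta) dtheta = 1/(2 pi) by the
# trapezoidal rule on a fine grid.
theta = np.linspace(0.0, math.pi, 20001)
for l, m in [(0, 0), (3, 0), (5, 2), (10, 7)]:
    f = q(l, m, theta) ** 2 * np.sin(theta)
    integral = float(np.sum((f[:-1] + f[1:]) / 2) * (theta[1] - theta[0]))
    assert abs(integral - 1 / (2 * math.pi)) < 1e-4
```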

Note that the (complex-valued) spherical harmonic functions are given in spherical coordinates by

y_{l,m}(\theta,\varphi) := \begin{cases} q_{l,m}(\theta)\, e^{im\varphi} & \text{for } l \in \mathbb{N}_{0} \text{ and } m = 0,\dots,l, \\ (-1)^{m}\, q_{l,-m}(\theta)\, e^{im\varphi} & \text{for } l \in \mathbb{N} \text{ and } m = -l,\dots,-1. \end{cases}

Then, for l ∈ ℕ₀ and m = −l, …, l, y_{l,m} is an eigenfunction of the Laplace–Beltrami operator associated to the eigenvalue l(l+1). Finally, we mention that ũ in (7.2) satisfies

(7.3) \|\tilde u\|^{2}_{L^{2}(\Omega;L^{2}(\gamma))} = \sum_{l=0}^{\infty} (\kappa^{2}+l(l+1))^{-2s}\,(2l+1).
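Identity (7.3) also offers a convenient numerical check: truncating the series reproduces, for instance, the values of ‖ũ‖²_{L²(Ω;L²(γ))} quoted in the captions of Tables 2 and 3 (a plain Python sketch):

```python
def norm2(s, kappa, L=100_000):
    # Truncation of (7.3): sum_{l=0}^{L} (kappa^2 + l(l+1))^{-2s} (2l+1).
    return sum((kappa ** 2 + l * (l + 1)) ** (-2 * s) * (2 * l + 1)
               for l in range(L + 1))

print(norm2(0.75, 2.0))  # close to 1.04528, as quoted for s = 0.75, kappa = 2
print(norm2(0.9, 8.0))   # close to 0.04506, as quoted for s = 0.9, kappa = 8
```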

7.1.1 Strong and Mean Square Norm Errors

We first compute the strong and mean square norm errors given, respectively, by

P u ~ - U k L 2 ( Ω ; L 2 ( Γ ) ) u ~ - P - 1 U k L 2 ( Ω ; L 2 ( γ ) ) and | u ~ L 2 ( Ω ; L 2 ( γ ) ) 2 - U k L 2 ( Ω ; L 2 ( Γ ) ) 2 | ,

where U k is the approximation given in (4.23). However, these quantities cannot be computed exactly in practice, and some approximations are required. First, we accurately compute the norm of ũ (still denoted ‖ũ‖²_{L²(Ω;L²(γ))} below) using (7.3) but keeping the first 100,000 terms. For the other terms, we use the vanilla Monte Carlo method with sample size K to compute the expected values. Now, for the strong error, since ũ involves an infinite sum, we replace it by ũ_𝙻 = L^{-s} w̃_𝙻, where ũ_𝙻 and w̃_𝙻 are defined as in (7.2) and (7.1), respectively, but with the index l running from 0 to some positive integer 𝙻. Then we have

(7.4) \|\tilde u_{\mathtt{L}}\|^{2}_{L^{2}(\Omega;L^{2}(\gamma))} = \sum_{l=0}^{\mathtt{L}} (\kappa^{2}+l(l+1))^{-2s}\,(2l+1),
(7.5) \|\tilde w_{\mathtt{L}}\|^{2}_{L^{2}(\Omega;L^{2}(\gamma))} = \sum_{l=0}^{\mathtt{L}} (2l+1) = (\mathtt{L}+1)^{2},

and we refer to [44] for an analysis of the truncation error ‖ũ − ũ_𝙻‖_{L²(Ω;L²(γ))}. Moreover, we need to have comparable samples for the (truncated) exact solution ũ_𝙻 and the approximation U k. This would require the computation of 𝜶 with entries given in (4.16), and thus the computation of the L²(γ) projection of the eigenfunctions onto the conforming finite element space 𝕍~(𝒯). In particular, we cannot use the strategy described in Section 4.4, namely replacing 𝜶 by G𝒛 with 𝒛 ∼ 𝒩(0, I N × N) and GG^T = M. As an alternative, we follow [13] and replace U k by U 𝙻,k, obtained by using the data vector with entries (σ P w̃_𝙻, Φ_i)_Γ, i = 1, …, N. To sum up, we report below the errors

e_{\mathrm{strong}} := \Big(\frac{1}{K}\sum_{i=1}^{K} \|P\tilde u_{\mathtt{L}}(\cdot,\omega_{i}) - U_{\mathtt{L},k}(\cdot,\omega_{i})\|^{2}_{L^{2}(\Gamma)}\Big)^{1/2},
e_{\mathrm{weak}} := \Big|\|\tilde u\|^{2}_{L^{2}(\Omega;L^{2}(\gamma))} - \frac{1}{K}\sum_{i=1}^{K} \|U_{k}(\cdot,\omega_{i})\|^{2}_{L^{2}(\Gamma)}\Big|,

as well as e_σ := ‖1 − σ‖_{L^∞(Γ)}, for different meshes and different values of the parameters s and κ. Recall that according to Theorems 5.2 and 6.1, the theoretical convergence rate is 2s − 1 for the strong error (up to a logarithmic term) and 4s − 2 for the mean square error.

Table 1

Error e strong using K = 10000 Monte Carlo samples and 𝙻 = 100 for the truncation of the white noise.

s = 0.75 s = 0.9
N h e σ κ = 0.5 κ = 2 κ = 8 κ = 0.5 κ = 2 κ = 8
8 1.633 2.000 65.488 11.199 1.913 79.794 8.672 1.024
26 1.000 0.3032 23.258 5.418 1.294 27.780 4.007 0.677
98 0.541 0.0778 11.367 3.078 0.976 13.404 2.191 0.489
386 0.276 0.0194 4.550 1.334 0.532 5.328 0.916 0.247
1538 0.139 0.0048 0.244 0.202 0.192 0.180 0.070 0.061
6146 0.070 0.0012 0.097 0.097 0.095 0.029 0.028 0.028

In Table 1 we report the strong error e strong for 𝙻 = 100 and K = 10,000. In all cases we observe approximately the convergence rate O(h²) predicted by [18, Theorem 4.2] for smooth right-hand sides (in physical space); see also [40, Proposition 4.2]. In passing, we mention that numerical instabilities may arise when 𝙻 is large because of the large values assumed by the Legendre functions P l,m (for instance P 100,98 (0) ≈ −1.675 × 10^{184} and P 100,100 (0) ≈ 6.666 × 10^{186}). However, we do not clearly observe such instabilities, see Figure 2.
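To see why large 𝙻 is delicate in double precision, consider the size of the factorial ratio that the normalization in q_{l,m} has to cancel (a quick Python sketch, using the log-Gamma function to avoid overflow):

```python
import math

# The normalization sqrt((l-m)!/(l+m)!) in q_{l,m} must cancel the enormous
# values of P_{l,m}. For l = 100, m = 98 the ratio (l+m)!/(l-m)! has roughly
# 370 decimal digits, far beyond the double-precision range (~1e308).
l, m = 100, 98
log10_ratio = (math.lgamma(l + m + 1) - math.lgamma(l - m + 1)) / math.log(10)
print(log10_ratio)  # ~ 370, i.e. a normalization factor of order 10^{-185}
```

Intermediate quantities of this size can overflow or lose all significant digits, which is the instability alluded to above.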

Figure 2

Errors e strong for κ = 0.5 and different values of the truncation parameter 𝙻 using K = 10000 Monte Carlo samples.

Table 2

Mean square error e weak using K = 1000 when κ = 2, in which case ‖ũ‖²_{L²(Ω;L²(γ))} is equal to 2.87891 for s = 0.625, 1.04528 for s = 0.75, and 0.44277 for s = 0.9.

s = 0.625 s = 0.75 s = 0.9
N h U k 2 e weak U k 2 e weak U k 2 e weak
98 0.542 1.4399 1.4391 0.7554 0.2899 0.3738 0.0690
386 0.276 1.8060 1.0729 0.8751 0.1701 0.4103 0.03247
1538 0.139 2.0978 0.7812 0.9461 0.0992 0.4248 0.0180
6146 0.070 2.3210 0.5579 0.9903 0.0550 0.4336 0.0092
24578 0.035 2.4761 0.4028 1.0087 0.0366 0.4339 0.0089
Table 3

Mean square error e weak using K = 1000 when κ = 8, in which case ‖ũ‖²_{L²(Ω;L²(γ))} is equal to 1.40341 for s = 0.625, 0.25063 for s = 0.75, and 0.04506 for s = 0.9.

s = 0.625 s = 0.75 s = 0.9
N h U k 2 e weak U k 2 e weak U k 2 e weak
98 0.542 0.2605 1.1429 0.0813 0.1694 0.0203 0.0248
386 0.276 0.4684 0.9351 0.1329 0.1177 0.0303 0.0148
1538 0.139 0.6859 0.7175 0.1774 0.0732 0.0375 0.0076
6146 0.070 0.8741 0.5293 0.2083 0.0423 0.0415 0.0036
24578 0.035 1.0250 0.3784 0.2278 0.0229 0.0435 0.0015
Figure 3

Errors e weak for κ = 2 (left) and κ = 8 (right) using K = 1000 Monte Carlo samples. The dashed lines indicate the behavior predicted by Theorem 6.1.

Regarding the mean square error e weak, we report its value along with ‖U k‖²_{L²(Ω;L²(Γ))} for different fractional powers s and the cases κ = 2 and κ = 8 in Tables 2 and 3, respectively. These tables are supplemented with Figure 3 to better illustrate the evolution of the mean square error as the mesh size varies. The convergence rates observed numerically match the predictions from Theorem 6.1 when the finite element error is dominant, namely when s is small (i.e. the solution is not smooth) or when κ is large (i.e. the correlation length is small). In the other cases, other sources of error, such as the Monte Carlo and the sinc quadrature errors, affect the convergence with respect to h.
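The rates in the tables can be quantified with the usual experimental order of convergence, EOC = ln(e_i/e_{i+1})/ln(h_i/h_{i+1}). For instance, for the column s = 0.625, κ = 8 of Table 3, where Theorem 6.1 predicts the rate 4s − 2 = 0.5 (a plain Python sketch):

```python
import math

# Mesh sizes h and mean square errors e_weak from Table 3 (s = 0.625, kappa = 8).
h = [0.542, 0.276, 0.139, 0.070, 0.035]
e = [1.1429, 0.9351, 0.7175, 0.5293, 0.3784]
rates = [math.log(e[i] / e[i + 1]) / math.log(h[i] / h[i + 1])
         for i in range(len(h) - 1)]
print([round(r, 3) for r in rates])  # increases toward ~0.5 on the finest meshes
```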

7.1.2 Effect of the Parameters s and κ

The fractional power s determines the regularity of the random field. To illustrate this, we give in Figure 4 one realization of U k for s = 0.55 , 0.75 , 0.95 using N = 1538 DoFs ( h = 0.1391 ). For the color map, blue indicates negative values while red indicates positive values, the range of values being [ - 2.2614 , 1.7720 ] for s = 0.55 , [ - 1.1898 , 0.6849 ] for s = 0.75 , and [ - 0.8278 , 0.2587 ] for s = 0.95 .

Figure 4

Numerical solution U k ( , ω m ) to Problem (2.13) on the unit sphere for different values of s when κ = 0.5 .

(a) s = 0.55. (b) s = 0.75. (c) s = 0.95.

Table 4

Covariance function defined in (7.6) evaluated at the following points: the South Pole 𝐱 1 = (0, 0, −1), the point 𝐱 2 = (0, 1, 0), which lies on the Equator, and the North Pole 𝐱 3 = (0, 0, 1).

s = 0.75 s = 0.9
κ = 0.5 κ = 2 κ = 0.5 κ = 2
( 𝐱 1 , 𝐱 2 ) 0.623685 0.005944 0.951398 0.004374
( 𝐱 1 , 𝐱 3 ) 0.577621 0.001588 0.909999 0.000980
( 𝐱 2 , 𝐱 3 ) 0.617366 0.004903 0.945554 0.003722

To see the influence of the parameter κ, which is inversely proportional to the correlation length, we compute the approximated covariance at different points, namely

(7.6) \mathrm{cov}_{U_{k}}(\mathbf{x},\mathbf{x}') := \frac{1}{K-1}\sum_{i=1}^{K}\big(U_{k}(\mathbf{x},\omega_{i}) - \overline{U}_{k}(\mathbf{x})\big)\big(U_{k}(\mathbf{x}',\omega_{i}) - \overline{U}_{k}(\mathbf{x}')\big),

where

\overline{U}_{k}(\mathbf{x}) := \frac{1}{K}\sum_{i=1}^{K} U_{k}(\mathbf{x},\omega_{i}).

We take K = 10,000 for both the approximation of the mean field U ¯ k and the covariance function cov U k, but using two different sets of samples, and we take N = 1538 DoFs (h = 0.139). The results are reported in Table 4. We observe numerically that the smaller κ, the larger the covariance, and that the evaluation of cov U k at pairs of points with the same (geodesic) distance yields comparable values.
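The estimator (7.6) is the standard unbiased sample covariance applied pathwise at a fixed pair of points; a minimal NumPy sketch (with synthetic correlated samples standing in for the field values, purely for illustration):

```python
import numpy as np

def cov_estimate(ux, uxp):
    # Unbiased sample covariance (7.6): ux[i] = U_k(x, omega_i),
    # uxp[i] = U_k(x', omega_i) for i = 1, ..., K.
    K = len(ux)
    return float(np.sum((ux - ux.mean()) * (uxp - uxp.mean())) / (K - 1))

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
b = 0.5 * a + rng.standard_normal(10_000)  # synthetic pair with covariance 0.5
print(cov_estimate(a, b))  # close to 0.5
```

It agrees with np.cov(a, b)[0, 1] up to round-off.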

7.2 Gaussian Matérn Random Field on a Torus

Here the surface γ is a torus with parametrization

(7.7) ( x , y , z ) = ( ( R + r cos ( θ ) ) cos ( ϕ ) , r sin ( θ ) , ( R + r cos ( θ ) ) sin ( ϕ ) )

for θ, ϕ ∈ [0, 2π). In what follows, we set r = 0.5 and R = 2. One realization for the case κ = 0.5 is given in Figure 5 for s = 0.55, 0.75, 0.95 using N = 5120 (h = 0.1386). As above, blue indicates negative values while red stands for positive values, the range of values being in this case [−2.6314, 3.0533] for s = 0.55, [−1.5083, 1.5491] for s = 0.75, and [−1.2159, 0.7646] for s = 0.95.
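The parametrization (7.7) directly yields the evaluation points used in Table 5; a quick check in Python (with r = 0.5 and R = 2 as above):

```python
import math

R, r = 2.0, 0.5

def torus(theta, phi):
    # Parametrization (7.7) of the torus.
    return ((R + r * math.cos(theta)) * math.cos(phi),
            r * math.sin(theta),
            (R + r * math.cos(theta)) * math.sin(phi))

# phi = 0 and theta = pi, pi/2, 0 give the three points of Table 5:
print(torus(math.pi, 0.0))      # ~ (1.5, 0, 0)
print(torus(math.pi / 2, 0.0))  # ~ (2.0, 0.5, 0)
print(torus(0.0, 0.0))          # (2.5, 0.0, 0.0)
```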

As for the sphere, we evaluate the covariance function cov U k at different points and for different values of the parameters s and κ; see Table 5 for the results obtained when N = 1280 (h = 0.2757). As above, we observe that the covariance decreases as κ increases and as the geodesic distance between the two points increases.

Figure 5

Numerical solution to Problem (2.13) on the torus defined in (7.7) for different values of s when κ = 0.5 .

(a) s = 0.55. (b) s = 0.75. (c) s = 0.95.

Table 5

Covariance function defined in (7.6) evaluated at the following points: 𝐱 1 = (1.5, 0, 0), 𝐱 2 = (2, 0.5, 0), and 𝐱 3 = (2.5, 0, 0), which correspond to the cases ϕ = 0 and θ = π, π/2, 0, respectively.

s = 0.75 s = 0.9
κ = 0.5 κ = 2 κ = 0.5 κ = 2
( 𝐱 1 , 𝐱 2 ) 0.377470 0.015192 0.505575 0.010112
( 𝐱 1 , 𝐱 3 ) 0.360484 0.006877 0.497597 0.005097
( 𝐱 2 , 𝐱 3 ) 0.401743 0.017716 0.529588 0.011722


Funding statement: Andrea Bonito is partially supported by the NSF Grant DMS-2110811. Diane Guignard acknowledges the support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC, grant RGPIN-2021-04311). Wenyu Lei is partially supported by the National Natural Science Foundation of China (Grant No. 12301496).

A Proof of Lemma 4.4

We follow the argument from [18, Section 4]. Note that from the definition of 𝒬_k^{-s}, we have

P\,\mathcal{Q}^{-s}_{k}(\tilde L_{\mathcal T})\tilde W_{\Psi} - \mathcal{Q}^{-s}_{k}(L_{\mathcal T})(\sigma P \tilde W_{\Psi}) = \frac{k\sin(\pi s)}{\pi}\sum_{l=-\mathtt{M}}^{\mathtt{N}} e^{(1-s)y_{l}}\,\mathcal{E}_{l}\,\tilde W_{\Psi},

where ℰ_l : 𝕍~(𝒯) → 𝕍(𝒯) is given by

(A.1) \mathcal{E}_{l} := P(\mu_{l} I + \tilde L_{\mathcal T})^{-1} - (\mu_{l} I + L_{\mathcal T})^{-1}(\sigma P)

with μ_l := e^{y_l}, where the y_l are the sinc quadrature points; see (4.20). The decomposition of the operator ℰ_l

\mathcal{E}_{l} = (\mu_{l} I + L_{\mathcal T})^{-1}\big[(\mu_{l} I + L_{\mathcal T})P - \sigma P(\mu_{l} I + \tilde L_{\mathcal T})\big](\mu_{l} I + \tilde L_{\mathcal T})^{-1}
= L_{\mathcal T}(\mu_{l} I + L_{\mathcal T})^{-1}\big(P\tilde T_{\mathcal T} - T_{\mathcal T}\sigma P\big)\tilde L_{\mathcal T}(\mu_{l} I + \tilde L_{\mathcal T})^{-1} + \mu_{l}(\mu_{l} I + L_{\mathcal T})^{-1}(1-\sigma)P(\mu_{l} I + \tilde L_{\mathcal T})^{-1}
=: \mathcal{E}^{1}_{l} + \mathcal{E}^{2}_{l}

is instrumental in the error analysis below.
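The decomposition above is a purely algebraic identity and can be sanity-checked with generic matrices standing in for the discrete operators (a NumPy sketch; the matrices below are arbitrary stand-ins, not finite element matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 6, 0.7
I = np.eye(n)
inv = np.linalg.inv

def spd():
    A = rng.standard_normal((n, n))
    return A @ A.T + n * I  # symmetric positive definite, hence invertible

L, Lt = spd(), spd()                    # stand-ins for L_T and tilde L_T
P = rng.standard_normal((n, n))         # stand-in for the lift P
S = np.diag(rng.uniform(0.5, 1.5, n))   # stand-in for multiplication by sigma

# E_l and its two-term decomposition E_l^1 + E_l^2 (T = L^{-1}, T~ = Lt^{-1}):
E = P @ inv(mu * I + Lt) - inv(mu * I + L) @ (S @ P)
E1 = (L @ inv(mu * I + L)
      @ (P @ inv(Lt) - inv(L) @ S @ P)
      @ Lt @ inv(mu * I + Lt))
E2 = mu * inv(mu * I + L) @ (I - S) @ P @ inv(mu * I + Lt)
print(np.allclose(E, E1 + E2))  # True
```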

The next result recalls estimates for some of the terms appearing in the above decomposition. We refer to [18, Lemma 4.5] for more details.

Lemma A.1.

Let p ∈ [−1, 1] and q ∈ ℝ be such that p + q ∈ [0, 2]. Then for any μ > 0 and any F~ ∈ 𝕍~(𝒯) there holds

(A.2) \|\tilde L_{\mathcal T}(\mu I + \tilde L_{\mathcal T})^{-1}\tilde F\|_{H^{-p}(\gamma)} \lesssim \mu^{-\frac{p+q}{2}}\,\|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde F\|_{L^{2}(\gamma)}.

Moreover, for any F ∈ 𝕍(𝒯) we have

(A.3) \|L_{\mathcal T}(\mu I + L_{\mathcal T})^{-1} F\|_{L^{2}(\Gamma)} \lesssim \mu^{-\frac{1}{2}}\,\|F\|_{H^{1}(\Gamma)}

and

(A.4) \|(\mu I + L_{\mathcal T})^{-1} F\|_{L^{2}(\Gamma)} \lesssim \mu^{-1}\,\|F\|_{L^{2}(\Gamma)}.

We also note that the arguments leading to (3.18), but using expansions based on the discrete eigenpairs, imply that if r ∈ ((n−1)/2, 2s) and μ > 0, we have

(A.5) \|(\mu I + \tilde L_{\mathcal T})^{-1}\tilde F\|_{L^{2}(\gamma)} \lesssim \|\tilde L_{\mathcal T}^{-\frac{r}{2}}\tilde F\|_{L^{2}(\gamma)}\,\begin{cases} 1 & \text{when } \mu \le 1, \\ \mu^{\frac{r}{2}-1} & \text{when } \mu > 1, \end{cases}

for any F~ ∈ 𝕍~(𝒯).

To prove Lemma 4.4, it suffices to show that

(A.6) S_{i} := k\,\Big\|\sum_{l=-\mathtt{M}}^{\mathtt{N}} \mu_{l}^{1-s}\,\mathcal{E}^{i}_{l}\,\tilde W_{\Psi}\Big\|_{L^{2}(\Omega;L^{2}(\Gamma))} \lesssim C(h)\,h^{2}, \qquad i = 1, 2,

which we do now by estimating each term separately.

We start with S 2 and let r ∈ ((n−1)/2, 2s). Thanks to (A.4), the geometric error (4.5), and (A.5), we obtain

S_{2} \lesssim h^{2}\,\|\tilde L_{\mathcal T}^{-\frac{r}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))}\Big(\sum_{\mu_{l} \le 1} k\,\mu_{l}^{1-s} + \sum_{\mu_{l} > 1} k\,\mu_{l}^{-s+\frac{r}{2}}\Big).

The estimate of ‖L~_𝒯^{−r/2} W~_Ψ‖_{L²(Ω;L²(γ))} provided by Lemma 4.2, in conjunction with −s + r/2 < 0, yields

S_{2} \lesssim h^{2}\,\|\tilde L_{\mathcal T}^{-\frac{r}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))} \lesssim h^{2},

which is the desired estimate in disguise (with C(h) ≡ 1).

For S 1, we first estimate the discrepancy between U~_𝒯 := T~_𝒯 F~ and U_𝒯 := T_𝒯 σ P F~ for any F~ ∈ 𝕍~(𝒯). By definition, see (4.3), U_𝒯 satisfies

A_{\Gamma}(U_{\mathcal T}, V) = \int_{\Gamma} \sigma\,(P\tilde F)\,V

for any V ∈ 𝕍(𝒯). In turn, the change of variable formula (4.4) together with the definition of U~_𝒯 imply

A_{\Gamma}(U_{\mathcal T}, V) = \int_{\gamma} \tilde F\,(P^{-1}V) = a_{\gamma}(\tilde U_{\mathcal T}, P^{-1}V)

upon realizing that P^{−1}V ∈ 𝕍~(𝒯). Consequently, we find that

\kappa^{2}\int_{\Gamma} (P\tilde U_{\mathcal T} - U_{\mathcal T})\,V + \int_{\Gamma} \nabla_{\Gamma}(P\tilde U_{\mathcal T} - U_{\mathcal T})\cdot\nabla_{\Gamma} V = A_{\Gamma}(P\tilde U_{\mathcal T}, V) - a_{\gamma}(\tilde U_{\mathcal T}, P^{-1}V)
= \kappa^{2}\int_{\gamma} \big(P^{-1}\sigma^{-1} - 1\big)\,\tilde U_{\mathcal T}\,(P^{-1}V) + \int_{\gamma} \nabla_{\gamma} \tilde U_{\mathcal T}\cdot\tilde{\mathbf E}\,\nabla_{\gamma}(P^{-1}V),

where the error matrix 𝐄~ satisfies ‖𝐄~‖_{L^∞(γ)} ≲ h² (see, e.g., [17, Corollary 33]). We now set V = P U~_𝒯 − U_𝒯 so that, with the Cauchy–Schwarz inequality and the geometric consistency (4.5), we get

(A.7) \|(P\tilde T_{\mathcal T} - T_{\mathcal T}\sigma P)\tilde F\|_{H^{1}(\Gamma)} = \|P\tilde U_{\mathcal T} - U_{\mathcal T}\|_{H^{1}(\Gamma)} \lesssim h^{2}\,\|\tilde U_{\mathcal T}\|_{H^{1}(\gamma)} \lesssim h^{2}\,\|\tilde F\|_{H^{-1}(\gamma)}.

We next estimate ‖ℰ^1_l W~_Ψ‖_{L²(Ω;L²(Γ))}, distinguishing two cases.

Case 1. If μ_l > 1, we apply successively (A.3), (A.7) and (A.2) (with p = 1 and q = −min{1, r}) to get

\|\mathcal{E}^{1}_{l} \tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\Gamma))} \lesssim \mu_{l}^{\min\{-\frac{1}{2},\,\frac{r}{2}-1\}}\, h^{2}\, \|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))} \lesssim \mu_{l}^{\frac{r}{2}-1}\, h^{2}\, \|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))},

where we used that μ_l^{−1/2} ≤ μ_l^{r/2−1} if r > 1.

Case 2. If 0 < μ_l ≤ 1, we set again q = −min{1, r}. Using the relation L_𝒯(μ_l I + L_𝒯)^{−1} = I − μ_l(μ_l I + L_𝒯)^{−1} and (A.4), we have

\|L_{\mathcal T}(\mu_{l} I + L_{\mathcal T})^{-1} F\|_{L^{2}(\Gamma)} \le \|F\|_{L^{2}(\Gamma)} + \mu_{l}\,\|(\mu_{l} I + L_{\mathcal T})^{-1} F\|_{L^{2}(\Gamma)} \lesssim \|F\|_{L^{2}(\Gamma)}.

Moreover, (A.7) implies that

\|(P\tilde T_{\mathcal T} - T_{\mathcal T}\sigma P)\tilde F\|_{H^{1}(\Gamma)} \lesssim h^{2}\,\|\tilde F\|_{H^{q}(\gamma)},

while (A.2) with p = - q reads

\|\tilde L_{\mathcal T}(\mu_{l} I + \tilde L_{\mathcal T})^{-1}\tilde F\|_{H^{q}(\gamma)} \lesssim \|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde F\|_{L^{2}(\gamma)}.

Gathering the above three estimates yields

\|\mathcal{E}^{1}_{l} \tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\Gamma))} \lesssim h^{2}\,\|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))},

and, as a consequence,

S_{1} \lesssim h^{2}\,\|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))}\Big(\sum_{\mu_{l} \le 1} k\,\mu_{l}^{1-s} + \sum_{\mu_{l} > 1} k\,\mu_{l}^{-s+\frac{r}{2}}\Big) \lesssim h^{2}\,\|\tilde L_{\mathcal T}^{\frac{q}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))}.

Recall that q = −min{1, r}, so we now argue depending on whether r ≤ 1 (which can only happen when n = 2) or r > 1. When r ≤ 1, we have q = −r and thus S_1 ≲ h² by Lemma 4.2. For r > 1, i.e. q = −1, we write r = 1 + ϵ with ϵ ∈ (0, 2s − 1) and we invoke an inverse inequality to write

S_{1} \lesssim h^{2-\epsilon}\,\|\tilde L_{\mathcal T}^{-\frac{1+\epsilon}{2}}\tilde W_{\Psi}\|_{L^{2}(\Omega;L^{2}(\gamma))} \lesssim \frac{1+\epsilon}{1+\epsilon-\frac{n-1}{2}}\, h^{2-\epsilon},

where we applied Lemma 4.2 for the second inequality. To conclude, we choose ϵ = (2s − 1)/ln(h^{−1}), yielding S_1 ≲ h² for n = 2 and S_1 ≲ ln(h^{−1}) h² for n = 3. The proof is now complete.
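The choice of ϵ in the last step can be checked directly: with ϵ = (2s − 1)/ln(h^{−1}), the factor h^{−ϵ} equals e^{2s−1} for every h, so h^{2−ϵ} ≲ h² with a constant independent of h (a one-line Python check):

```python
import math

# h^{-epsilon} = exp(epsilon * ln(1/h)) = exp(2s - 1), independently of h.
s = 0.75
for h in (1e-1, 1e-3, 1e-6):
    eps = (2 * s - 1) / math.log(1.0 / h)
    print(h ** (-eps))  # exp(2s - 1) = exp(0.5) ~ 1.6487 for every h
```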

References

[1] R. J. Adler and J. E. Taylor, Random Fields and Geometry, Springer Monogr. Math., Springer, New York, 2007.

[2] H. Antil, J. Pfefferer and S. Rogovs, Fractional operators with inhomogeneous boundary conditions: Analysis, control, and discretization, Commun. Math. Sci. 16 (2018), no. 5, 1395–1426. 10.4310/CMS.2018.v16.n5.a11

[3] D. Arndt, W. Bangerth, M. Feder, M. Fehling, R. Gassmöller, T. Heister, L. Heltai, M. Kronbichler, M. Maier, P. Munch, J.-P. Pelteret, S. Sticko, B. Turcksin and D. Wells, The deal.II library, version 9.4, J. Numer. Math. 30 (2022), no. 3, 231–246. 10.1515/jnma-2022-0054

[4] U. Ayachit, The ParaView Guide: A Parallel Visualization Application, Kitware, New York, 2015.

[5] I. Babuška, F. Nobile and R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM J. Numer. Anal. 45 (2007), no. 3, 1005–1034. 10.1137/050645142

[6] I. Babuška and J. Osborn, Eigenvalue problems, Handbook of Numerical Analysis, Vol. II, North-Holland, Amsterdam (1991), 641–787. 10.1016/S1570-8659(05)80042-0

[7] I. Babuška, R. Tempone and G. E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal. 42 (2004), no. 2, 800–825. 10.1137/S0036142902418680

[8] M. Bachmayr and A. Djurdjevac, Multilevel representations of isotropic Gaussian random fields on the sphere, IMA J. Numer. Anal. 43 (2023), no. 4, 1970–2000. 10.1093/imanum/drac034

[9] A. V. Balakrishnan, Fractional powers of closed operators and the semigroups generated by them, Pacific J. Math. 10 (1960), 419–437. 10.2140/pjm.1960.10.419

[10] D. Boffi, Finite element approximation of eigenvalue problems, Acta Numer. 19 (2010), 1–120. 10.1017/S0962492910000012

[11] D. Bolin and K. Kirchner, The rational SPDE approach for Gaussian random fields with general smoothness, J. Comput. Graph. Statist. 29 (2020), no. 2, 274–285. 10.1080/10618600.2019.1665537

[12] D. Bolin, K. Kirchner and M. Kovács, Weak convergence of Galerkin approximations for fractional elliptic stochastic PDEs with spatial white noise, BIT 58 (2018), no. 4, 881–906. 10.1007/s10543-018-0719-8

[13] D. Bolin, K. Kirchner and M. Kovács, Numerical solution of fractional elliptic stochastic PDEs with spatial white noise, IMA J. Numer. Anal. 40 (2020), no. 2, 1051–1073. 10.1093/imanum/dry091

[14] D. Bolin, A. B. Simas and J. Wallin, Gaussian Whittle–Matérn fields on metric graphs, preprint (2022), https://arxiv.org/abs/2205.06163.

[15] A. Bonito, J. M. Cascón, K. Mekchay, P. Morin and R. H. Nochetto, High-order AFEM for the Laplace–Beltrami operator: Convergence rates, Found. Comput. Math. 16 (2016), no. 6, 1473–1539. 10.1007/s10208-016-9335-7

[16] A. Bonito, J. M. Cascón, P. Morin and R. H. Nochetto, AFEM for geometric PDE: The Laplace–Beltrami operator, Analysis and Numerics of Partial Differential Equations, Springer INdAM Ser. 4, Springer, Milan (2013), 257–306. 10.1007/978-88-470-2592-9_15

[17] A. Bonito, A. Demlow and R. H. Nochetto, Finite element methods for the Laplace–Beltrami operator, Geometric Partial Differential Equations. Part I, Handb. Numer. Anal. 21, Elsevier, Amsterdam (2020), 1–103. 10.1016/bs.hna.2019.06.002

[18] A. Bonito and W. Lei, Approximation of the spectral fractional powers of the Laplace–Beltrami operator, Numer. Math. Theory Methods Appl. 15 (2022), no. 4, 1193–1218. 10.4208/nmtma.OA-2022-0005s

[19] A. Bonito, W. Lei and J. E. Pasciak, On sinc quadrature approximations of fractional powers of regularly accretive operators, J. Numer. Math. 27 (2019), no. 2, 57–68. 10.1515/jnma-2017-0116

[20] A. Bonito and J. E. Pasciak, Convergence analysis of variational and non-variational multigrid algorithms for the Laplace–Beltrami operator, Math. Comp. 81 (2012), no. 279, 1263–1288. 10.1090/S0025-5718-2011-02551-2

[21] A. Bonito and J. E. Pasciak, Numerical approximation of fractional powers of elliptic operators, Math. Comp. 84 (2015), no. 295, 2083–2110. 10.1090/S0025-5718-2015-02937-8

[22] A. Bonito and J. E. Pasciak, Numerical approximation of fractional powers of regularly accretive operators, IMA J. Numer. Anal. 37 (2017), no. 3, 1245–1273. 10.1093/imanum/drw067

[23] F. Bonizzoni and F. Nobile, Perturbation analysis for the Darcy problem with log-normal permeability, SIAM/ASA J. Uncertain. Quantif. 2 (2014), no. 1, 223–244. 10.1137/130949415

[24] V. Borovitskiy, A. Terenin, P. Mostowsky and M. P. Deisenroth, Matérn Gaussian processes on Riemannian manifolds, Advances in Neural Information Processing Systems, Volume 33, Curran Associates, New York (2020), 12426–12437.

[25] K. A. Cliffe, M. B. Giles, R. Scheichl and A. L. Teckentrup, Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients, Comput. Vis. Sci. 14 (2011), no. 1, 3–15. 10.1007/s00791-011-0160-x

[26] S. G. Cox and K. Kirchner, Regularity and convergence analysis in Sobolev and Hölder spaces for generalized Whittle–Matérn fields, Numer. Math. 146 (2020), no. 4, 819–873. 10.1007/s00211-020-01151-x

[27] M. C. Delfour and J.-P. Zolésio, Shapes and Geometries, 2nd ed., Adv. Des. Control 22, Society for Industrial and Applied Mathematics, Philadelphia, 2011. 10.1137/1.9780898719826

[28] A. Demlow, Higher-order finite element methods and pointwise error estimates for elliptic problems on surfaces, SIAM J. Numer. Anal. 47 (2009), no. 2, 805–827. 10.1137/070708135

[29] P. J. Diggle and P. J. Ribeiro, Jr., Model-Based Geostatistics, Springer Ser. Statist., Springer, New York, 2007. 10.1007/978-0-387-48536-2

[30] G. Dziuk, Finite elements for the Beltrami operator on arbitrary surfaces, Partial Differential Equations and Calculus of Variations, Lecture Notes in Math. 1357, Springer, Berlin (1988), 142–155. 10.1007/BFb0082865

[31] G. Dziuk and C. M. Elliott, Finite element methods for surface PDEs, Acta Numer. 22 (2013), 289–396. 10.1017/S0962492913000056

[32] A. Ern and J.-L. Guermond, Finite Elements II, Springer, Berlin, 2021. 10.1007/978-3-030-56923-5

[33] A. Feragen, F. Lauze and S. Hauberg, Geodesic exponential kernels: When curvature and linearity conflict, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Press, Piscataway (2015), 3032–3042. 10.1109/CVPR.2015.7298922

[34] P. Frauenfelder, C. Schwab and R. A. Todor, Finite elements for elliptic problems with stochastic coefficients, Comput. Methods Appl. Mech. Engrg. 194 (2005), no. 2–5, 205–228. 10.1016/j.cma.2004.04.008

[35] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991. 10.1007/978-1-4612-3094-6

[36] I. G. Graham, F. Y. Kuo, D. Nuyens, R. Scheichl and I. H. Sloan, Quasi-Monte Carlo methods for elliptic PDEs with random coefficients and applications, J. Comput. Phys. 230 (2011), no. 10, 3668–3694. 10.1016/j.jcp.2011.01.023

[37] H. Harbrecht, L. Herrmann, K. Kirchner and C. Schwab, Multilevel approximation of Gaussian random fields: Covariance compression, estimation and spatial prediction, preprint (2021), https://arxiv.org/abs/2103.04424.

[38] L. Herrmann, K. Kirchner and C. Schwab, Multilevel approximation of Gaussian random fields: Fast simulation, Math. Models Methods Appl. Sci. 30 (2020), no. 1, 181–223. 10.1142/S0218202520500050

[39] V. Ivrii, 100 years of Weyl’s law, Bull. Math. Sci. 6 (2016), no. 3, 379–452. 10.1007/s13373-016-0089-y

[40] E. Jansson, M. Kovács and A. Lang, Surface finite element approximation of spherical Whittle–Matérn Gaussian random fields, SIAM J. Sci. Comput. 44 (2022), no. 2, A825–A842. 10.1137/21M1400717

[41] T. Kato, Fractional powers of dissipative operators, J. Math. Soc. Japan 13 (1961), 246–274. 10.2969/jmsj/01330246

[42] A. V. Knyazev and J. E. Osborn, New a priori FEM error estimates for eigenvalues, SIAM J. Numer. Anal. 43 (2006), no. 6, 2647–2667. 10.1137/040613044

[43] A. Lang and M. Pereira, Galerkin–Chebyshev approximation of Gaussian random fields on compact Riemannian manifolds, BIT 63 (2023), no. 4, Paper No. 51. 10.1007/s10543-023-00986-8

[44] A. Lang and C. Schwab, Isotropic Gaussian random fields on the sphere: Regularity, fast simulation and stochastic partial differential equations, Ann. Appl. Probab. 25 (2015), no. 6, 3047–3094. 10.1214/14-AAP1067

[45] O. P. Le Maître and O. M. Knio, Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics, Sci. Comput., Springer, New York, 2010. 10.1007/978-90-481-3520-2

[46] F. Lindgren, D. Bolin and H. Rue, The SPDE approach for Gaussian and non-Gaussian fields: 10 years and still running, Spat. Stat. 50 (2022), Paper No. 100599. 10.1016/j.spasta.2022.100599

[47] F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach, J. R. Stat. Soc. Ser. B Stat. Methodol. 73 (2011), no. 4, 423–498. 10.1111/j.1467-9868.2011.00777.x

[48] M. Loève, Probability Theory. I, 4th ed., Grad. Texts in Math. 45, Springer, New York, 1977. 10.1007/978-1-4757-6288-4

[49] M. Loève, Probability Theory. II, 4th ed., Grad. Texts in Math. 46, Springer, New York, 1978. 10.1007/978-1-4612-6257-2

[50] G. J. Lord, C. E. Powell and T. Shardlow, An Introduction to Computational Stochastic PDEs, Cambridge Texts Appl. Math., Cambridge University, New York, 2014. 10.1017/CBO9781139017329

[51] J. Lund and K. L. Bowers, Sinc Methods for Quadrature and Differential Equations, Society for Industrial and Applied Mathematics, Philadelphia, 1992. 10.1137/1.9781611971637

[52] D. Marinucci and G. Peccati, Random Fields on the Sphere: Representation, Limit Theorems and Cosmological Applications, London Math. Soc. Lecture Note Ser. 389, Cambridge University, Cambridge, 2011.

[53] B. Matérn, Spatial Variation: Stochastic Models and Their Application to Some Problems in Forest Surveys and Other Sampling Investigations, Meddelanden från Statens Skogsforskningsinstitut 49, no. 5, Statens Skogsforskningsinstitut, Stockholm, 1960.

[54] K. Mekchay, P. Morin and R. H. Nochetto, AFEM for the Laplace–Beltrami operator on graphs: Design and conditional contraction property, Math. Comp. 80 (2011), no. 274, 625–648. 10.1090/S0025-5718-2010-02435-4

[55] J. Mercer, Functions of positive and negative type and their connection with the theory of integral equations, Philos. Trans. R. Soc. A 209 (1909), no. 441–458, 415–446. 10.1098/rsta.1909.0016

[56] F. Nobile and F. Tesei, A multi level Monte Carlo method with control variate for elliptic PDEs with log-normal coefficients, Stoch. Partial Differ. Equ. Anal. Comput. 3 (2015), no. 3, 398–444. 10.1007/s40072-015-0055-9

[57] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, Adapt. Comput. Mach. Learn., MIT Press, Cambridge, 2006.

[58] C. Schwab and R. A. Todor, Karhunen–Loève approximation of random fields by generalized fast multipole methods, J. Comput. Phys. 217 (2006), no. 1, 100–122. 10.1016/j.jcp.2006.01.048

[59] M. L. Stein, Interpolation of Spatial Data: Some Theory for Kriging, Springer Ser. Statist., Springer, New York, 1999. 10.1007/978-1-4612-1494-6

[60] P. Whittle, On stationary processes in the plane, Biometrika 41 (1954), 434–449. 10.1093/biomet/41.3-4.434

[61] P. Whittle, Stochastic processes in several dimensions, Bull. Inst. Internat. Statist. 40 (1963), 974–994.

[62] D. Xiu and J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput. 27 (2005), no. 3, 1118–1139. 10.1137/040615201

Received: 2022-11-23
Revised: 2023-11-30
Accepted: 2023-12-03
Published Online: 2024-01-04
Published in Print: 2024-10-01

© 2024 Walter de Gruyter GmbH, Berlin/Boston
