Article

The Representation of D-Invariant Polynomial Subspaces Based on Symmetric Cartesian Tensors

Xue Jiang and Kai Cui
School of Mathematics and Systematic Sciences, Shenyang Normal University, Shenyang 110034, China
* Author to whom correspondence should be addressed.
Axioms 2021, 10(3), 193; https://doi.org/10.3390/axioms10030193
Submission received: 10 June 2021 / Revised: 15 August 2021 / Accepted: 16 August 2021 / Published: 19 August 2021
(This article belongs to the Special Issue Computing Methods in Mathematics and Engineering)

Abstract

Multivariate polynomial interpolation plays a crucial role in both scientific computation and engineering applications. Exploring the structure of D-invariant (closed under differentiation) polynomial subspaces has significant meaning for multivariate Hermite-type interpolation (especially ideal interpolation). We analyze the structure of a D-invariant polynomial subspace $\mathcal{P}_n$ in terms of Cartesian tensors, where $\mathcal{P}_n$ is a subspace with maximal total degree equal to $n$, $n \ge 1$. An arbitrary homogeneous polynomial $p^{(k)}$ of total degree $k$ in $\mathcal{P}_n$ can be rewritten as the inner products of a $k$th order symmetric Cartesian tensor and $k$ column vectors of indeterminates. We show that $p^{(k)}$ is determined by the polynomials of total degree one in $\mathcal{P}_n$. Namely, if we collect all linear polynomials in the basis of $\mathcal{P}_n$ into a column vector, then this vector can be written as the product of a coefficient matrix $A^{(1)}$ and a column vector of indeterminates; our main result shows that the $k$th order symmetric Cartesian tensor corresponding to $p^{(k)}$ is a product of certain so-called relational matrices and $A^{(1)}$.

1. Introduction

Multivariate polynomial interpolation is widely used in many application domains, such as image processing, electronic communication, and control theory. The theory of multivariate polynomial interpolation is still far from complete, since the interpolation conditions can be quite complicated and the multiplicity structure of each interpolation site admits many different descriptions. Note that most related practical problems can be converted into ideal interpolation problems, whose interpolation conditions are determined by a D-invariant polynomial subspace. Throughout the paper, $\mathbb{F}[X] := \mathbb{F}[x_1, \dots, x_d]$ denotes the polynomial ring in $d$ variables over $\mathbb{F}$. For simplicity, we work with the ground field $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. Ideal interpolation, a special class of polynomial interpolation problems, can be defined by a linear projector (idempotent operator) of finite rank on $\mathbb{F}[X]$. The kernel of the projector forms a polynomial ideal. Carl de Boor has carried out a great deal of important work on ideal interpolation [1].
The interpolation conditions of an ideal interpolant correspond to a D-invariant subspace [1,2]. Lagrange interpolation is a standard example, in which the interpolation conditions consist only of evaluation functionals. In the univariate case, every ideal projector is the pointwise limit of Lagrange projectors when $\mathbb{F} = \mathbb{C}$. However, this is not true in the multivariate case [3,4]. Thus, it is natural to ask what kind of ideal interpolation problem can be written as a limit of Lagrange interpolation problems; we call this the discrete approximation problem for ideal interpolation [5]. This amounts to asking how to discretize the differential operators in the D-invariant subspace attached to each interpolation site. In [5], by analyzing the structure of second-order D-invariant subspaces, we gave a sufficient condition that solves the discrete approximation problem in this case. This indicates that analyzing the structure of D-invariant subspaces helps us learn more about multivariate polynomial interpolation.
In polynomial system solving, the multiplicity structure at an isolated zero $\hat{X}$ is identified with the dual space consisting of all linear functionals that are supported at $\hat{X}$ and vanish on the entire polynomial ideal generated by the given polynomial system [2]. All these functionals form a vector space that is D-invariant (some references use the term “closed”). Dayton and Zeng used dual space theory to derive special-case polynomial system solving algorithms in 2005 [6]. After that, Zeng presented a new algorithm that substantially reduces the computation by means of a closedness subspace strategy [7]. In [8], Li and Zhi described the explicit structure of the D-invariant subspace in the breadth-one case, which helps to compute the multiplicity structure of an isolated singular solution more efficiently.
We have seen that analyzing the structure of the D-invariant subspaces helps us to study a class of interpolation problems and to improve computational efficiency in polynomial system solving. It is the aim of this paper to describe the structure of the D-invariant subspaces by Cartesian tensors.

2. Preliminaries

In this section, we will recall some basic definitions in symbolic computation [9] and the concept of tensors in multilinear algebra [10].
A monomial order on $\mathbb{F}[X]$ is any relation $\succ$ on $\mathbb{Z}_{\ge 0}^d$ satisfying the following:
(i) $\succ$ is a total (or linear) order on $\mathbb{Z}_{\ge 0}^d$.
(ii) If $\alpha \succ \beta$ and $\gamma \in \mathbb{Z}_{\ge 0}^d$, then $\alpha + \gamma \succ \beta + \gamma$.
(iii) $\succ$ is a well-order on $\mathbb{Z}_{\ge 0}^d$.
Let $f = \sum_{\alpha} a_\alpha X^\alpha \in \mathbb{F}[X]$ be a nonzero polynomial and $\succ$ be a monomial order. The multidegree of $f$ is the following:
$$\operatorname{multideg}(f) = \max\{\alpha \in \mathbb{Z}_{\ge 0}^d : a_\alpha \neq 0\}.$$
The leading monomial of $f$ is the following:
$$\operatorname{LM}(f) = X^{\operatorname{multideg}(f)}.$$
The total degree of $f$, denoted by $\deg(f)$, is the maximum $|\alpha|$ such that the coefficient $a_\alpha$ is nonzero.
Definition 1.
A polynomial subspace $P \subset \mathbb{F}[X]$ is said to be D-invariant if it is closed under differentiation, i.e., for all $p \in P$,
$$D_j p \in P, \quad j = 1, \dots, d,$$
in which $D_j$ denotes the partial derivative of $p$ with respect to the $j$th argument.
In this paper, we denote by $\mathcal{P}_n$, $n \ge 1$, a D-invariant polynomial subspace of degree $n$, i.e., $\mathcal{P}_n$ satisfies the following:
(i) $\mathcal{P}_n \subset \{f \in \mathbb{F}[X] : \deg(f) \le n\}$.
(ii) There exists at least one polynomial of total degree $n$ in $\mathcal{P}_n$.
(iii) For all $p \in \mathcal{P}_n$, $D_j p \in \mathcal{P}_n$, $j = 1, \dots, d$.
For any given subspace $\mathcal{P}_n$, we also define $\mathcal{P}_{<n} := \{f \in \mathcal{P}_n : \deg(f) < n\}$.
Since $\mathcal{P}_n$ is a linear space, it suffices to study a particular basis of $\mathcal{P}_n$, obtained as follows. Fix a term order (for example, the graded reverse lexicographic order $1 \prec x_d \prec x_{d-1} \prec \cdots \prec x_1 \prec x_d^2 \prec x_d x_{d-1} \prec \cdots$). First write the polynomials of a given basis of $\mathcal{P}_n$ in matrix form, where the columns are indexed by the monomials in increasing order and the rows are indexed by the basis of $\mathcal{P}_n$; then, by Gauss–Jordan elimination, we obtain the reduced row echelon form of this matrix, which gives another basis for $\mathcal{P}_n$. We call this new basis the “reduced basis” of $\mathcal{P}_n$. In the discussion that follows, by a basis of some subspace, we always mean the reduced basis.
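The reduced basis can be computed mechanically from any spanning set. The following is a minimal SymPy sketch of this Gauss–Jordan procedure; the spanning set, the variables, and the monomial ordering used here are our own illustrative choices, not data from the paper.

```python
# Minimal sketch (illustrative, not from the paper): computing a "reduced basis"
# of a polynomial subspace via Gauss-Jordan elimination, as described above.
import sympy as sp

x, y, z = sp.symbols('x y z')
variables = (x, y, z)

# Monomials up to degree 2, listed in decreasing graded order (matrix columns).
monomials = [x**2, x*y, x*z, y**2, y*z, z**2, x, y, z, sp.Integer(1)]
exponents = [sp.Poly(m, *variables).monoms()[0] for m in monomials]

# An illustrative spanning set (hypothetical; any spanning set would do).
spanning_set = [x**2 + x*y + z, x*y - y**2, x**2 - y**2 + z, x, y]

# Coefficient matrix: one row per polynomial, one column per monomial.
M = sp.Matrix([[sp.Poly(p, *variables).as_dict().get(e, 0) for e in exponents]
               for p in spanning_set])

R, pivots = M.rref()                       # Gauss-Jordan elimination
reduced_basis = [sp.expand(sum(c * m for c, m in zip(R.row(i), monomials)))
                 for i in range(len(pivots))]
print(reduced_basis)
```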
Einstein Convention. Einstein introduced a convention whereby if a particular subscript (e.g., i) appears twice in a single term of an expression, then it is implicitly summed, and i is called a dummy index. For example, in traditional notation, we have the following:
$$\mathbf{a} \cdot \mathbf{b} = (a_1, a_2, a_3) \cdot (b_1, b_2, b_3) = \sum_{i=1}^{3} a_i b_i,$$
and using summation convention, we can simply write the following:
$$\mathbf{a} \cdot \mathbf{b} = a_i b_i.$$
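As a purely illustrative aside (not part of the paper), the summation convention maps directly onto NumPy's einsum, where a repeated index is summed automatically:

```python
# Illustration only: the Einstein convention a_i b_i expressed with numpy.einsum.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

dot_explicit = sum(a[i] * b[i] for i in range(3))    # sum_i a_i b_i
dot_einstein = np.einsum('i,i->', a, b)              # repeated index i is summed

assert np.isclose(dot_explicit, dot_einstein)
print(dot_einstein)   # 32.0
```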
Let $e_i = (0, \dots, 0, 1, 0, \dots, 0)^T$. Recall that a tensor of real numbers is simply an element of the tensor product of vector spaces $\mathbb{R}^{k_1} \otimes \mathbb{R}^{k_2} \otimes \cdots \otimes \mathbb{R}^{k_n}$ [11]. Let
$$\{\, e_{i_1}^{(1)} \otimes e_{i_2}^{(2)} \otimes \cdots \otimes e_{i_n}^{(n)} \mid 1 \le i_1 \le k_1,\ 1 \le i_2 \le k_2,\ \dots,\ 1 \le i_n \le k_n \,\}$$
be the standard basis of $\mathbb{R}^{k_1} \otimes \mathbb{R}^{k_2} \otimes \cdots \otimes \mathbb{R}^{k_n}$, where $\{e_1^{(l)}, e_2^{(l)}, \dots, e_{k_l}^{(l)}\}$ forms the standard basis of $\mathbb{R}^{k_l}$, $l = 1, \dots, n$. Note that $e_i^{(l)} \cdot e_j^{(l)} = \delta_{ij}$, with $\delta$ the Kronecker delta.
We denote by
$$T = T_{i_1 i_2 \cdots i_n}\, e_{i_1}^{(1)} \otimes e_{i_2}^{(2)} \otimes \cdots \otimes e_{i_n}^{(n)}$$
an $n$th order Cartesian tensor. Here, the sum over $i_1, i_2, \dots, i_n$ is implicit, and $T_{i_1 i_2 \cdots i_n} \in \mathbb{R}$ denotes the weight of $e_{i_1}^{(1)} \otimes e_{i_2}^{(2)} \otimes \cdots \otimes e_{i_n}^{(n)}$ in the sum. Throughout the paper, $d$ denotes the number of indeterminates, so if there is no confusion, we simply write $e_i$ ($i$ a subscript) if $e_i^{(l)}$ belongs to the standard basis of $\mathbb{R}^d$, and write $e^i$ ($i$ a superscript) if $e_i^{(l)}$ belongs to the standard basis of $\mathbb{R}^q$, where $q$ need not equal $d$. In particular, for a second order tensor $T = T_{ij}\, e_i \otimes e_j$ on $\mathbb{R}^d$ with $d = 3$, representing each basis tensor as a matrix,
$$e_1 \otimes e_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad e_1 \otimes e_2 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \dots,$$
then $T$ can be written in the form of a square matrix, i.e., $T = (T_{ij})_{3 \times 3}$.
Definition 2
([11]). A tensor $T = T_{i_1 i_2 \cdots i_n}\, e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_n} \in \mathbb{R}^d \otimes \mathbb{R}^d \otimes \cdots \otimes \mathbb{R}^d$ is called symmetric if it is invariant under any permutation $\sigma$ of its $n$ indices, i.e., the following:
$$T_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(n)}} = T_{i_1 i_2 \cdots i_n}.$$
The space of symmetric tensors of order $n$ on $\mathbb{R}^d$ is naturally isomorphic to the space of homogeneous polynomials of total degree $n$ in $d$ variables [11]. We will use this fact to represent homogeneous polynomials in $\mathbb{R}[X]$.
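As a small illustration of this isomorphism (our own example, not taken from the paper), the cubic $p = x^3 + 3x^2 y$ in $d = 2$ variables corresponds to the symmetric tensor with $b_{111} = 1$ and $b_{112} = b_{121} = b_{211} = 1$, so that $p = b_{ijk}\, x_i x_j x_k$:

```python
# Illustrative sketch: the symmetric coefficient tensor of p(x, y) = x^3 + 3*x^2*y
# in d = 2 variables, and a numerical check of the identity p = b_ijk x_i x_j x_k.
import numpy as np
from itertools import product

d = 2
B = np.zeros((d, d, d))
B[0, 0, 0] = 1.0                        # coefficient of x^3
for idx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    B[idx] = 1.0                        # 3*x^2*y spread symmetrically over permutations

def p(pt):
    """Evaluate p via the tensor: sum_{i,j,k} b_ijk x_i x_j x_k."""
    return sum(B[i, j, k] * pt[i] * pt[j] * pt[k]
               for i, j, k in product(range(d), repeat=3))

x, y = 1.7, -0.4
assert np.isclose(p(np.array([x, y])), x**3 + 3 * x**2 * y)
```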
Definition 3
([12]). The inner product (also known as contraction) of two Cartesian tensors is defined as follows:
$$A \cdot B = (a_{i \dots jk}\, e_i \otimes \cdots \otimes e_j \otimes e_k) \cdot (b_{lm \dots n}\, e_l \otimes e_m \otimes \cdots \otimes e_n) \equiv a_{i \dots jk}\, b_{lm \dots n}\, e_i \otimes \cdots \otimes e_j\, (e_k \cdot e_l)\, e_m \otimes \cdots \otimes e_n = a_{i \dots jl}\, b_{lm \dots n}\, e_i \otimes \cdots \otimes e_j \otimes e_m \otimes \cdots \otimes e_n.$$
For example, if $A$ and $B$ are matrices, we obtain the usual matrix multiplication:
$$(a_{ij}\, e_i \otimes e_j) \cdot (b_{sk}\, e_s \otimes e_k) \equiv a_{ij}\, b_{sk}\, e_i\, (e_j \cdot e_s)\, e_k = a_{is}\, b_{sk}\, e_i \otimes e_k,$$
etc. The crucial requirement here is that the two vectors paired in the inner product must have the same dimension.
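Numerically, this single-index contraction is what numpy.tensordot produces when the last axis of $A$ is paired with the first axis of $B$; the following sanity check is our own illustration, not code from the paper:

```python
# Illustration: the inner product (contraction) of Definition 3 pairs the last
# index of A with the first index of B; for matrices it reduces to matrix product.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))            # second-order tensor (matrix)
B = rng.standard_normal((3, 3, 3))         # third-order tensor

AB = np.tensordot(A, B, axes=([1], [0]))   # (A . B)_{i m n} = a_{is} b_{smn}
assert AB.shape == (3, 3, 3)

# Matrix-matrix case: the contraction coincides with ordinary matrix multiplication.
C = rng.standard_normal((3, 3))
assert np.allclose(np.tensordot(A, C, axes=([1], [0])), A @ C)
```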

3. The Structure of the D-Invariant Subspace $\mathcal{P}_2$

The structure of $\mathcal{P}_2$ was discussed in our paper [5]; we list the results in this section for completeness.
We first consider a special second-degree D-invariant subspace as follows:
$$\tilde{\mathcal{P}}_2 := \operatorname{span}\{1, p_1^{(1)}, p_2^{(1)}, \dots, p_{m_1}^{(1)}, p^{(2)}\},$$
in which $m_1 \le d$ and the superscripts indicate the total degrees of the polynomials. Without loss of generality, the polynomials of total degree one can be rewritten as follows:
$$\begin{pmatrix} p_1^{(1)} \\ p_2^{(1)} \\ \vdots \\ p_{m_1}^{(1)} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 & a_{1,m_1+1} & \cdots & a_{1,d} \\ 0 & 1 & \cdots & 0 & a_{2,m_1+1} & \cdots & a_{2,d} \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & a_{m_1,m_1+1} & \cdots & a_{m_1,d} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix} \equiv (I_{m_1} \mid A)\, X. \tag{2}$$
With all $p_i^{(1)}$, $i = 1, \dots, m_1$, given, we have the following result.
Theorem 1
([5]). With the above notation, $p^{(2)} \in \tilde{\mathcal{P}}_2$ has the following form:
$$p^{(2)} = \frac{1}{2}\, X^T \begin{pmatrix} E & EA \\ A^T E & A^T E A \end{pmatrix} X + L^T X, \tag{3}$$
where $E$ is an $m_1 \times m_1$ symmetric matrix and $L = (0, \dots, 0, l_{m_1+1}, \dots, l_d)^T$ is a column vector.
Note that $E$ has $\frac{1}{2} m_1 (m_1 + 1)$ free parameters, and $L$ has $d - m_1$ free parameters. The proof of Theorem 1 also establishes the following result.
Corollary 1
([5]). Suppose that $\mathcal{P}_2 = \operatorname{span}\{1, p_1^{(1)}, p_2^{(1)}, \dots, p_{m_1}^{(1)}, p_1^{(2)}, p_2^{(2)}, \dots, p_{m_2}^{(2)}\}$, where $m_1 \le d$, $m_2 \le \binom{d+1}{2}$. Then, each $p_j^{(2)}$, $j = 1, \dots, m_2$, has the following form:
$$p_j^{(2)} = \frac{1}{2}\, X^T \begin{pmatrix} E_j & E_j A \\ A^T E_j & A^T E_j A \end{pmatrix} X + L_j^T X,$$
where $E_j$ is an $m_1 \times m_1$ symmetric matrix and $L_j = (0, \dots, 0, l_{m_1+1}^{(j)}, \dots, l_d^{(j)})^T$ is a column vector.
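To make the statement concrete, the following SymPy sketch (with our own illustrative numbers for $E$, $A$, and $L$, not taken from the paper) checks that a quadratic of the form in Theorem 1 and Corollary 1 has all of its partial derivatives in $\operatorname{span}\{1, p_1^{(1)}, p_2^{(1)}\}$, so the resulting subspace is indeed D-invariant; here $d = 3$ and $m_1 = 2$.

```python
# Illustrative SymPy check (hypothetical numbers): a quadratic of the form in
# Theorem 1 has all partial derivatives in span{1, p1, p2}, hence
# span{1, p1, p2, p2nd} is D-invariant.  Here d = 3 and m1 = 2.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = sp.Matrix([x, y, z])

A = sp.Matrix([[2], [5]])                  # the m1 x (d - m1) block "A"
E = sp.Matrix([[1, 3], [3, 7]])            # symmetric m1 x m1 matrix
L = sp.Matrix([0, 0, 4])                   # L = (0, ..., 0, l_{m1+1}, ..., l_d)^T

p1 = x + 2*z                               # rows of (I_{m1} | A) X with A = (2; 5)
p2 = y + 5*z

M = sp.Matrix.vstack(sp.Matrix.hstack(E, E*A),
                     sp.Matrix.hstack(A.T*E, A.T*E*A))
p2nd = sp.expand((X.T * M * X)[0] / 2 + (L.T * X)[0])

for v in (x, y, z):
    dp = sp.expand(sp.diff(p2nd, v))
    # Solve dp = c0 + c1*p1 + c2*p2 for constants c0, c1, c2.
    c0, c1, c2 = sp.symbols('c0 c1 c2')
    sol = sp.solve(sp.Poly(dp - c1*p1 - c2*p2 - c0, x, y, z).coeffs(), [c0, c1, c2])
    assert sol, f"d/d{v} p2nd is not in span(1, p1, p2)"
print("All partial derivatives lie in span{1, p1, p2}.")
```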

4. The Structure of the D-Invariant Subspace $\mathcal{P}_3$

We first assume that there is only one polynomial of total degree $n$ in $\mathcal{P}_n$. In the general case, since we are considering reduced bases, every polynomial of total degree $n$ in the basis has this same form.

4.1. The Structure of the Homogeneous D-Invariant Subspace $\mathcal{P}_3$

We will discuss the structure of $p^{(3)}$ in the following D-invariant subspace:
$$\mathcal{P}_3 = \operatorname{span}\{1, p_1^{(1)}, p_2^{(1)}, \dots, p_{m_1}^{(1)}, p_1^{(2)}, p_2^{(2)}, \dots, p_{m_2}^{(2)}, p^{(3)}\}, \qquad m_2 \le \binom{d+1}{2},$$
with $\mathcal{P}_{<3}$ given in this part. Each partial derivative of the highest homogeneous part of $p^{(3)}$ can be represented as a linear combination of the quadratic homogeneous parts of the $p_i^{(2)}$, $i = 1, \dots, m_2$; hence, it is natural to begin with the case in which $p^{(3)}$ and all the $p_i^{(2)}$ are homogeneous polynomials. The general situation of $\mathcal{P}_3$ is covered at the end of the next section.
An arbitrary homogeneous polynomial $p^{(3)}$ has the following form:
$$p^{(3)} = b_{ijk}\, x_i x_j x_k = (b_{ijk}\, e_i \otimes e_j \otimes e_k) \cdot (x_s e_s) \cdot (x_t e_t) \cdot (x_l e_l) \equiv B^{(3)} \cdot X \cdot X \cdot X, \tag{5}$$
where $i, j, k$ are dummy indices and
$$b_{ijk} = b_{\sigma(ijk)}. \tag{6}$$
Here, $\sigma(ijk)$ denotes any permutation of $(i, j, k)$.
Lemma 1.
Suppose that $p^{(3)}$ is of the form (5); then
$$\nabla p^{(3)} = 3\, B^{(3)} \cdot X \cdot X. \tag{7}$$
Proof. 
For all $l = 1, \dots, d$,
$$\frac{\partial p^{(3)}}{\partial x_l} = b_{ijk}\, x_i x_j \frac{\partial x_k}{\partial x_l} + b_{ijk}\, x_i \frac{\partial x_j}{\partial x_l}\, x_k + b_{ijk}\, \frac{\partial x_i}{\partial x_l}\, x_j x_k = 3\, b_{lij}\, x_i x_j.$$
Thus,
$$\nabla p^{(3)} = \begin{pmatrix} \partial p^{(3)} / \partial x_1 \\ \vdots \\ \partial p^{(3)} / \partial x_d \end{pmatrix} = \begin{pmatrix} 3\, b_{1ij}\, x_i x_j \\ \vdots \\ 3\, b_{dij}\, x_i x_j \end{pmatrix} = 3\, b_{kij}\, x_i x_j\, e_k = 3\, B^{(3)} \cdot X \cdot X.$$
The proof is completed. □
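Lemma 1 is easy to check symbolically; the following SymPy snippet (our own verification, not from the paper) builds a random symmetric $b_{ijk}$ for $d = 3$ and confirms that each partial derivative of $p^{(3)}$ equals the corresponding component of $3\, B^{(3)} \cdot X \cdot X$.

```python
# Illustrative check of Lemma 1: for a symmetric third-order tensor B,
# the gradient of p = b_ijk x_i x_j x_k equals 3 * B . X . X (componentwise).
import itertools
import numpy as np
import sympy as sp

d = 3
xs = sp.symbols('x1:4')                    # (x1, x2, x3)
rng = np.random.default_rng(1)
raw = rng.integers(-3, 4, size=(d, d, d))  # arbitrary integers, then symmetrized

B = {}
for i, j, k in itertools.product(range(d), repeat=3):
    B[i, j, k] = sp.Rational(int(sum(raw[p] for p in itertools.permutations((i, j, k)))), 6)

p = sum(B[i, j, k] * xs[i] * xs[j] * xs[k]
        for i, j, k in itertools.product(range(d), repeat=3))

for l in range(d):
    lhs = sp.expand(sp.diff(p, xs[l]))
    rhs = sp.expand(3 * sum(B[l, i, j] * xs[i] * xs[j]
                            for i, j in itertools.product(range(d), repeat=2)))
    assert sp.simplify(lhs - rhs) == 0
print("Lemma 1 verified for this example.")
```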
Proposition 1.
Suppose that $p^{(3)}$ is of the form (5). Let $p^{(2)} := (p_1^{(2)}, \dots, p_{m_2}^{(2)})^T$, where $p_k^{(2)} = p_{kij}\, x_i x_j$ with $p_{kij} = p_{kji} \in \mathbb{R}$, $k = 1, \dots, m_2$. Let $c_{sk}^{(3)}$ be the coefficients satisfying the following:
$$\nabla p^{(3)} = (c_{sk}^{(3)})_{d \times m_2}\; p^{(2)} \equiv C^{(3)}\, p^{(2)},$$
namely, $C^{(3)}$ is the relational matrix between $\nabla p^{(3)}$ and all $p_k^{(2)}$. Then
$$b_{sij} = \frac{1}{3}\, c_{sk}^{(3)}\, p_{kij}, \quad s, i, j = 1, \dots, d, \tag{8}$$
where the right-hand side is summed over $k$ from 1 to $m_2$.
Proof. 
Since
$$\nabla p^{(3)} = C^{(3)}\, p^{(2)} = c_{sk}^{(3)}\, p_{kij}\, x_i x_j\, e_s = (c_{sk}^{(3)}\, p_{kij}\, e_s \otimes e_i \otimes e_j) \cdot X \cdot X,$$
comparing the above equation with (7), we have the following:
$$3\, B^{(3)} = 3\, b_{sij}\, e_s \otimes e_i \otimes e_j = c_{sk}^{(3)}\, p_{kij}\, e_s \otimes e_i \otimes e_j,$$
which is equivalent to the following:
$$3\, b_{sij} = c_{sk}^{(3)}\, p_{kij}, \quad s, i, j = 1, \dots, d. \tag{9}$$
Thus, the proposition is proved. □
Next, we focus on formulating the relational expression between $p^{(3)}$ and all $p_k^{(2)}$ in “matrix form”.
Supposing that $C_k = c_{kij}\, e_i \otimes e_j$, $k = 1, \dots, m$, is a sequence of matrices of the same size, the following notation is useful:
$$\begin{pmatrix} C_1 \\ C_2 \\ \vdots \\ C_m \end{pmatrix} = \begin{pmatrix} c_{1ij}\, e_i \otimes e_j \\ c_{2ij}\, e_i \otimes e_j \\ \vdots \\ c_{mij}\, e_i \otimes e_j \end{pmatrix} \equiv c_{kij}\, e^k \otimes e_i \otimes e_j.$$
Theorem 2.
Let $C_k^{(2)} = c_{kij}^{(2)}\, e_i \otimes e^j$, $k = 1, \dots, m_2$, be the relational matrices between $\nabla p_k^{(2)}$ and all linear polynomials in the graded basis of $\mathcal{P}_3$ (i.e., $p_1^{(1)}, p_2^{(1)}, \dots, p_{m_1}^{(1)}$). With the above notation,
$$B^{(3)} = \frac{1}{3!}\; C^{(3)} \cdot \begin{pmatrix} C_1^{(2)} \\ C_2^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot (I_{m_1} \mid A). \tag{11}$$
Proof. 
For simplicity, let $(I_{m_1} \mid A) = A^{(1)} = a_{ij}^{(1)}\, e^i \otimes e_j$. Since $p_k^{(2)} = p_{kij}\, x_i x_j$ and $p_{kij}\, e_i \otimes e_j$ is a $d \times d$ matrix, using Corollary 1, we obtain the following:
$$p_{kij}\, e_i \otimes e_j = \frac{1}{2}\, C_k^{(2)}\, (I_{m_1} \mid A) = \frac{1}{2}\, C_k^{(2)} A^{(1)}, \quad k = 1, \dots, m_2.$$
It follows from (9) that
$$\begin{aligned} 3\, B^{(3)} &= (c_{sk}^{(3)}\, e_s \otimes e^k) \cdot (p_{lij}\, e^l \otimes e_i \otimes e_j) = C^{(3)} \cdot \begin{pmatrix} p_{1ij}\, e_i \otimes e_j \\ p_{2ij}\, e_i \otimes e_j \\ \vdots \\ p_{m_2 ij}\, e_i \otimes e_j \end{pmatrix} = \frac{1}{2}\, C^{(3)} \cdot \begin{pmatrix} C_1^{(2)} A^{(1)} \\ C_2^{(2)} A^{(1)} \\ \vdots \\ C_{m_2}^{(2)} A^{(1)} \end{pmatrix} \\ &= \frac{1}{2}\, (c_{lq}^{(3)}\, e_l \otimes e^q) \cdot (c_{sij}^{(2)}\, a_{jt}^{(1)}\, e^s \otimes e_i \otimes e_t) = \frac{1}{2}\, c_{ls}^{(3)}\, c_{sij}^{(2)}\, a_{jt}^{(1)}\, e_l \otimes e_i \otimes e_t, \end{aligned}$$
which gives the following:
$$B^{(3)} = \frac{1}{3!}\, c_{ls}^{(3)}\, c_{sij}^{(2)}\, a_{jt}^{(1)}\, e_l \otimes e_i \otimes e_t.$$
On the other hand, the right-hand side of (11) is equal to the following:
$$\frac{1}{3!}\, (c_{lq}^{(3)}\, e_l \otimes e^q) \cdot (c_{wij}^{(2)}\, e^w \otimes e_i \otimes e_j) \cdot (a_{st}^{(1)}\, e^s \otimes e_t) = \frac{1}{3!}\, c_{lq}^{(3)}\, c_{qij}^{(2)}\, a_{jt}^{(1)}\, e_l \otimes e_i \otimes e_t.$$
This completes the proof of the theorem. □
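Theorem 2 can be verified numerically on a small concrete example. In the sketch below (our own construction, not from the paper) we take $d = 2$ and $\mathcal{P}_3 = \operatorname{span}\{1, x, y, x^2, xy, y^2, x^3 + x^2 y\}$, so $m_1 = 2$, $A$ is empty, $A^{(1)} = I_2$, and $m_2 = 3$; the relational matrices are read off from the gradients.

```python
# Numerical check of Theorem 2 on a concrete example (our own choice):
# d = 2, P_3 = span{1, x, y, x^2, xy, y^2, x^3 + x^2*y}, m1 = 2, m2 = 3.
import numpy as np

# Relational matrices C_k^(2) between grad p_k^(2) and (x, y):
C2 = np.array([[[2, 0], [0, 0]],        # grad x^2  = (2x, 0)
               [[0, 1], [1, 0]],        # grad xy   = (y, x)
               [[0, 0], [0, 2]]],       # grad y^2  = (0, 2y)
              dtype=float)              # shape (m2, d, d)

# Relational matrix C^(3) between grad p^(3) and (x^2, xy, y^2),
# where p^(3) = x^3 + x^2*y and grad p^(3) = (3x^2 + 2xy, x^2):
C3 = np.array([[3, 2, 0],
               [1, 0, 0]], dtype=float)  # shape (d, m2)

A1 = np.eye(2)                           # A^(1) = (I_{m1} | A) = I_2 here

# B^(3) = 1/3! * C^(3) . [C_1^(2); ...; C_{m2}^(2)] . A^(1)  (successive contractions)
B = np.einsum('sk,kij,jt->sit', C3, C2, A1) / 6.0

# The symmetric coefficient tensor of p^(3) = x^3 + x^2*y:
B_expected = np.zeros((2, 2, 2))
B_expected[0, 0, 0] = 1.0
for idx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    B_expected[idx] = 1.0 / 3.0

assert np.allclose(B, B_expected)
print("Theorem 2 verified for this example.")
```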

4.2. The Degrees of Freedom of $p^{(3)}$

We have proved that $B^{(3)}$ has the form (11). Corollary 1 shows that each $C_j^{(2)}$ has $\frac{1}{2} m_1 (m_1 + 1)$ free parameters when $\mathcal{P}_{<2}$ is given. Now we turn to the following question: given the space $\mathcal{P}_{<3}$, how many degrees of freedom does $C^{(3)}$ have? According to Equation (8), the constraints on $C^{(3)}$ are derived from the symmetry of $B^{(3)}$, i.e., Equation (6).
Lemma 2.
The number of equality constraints contained in (6) is the following:
$$d(d-1) + \frac{d(d-1)(d-2)}{3}.$$
Proof. 
There are three cases to consider. First, $b_{111}, b_{222}, \dots, b_{ddd}$ do not lead to any constraint. Second, $b_{112}, b_{113}, \dots$ are of the form $b_{sst}$, $s \neq t$, with $b_{sst} = b_{sts} = b_{tss}$. Notice that $b_{sst} = b_{sts}$ holds automatically by (8), so this case contributes $1 \cdot d(d-1)$ equality constraints. Third, $b_{123}, b_{124}, \dots$ are of the form $b_{stw}$ with $s, t, w$ pairwise distinct. Each $b_{stw}$ leads to five equations, of which three hold automatically, so there are $2\binom{d}{3} = \frac{d(d-1)(d-2)}{3}$ equality constraints in this case. □
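The count in Lemma 2 can also be double-checked by brute force: for each multiset of three indices, count how many independent equalities are needed to identify all orderings once the automatic symmetry in the last two indices is accounted for. The following script (our own check, not from the paper) confirms the formula for small $d$.

```python
# Brute-force check of Lemma 2 (illustrative): count the equality constraints
# needed to make b_{sij} (already symmetric in i, j by (8)) fully symmetric in
# all three indices, and compare with d(d-1) + d(d-1)(d-2)/3.
from itertools import permutations, product

def count_constraints(d):
    total = 0
    seen_multisets = set()
    for triple in product(range(d), repeat=3):
        key = tuple(sorted(triple))
        if key in seen_multisets:
            continue
        seen_multisets.add(key)
        # Orbits of the index triples with this multiset under the swap of the
        # last two positions (the symmetry that already holds automatically).
        orbits = set()
        for t in set(permutations(key)):
            orbits.add(min(t, (t[0], t[2], t[1])))
        # Identifying all orbits requires (#orbits - 1) independent equations.
        total += len(orbits) - 1
    return total

for d in range(2, 7):
    formula = d * (d - 1) + d * (d - 1) * (d - 2) // 3
    assert count_constraints(d) == formula
print("Lemma 2 count confirmed for d = 2, ..., 6.")
```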
Similar to (2), with the term order $x_1 \succ x_2 \succ \cdots \succ x_d$, we can write the following:
$$\begin{pmatrix} p_1^{(2)} \\ p_2^{(2)} \\ \vdots \\ p_{m_2}^{(2)} \end{pmatrix} = (I_{m_2} \mid \tilde{A})\, Q \begin{pmatrix} x_1^2 \\ x_1 x_2 \\ \vdots \\ x_d^2 \end{pmatrix}, \tag{13}$$
in which $Q$ is a column permutation matrix. Since the basis is reduced, $\operatorname{LM}(p_i^{(2)}) \neq \operatorname{LM}(p_j^{(2)})$ for $i \neq j$. We denote by $\operatorname{LM}(\mathcal{P}_{=2})$ and $H(\mathcal{P}_{=2})$ the following sets:
$$\operatorname{LM}(\mathcal{P}_{=2}) := \{\operatorname{LM}(p_1^{(2)}), \operatorname{LM}(p_2^{(2)}), \dots, \operatorname{LM}(p_{m_2}^{(2)})\},$$
$$H(\mathcal{P}_{=2}) := \operatorname{span}\{\, t \text{ is a monomial} : \deg(t) = 3,\ D_i t \in \operatorname{span}(\operatorname{LM}(\mathcal{P}_{=2})),\ i = 1, \dots, d \,\}.$$
Theorem 3.
Let $\chi$ denote the number of degrees of freedom of $C^{(3)}$. Then, we have the following:
$$\max\left\{0,\; d\, m_2 - \left(d(d-1) + \frac{d(d-1)(d-2)}{3}\right)\right\} \le \chi \le \dim\big(H(\mathcal{P}_{=2})\big). \tag{14}$$
Proof. 
With $\mathcal{P}_{<3}$ given, $\chi$ equals the number of degrees of freedom of $B^{(3)}$, so the second inequality obviously holds. To verify the first one, note that some of the linear equations derived from (6) may not be linearly independent; together with Lemma 2, the theorem is proved. □
Example 1.
Choose $d = 3$, $Q = I$, $m_2 = 5$ in (13). Then
$$\operatorname{LM}(\mathcal{P}_{=2}) = \{x^2, xy, xz, y^2, yz\},$$
which indicates that $H(\mathcal{P}_{=2}) = \operatorname{span}\{x^3, x^2 y, x^2 z, xyz, xy^2, y^3, y^2 z\}$. Thus, by the above theorem, the following holds:
$$\max\{0, 15 - 8\} \le \chi \le 7,$$
so that $\chi = 7$.
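The dimension of $H(\mathcal{P}_{=2})$ in Example 1 can be reproduced by direct enumeration; the short SymPy script below (our own illustration, not from the paper) lists the degree-3 monomials whose partial derivatives all lie in $\operatorname{span}(\operatorname{LM}(\mathcal{P}_{=2}))$.

```python
# Illustrative enumeration for Example 1: degree-3 monomials t (in x, y, z)
# whose partial derivatives all lie in span(LM(P_=2)).
import sympy as sp
from itertools import combinations_with_replacement

x, y, z = sp.symbols('x y z')
variables = (x, y, z)
lm = {x**2, x*y, x*z, y**2, y*z}             # LM(P_=2) from Example 1

# All monomials of total degree 3 in three variables.
degree3 = {sp.Mul(*c) for c in combinations_with_replacement(variables, 3)}

# Keep t if every nonzero partial derivative is a scalar multiple of some monomial in LM(P_=2).
H = [t for t in degree3
     if all(sp.diff(t, v) == 0 or sp.diff(t, v).as_coeff_Mul()[1] in lm
            for v in variables)]

print(sorted(map(str, H)))    # x^3, x^2*y, x^2*z, x*y*z, x*y^2, y^3, y^2*z
print(len(H))                 # 7, so dim H(P_=2) = 7 and chi = 7
```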
Next, let us show through the following example that the estimate (14) is sharp.
Example 2.
Let $d = 3$, $m_2 = 4$, and
$$p_1^{(2)} = x^2 + 0 + 0 + 0 + p_{123}\, yz + p_{133}\, z^2, \qquad p_2^{(2)} = 0 + xy + 0 + 0 + p_{223}\, yz + p_{233}\, z^2,$$
$$p_3^{(2)} = 0 + 0 + xz + 0 + p_{323}\, yz + p_{333}\, z^2, \qquad p_4^{(2)} = 0 + 0 + 0 + y^2 + p_{423}\, yz + p_{433}\, z^2.$$
By (14),
$$4 \le \chi \le 5.$$
We can verify the following:
$\chi = 4$, if $p_{123} = p_{223} = p_{323} = p_{433} = 1$ and $p_{133} = p_{233} = p_{333} = p_{423} = 0$; the free parameters in $C^{(3)}$ are $c_{11}^{(3)}, c_{12}^{(3)}, c_{13}^{(3)}, c_{14}^{(3)}$.
$\chi = 5$, if $p_{k23} = p_{k33} = 0$, $k = 1, 2, 3, 4$; the free parameters can be chosen as $c_{11}^{(3)}, c_{12}^{(3)}, c_{13}^{(3)}, c_{14}^{(3)}, c_{24}^{(3)}$.

5. The Structure of the D-Invariant Subspace $\mathcal{P}_n$

Consider the following D-invariant subspace:
$$\mathcal{P}_n = \operatorname{span}\{1, p_1^{(1)}, p_2^{(1)}, \dots, p_{m_1}^{(1)}, \dots, p_1^{(n-1)}, p_2^{(n-1)}, \dots, p_{m_{n-1}}^{(n-1)}, p_1^{(n)}\}, \quad n \ge 3,$$
in which $m_k \le \binom{d+k-1}{k}$ for $k = 1, \dots, n-1$. We will discuss the special case where all polynomials in the reduced basis of $\mathcal{P}_n$ are restricted to be homogeneous. Let
$$p_t^{(s)} = b_{t\, i_1 i_2 \cdots i_s}\, x_{i_1} x_{i_2} \cdots x_{i_s}, \quad s = 1, \dots, n,\ t = 1, \dots, m_s,$$
with $b_{t\, i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(s)}} = b_{t\, i_1 i_2 \cdots i_s}$ and $m_n = 1$.
Similar to the case when $n = 3$, let $B_t^{(s)} = b_{t\, i_1 i_2 \cdots i_s}\, e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_s}$; the following can be verified:
$$p_t^{(s)} = B_t^{(s)} \cdot \underbrace{X \cdot X \cdots X}_{s\ \mathrm{copies}}, \qquad \nabla p_t^{(s)} = s\, B_t^{(s)} \cdot \underbrace{X \cdot X \cdots X}_{s-1\ \mathrm{copies}}. \tag{15}$$
Note that if $s = 1$, $B_t^{(1)}$ is a first order tensor, which can be written as a vector as in Section 3; if $s = 2$, $B_t^{(2)}$ is a second order tensor, which we write as a matrix in (3). Namely, $p_t^{(2)}$ has two equivalent forms:
$$p_t^{(2)} = \frac{1}{2}\, X^T B_t^{(2)} X = \frac{1}{2}\, B_t^{(2)} \cdot X \cdot X.$$
Finally, assuming that $\mathcal{P}_{<n}$ is given, we state a general form of Theorem 2.
Theorem 4.
For any fixed $j \in \{2, \dots, n\}$ and $k \in \{1, \dots, m_j\}$, let $C_k^{(j)}$ be the relational matrix between $\nabla p_k^{(j)}$ and all polynomials of total degree $j - 1$ in the basis of $\mathcal{P}_n$, $n \ge 3$. Then,
$$B_1^{(n)} = \frac{1}{n!}\; C_1^{(n)} \cdot \begin{pmatrix} C_1^{(n-1)} \\ C_2^{(n-1)} \\ \vdots \\ C_{m_{n-1}}^{(n-1)} \end{pmatrix} \cdot \begin{pmatrix} C_1^{(n-2)} \\ C_2^{(n-2)} \\ \vdots \\ C_{m_{n-2}}^{(n-2)} \end{pmatrix} \cdots \begin{pmatrix} C_1^{(2)} \\ C_2^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot A^{(1)}, \tag{17}$$
where $A^{(1)} = (I_{m_1} \mid A)$.
Unlike for a general linear subspace of polynomials, due to the “closed” property, the relation between the higher-degree polynomials (i.e., $p_t^{(s)}$) and the linear polynomials in $\mathcal{P}_n$ can be read off from the above formula. For the proof of this theorem, we need the following lemmas.
Lemma 3.
$$C_1^{(n)} \cdot \begin{pmatrix} B_1^{(n-1)} \cdot X \cdot X \cdots X \\ B_2^{(n-1)} \cdot X \cdot X \cdots X \\ \vdots \\ B_{m_{n-1}}^{(n-1)} \cdot X \cdot X \cdots X \end{pmatrix} = C_1^{(n)} \cdot \begin{pmatrix} B_1^{(n-1)} \\ B_2^{(n-1)} \\ \vdots \\ B_{m_{n-1}}^{(n-1)} \end{pmatrix} \cdot X \cdot X \cdots X. \tag{18}$$
Proof. 
The left-hand side of (18) is equal to the following:
$$C_1^{(n)} \cdot \begin{pmatrix} b_{1 ij \dots k}\, x_i x_j \cdots x_k \\ b_{2 ij \dots k}\, x_i x_j \cdots x_k \\ \vdots \\ b_{m_{n-1} ij \dots k}\, x_i x_j \cdots x_k \end{pmatrix} = (c_{1st}\, e_s \otimes e^t) \cdot (b_{q ij \dots k}\, x_i x_j \cdots x_k\, e^q) = c_{1st}\, b_{t ij \dots k}\, x_i x_j \cdots x_k\, e_s;$$
and the right-hand side is equal to the following:
$$(c_{1st}\, e_s \otimes e^t) \cdot (b_{q ij \dots k}\, e^q \otimes e_i \otimes e_j \otimes \cdots \otimes e_k) \cdot (x_i e_i) \cdot (x_j e_j) \cdots (x_t e_t) = (c_{1st}\, b_{t ij \dots k}\, e_s \otimes e_i \otimes \cdots \otimes e_k) \cdot (x_i e_i) \cdot (x_j e_j) \cdots (x_t e_t) = c_{1st}\, b_{t ij \dots k}\, x_i x_j \cdots x_k\, e_s.$$
This completes the proof. □
In Theorem 2, we have actually proved the following:
$$C^{(3)} \cdot \begin{pmatrix} C_1^{(2)} A^{(1)} \\ C_2^{(2)} A^{(1)} \\ \vdots \\ C_{m_2}^{(2)} A^{(1)} \end{pmatrix} = C^{(3)} \cdot \begin{pmatrix} C_1^{(2)} \\ C_2^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot A^{(1)}.$$
This can be easily generalized to arbitrary $n$, $n > 3$, as follows.
Lemma 4.
$$C^{(n)} \cdot \begin{pmatrix} C_1^{(n-1)} \cdot \begin{pmatrix} C_1^{(n-2)} \\ \vdots \\ C_{m_{n-2}}^{(n-2)} \end{pmatrix} \cdots \begin{pmatrix} C_1^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot A^{(1)} \\ \vdots \\ C_{m_{n-1}}^{(n-1)} \cdot \begin{pmatrix} C_1^{(n-2)} \\ \vdots \\ C_{m_{n-2}}^{(n-2)} \end{pmatrix} \cdots \begin{pmatrix} C_1^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot A^{(1)} \end{pmatrix} = C^{(n)} \cdot \begin{pmatrix} C_1^{(n-1)} \\ C_2^{(n-1)} \\ \vdots \\ C_{m_{n-1}}^{(n-1)} \end{pmatrix} \cdots \begin{pmatrix} C_1^{(2)} \\ C_2^{(2)} \\ \vdots \\ C_{m_2}^{(2)} \end{pmatrix} \cdot A^{(1)}.$$
Proof 
(Proof of Theorem 4). We use induction on $n$. If $n = 3$, the theorem is true by Theorem 2. Now assume that the theorem holds for all $B_t^{(n-1)}$, $t = 1, \dots, m_{n-1}$, $n > 3$. Letting $s = n$ in (15), we have the following:
$$\nabla p_1^{(n)} = n\, B_1^{(n)} \cdot X \cdot X \cdots X;$$
on the other hand, using Lemma 3, we obtain the following:
$$\nabla p_1^{(n)} = C_1^{(n)} \cdot \begin{pmatrix} p_1^{(n-1)} \\ p_2^{(n-1)} \\ \vdots \\ p_{m_{n-1}}^{(n-1)} \end{pmatrix} = C_1^{(n)} \cdot \begin{pmatrix} B_1^{(n-1)} \cdot X \cdot X \cdots X \\ B_2^{(n-1)} \cdot X \cdot X \cdots X \\ \vdots \\ B_{m_{n-1}}^{(n-1)} \cdot X \cdot X \cdots X \end{pmatrix} = C_1^{(n)} \cdot \begin{pmatrix} B_1^{(n-1)} \\ B_2^{(n-1)} \\ \vdots \\ B_{m_{n-1}}^{(n-1)} \end{pmatrix} \cdot X \cdot X \cdots X.$$
Thus,
$$B_1^{(n)} = \frac{1}{n}\; C_1^{(n)} \cdot \begin{pmatrix} B_1^{(n-1)} \\ B_2^{(n-1)} \\ \vdots \\ B_{m_{n-1}}^{(n-1)} \end{pmatrix}.$$
By our inductive assumption and Lemma 4, the theorem is proved. □
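The recursion $B_1^{(n)} = \frac{1}{n}\, C_1^{(n)} \cdot \big(B_1^{(n-1)}; \dots; B_{m_{n-1}}^{(n-1)}\big)$ used in this proof unrolls directly into the product formula (17) and is straightforward to implement. The following NumPy sketch is our own illustration under the notation above (the array shapes and the toy data reuse the small example given after Theorem 2); it is not code from the paper.

```python
# Minimal numpy sketch (illustrative) of the recursion in the proof of Theorem 4:
#     B_t^(s) = (1/s) * C_t^(s) . [B_1^(s-1); ...; B_{m_{s-1}}^(s-1)].
# `A1` stacks the first-order tensors B_t^(1) (the rows of (I_{m1} | A)), and
# `C[s]` stacks the relational matrices C_t^(s), one per degree-s basis
# polynomial, with shape (m_s, d, m_{s-1}).  All names here are hypothetical.
import numpy as np

def stacked_coefficient_tensors(A1, C):
    """Return, for each degree s, the array stacking the tensors B_t^(s)."""
    B = {1: np.asarray(A1, dtype=float)}             # shape (m_1, d)
    for s in sorted(C):                              # s = 2, 3, ..., n
        # Contract the second index of each C_t^(s) with the stacking index of
        # the previous level: B_t^(s)[i,...] = (1/s) * sum_k C_t^(s)[i,k] * B_k^(s-1)[...]
        B[s] = np.einsum('tik,k...->ti...', C[s], B[s - 1]) / s
    return B

# Toy usage with d = 2, m1 = 2, m2 = 3, m3 = 1 (the example used after Theorem 2):
A1 = np.eye(2)                                       # p^(1) = (x, y)
C2 = np.array([[[2, 0], [0, 0]],                     # grad x^2  = (2x, 0)
               [[0, 1], [1, 0]],                     # grad xy   = (y, x)
               [[0, 0], [0, 2]]], dtype=float)       # grad y^2  = (0, 2y)
C3 = np.array([[[3, 2, 0], [1, 0, 0]]], dtype=float) # grad(x^3 + x^2*y)
B = stacked_coefficient_tensors(A1, {2: C2, 3: C3})
print(B[3][0])    # the symmetric tensor of x^3 + x^2*y (b_111 = 1, b_112 = 1/3, ...)
```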

6. Conclusions

Li and Zhi demonstrated the structure of the breadth-one D-invariant polynomial subspace in [8]. We analyzed the structure of a second-degree D-invariant polynomial subspace $\mathcal{P}_2$ in our previous work [5]. As an application to ideal interpolation, we solved the discrete approximation problem for $\delta_z \mathcal{P}_2(D)$ under certain conditions. In this work, we discussed the structure of $\mathcal{P}_n$ in a special case, where all polynomials in the reduced basis of $\mathcal{P}_n$ are restricted to be homogeneous. In the future, we will consider the more general case of $\mathcal{P}_n$. For any fixed $s$ and $t$, we can decompose $p_t^{(s)}$ into its homogeneous components and write the following:
$$p_t^{(s)} = B_{t,s}^{(s)} \cdot \underbrace{X \cdots X}_{s\ \mathrm{copies}} + B_{t,s-1}^{(s)} \cdot \underbrace{X \cdots X}_{s-1\ \mathrm{copies}} + \cdots + B_{t,1}^{(s)} \cdot X, \quad s = 1, \dots, n,\ t = 1, \dots, m_s,$$
with $B_{t,j}^{(s)}$ a $j$th order symmetric Cartesian tensor for $j = 2, \dots, s$. Since 1 always belongs to $\mathcal{P}_n$, the constant term in $p_t^{(s)}$ can be omitted after reduction. We can then analyze the structure of the following:
$$p_1^{(n)} = B_{1,n}^{(n)} \cdot \underbrace{X \cdots X}_{n\ \mathrm{copies}} + B_{1,n-1}^{(n)} \cdot \underbrace{X \cdots X}_{n-1\ \mathrm{copies}} + \cdots + B_{1,1}^{(n)} \cdot X$$
with $\mathcal{P}_{<n}$ given. In view of the D-invariance of $\mathcal{P}_n$, $B_{1,n}^{(n)}$ only relates to the highest homogeneous components of all $p_j^{(n-1)}$, $j = 1, \dots, m_{n-1}$, i.e., to $B_{j,n-1}^{(n-1)}$. In addition, note that each $B_{j,n-1}^{(n-1)}$ can be expressed in the same way; hence, $B_{1,n}^{(n)}$ has the form (17). $B_{1,j}^{(n)}$, $j = 2, \dots, n-1$, relates to the homogeneous components of total degree $j - 1$ of the polynomials in $\mathcal{P}_{<n}$. With linear reduction, $B_{1,1}^{(n)}$ can be written in the form $B_{1,1}^{(n)} = (0, \dots, 0, l_{m_1+1}^{(n)}, \dots, l_d^{(n)})$.

Author Contributions

Conceptualization, X.J.; methodology, X.J. and K.C.; software, X.J. and K.C.; validation, X.J. and K.C.; formal analysis, X.J. and K.C.; investigation, X.J. and K.C.; resources, X.J. and K.C.; data curation, X.J. and K.C.; writing–original draft preparation, X.J.; writing–review and editing, K.C.; visualization, X.J. and K.C.; supervision, X.J. and K.C.; project administration, X.J. and K.C.; funding acquisition, X.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 11901402.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. de Boor, C. Ideal interpolation. In Approximation Theory XI: Gatlinburg; Nashboro Press: Brentwood, TN, USA, 2004; pp. 59–91.
2. Marinari, M.G.; Möller, H.M.; Mora, T. Gröbner bases of ideals defined by functionals with an application to ideals of projective points. Appl. Algebra Engrg. Comm. Comput. 1993, 4, 103–145.
3. de Boor, C.; Shekhtman, B. On the pointwise limits of bivariate Lagrange projectors. Linear Algebra Appl. 2008, 429, 311–325.
4. Shekhtman, B. On a conjecture of Carl de Boor regarding the limits of Lagrange interpolants. Constr. Approx. 2006, 24, 365–370.
5. Jiang, X.; Zhang, S. The structure of a second-degree D-invariant subspace and its application in ideal interpolation. J. Approx. Theory 2016, 207, 232–240.
6. Dayton, B.H.; Zeng, Z. Computing the multiplicity structure in solving polynomial systems. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, Beijing, China, 24–27 July 2005; ACM: New York, NY, USA, 2005; pp. 116–123.
7. Zeng, Z. The Closedness Subspace Method for Computing the Multiplicity Structure of a Polynomial System. 2009. Available online: http://orion.neiu.edu/~zzeng/Papers/csdual.pdf (accessed on 1 May 2021).
8. Li, N.; Zhi, L. Computing the multiplicity structure of an isolated singular solution: Case of breadth one. J. Symb. Comput. 2012, 47, 700–710.
9. Cox, D.; Little, J.; O’Shea, D. Ideals, Varieties, and Algorithms; Springer: New York, NY, USA, 1992.
10. Northcott, D.G. Multilinear Algebra; Cambridge University Press: Cambridge, UK, 1984.
11. Comon, P.; Golub, G.; Lim, L.H.; Mourrain, B. Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 2008, 30, 1254–1279.
12. Zhang, R. A Brief Tutorial on Tensor Analysis; Tongji University Press: Shanghai, China, 2010.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
