Article

Douglas–Rachford Splitting Method with Linearization for the Split Feasibility Problem

1 College of Science, Civil Aviation University of China, Tianjin 300300, China
2 Department of Mathematics, Nanchang University, Nanchang 330031, China
3 Department of Mathematics and Engineering Sciences, Hellenic Military Academy, 16673 Vari Attikis, Greece
4 Program in Interdisciplinary Studies, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(3), 537; https://doi.org/10.3390/sym14030537
Submission received: 5 February 2022 / Revised: 24 February 2022 / Accepted: 25 February 2022 / Published: 6 March 2022
(This article belongs to the Special Issue Symmetry in Nonlinear Analysis and Fixed Point Theory)

Abstract

The aim of this article is to introduce the Douglas–Rachford splitting method with linearization to solve the split feasibility problem (SFP). The proposed method includes two existing methods, due to Tang et al. and to Wang, as special cases, and the ranges of the parameters in the method of Tang et al. are extended from (0,1) to (0,2). Under standard conditions, we prove the weak convergence of the proposed algorithm. We also provide two numerical experiments that illustrate its effectiveness by comparing it with the algorithms of Tang et al. and Wang.

1. Introduction

In this article, we consider the split feasibility problem (SFP) in the form
$$\text{find } x^* \in C \text{ such that } Ax^* \in Q, \qquad (1)$$
where C and Q are nonempty closed convex subsets of the real Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. The SFP was introduced by Censor and Elfving [1] for inverse problems arising from phase retrieval and medical image reconstruction [2], and was later generalized to the multiple-sets split feasibility problem [3].
Throughout this article, we assume that the SFP (1) is consistent, i.e., its solution set, denoted by
$$\Gamma = \{x \mid x \in C \text{ and } Ax \in Q\},$$
is nonempty.
By transforming the SFP into an equivalent constrained optimization problem, Byrne [2,4] first introduced the well-known CQ algorithm, which has received a great deal of attention and has been improved by many authors in various ways; see, e.g., [5,6,7,8,9,10,11,12].
To solve the SFP (1), Wang [13] introduced the following gradient method which was called Polyak’s gradient method:
$$x_{k+1} = x_k - \tau_k\big[(x_k - P_Cx_k) + A^*(I - P_Q)Ax_k\big], \qquad (2)$$
where
$$\tau_k = \rho_k\,\frac{\|x_k - P_Cx_k\|^2 + \|(I - P_Q)Ax_k\|^2}{2\|(x_k - P_Cx_k) + A^*(I - P_Q)Ax_k\|^2},$$
and $\rho_k \in (0,2)$. In fact, (2) is a special case of the so-called simple projection method in [14]. The authors of [15] proposed an iterative method for solving the multiple-sets split feasibility problem with splitting self-adaptive stepsizes, which reduces to the following scheme when applied to the SFP (1):
$$x_{k+1} = x_k - \mu_k(x_k - P_Cx_k) - \nu_k A^*(I - P_Q)Ax_k, \qquad (3)$$
where $\mu_k \in (0,1)$ and
$$\nu_k = \sigma_k\,\frac{\|(I - P_Q)Ax_k\|^2}{\|A^*(I - P_Q)Ax_k\|^2},$$
and $\sigma_k \in (0,1)$. Note that the difference between the schemes (2) and (3) lies in the choice of the stepsizes.
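To make the two baseline schemes concrete, here is a minimal NumPy sketch of one iteration of each of (2) and (3) for a finite-dimensional SFP; the function names, the default parameter values, and the assumption that the projections are available as callables proj_C and proj_Q are ours, not part of the original methods.

```python
import numpy as np

def step_scheme_2(x, A, proj_C, proj_Q, rho=1.0):
    """One iteration of scheme (2) with Polyak's self-adaptive stepsize tau_k."""
    u = x - proj_C(x)                      # x_k - P_C(x_k)
    v = A @ x - proj_Q(A @ x)              # (I - P_Q) A x_k
    g = u + A.T @ v                        # (x_k - P_C x_k) + A*(I - P_Q) A x_k
    g2 = np.dot(g, g)
    if g2 == 0.0:                          # x_k already solves the SFP
        return x
    tau = rho * (np.dot(u, u) + np.dot(v, v)) / (2.0 * g2)
    return x - tau * g

def step_scheme_3(x, A, proj_C, proj_Q, mu=0.5, sigma=0.5):
    """One iteration of scheme (3) with splitting self-adaptive stepsizes mu_k, nu_k."""
    u = x - proj_C(x)
    v = A @ x - proj_Q(A @ x)
    w = A.T @ v                            # A*(I - P_Q) A x_k
    w2 = np.dot(w, w)
    nu = sigma * np.dot(v, v) / w2 if w2 > 0.0 else 0.0
    return x - mu * u - nu * w
```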
The Douglas–Rachford splitting method introduced in [16] is a popular method for solving structured optimization problems. Let g and h be proper lower semicontinuous convex functions from a Hilbert space $H$ to $(-\infty, +\infty]$. Consider the structured optimization problem:
$$\min_{x\in H}\; g(x) + h(x).$$
Given the initial guess x 0 , the Douglas–Rachford splitting method generates the iterative sequence via the following scheme:
$$\begin{aligned} y_{k+1} &= \operatorname*{argmin}_{y\in H}\Big\{g(y) + \frac{1}{2\lambda_k}\|y - x_k\|^2\Big\},\\ z_{k+1} &= \operatorname*{argmin}_{z\in H}\Big\{h(z) + \frac{1}{2\lambda_k}\|z - (2y_{k+1} - x_k)\|^2\Big\},\\ x_{k+1} &= x_k + \alpha_k(z_{k+1} - y_{k+1}), \end{aligned} \qquad (4)$$
where $\{\alpha_k\}_{k\in\mathbb{N}} \subset (0,2)$ is a parameter sequence and $\{\lambda_k\}_{k\in\mathbb{N}} \subset (0,\infty)$ is the sequence of proximal parameters of the regularization terms. Note that the scheme (4) with $\{\alpha_k\}_{k\in\mathbb{N}} \subset (0,+\infty)$ was called the general splitting method in [17] and becomes the Peaceman–Rachford splitting method when $\{\alpha_k\}_{k\in\mathbb{N}} \subset [2,+\infty)$.
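As an illustration only, the following Python sketch implements the scheme (4) under the simplifying assumption that the two proximal mappings are supplied by the caller and that the parameters are held constant; under the standard assumptions, the sequence y_k approaches a minimizer of g + h.

```python
import numpy as np

def douglas_rachford(prox_g, prox_h, x0, lam=1.0, alpha=1.0, n_iter=100):
    """A sketch of scheme (4) for min_x g(x) + h(x).

    prox_g(v, lam) must return argmin_y { g(y) + ||y - v||^2 / (2*lam) },
    and prox_h is analogous; lam > 0 and alpha in (0, 2) are kept constant here.
    """
    x = np.asarray(x0, dtype=float)
    y = x
    for _ in range(n_iter):
        y = prox_g(x, lam)                 # y_{k+1}: proximal step on g
        z = prox_h(2.0 * y - x, lam)       # z_{k+1}: proximal step on h at the reflected point
        x = x + alpha * (z - y)            # x_{k+1}: relaxed update
    return y
```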
It is easy to see that the split feasibility problem (1) is also equivalent to the following unconstrained minimization problem:
$$\min_{x\in H_1}\; \iota_C(x) + f(x), \qquad (5)$$
where $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$ and $\iota_C(x)$ is the indicator function of the set C; that is, $\iota_C(x) = 0$ if $x \in C$ and $\iota_C(x) = +\infty$ otherwise. Recall that the gradient of $f(x)$ is $\nabla f(x) = A^*(I - P_Q)Ax$. Letting $g = f$ and $h = \iota_C$ in (4) and using the linearization technique, the authors of [18] recently introduced the following general splitting method with linearization for the SFP (1):
$$y_{k+1} := x_k - \lambda_k\nabla f(x_k), \qquad z_{k+1} := P_C(2y_{k+1} - x_k), \qquad x_{k+1} := x_k + \alpha_k(z_{k+1} - y_{k+1}), \qquad (6)$$
where { α k } k N is a sequence of positive numbers and
$$\lambda_k := \begin{cases} \gamma\,\dfrac{f(x_k)}{\|\nabla f(x_k)\|^2}, & \text{if } \nabla f(x_k) \ne 0,\\ 0, & \text{otherwise}, \end{cases}$$
and $\gamma \in (0,2)$. The weak convergence of the algorithm (6) was established under standard conditions, and the algorithm (6) shows good numerical performance compared with the algorithm (2) and Algorithm 3.1 in [10].
In this article, we present an iterative scheme by letting g = ι C and h = f in (4) and using the linearization technique. The convergence of the corresponding scheme is analyzed.
The rest of the paper is organized as follows. In Section 2, we recall some definitions and known results needed for the analysis. In Section 3, we present the Douglas–Rachford splitting method with linearization and its relaxed version. In Section 4, we prove that the proposed algorithm converges weakly to a solution of the SFP. In Section 5, we give two numerical experiments to show the behavior of the algorithm. Finally, some concluding remarks are given in Section 6.

2. Preliminaries

Let H be a Hilbert space and K be a nonempty closed convex subset of H . We use the notation:
  • ⇀ for weak convergence and → for strong convergence;
  • $\omega_w(x_k) = \{x : \exists\, \{x_{k_l}\}\ \text{with}\ x_{k_l} \rightharpoonup x\}$ denotes the weak $\omega$-limit set of $\{x_k\}_{k\in\mathbb{N}}$.
The following identity will be used for the main results (see [19], Corollary 2.15):
$$\|\alpha x + (1-\alpha)y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2 - \alpha(1-\alpha)\|x - y\|^2 \qquad (7)$$
for all $\alpha \in \mathbb{R}$ and $(x, y) \in H \times H$.
Definition 1
([19], Definition 6.38). Let K be a nonempty convex subset of $H$ and let $x \in H$. The normal cone to K at x is
$$N_K(x) = \begin{cases} \{u \in H \mid \sup\langle K - x, u\rangle \le 0\}, & \text{if } x \in K;\\ \emptyset, & \text{otherwise}. \end{cases}$$
For a point x, the classical metric projection of x onto K, denoted by $P_K(x)$, is defined by
$$P_K(x) := \operatorname*{argmin}\{\|x - y\| : y \in K\}.$$
Lemma 1
([19], Proposition 4.4). For any $x, y \in H$ and $z \in K$, the following hold:
(i) 
$\|P_K(x) - P_K(y)\|^2 \le \langle P_K(x) - P_K(y), x - y\rangle$;
(ii) 
$\|P_K(x) - z\|^2 \le \|x - z\|^2 - \|P_K(x) - x\|^2$;
(iii) 
$\langle (I - P_K)x - (I - P_K)y, x - y\rangle \ge \|(I - P_K)x - (I - P_K)y\|^2$.
It follows from Lemma 1 (iii) that
$$\langle x - P_K(x), x - z\rangle \ge \|x - P_K(x)\|^2, \qquad \forall x \in H,\ z \in K. \qquad (8)$$
Recall that a mapping $T : H \to H$ is said to be nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \qquad \forall x, y \in H.$$
Lemma 1 (i) implies that P K is firmly nonexpansive and consequently nonexpansive.
The next lemma shows that the nonexpansive mappings are demiclosed at 0.
Lemma 2
([19], Theorem 4.27). Let K be a nonempty closed convex subset of $H$ and let $T : K \to H$ be a nonexpansive mapping. Let $\{x_k\}_{k\in\mathbb{N}}$ be a sequence in K and $x \in H$ such that $x_k \rightharpoonup x$ and $Tx_k - x_k \to 0$ as $k \to +\infty$. Then $x \in \operatorname{Fix}(T)$.
Lemma 3
([19], Lemma 2.47). Let K be a nonempty closed convex subset of H and { x k } k N be a sequence in H such that the following two conditions hold:
(i) 
For all $x \in K$, $\lim_{k\to\infty}\|x_k - x\|$ exists;
(ii) 
Every sequential weak cluster point of { x k } k N is in K.
Then the sequence { x k } k N converges weakly to a point in K.

3. Douglas–Rachford Splitting Method with Linearization

In this section, we introduce the Douglas–Rachford splitting method with linearization and its relaxed variant.
Using symmetry, we can set $g = \iota_C$ and $h = f$ in (4). We first present a direct adaptation of the Douglas–Rachford splitting method (4) to the equivalent formulation (5) of the SFP (1); see Algorithm 1.
Algorithm 1 Douglas–Rachford splitting method.
  • Step 0. Input k : = 0 , x 0 H 1 .
  • Step 1. Generate x k + 1 by
    $$y_{k+1} := P_C(x_k), \qquad z_{k+1} := \operatorname*{argmin}_{z\in H_1}\Big\{f(z) + \frac{1}{2\lambda_k}\|z - (2y_{k+1} - x_k)\|^2\Big\}, \qquad x_{k+1} := x_k + \alpha_k(z_{k+1} - y_{k+1}), \qquad (9)$$
    where { α k } k N and { λ k } k N are sequences of positive numbers.
  • Step 2. If a termination criterion is not met, then set k : = k + 1 and go to Step 1.
By the first-order optimality condition of the second formula of (9) in Algorithm 1,
$$0 = \nabla f(z_{k+1}) + \frac{1}{\lambda_k}\big[z_{k+1} - (2y_{k+1} - x_k)\big] = A^*(I - P_Q)Az_{k+1} + \frac{1}{\lambda_k}\big[z_{k+1} - (2y_{k+1} - x_k)\big].$$
To calculate $z_{k+1}$ exactly, we would need to evaluate $(I + \lambda_k A^*(I - P_Q)A)^{-1}$, which is in general very difficult. In order to overcome this difficulty, we linearize $f(z)$ at $x_k$. Then, the second formula of (9) becomes
$$z_{k+1} := \operatorname*{argmin}_{z\in H_1}\Big\{f(x_k) + \langle\nabla f(x_k), z - x_k\rangle + \frac{1}{2\lambda_k}\|z - (2y_{k+1} - x_k)\|^2\Big\} = \operatorname*{argmin}_{z\in H_1}\Big\{\langle\nabla f(x_k), z - x_k\rangle + \frac{1}{2\lambda_k}\|z - (2y_{k+1} - x_k)\|^2\Big\}.$$
Its first-order optimality condition is
$$\nabla f(x_k) + \frac{1}{\lambda_k}\big[z_{k+1} - (2y_{k+1} - x_k)\big] = 0.$$
Then we have
$$z_{k+1} = 2y_{k+1} - x_k - \lambda_k\nabla f(x_k).$$
Thus, we get the following linearized Douglas–Rachford splitting method:
$$y_{k+1} := P_C(x_k), \qquad z_{k+1} := 2y_{k+1} - x_k - \lambda_k\nabla f(x_k), \qquad x_{k+1} := x_k + \alpha_k(z_{k+1} - y_{k+1}).$$
After a simple calculation, we get
$$x_{k+1} := x_k + \alpha_k\big[P_C(x_k) - x_k - \lambda_k\nabla f(x_k)\big].$$
In what follows, we present the Douglas–Rachford splitting method with linearization; see Algorithm 2.
Algorithm 2 Douglas–Rachford splitting method with linearization.
  • Step 0. Input k : = 0 , x 0 H 1 .
  • Step 1. Given x k , generate x k + 1 by
    $$x_{k+1} := x_k - \alpha_k\big[(x_k - P_C(x_k)) + \lambda_k A^*(I - P_Q)Ax_k\big],$$
    where { α k } k N and { λ k } k N are two sequences of positive numbers.
  • Step 2. If $\nabla f(x_k) = 0$ and $x_{k+1} = x_k$, then terminate. Otherwise, set $k := k+1$ and go to Step 1.
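A compact NumPy sketch of Algorithm 2 for a finite-dimensional SFP is given below. The callables proj_C, proj_Q, alpha and lam are assumptions of this sketch (the last two return α_k and λ_k for a given k), and the tolerance-based test is only a numerical surrogate for the exact stopping test of Step 2.

```python
import numpy as np

def dr_splitting_linearized(x0, A, proj_C, proj_Q, alpha, lam, n_iter=1000, tol=1e-12):
    """A sketch of Algorithm 2: x_{k+1} = x_k - alpha_k [ (x_k - P_C x_k) + lam_k A*(I - P_Q) A x_k ]."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        u = x - proj_C(x)                           # x_k - P_C(x_k)
        grad_f = A.T @ (A @ x - proj_Q(A @ x))      # grad f(x_k) = A*(I - P_Q) A x_k
        step = alpha(k) * (u + lam(k) * grad_f)
        if np.linalg.norm(grad_f) <= tol and np.linalg.norm(step) <= tol:
            break                                   # numerical surrogate of the Step 2 test
        x = x - step
    return x

# Example call with constant parameter sequences (illustrative values):
# x_sol = dr_splitting_linearized(x0, A, proj_C, proj_Q, alpha=lambda k: 1.0, lam=lambda k: 1.0)
```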
Remark 1.
Algorithm 2 is a general scheme with two parameter sequences $\{\alpha_k\}_{k\in\mathbb{N}}$ and $\{\lambda_k\}_{k\in\mathbb{N}}$, which includes the algorithms (2) and (3) as special cases. Indeed,
(i) 
Let $\alpha_k = \tau_k$ and $\lambda_k \equiv 1$, $k \in \mathbb{N}$; then, Algorithm 2 becomes the algorithm (2);
(ii) 
Let $\alpha_k = \mu_k$ and $\lambda_k = \nu_k/\mu_k$, $k \in \mathbb{N}$; then, Algorithm 2 becomes the algorithm (3).
In the following, we extend the ranges of the parameter sequences { μ k } k N and { σ k } k N in the algorithm (3). To this end, let α k = β k θ and
$$\lambda_k = \frac{(1-\beta_k)\rho\,\eta_k}{\alpha_k},$$
where $\beta_k \in (0,1)$, $\theta \in (0,2)$, $\rho \in (0,2)$ and
$$\eta_k = \frac{\|(I - P_Q)Ax_k\|^2}{\|A^*(I - P_Q)Ax_k\|^2}.$$
Then, Algorithm 2 becomes
$$x_{k+1} = x_k - \beta_k\theta(x_k - P_C(x_k)) - (1-\beta_k)\rho\,\eta_k A^*(I - P_Q)Ax_k. \qquad (10)$$
Note that $\beta_k\theta \in (0,2)$ and $(1-\beta_k)\rho \in (0,2)$. Comparing (3) and (10), the ranges of the parameter sequences $\{\mu_k\}_{k\in\mathbb{N}}$ and $\{\sigma_k\}_{k\in\mathbb{N}}$ in the algorithm (3) are thus extended from (0,1) to (0,2). It is worth noting that the two parameters cannot both lie in (1,2): if one of them lies in (1,2), the other must lie in (0,1). Since $\alpha_k = \beta_k\theta \in (0,2)$, Algorithm 2 with this choice is indeed a Douglas–Rachford splitting method with linearization.
We give two choices of β k , as follows:
  • Constant choice: $\beta_k := \beta \in (0,1)$ for all $k \in \mathbb{N}$;
  • Adaptive choice: let $\underline{\beta}, \bar{\beta} \in (0,1)$,
    $$\beta_k^* = \frac{1}{2} + \frac{\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - \rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2}{2\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2}, \qquad (11)$$
    and $\beta_k = \max\{\underline{\beta}, \min\{\beta_k^*, \bar{\beta}\}\}$.
Note that the upper and lower bounds are imposed in the adaptive choice of $\beta_k$ to guarantee $\beta_k \in (0,1)$; therefore, this choice is not fully adaptive.
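The sketch below collects the parametrized update (10) together with the constant and the clipped adaptive choices of β_k; the clipping bounds 0.1 and 0.9 stand in for the lower and upper bounds and, like the default values of θ and ρ, are illustrative rather than values used in the paper.

```python
import numpy as np

def beta_adaptive(u, v, w, theta, rho, eta, beta_lo=0.1, beta_hi=0.9):
    """Adaptive beta_k from (11), clipped to [beta_lo, beta_hi].

    Here u = x_k - P_C(x_k), v = (I - P_Q) A x_k, w = A* v and eta = ||v||^2 / ||w||^2.
    """
    m = theta * u - rho * eta * w
    m2 = np.dot(m, m)
    if m2 == 0.0:
        return 0.5
    num = theta * (2.0 - theta) * np.dot(u, u) - rho * (2.0 - rho) * eta * np.dot(v, v)
    beta_star = 0.5 + num / (2.0 * m2)
    return max(beta_lo, min(beta_star, beta_hi))

def step_scheme_10(x, A, proj_C, proj_Q, theta=1.5, rho=1.5, beta=None):
    """One iteration of the update (10); beta=None selects the adaptive choice, a float the constant one."""
    u = x - proj_C(x)
    v = A @ x - proj_Q(A @ x)
    w = A.T @ v
    w2 = np.dot(w, w)
    eta = np.dot(v, v) / w2 if w2 > 0.0 else 0.0
    b = beta if beta is not None else beta_adaptive(u, v, w, theta, rho, eta)
    return x - b * theta * u - (1.0 - b) * rho * eta * w
```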
Now, the following lemma shows the validity of the stopping criterion in Step 2 of the Douglas–Rachford splitting method with linearization.
Lemma 4.
If $\nabla f(x_k) = 0$ and $x_{k+1} = x_k$ for some k, then the iterate $x_k$ generated by Algorithm 2 is a solution of the SFP (1).
Proof. 
Using $\nabla f(x_k) = 0$ and $x_{k+1} = x_k$, we get $x_k = P_C(x_k)$, which implies $x_k \in C$. Thus, $0 \in N_C(x_k)$ and hence $0 \in N_C(x_k) + \nabla f(x_k)$, i.e., $x_k$ satisfies the first-order optimality condition of problem (5). Since problem (5) is convex and the SFP (1) is assumed to be consistent, the optimal value of (5) is zero, so $f(x_k) = 0$ and $x_k$ is a solution of the SFP (1). □
In Algorithm 2, we generally assume that the projections $P_C$ and $P_Q$ are easy to calculate. In some applications, however, these projections are impossible or expensive to compute. To deal with this situation, we consider a more general form of C and Q in the SFP (1), namely the level sets $C = \{x \in H_1 : c(x) \le 0\}$ and $Q = \{y \in H_2 : q(y) \le 0\}$, where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex functions. We assume that the subdifferentials $\partial c$ and $\partial q$ are bounded operators and define the sets $C_k$ and $Q_k$ as follows:
$$C_k = \{x \in H_1 : c(x_k) + \langle \xi_k, x - x_k\rangle \le 0\},$$
where $\xi_k \in \partial c(x_k)$, and
$$Q_k = \{y \in H_2 : q(Ax_k) + \langle \eta_k, y - Ax_k\rangle \le 0\},$$
where $\eta_k \in \partial q(Ax_k)$.
Next, we define $f_k(x) := \frac{1}{2}\|(I - P_{Q_k})Ax\|^2$ and introduce the relaxed Douglas–Rachford splitting method with linearization; see Algorithm 3.
Algorithm 3 Relaxed Douglas–Rachford splitting method with linearization.
  • Step 0. Input k : = 0 , x 0 H 1 .
  • Step 1. Given x k , generate x k + 1 by
    $$x_{k+1} := x_k - \alpha_k\big[(x_k - P_{C_k}(x_k)) + \lambda_k A^*(I - P_{Q_k})Ax_k\big],$$
    where { α k } k N and { λ k } k N are two sequences of positive numbers.
  • Step 2. If $\nabla f_k(x_k) = 0$ and $x_{k+1} = x_k$, then terminate. Otherwise, set $k := k+1$ and go to Step 1.
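Since C_k and Q_k are halfspaces, their projections have closed forms; a sketch of one iteration of Algorithm 3 built on this observation follows. The helper names and the assumption that subgradients of c and q are supplied as callables are ours.

```python
import numpy as np

def proj_linearized_level_set(p, anchor, value, subgrad):
    """Project p onto {z : value + <subgrad, z - anchor> <= 0}, assuming subgrad != 0 whenever
    the constraint is violated at p (a halfspace projection)."""
    viol = value + np.dot(subgrad, p - anchor)
    if viol <= 0.0:
        return p
    return p - (viol / np.dot(subgrad, subgrad)) * subgrad

def step_relaxed(x, A, c, subgrad_c, q, subgrad_q, alpha, lam):
    """One iteration of Algorithm 3; subgrad_c(x) and subgrad_q(y) return subgradients of c and q."""
    Ax = A @ x
    xi = subgrad_c(x)                                       # an element of the subdifferential of c at x_k
    zeta = subgrad_q(Ax)                                    # an element of the subdifferential of q at A x_k
    p_Ck = proj_linearized_level_set(x, x, c(x), xi)        # P_{C_k}(x_k)
    p_Qk = proj_linearized_level_set(Ax, Ax, q(Ax), zeta)   # P_{Q_k}(A x_k)
    return x - alpha * ((x - p_Ck) + lam * (A.T @ (Ax - p_Qk)))
```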

4. Convergence Analysis

In this section, we prove the weak convergence of Algorithm 2 under the standard conditions.
Firstly, we present two lemmas which are key for the convergence of Algorithm 2.
Lemma 5.
Let $\{x_k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm 2 from any initial point $x_0 \in H_1$ and let $x^* \in \Gamma$. Let $\beta_k \equiv \beta \in (0,1)$, $k \in \mathbb{N}$. Then it holds that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 &- \beta\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - (1-\beta)\rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2\\ &- \beta(1-\beta)\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2. \end{aligned} \qquad (13)$$
Proof. 
By (10) and (7), we have
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &= \big\|\beta_k[x_k - \theta(x_k - P_C(x_k)) - x^*] + (1-\beta_k)[x_k - \rho\eta_k A^*(I - P_Q)Ax_k - x^*]\big\|^2\\ &= \beta\|x_k - x^* - \theta(x_k - P_C(x_k))\|^2 + (1-\beta)\|x_k - x^* - \rho\eta_k A^*(I - P_Q)Ax_k\|^2\\ &\quad - \beta(1-\beta)\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2. \end{aligned} \qquad (14)$$
Using (8), we get
$$\langle x_k - x^*, x_k - P_C(x_k)\rangle \ge \|x_k - P_C(x_k)\|^2, \qquad (15)$$
and
$$\langle x_k - x^*, A^*(I - P_Q)Ax_k\rangle = \langle Ax_k - Ax^*, (I - P_Q)Ax_k\rangle \ge \|(I - P_Q)Ax_k\|^2. \qquad (16)$$
By (15), we obtain
$$\begin{aligned} \|x_k - x^* - \theta(x_k - P_C(x_k))\|^2 &= \|x_k - x^*\|^2 + \theta^2\|x_k - P_C(x_k)\|^2 - 2\theta\langle x_k - x^*, x_k - P_C(x_k)\rangle\\ &\le \|x_k - x^*\|^2 - \theta(2-\theta)\|x_k - P_C(x_k)\|^2. \end{aligned} \qquad (17)$$
From (16) and the definition of η k , it follows
$$\begin{aligned} \|x_k - x^* - \rho\eta_k A^*(I - P_Q)Ax_k\|^2 &= \|x_k - x^*\|^2 + \rho^2\eta_k^2\|A^*(I - P_Q)Ax_k\|^2 - 2\rho\eta_k\langle x_k - x^*, A^*(I - P_Q)Ax_k\rangle\\ &\le \|x_k - x^*\|^2 + \rho^2\eta_k^2\|A^*(I - P_Q)Ax_k\|^2 - 2\rho\eta_k\|(I - P_Q)Ax_k\|^2\\ &= \|x_k - x^*\|^2 - \rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2. \end{aligned} \qquad (18)$$
Combining (14), (17) and (18), we deduce (13). The proof is completed. □
Remark 2.
The inequality (13), together with the stated ranges of the parameters β, θ, ρ, shows that the sequence $\{\|x_k - x^*\|\}_{k\in\mathbb{N}}$ is monotonically decreasing. It is not the same as the inequality in [15], which reduces to the following form for the SFP (1):
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - 2\mu_k(1-\mu_k)\|x_k - P_C(x_k)\|^2 - 2\sigma_k(1-\sigma_k)\eta_k\|(I - P_Q)Ax_k\|^2.$$
It is difficult to compare the two inequalities and decide which is better in theory. The numerical experiments in Section 5 illustrate that the optimal choice of the parameters $\{\mu_k\}_{k\in\mathbb{N}}$ and $\{\sigma_k\}_{k\in\mathbb{N}}$ is to take one of them in (0,2) and the other in (0,1).
Lemma 6.
Let $\{x_k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm 2 from any initial point $x_0 \in H_1$ and let $x^* \in \Gamma$. Let $\beta_k$ be given by the adaptive choice.
(i) 
If $\beta_k^* \in [\underline{\beta}, \bar{\beta}]$, it holds that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 &- \frac{1}{2}\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - \frac{1}{2}\rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2\\ &- \frac{1}{4}\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2\\ &- \frac{\big[\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - \rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2\big]^2}{4\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2}; \end{aligned} \qquad (19)$$
(ii) 
If $\beta_k^* \notin [\underline{\beta}, \bar{\beta}]$, it holds that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 &- \underline{\beta}\,\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - (1-\bar{\beta})\rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2\\ &- (\underline{\beta} - \bar{\beta}^2)\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2. \end{aligned} \qquad (20)$$
Proof. 
(i) In this case, β k = β k * . Similar to Lemma 5, we have
$$\begin{aligned} \|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 &- \beta_k\theta(2-\theta)\|x_k - P_C(x_k)\|^2 - (1-\beta_k)\rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2\\ &- \beta_k(1-\beta_k)\|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2. \end{aligned} \qquad (21)$$
Let
$$d_k = \theta(2-\theta)\|x_k - P_C(x_k)\|^2, \quad e_k = \rho(2-\rho)\eta_k\|(I - P_Q)Ax_k\|^2, \quad m_k = \|\theta(x_k - P_C(x_k)) - \rho\eta_k A^*(I - P_Q)Ax_k\|^2.$$
The inequality (21) becomes
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \beta_k d_k - (1-\beta_k)e_k - \beta_k(1-\beta_k)m_k. \qquad (22)$$
After simple calculations, we get
$$\begin{aligned} -\beta_k d_k - (1-\beta_k)e_k - \beta_k(1-\beta_k)m_k &= m_k\beta_k^2 - (d_k - e_k + m_k)\beta_k - e_k\\ &= m_k\Big(\beta_k - \frac{d_k - e_k + m_k}{2m_k}\Big)^2 - m_k\Big(\frac{1}{2} + \frac{d_k - e_k}{2m_k}\Big)^2 - e_k\\ &= -\frac{1}{2}d_k - \frac{1}{2}e_k - \frac{m_k}{4} - \frac{(d_k - e_k)^2}{4m_k}, \end{aligned} \qquad (23)$$
where the last equality holds due to (11). Combining (22) and (23), we get (19);
(ii) In this case, $\beta_k = \underline{\beta}$ or $\beta_k = \bar{\beta}$. By Lemma 5, it is easy to obtain (20). □
Theorem 1.
Let { x k } k N be generated by Algorithm 2; then, { x k } k N converges weakly to a solution of the SFP (1).
Proof. 
By (13), (19) and (20), we have
$$\|x_{k+1} - x^*\| \le \|x_k - x^*\|,$$
which implies that $\{x_k\}_{k\in\mathbb{N}}$ is bounded and that $\lim_{k\to\infty}\|x_k - x^*\|$ exists. From the definition of $\eta_k$, it follows that
$$\eta_k = \frac{\|(I - P_Q)Ax_k\|^2}{\|A^*(I - P_Q)Ax_k\|^2} \ge \frac{\|Ax_k - P_Q(Ax_k)\|^2}{\|A\|^2\,\|Ax_k - P_Q(Ax_k)\|^2} = \frac{1}{\|A\|^2}. \qquad (24)$$
Combining (13), (19), (20) and (24), we deduce
$$\lim_{k\to\infty}\|P_C(x_k) - x_k\| = 0, \qquad (25)$$
and
$$\lim_{k\to\infty}\|(I - P_Q)Ax_k\| = 0. \qquad (26)$$
To use Lemma 3, we need to show that $\omega_w(x_k) \subset \Gamma$. To this end, take an arbitrary $\hat{x} \in \omega_w(x_k)$ and let $x_{k_l} \rightharpoonup \hat{x}$, where $\{x_{k_l}\}_{l\in\mathbb{N}}$ is a subsequence of $\{x_k\}_{k\in\mathbb{N}}$. From Lemma 2 and (25), we get $\hat{x} \in C$. Combining (26) and the weak lower semicontinuity of f, we obtain
$$0 \le f(\hat{x}) \le \liminf_{l\to\infty} f(x_{k_l}) = \lim_{k\to\infty} f(x_k) = 0.$$
Hence, $f(\hat{x}) = \frac{1}{2}\|(I - P_Q)A\hat{x}\|^2 = 0$, i.e., $A\hat{x} \in Q$. Thus, we have $\omega_w(x_k) \subset \Gamma$. By Lemma 3, we deduce that $\{x_k\}_{k\in\mathbb{N}}$ converges weakly to a point in $\Gamma$. The proof is completed. □
Combining Theorem 1 and [18] (Theorem 3.2), it is easy to verify the convergence of Algorithm 3.
Theorem 2.
Let { x k } k N be generated by Algorithm 3; then, { x k } k N converges weakly to a solution of the SFP (1).

5. Numerical Results

In this section, we show the behavior of Algorithm 2 by comparing it with the algorithms (3) and (2) through two numerical examples.
For convenience, we denote by $e_0$ the vector with all elements equal to 0 and by $e_1$ the vector with all elements equal to 1 in what follows. In the next two numerical examples, we use the objective function value $p(x) = \frac{1}{2}\|x - P_C(x)\|^2 + \frac{1}{2}\|Ax - P_Q(Ax)\|^2 \le \epsilon$ as the stopping criterion.
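For reference, the stopping-criterion value p(x) can be evaluated as in this small sketch (the function name is ours):

```python
import numpy as np

def p_value(x, A, proj_C, proj_Q):
    """p(x) = 0.5*||x - P_C(x)||^2 + 0.5*||Ax - P_Q(Ax)||^2."""
    u = x - proj_C(x)
    v = A @ x - proj_Q(A @ x)
    return 0.5 * np.dot(u, u) + 0.5 * np.dot(v, v)
```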
Example 1
([6]). Consider the SFP, where $A = (a_{ij})_{m\times n} \in \mathbb{R}^{m\times n}$ ($m = 200$, $n = 500$) with $a_{ij} \in (0,1)$ generated randomly, and
$$C = \{x \in \mathbb{R}^n : \|x - d\| \le r\}, \qquad Q = \{y \in \mathbb{R}^m : L \le y \le U\},$$
where d is the center of the ball C with $e_0 \le d \le e_1$, and $r \in (10, 20)$ is its radius; d and r are both generated randomly. L and U are the bounds of the box Q and are also generated randomly, satisfying $10e_1 \le L \le 20e_1$ and $20e_1 \le U \le 30e_1$, respectively.
In the numerical experiment, we take the objective function value $p(x_k) \le \epsilon = 10^{-5}$ as the stopping criterion.
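One possible way to generate the data of Example 1 and the two projections it needs is sketched below; the random seed and the helper names are our choices, while the closed-form ball and box projections are standard.

```python
import numpy as np

rng = np.random.default_rng(0)             # fixed seed, ours, for reproducibility
m, n = 200, 500
A = rng.uniform(0.0, 1.0, size=(m, n))     # a_ij in (0, 1)
d = rng.uniform(0.0, 1.0, size=n)          # center of the ball C, e_0 <= d <= e_1
r = rng.uniform(10.0, 20.0)                # radius r in (10, 20)
L = rng.uniform(10.0, 20.0, size=m)        # lower bound of the box Q
U = rng.uniform(20.0, 30.0, size=m)        # upper bound of the box Q

def proj_C(x):
    """Projection onto the ball C = {x : ||x - d|| <= r}."""
    dist = np.linalg.norm(x - d)
    return x if dist <= r else d + (r / dist) * (x - d)

def proj_Q(y):
    """Projection onto the box Q = {y : L <= y <= U} (componentwise clipping)."""
    return np.clip(y, L, U)
```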
In Figure 1, the initial point $x_0 \in (0, 100e_1)$ is randomly chosen. For comparing Algorithm 2 with the algorithms (2) and (3), we take $\theta = 1.59$, $\rho = 1.86$ and $\beta_k \equiv \beta = 0.37$ in Algorithm 2, $\rho_k = 1.99$ in the algorithm (2), and $\mu_k = 0.72$ and $\nu_k = 0.88$ in the algorithm (3).
In Table 1, we report the iteration numbers and CPU time of Algorithm 2, the algorithm (2) and the algorithm (3) for three initial points. For case 1, $x_0 = (100, \dots, 100)$; for case 2, $x_0 = (100, -100, \dots, 100, -100)$; and for case 3, $x_0 \in (-100e_1, 100e_1)$ is randomly chosen.
As shown in Figure 1 and by the iteration numbers in Table 1, Algorithm 2 performs better than the algorithms (2) and (3).
Example 2.
Suppose that $H_1 = H_2 = L^2([0,1])$ with norm $\|x\| = \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ and inner product
$$\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt.$$
Let
$$C := \{x \in H_1 : \langle a, x\rangle = b\},$$
where $a = \sin(5t^3)$ and $b = 0$. In this case, we have
$$P_C(x) = x + \frac{b - \langle a, x\rangle}{\|a\|^2}\,a.$$
Additionally, let
$$Q := \{x \in H_2 : \langle c, x\rangle \le d\},$$
where c = sin ( t / 4 ) and d = 1 / 5 ; then, we obtain
$$P_Q(x) = \begin{cases} x, & \text{if } \langle c, x\rangle \le d,\\ x + \dfrac{d - \langle c, x\rangle}{\|c\|^2}\,c, & \text{if } \langle c, x\rangle > d. \end{cases}$$
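A discretized version of these projections might look as follows. Here the plain Euclidean inner product of the grid values is used as a stand-in for the $L^2$ inner product (a quadrature-weighted inner product would be the more faithful discretization), and the 101-point grid matches the matrix introduced next.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)        # grid on [0, 1]
a = np.sin(5.0 * t**3)                # a(t) = sin(5 t^3); b = 0
c = np.sin(t / 4.0)                   # c(t) = sin(t / 4); d = 1/5
b, d = 0.0, 0.2

def proj_C(x):
    """P_C(x) = x + (b - <a, x>)/||a||^2 * a  (projection onto the hyperplane <a, x> = b)."""
    return x + (b - np.dot(a, x)) / np.dot(a, a) * a

def proj_Q(x):
    """P_Q(x): identity if <c, x> <= d, otherwise project onto the halfspace boundary."""
    val = np.dot(c, x)
    return x if val <= d else x + (d - val) / np.dot(c, c) * c
```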
Consider the matrix
$$A = \frac{1}{200}\begin{pmatrix} 0 & 0 & 0 & \cdots & 0 & 0\\ 1 & 1 & 0 & \cdots & 0 & 0\\ 1 & 2 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & 2 & 2 & \cdots & 1 & 0\\ 1 & 2 & 2 & \cdots & 2 & 1 \end{pmatrix} \in \mathbb{R}^{101\times 101},$$
resulting from a discretization of the operator equation of the first kind
$$y(t) = (Ax)(t) := \int_0^t x(s)\,ds, \qquad s, t \in [0,1];\quad x, y : [0,1] \to [0,1].$$
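The matrix above is the composite trapezoidal-rule discretization of this Volterra operator on 101 equispaced nodes; a sketch of how such a matrix might be generated (the function name and the indexing convention are our assumptions) is:

```python
import numpy as np

def volterra_matrix(n=101):
    """Trapezoidal-rule discretization of (Ax)(t) = integral_0^t x(s) ds on n equispaced nodes.

    Row i carries the weights (h/2, h, ..., h, h/2) up to node i, with h = 1/(n - 1) = 1/100,
    which reproduces the factor 1/200 and the 1, 2, ..., 2, 1 pattern shown above.
    """
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n):
        A[i, 0] = h / 2.0     # first trapezoidal weight
        A[i, 1:i] = h         # interior weights
        A[i, i] = h / 2.0     # last trapezoidal weight
    return A
```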
In the numerical experiment, the initial function is $x_0(t) = 10\sin(t^2 + 2t)$. We take $p(x_k) \le \epsilon = 10^{-10}$ as the stopping criterion.
For comparing Algorithm 2 with the algorithms (2) and (3), we take $\theta = 1.99$, $\rho = 1.99$ and $\beta_k \equiv \beta = 0.41$ in Algorithm 2, $\rho_k = 1.89$ in the algorithm (2), and $\mu_k = 0.5$ and $\nu_k = 0.99$ in the algorithm (3). From Figure 2, we can see that the performance of Algorithm 2 is better than that of the algorithms (2) and (3).

6. Some Concluding Remarks

In this article, we introduced the Douglas–Rachford splitting method with linearization for the split feasibility problem; it is a general method that includes the methods in [13,15] as special cases. The weak convergence of the proposed method is established under standard conditions. Numerical experiments illustrate the effectiveness of our method.
The methods proposed in this article can be generalized to solve the multiple-sets split feasibility problem. It is interesting to investigate the other possible choices of the parameters { α k } k N and { λ k } k N .
Recently, some authors have applied self-adaptive step sizes to split generalized equilibrium and fixed point problems [20] and to pseudo-monotone equilibrium problems [21]. The numerical examples in those works illustrate that such step sizes behave very well. Applying self-adaptive step sizes to the split feasibility problem is worth investigating.

Author Contributions

Methodology, Z.H. and Q.D.; formal analysis, Z.H. and Q.D.; writing—original draft preparation, Z.H.; writing—review and editing, Q.D., Y.T. and M.T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Scientific Research Project of Tianjin Municipal Education Commission (No. 2020ZD02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We express our thanks to anonymous referees for their constructive suggestions, which significantly improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. Byrne, C.L. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  3. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef] [Green Version]
  4. Byrne, C.L. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef] [Green Version]
  5. Dang, Y.; Sun, J.; Zhang, S. Double projection algorithms for solving the split feasibility problems. J. Ind. Manag. Optim. 2019, 15, 2023–2034. [Google Scholar] [CrossRef] [Green Version]
  6. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
  7. Dong, Q.L.; Yao, Y.; He, S. Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces. Optim. Lett. 2014, 8, 1031–1046. [Google Scholar] [CrossRef]
  8. He, S.; Xu, H.K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Sci. Appl. 2018, 19, 1199–1215. [Google Scholar]
  9. Gibali, A.; Liu, L.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
  10. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  11. Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Comput. Optim. Appl. 2017, 67, 175–199. [Google Scholar] [CrossRef]
  12. Wang, J.; Hu, Y.; Li, C.; Yao, J.C. Linear convergence of CQ algorithms and applications in gene regulatory network inference. Inverse Probl. 2017, 33, 055017. [Google Scholar] [CrossRef]
  13. Wang, F. Polyak’s gradient method for split feasibility problem constrained by level sets. Numer. Algorithms 2018, 77, 925–938. [Google Scholar] [CrossRef]
  14. Zhao, J.; Yang, Q. A simple projection method for solving the multiple-sets split feasibility problem. Inverse Prob. Sci. Eng. 2013, 21, 537–546. [Google Scholar] [CrossRef]
  15. Tang, Y.; Zhu, C.; Yu, H. Iterative methods for solving the multiple-sets split feasibility problem with splitting self-adaptive step size. Fixed Point Theory Appl. 2015, 2015, 178. [Google Scholar] [CrossRef] [Green Version]
  16. Douglas, J.; Rachford, H.H. On the numerical solution of heat conduction problems in two or three space variables. Trans. Am. Math. Soc. 1956, 82, 421–439. [Google Scholar] [CrossRef]
  17. Li, M.; Wu, Z. Convergence analysis of the generalized splitting methods for a class of nonconvex optimization problems. J. Optimiz. Theory Appl. 2019, 183, 535–565. [Google Scholar] [CrossRef]
  18. Dong, Q.L.; He, S.; Rassias, M.T. General splitting methods with linearization for the split feasibility problem. J. Glob. Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
  19. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2017; Volume 2. [Google Scholar]
  20. Olona, M.A.; Alakoya, T.O.; Owolabi, A.O.-E.; Mewomo, O.T. Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings. Demonstr. Math. 2021, 54, 47–67. [Google Scholar] [CrossRef]
  21. Pakkaranang, N.; Rehman, H.; Kumam, W. Two strongly convergent self-adaptive iterative schemes for solving pseudo-monotone equilibrium problems with applications. Demonstr. Math. 2021, 54, 280–298. [Google Scholar] [CrossRef]
Figure 1. The value of $p(x_k)$ versus the iteration numbers for Example 1.
Figure 2. The value of $p(x_k)$ versus the iteration numbers for Example 2.
Table 1. Comparison of Algorithm 2, algorithm (2) and algorithm (3) under different initial points.

          Algorithm 2           Algorithm (2)         Algorithm (3)
Cases     Iter.    CPU Time     Iter.    CPU Time     Iter.    CPU Time
Case 1    18       0.0313       155      0.0313       38       0.0469
Case 2    22       0.0469       372      0.0781       37       0.0469
Case 3    16       0.0313       256      0.0469       35       0.0313
