Article

An Efficient Conjugate Gradient Method for Convex Constrained Monotone Nonlinear Equations with Applications †

by Auwal Bala Abubakar 1,2, Poom Kumam 1,3,4,*, Hassan Mohammad 2 and Aliyu Muhammed Awwal 1,5

1 KMUTT Fixed Point Research Laboratory, SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
3 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
5 Department of Mathematics, Faculty of Science, Gombe State University, Gombe 760214, Nigeria
* Author to whom correspondence should be addressed.
† This project was supported by the Petchra Pra Jom Klao Doctoral Academic Scholarship for the Ph.D. Program at KMUTT. Moreover, this project was partially supported by the Thailand Research Fund (TRF) and King Mongkut’s University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047).
Mathematics 2019, 7(9), 767; https://doi.org/10.3390/math7090767
Submission received: 29 June 2019 / Revised: 30 July 2019 / Accepted: 6 August 2019 / Published: 21 August 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract:
This paper proposes a derivative-free method for solving systems of nonlinear equations with closed and convex constraints, where the functions under consideration are continuous and monotone. Given an initial iterate, the process first generates a specific direction and then employs a line search along that direction to calculate a new iterate. If the new iterate solves the problem, the process stops. Otherwise, the projection of the new iterate onto the closed convex set (constraint set) determines the next iterate. In addition, the direction satisfies the sufficient descent condition, and the global convergence of the method is established under suitable assumptions. Finally, some numerical experiments are presented to show the performance of the proposed method in solving nonlinear equations and its application to image recovery problems.

1. Introduction

In this paper, we consider the following constrained nonlinear equation:
$$F(x) = 0, \quad \text{subject to } x \in \Psi, \tag{1}$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuous and monotone. The constraint set $\Psi \subseteq \mathbb{R}^n$ is nonempty, closed and convex.
Monotone equations appear in many applications [1,2,3], for example, the subproblems in the generalized proximal algorithms with Bregman distance [4] and the reformulation of some $\ell_1$-norm regularized problems arising in compressive sensing [5]; variational inequality problems are also converted into nonlinear monotone equations via fixed point maps or normal maps [6] (see References [7,8,9] for more examples). Among the earliest methods for the unconstrained case $\Psi = \mathbb{R}^n$ is the hyperplane projection Newton method proposed by Solodov and Svaiter in Reference [10]. Subsequently, many methods were proposed by different authors. Among the popular methods are spectral gradient methods [11,12], quasi-Newton methods [13,14,15] and conjugate gradient (CG) methods [16,17].
To solve the constrained case (1), the work of Solodov and Svaiter was extended by Wang et al. [18]; their method also involves solving a linear system in each iteration, but it was later shown by several authors that the computation of the linear system is not necessary. For example, Xiao and Zhu [19] presented a CG method that combines the well-known CG-DESCENT method of Reference [20] with the projection strategy of Solodov and Svaiter. Liu et al. [21] presented two CG methods with sufficient descent directions. In Reference [22], Liu and Li presented a modified version of the method in Reference [19]; the modification improves its numerical performance. Another extension, combining the Dai–Kou (DK) CG method with the projection method to solve (1), was proposed by Ding et al. in Reference [23]. Recently, to popularize the Dai–Yuan (DY) CG method, Liu and Feng [24] modified DY so that the direction is sufficiently descent. A new hybrid spectral gradient projection method for solving convex constrained nonlinear monotone equations was proposed by Awwal et al. in Reference [25]. The method is a convex combination of two different positive spectral parameters together with the projection strategy. In addition, Abubakar et al. extended the method in Reference [17] to solve (1) and applied it to some sparse signal recovery problems.
Inspired by the above methods, we propose a descent conjugate gradient method to solve problem (1). Under appropriate assumptions, the global convergence is established. Preliminary numerical experiments are given to compare the proposed method with existing methods for solving nonlinear monotone equations and some signal and image reconstruction problems arising in compressive sensing.
The remaining part of this paper is organized as follows. In Section 2, we state the proposed algorithm as well as its convergence analysis. Finally, Section 3 reports some numerical results to show the performance of the proposed method in solving Equation (1), signal recovery problems and image restoration problems.

2. Algorithm: Motivation and Convergence Result

This section starts by defining the projection map together with some of its properties.
Definition 1.
Let $\Psi \subseteq \mathbb{R}^n$ be a nonempty closed convex set. Then for any $x \in \mathbb{R}^n$, its projection onto $\Psi$, denoted by $P_\Psi(x)$, is defined by
$$P_\Psi(x) = \arg\min\{\|x - y\| : y \in \Psi\}.$$
Moreover, $P_\Psi$ is nonexpansive; that is,
$$\|P_\Psi(x) - P_\Psi(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{2}$$
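As a concrete illustration, the projection map has a closed form for simple feasible sets. The sketch below (Python, with a box-shaped Ψ as an assumed example, not one taken from the paper) computes such a projection and checks the nonexpansiveness property numerically.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {y : lo <= y <= hi},
    a simple closed convex set for which P_Psi is just clipping."""
    return np.clip(x, lo, hi)

x = np.array([-1.5, 0.3, 2.7])
y = np.array([0.9, -0.2, 1.1])
lo, hi = 0.0, 2.0
px, py = project_box(x, lo, hi), project_box(y, lo, hi)
# nonexpansiveness: ||P(x) - P(y)|| <= ||x - y||
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y)
```

For more general Ψ the projection may require solving a small optimization problem, but the nonexpansiveness inequality always holds.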
Throughout this article, we assume the following:
(G1) The mapping $F$ is monotone; that is,
$$(F(x) - F(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n.$$
(G2) The mapping $F$ is Lipschitz continuous; that is, there exists a positive constant $L$ such that
$$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$$
(G3) The solution set of (1), denoted by $\Psi^*$, is nonempty.
An important property that methods for solving Equation (1) should possess is that the direction $d_k$ satisfies
$$F(x_k)^T d_k \le -c\|F(x_k)\|^2, \tag{3}$$
where $c > 0$ is a constant. Inequality (3) is called the sufficient descent property when $F(x)$ is the gradient vector of a real-valued function $f : \mathbb{R}^n \to \mathbb{R}$.
In this paper, we propose the following search direction:
$$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -F(x_k) + \beta_k d_{k-1} - \theta_k F(x_k), & \text{if } k \ge 1, \end{cases} \tag{4}$$
where
$$\beta_k = \frac{\|F(x_k)\|}{\|d_{k-1}\|}$$
and $\theta_k$ is determined such that Equation (3) is satisfied. It is easy to see that for $k = 0$, the inequality holds with $c = 1$. Now for $k \ge 1$,
$$F(x_k)^T d_k = -\|F(x_k)\|^2 + \frac{\|F(x_k)\|}{\|d_{k-1}\|}F(x_k)^T d_{k-1} - \theta_k\|F(x_k)\|^2 \le -\|F(x_k)\|^2 + \|F(x_k)\|^2 - \theta_k\|F(x_k)\|^2 = -\theta_k\|F(x_k)\|^2,$$
where the inequality follows from the Cauchy–Schwarz inequality. Taking $\theta_k = 1$, we have
$$F(x_k)^T d_k \le -\|F(x_k)\|^2. \tag{7}$$
Thus, the direction defined by (4) satisfies condition (3) for all $k$ with $c = 1$.
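The direction (4) with $\theta_k = 1$ is cheap to compute. The following Python sketch (illustrative only, not the authors' code) evaluates it and verifies the sufficient descent inequality on random data; the inequality holds for any inputs because of the Cauchy–Schwarz bound used above.

```python
import numpy as np

def direction(Fx, d_prev, theta=1.0):
    """Search direction (4): d_0 = -F(x_0), and for k >= 1
    d_k = -F(x_k) + beta_k d_{k-1} - theta_k F(x_k),
    with beta_k = ||F(x_k)|| / ||d_{k-1}||."""
    if d_prev is None:                      # k = 0
        return -Fx
    beta = np.linalg.norm(Fx) / np.linalg.norm(d_prev)
    return -Fx + beta * d_prev - theta * Fx

# check the sufficient descent property F(x_k)^T d_k <= -||F(x_k)||^2
rng = np.random.default_rng(0)
Fx, d_prev = rng.normal(size=5), rng.normal(size=5)
d = direction(Fx, d_prev)
assert Fx @ d <= -np.linalg.norm(Fx) ** 2 + 1e-9
```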
To prove the global convergence of Algorithm 1, the following lemmas are needed.
Algorithm 1: (DCG)
Step 0. Given an arbitrary initial point $x_0 \in \mathbb{R}^n$ and parameters $\sigma > 0$, $0 < \beta < 1$, $Tol > 0$, set $k := 0$.
Step 1. If $\|F(x_k)\| \le Tol$, stop; otherwise go to Step 2.
Step 2. Compute $d_k$ using Equation (4).
Step 3. Compute the step size $\alpha_k = \max\{\beta^i : i = 0, 1, 2, \ldots\}$ such that
$$-F(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|F(x_k + \alpha_k d_k)\|\,\|d_k\|^2. \tag{8}$$
Step 4. Set $z_k = x_k + \alpha_k d_k$. If $z_k \in \Psi$ and $\|F(z_k)\| \le Tol$, stop. Else compute
$$x_{k+1} = P_\Psi[x_k - \zeta_k F(z_k)], \tag{9}$$
where
$$\zeta_k = \frac{F(z_k)^T(x_k - z_k)}{\|F(z_k)\|^2}.$$
Step 5. Let $k = k + 1$ and go to Step 1.
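To make the steps above concrete, here is a minimal Python sketch of Algorithm 1. The feasible set $\Psi = \mathbb{R}^n_+$ and the componentwise map $F(x) = e^x - 1$ are illustrative choices (not taken from the paper), and $\theta_k = 1$ as in the text.

```python
import numpy as np

def dcg(F, project, x0, sigma=1e-4, beta=0.7, tol=1e-5, max_iter=1000):
    """Sketch of Algorithm 1 (DCG) for F(x) = 0, x in Psi.
    `project` is the projection map P_Psi; theta_k = 1 throughout."""
    x, d_prev = x0.astype(float), None
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        if d_prev is None:                                   # Step 2, k = 0
            d = -Fx
        else:                                                # Step 2, k >= 1
            d = -2.0 * Fx + (np.linalg.norm(Fx) / np.linalg.norm(d_prev)) * d_prev
        alpha = 1.0                                          # Step 3: line search (8)
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * np.linalg.norm(Fz) * (d @ d):
                break
            alpha *= beta
        if np.linalg.norm(Fz) <= tol:                        # Step 4
            return z, k
        zeta = Fz @ (x - z) / (Fz @ Fz)
        x, d_prev = project(x - zeta * Fz), d                # projection step (9)
    return x, max_iter

# toy monotone problem: F(x) = exp(x) - 1 componentwise, Psi = R^n_+
x_star, iters = dcg(lambda x: np.exp(x) - 1.0,
                    lambda x: np.maximum(x, 0.0),
                    x0=np.full(5, 0.5))
```

On this toy problem the iterates converge quickly to the solution $x^* = 0$, which lies in $\Psi$.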
Lemma 1.
The direction defined by Equation (4) satisfies the sufficient descent property; that is, there exists a constant $c > 0$ such that (3) holds.
Lemma 2.
Suppose that Assumptions (G1)–(G3) hold. Then the sequences $\{x_k\}$ and $\{z_k\}$ generated by Algorithm 1 (DCG) are bounded. Moreover, we have
$$\lim_{k\to\infty}\|x_k - z_k\| = 0 \tag{10}$$
and
$$\lim_{k\to\infty}\|x_{k+1} - x_k\| = 0. \tag{11}$$
Proof. 
We first show that the sequences $\{x_k\}$ and $\{z_k\}$ are bounded. Suppose $\bar{x} \in \Psi^*$; then by the monotonicity of $F$, we get
$$F(z_k)^T(x_k - \bar{x}) \ge F(z_k)^T(x_k - z_k).$$
Also, by the definition of $z_k$ and the line search (8), we have
$$F(z_k)^T(x_k - z_k) = -\alpha_k F(z_k)^T d_k \ge \sigma\alpha_k^2\|F(z_k)\|\,\|d_k\|^2 \ge 0. \tag{12}$$
So, we have
$$\|x_{k+1} - \bar{x}\|^2 = \|P_\Psi[x_k - \zeta_k F(z_k)] - \bar{x}\|^2 \le \|x_k - \zeta_k F(z_k) - \bar{x}\|^2 = \|x_k - \bar{x}\|^2 - 2\zeta_k F(z_k)^T(x_k - \bar{x}) + \zeta_k^2\|F(z_k)\|^2 \le \|x_k - \bar{x}\|^2 - 2\zeta_k F(z_k)^T(x_k - z_k) + \zeta_k^2\|F(z_k)\|^2 = \|x_k - \bar{x}\|^2 - \frac{\left(F(z_k)^T(x_k - z_k)\right)^2}{\|F(z_k)\|^2} \le \|x_k - \bar{x}\|^2. \tag{13}$$
Thus the sequence $\{\|x_k - \bar{x}\|\}$ is nonincreasing and convergent, and hence $\{x_k\}$ is bounded. Furthermore, from Equation (13), we have
$$\|x_{k+1} - \bar{x}\|^2 \le \|x_k - \bar{x}\|^2,$$
and we can deduce recursively that
$$\|x_k - \bar{x}\|^2 \le \|x_0 - \bar{x}\|^2, \quad \forall k \ge 0.$$
Then from Assumption (G2), we obtain
$$\|F(x_k)\| = \|F(x_k) - F(\bar{x})\| \le L\|x_k - \bar{x}\| \le L\|x_0 - \bar{x}\|. \tag{15}$$
If we let $L\|x_0 - \bar{x}\| = \kappa$, then the sequence $\{\|F(x_k)\|\}$ is bounded; that is,
$$\|F(x_k)\| \le \kappa, \quad \forall k \ge 0. \tag{16}$$
By the definition of $z_k$, Equation (12), the monotonicity of $F$ and the Cauchy–Schwarz inequality, we get
$$\sigma\|x_k - z_k\| = \frac{\sigma\alpha_k^2\|d_k\|^2}{\|x_k - z_k\|} \le \frac{F(z_k)^T(x_k - z_k)}{\|x_k - z_k\|} \le \frac{F(x_k)^T(x_k - z_k)}{\|x_k - z_k\|} \le \|F(x_k)\|.$$
The boundedness of the sequence $\{x_k\}$, together with Equations (15) and (16), implies that the sequence $\{z_k\}$ is bounded.
Since $\{z_k\}$ is bounded, for any $\bar{x} \in \Psi^*$ the sequence $\{\|z_k - \bar{x}\|\}$ is also bounded; that is, there exists a positive constant $\nu > 0$ such that
$$\|z_k - \bar{x}\| \le \nu.$$
This together with Assumption (G2) yields
$$\|F(z_k)\| = \|F(z_k) - F(\bar{x})\| \le L\|z_k - \bar{x}\| \le L\nu.$$
Therefore, using Equation (13), we have
$$\frac{\sigma^2}{(L\nu)^2}\|x_k - z_k\|^4 \le \|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2,$$
which implies
$$\frac{\sigma^2}{(L\nu)^2}\sum_{k=0}^{\infty}\|x_k - z_k\|^4 \le \sum_{k=0}^{\infty}\left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right) \le \|x_0 - \bar{x}\|^2 < \infty. \tag{17}$$
Equation (17) implies
$$\lim_{k\to\infty}\|x_k - z_k\| = 0. \tag{18}$$
Moreover, using Equation (2), the definition of $\zeta_k$ and the Cauchy–Schwarz inequality, we have
$$\|x_{k+1} - x_k\| = \|P_\Psi[x_k - \zeta_k F(z_k)] - x_k\| \le \|x_k - \zeta_k F(z_k) - x_k\| = \zeta_k\|F(z_k)\| = \frac{F(z_k)^T(x_k - z_k)}{\|F(z_k)\|} \le \|x_k - z_k\|,$$
which yields
$$\lim_{k\to\infty}\|x_{k+1} - x_k\| = 0.$$
Since $\|x_k - z_k\| = \alpha_k\|d_k\|$, Equation (18) implies
$$\lim_{k\to\infty}\alpha_k\|d_k\| = 0. \tag{19}$$
☐
Lemma 3.
Suppose $d_k$ is generated by Algorithm 1 (DCG). Then there exists $M > 0$ such that $\|d_k\| \le M$.
Proof. 
By the definition of $d_k$ (with $\theta_k = 1$) and Equation (15), for $k \ge 1$,
$$\|d_k\| = \left\|-2F(x_k) + \frac{\|F(x_k)\|}{\|d_{k-1}\|}d_{k-1}\right\| \le 2\|F(x_k)\| + \frac{\|F(x_k)\|}{\|d_{k-1}\|}\|d_{k-1}\| = 3\|F(x_k)\| \le 3\kappa,$$
while for $k = 0$, $\|d_0\| = \|F(x_0)\| \le \kappa$. Letting $M = 3\kappa$, we have the desired result. ☐
Theorem 1.
Suppose that Assumptions (G1)–(G3) hold and let the sequence $\{x_k\}$ be generated by Algorithm 1. Then
$$\liminf_{k\to\infty}\|F(x_k)\| = 0. \tag{20}$$
Proof. 
To prove the theorem, we consider two cases.
Case 1.
Suppose $\liminf_{k\to\infty}\|d_k\| = 0$. By (7) and the Cauchy–Schwarz inequality, $\|F(x_k)\| \le \|d_k\|$, so $\liminf_{k\to\infty}\|F(x_k)\| = 0$. Then by the continuity of $F$, the sequence $\{x_k\}$ has some accumulation point $\bar{x}$ such that $F(\bar{x}) = 0$. Because $\{\|x_k - \bar{x}\|\}$ converges and $\bar{x}$ is an accumulation point of $\{x_k\}$, $\{x_k\}$ converges to $\bar{x}$.
Case 2.
Suppose $\liminf_{k\to\infty}\|d_k\| > 0$ and, for contradiction, that $\liminf_{k\to\infty}\|F(x_k)\| > 0$. Then by (19), it holds that $\lim_{k\to\infty}\alpha_k = 0$. Also, from Equation (8), the trial step $\beta^{-1}\alpha_k$ fails the line search; that is,
$$-F(x_k + \beta^{-1}\alpha_k d_k)^T d_k < \sigma\beta^{-1}\alpha_k\|F(x_k + \beta^{-1}\alpha_k d_k)\|\,\|d_k\|^2. \tag{21}$$
By the boundedness of $\{x_k\}$ and $\{d_k\}$, we can choose a subsequence along which $x_k \to \bar{x}$ and $d_k \to \bar{d}$; letting $k$ go to infinity in (21) and using $\alpha_k \to 0$ results in
$$F(\bar{x})^T\bar{d} \ge 0. \tag{22}$$
On the other hand, allowing $k$ to approach infinity in (7) implies
$$F(\bar{x})^T\bar{d} \le -\|F(\bar{x})\|^2 < 0. \tag{23}$$
Inequalities (22) and (23) contradict each other. Hence, $\liminf_{k\to\infty}\|F(x_k)\| > 0$ cannot hold, and the proof is complete. ☐

3. Numerical Examples

This section compares the performance of the proposed method with that of the existing PCG and PDY methods, proposed in References [22,24], respectively, for solving monotone nonlinear equations on 9 benchmark test problems. Furthermore, Algorithm 1 is applied to restore a blurred image. All codes were written in MATLAB R2018b and run on a PC with an Intel Core i5 processor, 4 GB of RAM and a 2.3 GHz CPU. All runs were stopped whenever $\|F(x_k)\| < 10^{-5}$.
The parameters chosen for the existing algorithm are as follows:
PCG method: All parameters are chosen as in Reference [22].
PDY method: All parameters are chosen as in Reference [24].
Algorithm 1: We tested several values of $\beta \in (0, 1)$ and found that $\beta = 0.7$ gives the best result. In addition, in the implementation of most optimization algorithms, the parameter $\sigma$ is chosen as a very small number. Therefore, we chose $\beta = 0.7$ and $\sigma = 0.0001$ for the implementation of the proposed algorithm.
We test 9 different problems with dimensions $n = 1000$, $5000$, $10{,}000$, $50{,}000$ and $100{,}000$ and 6 initial points: $x_1 = (0.1, 0.1, \ldots, 0.1)^T$, $x_2 = (0.2, 0.2, \ldots, 0.2)^T$, $x_3 = (0.5, 0.5, \ldots, 0.5)^T$, $x_4 = (1.2, 1.2, \ldots, 1.2)^T$, $x_5 = (1.5, 1.5, \ldots, 1.5)^T$, $x_6 = (2, 2, \ldots, 2)^T$. In Tables 1–9, we report the number of iterations (ITER), the number of function evaluations (FVAL), the CPU time in seconds (TIME) and the norm at the approximate solution (NORM). The symbol ‘−’ is used when the number of iterations exceeds 1000 and/or the number of function evaluations exceeds 2000.
The test problems are listed below, where the function $F$ is taken as $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$.
Problem 1
([26]). Exponential Function.
$$f_1(x) = e^{x_1} - 1, \quad f_i(x) = e^{x_i} + x_{i-1} - 1, \ i = 2, 3, \ldots, n, \quad \text{and } \Psi = \mathbb{R}^n_+.$$
Problem 2
([26]). Modified Logarithmic Function.
$$f_i(x) = \ln(x_i + 1) - \frac{x_i}{n}, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \Big\{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i > -1,\ i = 1, 2, \ldots, n\Big\}.$$
Problem 3
([13]). Nonsmooth Function.
$$f_i(x) = 2x_i - \sin|x_i|, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \Big\{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i \ge 0,\ i = 1, 2, \ldots, n\Big\}.$$
It is clear that Problem 3 is nonsmooth at $x = 0$.
Problem 4
([26]). Strictly Convex Function I.
$$f_i(x) = e^{x_i} - 1, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \mathbb{R}^n_+.$$
Problem 5
([26]). Strictly Convex Function II.
$$f_i(x) = \frac{i}{n}e^{x_i} - 1, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \mathbb{R}^n_+.$$
Problem 6
([27]). Tridiagonal Exponential Function.
$$f_1(x) = x_1 - e^{\cos(h(x_1 + x_2))}, \quad f_i(x) = x_i - e^{\cos(h(x_{i-1} + x_i + x_{i+1}))}, \ i = 2, \ldots, n-1, \quad f_n(x) = x_n - e^{\cos(h(x_{n-1} + x_n))}, \quad h = \frac{1}{n+1}, \quad \text{and } \Psi = \mathbb{R}^n_+.$$
Problem 7
([28]). Nonsmooth Function.
$$f_i(x) = x_i - \sin|x_i - 1|, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \Big\{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le n,\ x_i \ge -1,\ i = 1, 2, \ldots, n\Big\}.$$
Problem 8
([23]). Penalty 1.
$$t_i = \sum_{i=1}^n x_i^2, \quad c = 10^{-5}, \quad f_i(x) = 2c(x_i - 1) + 4(t_i - 0.25)x_i, \ i = 1, 2, \ldots, n, \quad \text{and } \Psi = \mathbb{R}^n_+.$$
Problem 9
([29]). Semismooth Function.
$$f_1(x) = x_1 + x_1^3 - 10, \quad f_2(x) = x_2 - x_3 + x_2^3 + 1, \quad f_3(x) = -x_2 + x_3 + 2x_3^3 - 3, \quad f_4(x) = 2x_4 - 3, \quad \text{and } \Psi = \Big\{x \in \mathbb{R}^4 : \sum_{i=1}^4 x_i \le 3,\ x_i \ge 0,\ i = 1, 2, 3, 4\Big\}.$$
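For illustration, Problems 4 and 5 and the projection onto $\Psi = \mathbb{R}^n_+$ can be coded in a few lines. This is a hypothetical Python rendering (the paper's experiments use MATLAB); the snippet also spot-checks the monotonicity assumption (G1) on random pairs.

```python
import numpy as np

def strictly_convex_I(x):          # Problem 4: f_i(x) = e^{x_i} - 1
    return np.exp(x) - 1.0

def strictly_convex_II(x):         # Problem 5: f_i(x) = (i/n) e^{x_i} - 1
    n = x.size
    i = np.arange(1, n + 1)
    return (i / n) * np.exp(x) - 1.0

def project_nonneg(x):             # P_Psi for Psi = R^n_+
    return np.maximum(x, 0.0)

# monotonicity check: (F(x) - F(y))^T (x - y) >= 0 on random pairs
rng = np.random.default_rng(1)
for F in (strictly_convex_I, strictly_convex_II):
    x, y = rng.normal(size=8), rng.normal(size=8)
    assert (F(x) - F(y)) @ (x - y) >= 0.0
```

Both maps are componentwise increasing, so monotonicity holds for every pair, not just the sampled ones.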
In addition, we employ the performance profile developed in Reference [30], a helpful tool for standardizing the comparison of methods, to obtain Figure 1, Figure 2 and Figure 3. The measures considered are the number of iterations, the CPU time (in seconds) and the number of function evaluations. Figure 1 reveals that Algorithm 1 performs best in terms of the number of iterations, as it solves and wins 90 percent of the problems with the fewest iterations, while PCG and PDY each solve and win less than 10 percent. In Figure 2, Algorithm 1 performed slightly less well, solving and winning over 80 percent of the problems with the least CPU time, as against PCG and PDY, which showed similar performance on less than 10 percent of the problems considered. The interpretation of Figure 3 is identical to that of Figure 1. Figure 4 plots the decrease of the residual norm against the number of iterations on Problem 9 with $x_4$ as the initial point. It shows the convergence speed of each algorithm under the convergence tolerance $10^{-5}$; it can be observed that Algorithm 1 converges faster than PCG and PDY.

Applications in Compressive Sensing

Many problems in signal processing and statistical inference involve finding sparse solutions to ill-conditioned linear systems of equations. A popular approach is to minimize an objective function that contains a quadratic ($\ell_2$) error term and a sparse $\ell_1$-regularization term, that is,
$$\min_x \frac{1}{2}\|y - Bx\|_2^2 + \eta\|x\|_1, \tag{24}$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$ is an observation, $B \in \mathbb{R}^{k\times n}$ ($k \ll n$) is a linear operator, $\eta$ is a non-negative parameter, $\|x\|_2$ denotes the Euclidean norm of $x$ and $\|x\|_1 = \sum_{i=1}^n |x_i|$ is the $\ell_1$-norm of $x$. Problem (24) is a convex unconstrained minimization problem. Because an exact restoration can be produced by solving (24) whenever the original signal is sparse or approximately sparse in some orthogonal basis, problem (24) frequently appears in compressive sensing.
Iterative methods for solving (24) have been presented in many papers (see References [5,31,32,33,34,35]). The most popular among them are gradient-based methods, and the earliest gradient projection method for sparse reconstruction (GPSR) was proposed by Figueiredo et al. [5]. The first step of the GPSR method is to express (24) as a quadratic program by the following process. Let $x \in \mathbb{R}^n$ be split into its positive and negative parts. Then $x$ can be formulated as
$$x = u - v, \quad u \ge 0, \quad v \ge 0,$$
where $u_i = (x_i)_+$, $v_i = (-x_i)_+$ for all $i = 1, 2, \ldots, n$ and $(\cdot)_+ = \max\{0, \cdot\}$. By the definition of the $\ell_1$-norm, we have $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Now (24) can be written as
$$\min_{u,v} \frac{1}{2}\|y - B(u - v)\|_2^2 + \eta e_n^T u + \eta e_n^T v, \quad u \ge 0, \ v \ge 0, \tag{25}$$
which is a bound-constrained quadratic program. Moreover, from Reference [5], Equation (25) can be written in standard form as
$$\min_z \frac{1}{2}z^T D z + c^T z, \quad \text{such that } z \ge 0, \tag{26}$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \eta e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad b = B^T y, \quad D = \begin{pmatrix} B^T B & -B^T B \\ -B^T B & B^T B \end{pmatrix}.$$
Clearly, $D$ is a positive semi-definite matrix, which implies that Equation (26) is a convex quadratic problem.
Xiao and Zhu [19] translated (26) into a linear variational inequality problem, which is equivalent to a linear complementarity problem. Furthermore, it was noted that $z$ is a solution of the linear complementarity problem if and only if it is a solution of the nonlinear equation
$$F(z) = \min\{z, Dz + c\} = 0. \tag{27}$$
The function $F$ is vector-valued, and the “min” is interpreted componentwise. It was proved in References [36,37] that $F(z)$ is continuous and monotone. Therefore, problem (24) can be translated into problem (1), and thus Algorithm 1 (DCG) can be applied to solve it.
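The reformulation above is straightforward to set up numerically. The Python sketch below builds $D$ and $c$ from illustrative random data $B$, $y$ and a regularization weight $\eta$ (all values are arbitrary, for demonstration only) and evaluates the residual map $F(z) = \min\{z, Dz + c\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, eta = 4, 8, 0.1
B = rng.normal(size=(k, n))       # linear operator; k << n in practice
y = rng.normal(size=k)            # observation

b = B.T @ y
BtB = B.T @ B
D = np.block([[BtB, -BtB], [-BtB, BtB]])             # positive semi-definite
c = eta * np.ones(2 * n) + np.concatenate([-b, b])

def F(z):
    """Componentwise minimum min{z, Dz + c}; F(z) = 0 exactly when z solves
    the linear complementarity problem equivalent to (26)."""
    return np.minimum(z, D @ z + c)

residual = F(np.abs(rng.normal(size=2 * n)))         # a length-2n residual
```

Since $D$ has eigenvalues $0$ and twice those of $B^T B$, it is positive semi-definite by construction, matching the convexity claim above.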
In this experiment, we consider a simple compressive sensing scenario in which our goal is to restore a blurred image. We use the following well-known gray test images for the experiments: (P1) Cameraman, (P2) Lena, (P3) House and (P4) Peppers. We use 4 different Gaussian blur kernels with standard deviation $\sigma$ to compare the robustness of the DCG method with the CGD method proposed in Reference [19]. The CGD method is an extension of the well-known conjugate gradient method for unconstrained optimization, CG-DESCENT [20], to $\ell_1$-norm regularized problems.
To assess the performance of each tested algorithm with respect to metrics that indicate a better quality of restoration, in Table 10 we report the number of iterations, the objective function (ObjFun) value at the approximate solution, the mean squared error (MSE) relative to the original image $\tilde{x}$,
$$MSE = \frac{1}{n}\|\tilde{x} - x\|^2,$$
where $x$ is the reconstructed image, and the signal-to-noise ratio (SNR), which is defined as
$$SNR = 20 \times \log_{10}\frac{\|\tilde{x}\|}{\|\tilde{x} - x\|}.$$
We also report the structural similarity (SSIM) index, which measures the similarity between the original image and the restored image [38]. The MATLAB implementation of the SSIM index can be obtained at http://www.cns.nyu.edu/~lcv/ssim/.
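For reference, the two quality metrics defined above can be computed directly. In this Python sketch, `x_true` and `x_rec` are small illustrative vectors, not image data.

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error (1/n)||x_true - x_rec||^2."""
    return np.mean((x_true - x_rec) ** 2)

def snr(x_true, x_rec):
    """SNR = 20 log10(||x_true|| / ||x_true - x_rec||), in dB."""
    return 20.0 * np.log10(np.linalg.norm(x_true) /
                           np.linalg.norm(x_true - x_rec))

x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = x_true + 0.01
# a near-perfect reconstruction has small MSE and large SNR
```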
The original, blurred and restored images produced by each algorithm are given in Figure 5, Figure 6, Figure 7 and Figure 8. The figures demonstrate that both tested algorithms can restore the blurred images. It can be observed from Table 10 and Figures 5–8 that Algorithm 1 (DCG) is competitive with the CGD algorithm; therefore, it can be used as an alternative to CGD for restoring blurred images.

4. Conclusions

In this research article, we presented a CG method which possesses the sufficient descent property for solving constrained nonlinear monotone equations. The proposed method is able to solve non-smooth equations, as it requires neither matrix storage nor Jacobian information of the nonlinear equation under consideration. The sequence of iterates generated converges to the solution under appropriate assumptions. Finally, we gave some numerical examples to display the efficiency of the proposed method, in terms of the number of iterations, CPU time and number of function evaluations, compared with some related methods for solving convex constrained nonlinear monotone equations, together with its application to image restoration problems.

Author Contributions

Conceptualization, A.B.A.; methodology, A.B.A.; software, H.M.; validation, P.K. and A.M.A.; formal analysis, P.K. and H.M.; investigation, P.K. and A.M.A.; resources, P.K.; data curation, A.B.A. and H.M.; writing—original draft preparation, A.B.A.; writing—review and editing, H.M.; visualization, A.M.A.; supervision, P.K.; project administration, P.K.; funding acquisition, P.K.

Funding

Petchra Pra Jom Klao Doctoral Scholarship for Ph.D. program of King Mongkut’s University of Technology Thonburi (KMUTT) and Theoretical and Computational Science (TaCS) Center. Moreover, this project was partially supported by the Thailand Research Fund (TRF) and the King Mongkut’s University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047).

Acknowledgments

We thank Associate Professor Jinkui Liu for providing us with access to the CGD-CS MATLAB codes. The authors acknowledge the financial support provided by King Mongkut’s University of Technology Thonburi through the “KMUTT 55th Anniversary Commemorative Fund”. This project is supported by the Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart Research Innovation (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gu, B.; Sheng, V.S.; Tay, K.Y.; Romano, W.; Li, S. Incremental support vector learning for ordinal regression. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1403–1416. [Google Scholar] [CrossRef] [PubMed]
  2. Li, J.; Li, X.; Yang, B.; Sun, X. Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. 2015, 10, 507–518. [Google Scholar]
  3. Wen, X.; Shao, L.; Xue, Y.; Fang, W. A rapid learning algorithm for vehicle classification. Inf. Sci. 2015, 295, 395–406. [Google Scholar] [CrossRef]
  4. Solodov, M.V.; Iusem, A.N. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278. [Google Scholar]
  5. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  6. Magnanti, T.L.; Perakis, G. Solving variational inequality and fixed point problems by line searches and potential optimization. Math. Program. 2004, 101, 435–461. [Google Scholar] [CrossRef]
  7. Pan, Z.; Zhang, Y.; Kwong, S. Efficient motion and disparity estimation optimization for low complexity multiview video coding. IEEE Trans. Broadcast. 2015, 61, 166–176. [Google Scholar]
  8. Xia, Z.; Wang, X.; Sun, X.; Wang, Q. A secure and dynamic multi-keyword ranked search scheme over encrypted cloud data. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 340–352. [Google Scholar] [CrossRef]
  9. Zheng, Y.; Jeon, B.; Xu, D.; Wu, Q.M.; Zhang, H. Image segmentation by generalized hierarchical fuzzy c-means algorithm. J. Intell. Fuzzy Syst. 2015, 28, 961–973. [Google Scholar]
  10. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Dordrecht, The Netherlands, 1998; pp. 355–369. [Google Scholar]
  11. Mohammad, H.; Abubakar, A.B. A positive spectral gradient-like method for nonlinear monotone equations. Bull. Comput. Appl. Math. 2017, 5, 99–115. [Google Scholar]
  12. Zhang, L.; Zhou, W. Spectral gradient projection method for solving nonlinear monotone equations. J. Comput. Appl. Math. 2006, 196, 478–484. [Google Scholar] [CrossRef] [Green Version]
  13. Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240. [Google Scholar] [CrossRef]
  14. Zhou, W.; Li, D. Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math. 2007, 25, 89–96. [Google Scholar]
  15. Abubakar, A.B.; Waziria, M.Y. A matrix-free approach for solving systems of nonlinear equations. J. Mod. Methods Numer. Math. 2016, 7, 1–9. [Google Scholar] [CrossRef]
  16. Abubakar, A.B.; Kumam, P. An improved three-term derivative-free method for solving nonlinear equations. Comput. Appl. Math. 2018, 37, 6760–6773. [Google Scholar] [CrossRef]
  17. Abubakar, A.B.; Kumam, P.; Awwal, A.M. A descent dai-liao projection method for convex constrained nonlinear monotone equations with applications. Thai J. Math. 2018, 17, 128–152. [Google Scholar]
  18. Wang, C.; Wang, Y.; Xu, C. A projection method for a system of nonlinear monotone equations with convex constraints. Math. Methods Oper. Res. 2007, 66, 33–46. [Google Scholar] [CrossRef]
  19. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
  20. Hager, W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef]
  21. Liu, S.-Y.; Huang, Y.-Y.; Jiao, H.-W. Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations. Abstr. Appl. Anal. 2014, 2014, 305643. [Google Scholar] [CrossRef]
  22. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  23. Ding, Y.; Xiao, Y.; Li, J. A class of conjugate gradient methods for convex constrained monotone equations. Optimization 2017, 66, 2309–2328. [Google Scholar] [CrossRef]
  24. Liu, J.; Feng, Y. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2018, 1–18. [Google Scholar] [CrossRef]
  25. Muhammed, A.A.; Kumam, P.; Abubakar, A.B.; Wakili, A.; Pakkaranang, N. A new hybrid spectral gradient projection method for monotone system of nonlinear equations with convex constraints. Thai J. Math. 2018, 16, 125–147. [Google Scholar]
  26. La Cruz, W.; Martínez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
  27. Bing, Y.; Lin, G. An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1991, 1, 206–221. [Google Scholar] [CrossRef]
  28. Yu, Z.; Lin, J.; Sun, J.; Xiao, Y.H.; Liu, L.Y.; Li, Z.H. Spectral gradient projection method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 2009, 59, 2416–2423. [Google Scholar] [CrossRef]
  29. Yamashita, N.; Fukushima, M. Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems. Math. Program. 1997, 76, 469–491. [Google Scholar] [CrossRef]
  30. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
  31. Figueiredo, M.A.T.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916. [Google Scholar] [CrossRef] [Green Version]
  32. Hale, E.T.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for ℓ1-Regularized Minimization with Applications to Compressed Sensing; CAAM TR07-07; Rice University: Houston, TX, USA, 2007; pp. 43–44. [Google Scholar]
  33. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  34. Van Den Berg, E.; Friedlander, M.P. Probing the pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [Google Scholar] [CrossRef]
  35. Birgin, E.G.; Martínez, J.M.; Raydan, M. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 2000, 10, 1196–1211. [Google Scholar] [CrossRef]
  36. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
  37. Pang, J.-S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71. [Google Scholar] [CrossRef]
  38. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
Figure 1. Performance profiles for the number of iterations.
Figure 2. Performance profiles for the CPU time (in seconds).
Figure 3. Performance profiles for the number of function evaluations.
Figure 4. Convergence histories of Algorithm 1, PCG and PDY on Problem 9.
Figure 5. The original image (top left), the blurred image (top right), the restored image by CGD (bottom left) with SNR = 20.05, SSIM = 0.83 and by DCG (bottom right) with SNR = 20.12, SSIM = 0.83.
Figure 6. The original image (top left), the blurred image (top right), and the images restored by CGD (bottom left; SNR = 22.93, SSIM = 0.87) and by DCG (bottom right; SNR = 24.36, SSIM = 0.90).
Figure 7. The original image (top left), the blurred image (top right), and the images restored by CGD (bottom left; SNR = 25.65, SSIM = 0.86) and by DCG (bottom right; SNR = 26.37, SSIM = 0.87).
Figure 8. The original image (top left), the blurred image (top right), and the images restored by CGD (bottom left; SNR = 21.50, SSIM = 0.84) and by DCG (bottom right; SNR = 21.81, SSIM = 0.85).
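The SNR and MSE values quoted in Figures 5–8 and Table 10 can be computed from the true and restored images as sketched below; these are the common definitions (SNR in dB relative to the energy of the true image), which may differ in constants from the authors' exact conventions, and the SSIM computation is not reproduced here.

```python
import numpy as np

def mse(x_true, x_rec):
    """Mean squared error between the true and restored images."""
    x_true = np.asarray(x_true, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return np.mean((x_true - x_rec) ** 2)

def snr_db(x_true, x_rec):
    """Signal-to-noise ratio in dB: 10 * log10(||x||^2 / ||x - x_rec||^2)."""
    x_true = np.asarray(x_true, dtype=float)
    err = x_true - np.asarray(x_rec, dtype=float)
    return 10.0 * np.log10(np.sum(x_true ** 2) / np.sum(err ** 2))

# toy 2x2 "image" with a uniform perturbation (illustrative data only)
x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_noisy = x + 0.1
```

A larger SNR (and a smaller MSE) indicates a restoration closer to the original image, which is how the CGD and DCG results in the figures are compared.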
Table 1. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 1 with the given initial points and dimensions. ITER is the number of iterations, FVAL the number of function evaluations, TIME the CPU time in seconds and NORM the final residual norm (entries such as 8.88 × 10 6 denote 8.88 × 10⁻⁶).
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 11490.0255578.88 × 10 6 18730.0192955.72 × 10 6 12490.162489.18 × 10 6
x 2 12530.0141644.78 × 10 6 18730.0116489.82 × 10 6 13530.037806.35 × 10 6
x 3 12530.0085248.75 × 10 6 19770.0111977.1 × 10 6 14570.015505.59 × 10 6
x 4 13570.0113336.68 × 10 6 18730.0221978.27 × 10 6 15610.017464.07 × 10 6
x 5 13570.0142026.09 × 10 6 632540.0460729.58 × 10 6 14570.021939.91 × 10 6
x 6 13570.0110458.14 × 10 6 612460.0316089.15 × 10 6 401620.034729.70 × 10 6
5000 x 1 12530.0243115.82 × 10 6 18730.114317.42 × 10 6 13530.031586.87 × 10 6
x 2 13570.0273613.13 × 10 6 19770.039976.53 × 10 6 14570.042704.62 × 10 6
x 3 13570.025415.73 × 10 6 20810.0561595.2 × 10 6 15610.054334.18 × 10 6
x 4 14610.0320384.38 × 10 6 19770.0383818.1 × 10 6 15610.043579.08 × 10 6
x 5 14610.0390443.98 × 10 6 622500.158369.53 × 10 6 15610.089607.30 × 10 6
x 6 14610.0272315.33 × 10 6 602420.132769.1 × 10 6 391580.112849.86 × 10 6
10,000 x 1 12530.054348.23 × 10 6 18730.0732079.5 × 10 6 13530.063719.70 × 10 6
x 2 13570.0456644.43 × 10 6 19770.0907718.15 × 10 6 14570.063366.53 × 10 6
x 3 13570.0419228.09 × 10 6 20810.0708596.74 × 10 6 15610.064145.90 × 10 6
x 4 14610.0476416.2 × 10 6 20810.0873575.11 × 10 6 16650.079204.28 × 10 6
x 5 14610.0457345.62 × 10 6 622500.246468.87 × 10 6 391580.221017.97 × 10 6
x 6 14610.0571047.54 × 10 6 592380.199499.96 × 10 6 873510.362379.93 × 10 6
50,000 x 1 13570.163845.41 × 10 6 19770.254878.8 × 10 6 14570.276077.12 × 10 6
x 2 13570.186339.9 × 10 6 20810.326897.39 × 10 6 15610.262204.91 × 10 6
x 3 14610.208015.32 × 10 6 21850.336496.31 × 10 6 16650.282604.37 × 10 6
x 4 15650.19464.08 × 10 6 21850.327795.1 × 10 6 381540.606507.54 × 10 6
x 5 15650.197993.69 × 10 6 612460.826158.85 × 10 6 1777122.523309.44 × 10 6
x 6 15650.224184.95 × 10 6 592380.799928.5 × 10 6 36114495.979509.74 × 10 6
100,000 x 1 13570.322917.65 × 10 6 20810.538465.52 × 10 6 15610.393423.39 × 10 6
x 2 14610.333294.12 × 10 6 21850.615334.62 × 10 6 15610.421546.94 × 10 6
x 3 14610.370487.52 × 10 6 21850.536388.78 × 10 6 16650.458516.18 × 10 6
x 4 15650.360585.76 × 10 6 21850.620027.21 × 10 6 1757044.361009.47 × 10 6
x 5 15650.349755.22 × 10 6 602421.45649.73 × 10 6 1767084.291809.91 × 10 6
x 6 15650.36217.01 × 10 6 582341.41559.42 × 10 6 36014459.711909.99 × 10 6
Table 2. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 2 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 9383.17445.84 × 10 6 15590.0498998.59 × 10 6 10390.010536.96 × 10 6
x 2 10420.0146336.25 × 10 6 11420.0150899.07 × 10 6 11430.009379.23 × 10 6
x 3 9380.0170677.4 × 10 6 17660.0169356.44 × 10 6 13510.011116.26 × 10 6
x 4 7300.0063926.53 × 10 6 18690.014366 × 10 6 14550.021549.46 × 10 6
x 5 11460.0119543.47 × 10 6 13480.009077.58 × 10 6 15590.018504.60 × 10 6
x 6 12500.686666.74 × 10 6 18680.013525.4 × 10 6 15590.019387.71 × 10 6
5000 x 1 10420.112413.53 × 10 6 16630.0411519.35 × 10 6 11430.035284.86 × 10 6
x 2 11460.0287233.81 × 10 6 12460.0287068.8 × 10 6 12470.040326.89 × 10 6
x 3 10420.0293674.3 × 10 6 18700.0475326.98 × 10 6 14550.048894.61 × 10 6
x 4 13540.0362313.67 × 10 6 19730.0521646.45 × 10 6 15590.048266.96 × 10 6
x 5 11460.049637.21 × 10 6 14520.0405296.71 × 10 6 16630.059693.37 × 10 6
x 6 13540.0549714.05 × 10 6 19720.123035.71 × 10 6 16630.062535.64 × 10 6
10,000 x 1 10420.0496144.98 × 10 6 17670.0747796.6 × 10 6 11430.067326.85 × 10 6
x 2 11460.0615955.36 × 10 6 13500.083086.11 × 10 6 12470.122329.72 × 10 6
x 3 10420.0545876.02 × 10 6 18700.0855549.83 × 10 6 14550.082886.51 × 10 6
x 4 13540.0733335.16 × 10 6 19730.105799.07 × 10 6 15590.084139.82 × 10 6
x 5 12500.063062.83 × 10 6 14520.0749829.18 × 10 6 16630.095894.75 × 10 6
x 6 13540.0622595.69 × 10 6 19720.0991678.02 × 10 6 16640.114998.55 × 10 6
50,000 x 1 11460.207033.1 × 10 6 18710.394737.37 × 10 6 12470.278265.23 × 10 6
x 2 12500.232513.35 × 10 6 14540.273466.74 × 10 6 13510.296427.11 × 10 6
x 3 11460.213383.73 × 10 6 20780.372495.5 × 10 6 15590.356024.82 × 10 6
x 4 14580.32323.22 × 10 6 21810.375915.07 × 10 6 351410.694706.69 × 10 6
x 5 12500.227036.27 × 10 6 16600.263395.02 × 10 6 351410.684889.12 × 10 6
x 6 14580.259793.54 × 10 6 20760.338148.93 × 10 6 351410.709739.91 × 10 6
100,000 x 1 11460.555114.38 × 10 6 19750.654945.22 × 10 6 12470.445417.39 × 10 6
x 2 12500.546944.73 × 10 6 14540.49449.52 × 10 6 14550.532993.39 × 10 6
x 3 11460.409225.27 × 10 6 20780.783197.78 × 10 6 15600.586038.71 × 10 6
x 4 14580.620494.55 × 10 6 21810.760517.17 × 10 6 722902.706308.31 × 10 6
x 5 12500.470398.86 × 10 6 16600.585457.07 × 10 6 722902.722208.68 × 10 6
x 6 14580.711745.01 × 10 6 21800.770516.32 × 10 6 722902.758508.96 × 10 6
Table 3. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 3 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 10430.753229.9 × 10 6 19760.557525.62 × 10 6 12480.012554.45 × 10 6
x 2 11470.0069335.46 × 10 6 20800.0109365.58 × 10 6 12480.013119.02 × 10 6
x 3 12510.006763.48 × 10 6 21840.0110486.58 × 10 6 13520.014868.34 × 10 6
x 4 12510.0096644.41 × 10 6 22880.0110585.67 × 10 6 14560.016988.04 × 10 6
x 5 11470.0104879.06 × 10 6 22880.0121985.64 × 10 6 14560.015519.72 × 10 6
x 6 13550.0127023.15 × 10 6 21840.0182318.36 × 10 6 14560.015349.42 × 10 6
5000 x 1 11470.0194586.19 × 10 6 20800.0408086.29 × 10 6 12480.036609.94 × 10 6
x 2 12510.0215623.42 × 10 6 21840.066886.25 × 10 6 13520.036166.85 × 10 6
x 3 12510.0242747.79 × 10 6 22880.041447.37 × 10 6 14560.045946.14 × 10 6
x 4 12510.0267719.86 × 10 6 23920.0522146.35 × 10 6 15600.043426.01 × 10 6
x 5 12510.0268145.67 × 10 6 23920.0414446.31 × 10 6 15600.042967.25 × 10 6
x 6 13550.0239037.03 × 10 6 22880.0401359.37 × 10 6 321290.100818.85 × 10 6
10,000 x 1 11470.0441348.75 × 10 6 20800.0643128.9 × 10 6 13520.061924.77 × 10 6
x 2 12510.0519474.83 × 10 6 21840.0881028.84 × 10 6 13520.064429.68 × 10 6
x 3 13550.0572913.08 × 10 6 23920.072965.22 × 10 6 14560.094998.69 × 10 6
x 4 13550.0551343.9 × 10 6 23920.0752658.99 × 10 6 15600.076968.5 × 10 6
x 5 12510.0475518.02 × 10 6 23920.0739378.93 × 10 6 331330.186256.45 × 10 6
x 6 13550.0550699.95 × 10 6 23920.0998886.64 × 10 6 331330.155487.51 × 10 6
50,000 x 1 12510.199385.47 × 10 6 21840.270319.97 × 10 6 14560.236423.51 × 10 6
x 2 13550.224993.02 × 10 6 22880.26579.9 × 10 6 14560.248137.12 × 10 6
x 3 13550.193966.89 × 10 6 24960.32465.85 × 10 6 15600.270496.53 × 10 6
x 4 13550.202598.72 × 10 6 251000.323735.04 × 10 6 341370.545457.13 × 10 6
x 5 13550.194525.01 × 10 6 251000.337645.01 × 10 6 682741.023309.99 × 10 6
x 6 14590.220156.22 × 10 6 24960.336877.44 × 10 6 692781.038108.05 × 10 6
100,000 x 1 12510.399837.74 × 10 6 22880.638097.06 × 10 6 14560.454754.96 × 10 6
x 2 13550.327654.28 × 10 6 23920.634587.02 × 10 6 15600.490183.39 × 10 6
x 3 13550.301339.75 × 10 6 24960.714228.27 × 10 6 15600.490169.24 × 10 6
x 4 14590.428653.45 × 10 6 251000.735247.13 × 10 6 1395594.031109.01 × 10 6
x 5 13550.345127.09 × 10 6 251000.706257.09 × 10 6 702822.071008.54 × 10 6
x 6 14590.403878.8 × 10 6 251000.767775.27 × 10 6 1395594.024409.38 × 10 6
Table 4. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 4 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 10430.154618.33 × 10 6 18720.118539.93 × 10 6 12480.009894.60 × 10 6
x 2 11470.0062763.84 × 10 6 19760.0143188.75 × 10 6 12480.009669.57 × 10 6
x 3 11470.0098593.91 × 10 6 20800.00937767.15 × 10 6 13520.008878.49 × 10 6
x 4 11470.0079765.21 × 10 6 471890.0233217.83 × 10 6 12480.012075.83 × 10 6
x 5 12510.0083824.09 × 10 6 461850.0471059.76 × 10 6 291170.053719.43 × 10 6
x 6 12510.0086453.32 × 10 6 411650.0277198.77 × 10 6 291170.023966.65 × 10 6
5000 x 1 11470.0220245.21 × 10 6 20800.0294455.57 × 10 6 13520.025033.49 × 10 6
x 2 11470.0205878.59 × 10 6 20800.0331159.8 × 10 6 13520.026267.24 × 10 6
x 3 11470.0237148.75 × 10 6 21840.0333188.01 × 10 6 14560.033496.29 × 10 6
x 4 12510.0247283.26 × 10 6 491970.0717159.46 × 10 6 13520.022584.25 × 10 6
x 5 12510.0310159.14 × 10 6 491970.0685658.68 × 10 6 311250.054717.59 × 10 6
x 6 12510.0300127.43 × 10 6 441770.0708627.79 × 10 6 632540.100648.54 × 10 6
10,000 x 1 11470.0414767.37 × 10 6 20800.0430137.88 × 10 6 13520.037614.93 × 10 6
x 2 12510.0478663.4 × 10 6 21840.0516856.94 × 10 6 14560.041003.37 × 10 6
x 3 12510.0426073.46 × 10 6 22880.0504225.67 × 10 6 14560.039198.90 × 10 6
x 4 12510.0364064.61 × 10 6 502010.175639.84 × 10 6 321290.096136.02 × 10 6
x 5 13550.0413743.61 × 10 6 502010.200359.03 × 10 6 321290.091776.44 × 10 6
x 6 13550.0398472.94 × 10 6 451810.122148.11 × 10 6 642580.207919.39 × 10 6
50,000 x 1 12510.139284.61 × 10 6 21840.271458.83 × 10 6 14560.171933.63 × 10 6
x 2 12510.180317.6 × 10 6 22880.231497.78 × 10 6 14560.152377.54 × 10 6
x 3 12510.125267.74 × 10 6 23920.287896.36 × 10 6 15600.165496.66 × 10 6
x 4 13550.143222.88 × 10 6 532130.616248.75 × 10 6 672700.762837.81 × 10 6
x 5 13550.179048.08 × 10 6 532130.71198.02 × 10 6 672700.761578.80 × 10 6
x 6 13550.136356.57 × 10 6 471890.481929.8 × 10 6 26910802.925109.41 × 10 6
100,000 x 1 12510.242936.52 × 10 6 22880.608226.25 × 10 6 14560.302295.13 × 10 6
x 2 13550.274333.01 × 10 6 23920.529655.51 × 10 6 15600.316483.59 × 10 6
x 3 13550.27143.06 × 10 6 23920.570648.99 × 10 6 321290.728389.99 × 10 6
x 4 13550.268194.08 × 10 6 542171.18059.1 × 10 6 1355432.867809.73 × 10 6
x 5 14590.316963.2 × 10 6 542171.1078.34 × 10 6 27210925.741409.91 × 10 6
x 6 13550.26989.29 × 10 6 491971.06177.49 × 10 6 548219711.441309.87 × 10 6
Table 5. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 5 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 19780.717098.63 × 10 6 22830.0993387.48 × 10 6 16630.075756.03 × 10 6
x 2 21860.0171277.65 × 10 6 23880.0160147.31 × 10 6 16630.014705.42 × 10 6
x 3 23950.0139097.23 × 10 6 23900.0163289.31 × 10 6 331320.022086.75 × 10 6
x 4 22920.01658.64 × 10 6 491970.0301248.45 × 10 6 301210.018358.39 × 10 6
x 5 351450.0247028.26 × 10 6 532130.0393218.38 × 10 6 321290.027008.47 × 10 6
x 6 431820.0274718.7 × 10 6 461850.0336278.8 × 10 6 301210.017126.95 × 10 6
5000 x 1 1465920.238039.45 × 10 6 24910.0601586.36 × 10 6 17670.043945.64 × 10 6
x 2 21860.043379.46 × 10 6 25950.0603856.24 × 10 6 17670.046355.07 × 10 6
x 3 24990.0546198.27 × 10 6 25980.0400155.86 × 10 6 351400.083119.74 × 10 6
x 4 241000.0664246.66 × 10 6 532130.0980979.11 × 10 6 331330.080756.02 × 10 6
x 5 381570.0712229.28 × 10 6 582330.109588.56 × 10 6 351410.100917.51 × 10 6
x 6 451900.0902767.14 × 10 6 502010.215217.65 × 10 6 321290.080548.55 × 10 6
10,000 x 1 2118530.603579.65 × 10 6 25950.0764275.4 × 10 6 17670.068168.81 × 10 6
x 2 22900.080124.98 × 10 6 25950.0984618.9 × 10 6 17670.088337.80 × 10 6
x 3 251030.0892695.89 × 10 6 25980.074958.64 × 10 6 371480.147326.36 × 10 6
x 4 251040.117815.54 × 10 6 552210.190489.11 × 10 6 371490.142938.25 × 10 6
x 5 401650.158597.43 × 10 6 602410.197519.01 × 10 6 361450.147198.23 × 10 6
x 6 461940.17288.62 × 10 6 512050.288829.62 × 10 6 742980.264567.79 × 10 6
50,000 x 1 2259092.13739.93 × 10 6 26990.345756.75 × 10 6 421690.581137.78 × 10 6
x 2 23940.310984.48 × 10 6 271030.438065.16 × 10 6 421690.584567.13 × 10 6
x 3 261070.362936.83 × 10 6 271060.48155.28 × 10 6 411650.587178.87 × 10 6
x 4 261080.324279.72 × 10 6 602410.908688.66 × 10 6 401610.564317.17 × 10 6
x 5 431770.489389.47 × 10 6 652610.79249.05 × 10 6 823301.089208.44 × 10 6
x 6 502100.691178.12 × 10 6 562250.723348.19 × 10 6 803221.066707.82 × 10 6
100,000 x 1 2319334.25889.85 × 10 6 26990.712429.73 × 10 6 431731.096208.47 × 10 6
x 2 1395642.72669.96 × 10 6 271030.627467.39 × 10 6 431731.100407.77 × 10 6
x 3 261070.575059.92 × 10 6 271060.829897.77 × 10 6 421691.083309.66 × 10 6
x 4 271120.622278.52 × 10 6 622491.54749 × 10 6 853422.118809.22 × 10 6
x 5 451850.89927.79 × 10 6 672691.66929.5 × 10 6 843382.106409.78 × 10 6
x 6 522181.43187.37 × 10 6 582331.43338.32 × 10 6 1676714.062009.90 × 10 6
Table 6. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 6 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 13551.385.68 × 10 6 23920.40389.28 × 10 6 15600.016714.35 × 10 6
x 2 13550.0133395.47 × 10 6 23920.0163258.92 × 10 6 15600.013464.18 × 10 6
x 3 13550.0661424.81 × 10 6 23920.0230457.86 × 10 6 15600.016303.68 × 10 6
x 4 13550.0268383.3 × 10 6 23920.0161725.38 × 10 6 14560.013397.48 × 10 6
x 5 12510.0098649.45 × 10 6 22880.037858.62 × 10 6 14560.012676.01 × 10 6
x 6 12510.0098815.57 × 10 6 22880.0150135.08 × 10 6 14560.016853.54 × 10 6
5000 x 1 14590.0425333.56 × 10 6 251000.0616425.22 × 10 6 15600.050389.73 × 10 6
x 2 14590.0366483.43 × 10 6 251000.0929525.02 × 10 6 15600.047759.36 × 10 6
x 3 14590.0434523.02 × 10 6 24960.0681418.82 × 10 6 15600.049238.25 × 10 6
x 4 13550.0325797.38 × 10 6 24960.0846256.04 × 10 6 15600.057935.64 × 10 6
x 5 13550.032955.92 × 10 6 23920.0861229.67 × 10 6 15600.045974.53 × 10 6
x 6 13550.0330623.49 × 10 6 23920.0933185.7 × 10 6 14560.050707.93 × 10 6
10,000 x 1 14590.0649175.04 × 10 6 251000.214247.38 × 10 6 682740.407249.06 × 10 6
x 2 14590.0699134.84 × 10 6 251000.139787.09 × 10 6 682740.418188.72 × 10 6
x 3 14590.084734.27 × 10 6 251000.17316.25 × 10 6 341370.219056.22 × 10 6
x 4 14590.0758472.92 × 10 6 24960.147448.54 × 10 6 15600.100767.98 × 10 6
x 5 13550.079748.38 × 10 6 24960.141696.85 × 10 6 15600.126806.40 × 10 6
x 6 13550.0631294.94 × 10 6 23920.152948.06 × 10 6 15600.119843.78 × 10 6
50,000 x 1 15630.253293.15 × 10 6 261040.646698.26 × 10 6 1435753.091209.42 × 10 6
x 2 15630.363943.03 × 10 6 261040.677177.95 × 10 6 1435753.062009.06 × 10 6
x 3 14590.24139.54 × 10 6 261040.55627 × 10 6 1425713.049509.04 × 10 6
x 4 14590.275026.53 × 10 6 251000.561719.56 × 10 6 692781.539209.14 × 10 6
x 5 14590.364045.24 × 10 6 251000.579827.67 × 10 6 682741.494909.43 × 10 6
x 6 14590.25063.09 × 10 6 24960.586459.03 × 10 6 15600.381778.44 × 10 6
100,000 x 1 15630.847814.45 × 10 6 271081.32155.86 × 10 6 292117213.595309.53 × 10 6
x 2 15630.666634.28 × 10 6 271081.50625.63 × 10 6 290116413.309309.75 × 10 6
x 3 15630.666833.77 × 10 6 261041.1669.9 × 10 6 1445796.681509.96 × 10 6
x 4 14590.626979.24 × 10 6 261041.39616.78 × 10 6 1415676.508009.92 × 10 6
x 5 14590.628917.41 × 10 6 261041.27115.44 × 10 6 702823.305108.07 × 10 6
x 6 14590.624224.37 × 10 6 251001.16856.4 × 10 6 341371.645106.37 × 10 6
Table 7. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 7 with the given initial points and dimensions.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 6280.256892 × 10 6 17691.22756.98 × 10 6 14570.009535.28 × 10 6
x 2 6280.0084691.26 × 10 6 15610.233969.89 × 10 6 13530.008969.05 × 10 6
x 3 4200.0036199.25 × 10 6 16650.0080955.79 × 10 6 3120.004268.47 × 10 6
x 4 5240.0043455.7 × 10 6 16650.0100775.21 × 10 6 15610.011696.73 × 10 6
x 5 6280.0071464.42 × 10 6 19770.053544.95 × 10 6 311260.036469.03 × 10 6
x 6 6270.0042994.43 × 10 6 18720.0256778.93 × 10 6 15600.010823.99 × 10 6
5000 x 1 6280.0129154.47 × 10 6 18730.177227.6 × 10 6 15610.032154.25 × 10 6
x 2 6280.0122722.81 × 10 6 17690.0277295.25 × 10 6 14570.029427.40 × 10 6
x 3 5240.0146691.16 × 10 6 17690.029856.31 × 10 6 4160.011071.01 × 10 7
x 4 6280.0127657.14 × 10 7 17690.0281765.68 × 10 6 16650.043315.43 × 10 6
x 5 6280.013319.89 × 10 6 20810.0322135.39 × 10 6 331340.093797.78 × 10 6
x 6 6270.0158289.91 × 10 6 19760.0443289.73 × 10 6 15600.040778.92 × 10 6
10,000 x 1 6280.0223466.32 × 10 6 19770.178635.23 × 10 6 15610.064846.01 × 10 6
x 2 6280.0226693.97 × 10 6 17690.0492427.42 × 10 6 15610.077343.77 × 10 6
x 3 5240.0393421.64 × 10 6 17690.0482388.92 × 10 6 4160.027071.42 × 10 7
x 4 6280.0210171.01 × 10 6 17690.048078.03 × 10 6 16650.079417.69 × 10 6
x 5 7320.0316547.83 × 10 7 20810.0631567.62 × 10 6 341380.149426.83 × 10 6
x 6 7310.0234567.85 × 10 7 20800.0594386.7 × 10 6 341380.152248.81 × 10 6
50,000 x 1 7320.0924527.91 × 10 7 20811.08085.7 × 10 6 16650.259954.89 × 10 6
x 2 6280.10688.88 × 10 6 18730.328048.08 × 10 6 15610.246748.42 × 10 6
x 3 5240.0656843.66 × 10 6 18730.21899.71 × 10 6 4160.094053.18 × 10 7
x 4 6280.101932.26 × 10 6 18730.34978.75 × 10 6 361460.552076.39 × 10 6
x 5 7320.0956761.75 × 10 6 21850.225958.3 × 10 6 351420.546799.05 × 10 6
x 6 7310.0928551.76 × 10 6 21840.223747.3 × 10 6 361460.557647.59 × 10 6
100,000 x 1 7320.175971.12 × 10 6 20812.16758.06 × 10 6 17690.525955.68 × 10 6
x 2 7320.17417.03 × 10 7 19770.455535.57 × 10 6 16650.521024.34 × 10 6
x 3 5240.175225.18 × 10 6 19770.432196.69 × 10 6 4160.148644.50 × 10 7
x 4 6280.207853.19 × 10 6 19770.522596.03 × 10 6 361461.053609.04 × 10 6
x 5 7320.239792.48 × 10 6 22890.61715.72 × 10 6 742992.107308.55 × 10 6
x 6 7310.231282.48 × 10 6 22880.573845.03 × 10 6 371501.082406.66 × 10 6
Table 8. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 8 with the given initial points and dimensions. Dashes mark entries for which no result was recorded.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
1000 x 1 7280.114953.03 × 10 6 9320.857977.6 × 10 6 692790.055388.95 × 10 6
x 2 7280.0050343.03 × 10 6 9320.0346757.6 × 10 6 27010850.187989.72 × 10 6
x 3 7280.0067433.03 × 10 6 9320.0059857.6 × 10 6 24520.024396.57 × 10 6
x 4 7280.0058563.03 × 10 6 9320.0048087.6 × 10 6 27580.015207.59 × 10 6
x 5 7280.0046353.03 × 10 6 9320.0150267.6 × 10 6 28610.043309.21 × 10 6
x 6 7280.0064873.03 × 10 6 9320.157787.6 × 10 6 40850.021168.45 × 10 6
5000 x 1 5220.0090684.52 × 10 6 7260.672391.3 × 10 6 65826391.130309.98 × 10 6
x 2 5220.0093694.52 × 10 6 7260.0106511.3 × 10 6 27580.051017.59 × 10 6
x 3 5220.0108954.52 × 10 6 7260.0157581.3 × 10 6 491040.080358.11 × 10 6
x 4 5220.0149584.52 × 10 6 7260.0149351.3 × 10 6 40850.079798.45 × 10 6
x 5 5220.015074.52 × 10 6 7260.015241.3 × 10 6 18400.091289.14 × 10 6
x 6 5220.0087164.52 × 10 6 7260.19991.3 × 10 6 17380.185288.98 × 10 6
10,000 x 1 6270.0311983.81 × 10 6 5190.0443875.06 × 10 6 491040.204437.62 × 10 6
x 2 6270.020983.81 × 10 6 5190.02235.06 × 10 6 40850.158018.45 × 10 6
x 3 6270.019913.81 × 10 6 5190.0182095.06 × 10 6 19420.378807.66 × 10 6
x 4 6270.0254023.81 × 10 6 5190.0216545.06 × 10 6 901871.258029.7 × 10 6
x 5 6270.0258163.81 × 10 6 5190.0173535.06 × 10 6 988198812.682599.93 × 10 6
x 6 6270.0250653.81 × 10 6 5190.0197635.06 × 10 6 27580.328597.59 × 10 6
50,000 x 1 4210.0836412.34 × 10 7 8330.429025.15 × 10 6 19420.522916.42 × 10 6
x 2 4210.0741562.34 × 10 7 8330.115255.15 × 10 6 1483043.930639.92 × 10 6
x 3 4210.0785962.34 × 10 7 8330.144325.15 × 10 6 937188622.970979.87 × 10 6
x 4 4210.0782892.34 × 10 7 8330.115625.15 × 10 6 27580.684677.59 × 10 6
x 5 4210.0735352.34 × 10 7 8330.116745.15 × 10 6 3467028.450439.79 × 10 6
x 6 4210.0819092.34 × 10 7 8330.104865.15 × 10 6 40850.992308.45 × 10 6
100,000 x 1 4220.16636.25 × 10 6 6251.29226.81 × 10 7 ----
x 2 4220.151476.25 × 10 6 6250.188396.81 × 10 7 ----
x 3 4220.155826.25 × 10 6 6250.161536.81 × 10 7 ----
x 4 4220.154656.25 × 10 6 6250.173976.81 × 10 7 ----
x 5 4220.167446.25 × 10 6 6250.185866.81 × 10 7 ----
x 6 4220.16876.25 × 10 6 6250.179386.81 × 10 7 ----
Table 9. Numerical results for Algorithm 1 (DCG), PCG and PDY for Problem 9 with the given initial points and dimension 4.
Algorithm 1 (DCG) | PCG | PDY
DIMENSION | INITIAL POINT | ITER FVAL TIME NORM | ITER FVAL TIME NORM | ITER FVAL TIME NORM
4 | x 1 | 51 215 0.23665 9.01 × 10⁻⁶ | 79 321 0.5978 9.76 × 10⁻⁶ | 59 241 0.71268 9.36 × 10⁻⁶
4 | x 2 | 51 215 0.04968 9.99 × 10⁻⁶ | 77 313 0.016326 9.85 × 10⁻⁶ | 58 237 0.045441 9.73 × 10⁻⁶
4 | x 3 | 53 223 0.017211 9.46 × 10⁻⁶ | 80 325 0.16529 9.38 × 10⁻⁶ | 59 241 0.019552 9.9 × 10⁻⁶
4 | x 4 | 53 223 0.019004 9.68 × 10⁻⁶ | 83 337 0.041713 9.57 × 10⁻⁶ | 62 253 0.022007 8.07 × 10⁻⁶
4 | x 5 | 57 239 0.023447 8.87 × 10⁻⁶ | 81 329 0.11972 9.04 × 10⁻⁶ | 61 249 0.040117 8.36 × 10⁻⁶
4 | x 6 | 54 227 0.020832 9.31 × 10⁻⁶ | 82 333 0.016127 9.3 × 10⁻⁶ | 61 249 0.017374 9.18 × 10⁻⁶
Table 10. Efficiency comparison based on the number of iterations (Iter), objective function value (ObjFun), mean squared error (MSE) and signal-to-noise ratio (SNR) under different Pi(σ).
Image | Iter: DCG, CGD | ObjFun: DCG, CGD | MSE: DCG, CGD | SNR: DCG, CGD
P1(1E-8)894.397 × 10 3 4.398 × 10 3 3.136 × 10 2 3.157 × 10 2 9.429.39
P1(1E-1)894.399 × 10 3 4.401 × 10 3 3.147 × 10 2 3.163 × 10 2 9.409.38
P1(0.11)1184.428 × 10 3 4.432 × 10 3 3.229 × 10 2 3.232 × 10 2 9.299.29
P1(0.25)1284.468 × 10 3 4.473 × 10 3 3.365 × 10 2 3.289 × 10 2 9.119.21
P1(1E-8)994.555 × 10 3 4.556 × 10 3 3.287 × 10 2 3.3412 × 10 2 9.149.07
P1(1E-1)994.558 × 10 3 4.559 × 10 3 3.298 × 10 2 3.348 × 10 2 9.129.06
P1(0.11)12124.588 × 10 3 4.591 × 10 3 3.416 × 10 2 3.446 × 10 2 8.978.93
P1(0.25)784.628 × 10 3 4.630 × 10 3 3.621 × 10 2 3.500 × 10 2 8.728.86
P1(1E-8)995.179 × 10 3 5.179 × 10 3 3.209 × 10 2 3.3259 × 10 2 10.039.96
P1(1E-1)995.182 × 10 3 5.182 × 10 3 3.231 × 10 2 3.267 × 10 2 10.009.95
P1(0.11)795.209 × 10 3 5.209 × 10 3 3.436 × 10 2 3.344 × 10 2 9.739.85
P1(0.25)1085.250 × 10 3 5.254 × 10 3 3.557 × 10 2 3.438 × 10 2 9.589.73
P1(1E-8)994.388 × 10 3 4.389 × 10 3 3.299 × 10 2 3.335 × 10 2 9.038.99
P1(1E-1)994.391 × 10 3 4.393 × 10 3 3.308 × 10 2 3.340 × 10 2 9.028.98
P1(0.11)1284.421 × 10 3 4.424 × 10 3 3.425 × 10 2 3.411 × 10 2 8.878.89
P1(0.25)784.461 × 10 3 4.463 × 10 3 3.621 × 10 2 3.483 × 10 2 8.638.80

Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M. An Efficient Conjugate Gradient Method for Convex Constrained Monotone Nonlinear Equations with Applications. Mathematics 2019, 7, 767. https://doi.org/10.3390/math7090767

