Article

Local Convergence of Solvers with Eighth Order Having Weak Conditions

by Ramandeep Behl 1 and Ioannis K. Argyros 2,*
1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 70; https://doi.org/10.3390/sym12010070
Submission received: 25 November 2019 / Revised: 19 December 2019 / Accepted: 25 December 2019 / Published: 2 January 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

The problem of approximating a solution of an equation is of extreme importance, since numerous problems from diverse disciplines reduce to solving such equations. The solutions are found using iterative schemes, since closed form solutions are in general not available. That is why it is important to study the convergence order of solvers. We extend the applicability of an eighth-order convergent solver for Banach space valued equations. Earlier considerations adopted suppositions on the Fréchet derivatives up to the ninth order, although derivatives of order higher than one do not appear in these solvers. In contrast, we use only suppositions on Lipschitz constants and the first-order Fréchet derivative. Hence, we extend the applicability of these solvers and provide computable convergence radii, which were not given in the earlier works. We show improvements only for a certain class of solvers, but our technique can be used to extend the applicability of other solvers in the literature in a similar fashion. A variety of numerical problems shows that our results apply to nonlinear problems where the earlier results do not.

1. Introduction

A plethora of problems from diverse disciplines such as Applied Mathematics, Mathematical Biology, Chemistry, Economics, Physics, Environmental Sciences, and Engineering reduce to equations on abstract spaces via mathematical modeling. A closed form solution is obtained only in rare cases. That is why it is important to develop iterative schemes generating a sequence that converges to the solution under suitable hypotheses on the initial information. Hence, the problem of finding an approximate, unique solution α of
$$\Gamma(\mu) = 0, \qquad (1)$$
is one of the top priorities in the field of numerical analysis. We consider that Γ : K ⊆ T1 → T2 is a Fréchet differentiable operator, T1, T2 are Banach spaces, and K is a convex subset of T1. Here, L(T1, T2) denotes the space of continuous linear operators from T1 to T2.
There are several examples where researchers demonstrated the applicability of (1): real life problems are transformed into (1) by mathematical modeling, and details can be found in [1,2,3,4,5,6,7]. We have to resort to iterative solvers, since it is not always feasible to obtain the solution α in explicit form. Only a few globally convergent methods exist that do not require a sufficiently close starting point, e.g., the bisection or regula falsi methods. Moreover, most algorithms determine one zero at a time: once a zero has been determined with sufficient accuracy, the polynomial is deflated and the algorithm is applied again to the deflated polynomial. Methods that determine all zeros simultaneously also exist and have theoretical importance [8]; details on such methods can be found in [9,10,11,12,13,14,15,16,17,18,19]. Therefore, a large number of iterative solvers is available for problems of the form (1). The analysis of such solvers involves local convergence, which relies on information around α and ensures the convergence of the iteration procedure. One of the most significant tasks in the analysis of iterative procedures is to determine the convergence region. Hence, we provide the radius of convergence.
For this purpose, we rewrite the iterative solver suggested in [20] in the following way:
$$\nu_\tau = \mu_\tau - A_\tau^{-1}\Gamma(\mu_\tau), \qquad \mu_{\tau+1} = \nu_\tau - 4\,B_\tau^{-1}\Gamma(\nu_\tau), \qquad (2)$$
where μ0 ∈ K is an initial point, A : K → L(T1, T2) is given as A(μ_τ) = A_τ = Γ'(μ_τ) + Q(μ_τ)Γ(μ_τ), B(μ_τ, ν_τ) = B_τ = Γ'(μ_τ) + 4Q(μ_τ)Γ(μ_τ) + 2Γ((μ_τ + ν_τ)/2) + Γ'(ν_τ), and Q(·,·) : K × K → L(T1, T2) is a bilinear operator. In the special case when T1 = T2 = R and Q(μ, ν) is chosen, as in [20], in terms of the involved function G and its derivatives (with G′(μ) ≠ 0 for each μ ∈ K \ {α}), solver (2) reduces to a fourth-order convergent solver studied in [20]. Shah et al. [20] established fourth-order convergence by adopting Taylor series expansions and suppositions up to the ninth-order derivative of the involved function. Such constraints hamper the suitability of solver (2), even though only the first-order derivative appears in the solver. Consider, for instance, the following function Γ on T1 = T2 = R, K = [−1/2, 3/2]:
$$\Gamma(\mu) = \begin{cases} \mu^3 \ln \mu^2 + \mu^5 - \mu^4, & \mu \neq 0, \\ 0, & \mu = 0. \end{cases}$$
Then, we have that
$$\Gamma'(\mu) = 3\mu^2 \ln \mu^2 + 5\mu^4 - 4\mu^3 + 2\mu^2,$$
$$\Gamma''(\mu) = 6\mu \ln \mu^2 + 20\mu^3 - 12\mu^2 + 10\mu,$$
and
$$\Gamma'''(\mu) = 6 \ln \mu^2 + 60\mu^2 - 24\mu + 22.$$
From the above derivatives, it is straightforward to see that the third-order derivative of Γ is unbounded on K. The available literature contains a large number of related research articles [1,2,3,4,5,6,7,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]. In the majority of these articles, the authors mention that the starting guess μ0 must be adequately close to α, but this does not tell us how to pick μ0, how much closeness is sufficient for convergence, how to find the radius, bounds on ‖μ_τ − α‖, or results on uniqueness. We deal with all these questions for solver (2) in the next section.
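To make the structure of (2) concrete, the following is a minimal Python sketch of the iteration in the scalar case T1 = T2 = R. The choices Q ≡ 1 (the choice also used in the numerical section) and the test function Γ(μ) = e^μ − 1 with root α = 0 are illustrative assumptions for this sketch only; they are not the special choice of Q from [20] that yields the higher convergence order.

```python
# A minimal scalar sketch of iteration (2); Q is user-supplied.
# Assumptions: Q ≡ 1 (as in the numerical section) and Gamma(mu) = exp(mu) - 1
# with root alpha = 0 are illustrative choices, not the scheme of [20].
import math

def solver2(Gamma, dGamma, Q, mu, iterations=6):
    """Run iteration (2): nu = mu - Gamma(mu)/A, mu_next = nu - 4*Gamma(nu)/B."""
    for _ in range(iterations):
        A = dGamma(mu) + Q(mu, mu) * Gamma(mu)                      # A_tau
        nu = mu - Gamma(mu) / A                                      # first substep
        B = (dGamma(mu) + 4.0 * Q(mu, mu) * Gamma(mu)
             + 2.0 * Gamma(0.5 * (mu + nu)) + dGamma(nu))            # B_tau
        mu = nu - 4.0 * Gamma(nu) / B                                 # second substep
        yield mu

Gamma = lambda mu: math.exp(mu) - 1.0
dGamma = lambda mu: math.exp(mu)
Q = lambda mu, nu: 1.0                                                # illustrative Q = I

for k, mu_k in enumerate(solver2(Gamma, dGamma, Q, 0.3), start=1):
    print(f"iteration {k}: |mu - alpha| = {abs(mu_k):.3e}")
```

Running this from μ0 = 0.3 produces a rapidly decreasing error sequence, illustrating the local convergence studied below.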
In the present study, we adopt only conditions on the first-order derivative of Γ together with generalized Lipschitz conditions. In addition, we avoid Taylor series expansions, which require higher-order derivatives of Γ, and work with Lipschitz parameters instead. In this way, we are not forced to use higher-order derivatives to establish the convergence order of (2). Further, we adopt the following computational order of convergence (COC) for estimating the convergence order:
$$\xi = \frac{\ln\left(\dfrac{\|\mu_{\tau+2} - \alpha\|}{\|\mu_{\tau+1} - \alpha\|}\right)}{\ln\left(\dfrac{\|\mu_{\tau+1} - \alpha\|}{\|\mu_{\tau} - \alpha\|}\right)}, \quad \text{for each } \tau = 0, 1, 2, \ldots, \qquad (3)$$
or the approximate computational order of convergence ( A C O C ) [21], defined as
$$\xi^{*} = \frac{\ln\left(\dfrac{\|\mu_{\tau+2} - \mu_{\tau+1}\|}{\|\mu_{\tau+1} - \mu_{\tau}\|}\right)}{\ln\left(\dfrac{\|\mu_{\tau+1} - \mu_{\tau}\|}{\|\mu_{\tau} - \mu_{\tau-1}\|}\right)}, \quad \text{for each } \tau = 1, 2, \ldots, \qquad (4)$$
Neither the computational order of convergence (COC) (3) nor the approximate computational order of convergence (ACOC) (4) [21] requires derivatives of order higher than one. It is vital to note that the ACOC does not even need prior knowledge of the exact root α. Finally, we investigate the applicability of our results on several numerical examples where the earlier works do not apply.
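The following Python sketch evaluates (3) and (4) for a given sequence of iterates; the sample iterates below are an illustrative, roughly quadratically convergent sequence, not the output of a specific solver.

```python
import math

def coc(mus, alpha):
    """Computational order of convergence (3); needs the exact root alpha."""
    e = [abs(m - alpha) for m in mus]
    return [math.log(e[k + 2] / e[k + 1]) / math.log(e[k + 1] / e[k])
            for k in range(len(e) - 2)]

def acoc(mus):
    """Approximate computational order of convergence (4); no root needed."""
    d = [abs(mus[k + 1] - mus[k]) for k in range(len(mus) - 1)]
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Illustrative iterates converging roughly quadratically to alpha = 0.
mus = [0.3, 4.5e-2, 1.0e-3, 5.0e-7, 1.3e-13]
print(coc(mus, 0.0))   # estimates approaching 2
print(acoc(mus))       # similar estimates without knowing alpha
```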
The remainder of this paper is organized as follows: the local convergence study of solver (2) is given in Section 2, numerical experimentation is presented in Section 3, and concluding remarks are made in Section 4.

2. Study of Local Convergence

In this section, we present the local convergence study of solver (2). To this end, we introduce scalar functions Δ0, Δ, w0, w, w1 that are non-decreasing continuous functions from [0, +∞) to [0, +∞) such that w0(0) = w(0) = 0. We assume that the equation
$$p(\zeta) = 1$$
has a minimal positive solution r0, where
$$p(\zeta) = w_0(\zeta) + w_1(\zeta)\,\zeta \int_0^1 \Delta_0(\theta\zeta)\, d\theta.$$
In addition, we define functions g1, h1, q, and h_q on [0, r0) as follows:
$$g_1(\zeta) = \frac{\int_0^1 w\big((1-\theta)\zeta\big)\, d\theta}{1 - w_0(\zeta)} + \frac{\int_0^1 \Delta(\theta\zeta)\, d\theta \int_0^1 \Delta_0(\theta\zeta)\, d\theta\; w_1(\zeta)\,\zeta}{\big(1 - w_0(\zeta)\big)\big(1 - p(\zeta)\big)},$$
$$h_1(\zeta) = g_1(\zeta) - 1,$$
$$q(\zeta) = \frac{1}{2}\left[ w_0(\zeta) + 4\int_0^1 \Delta_0(\theta\zeta)\, d\theta\; w_1(\zeta)\,\zeta + \int_0^1 \Delta\Big(\tfrac{\theta}{2}\big(1 + g_1(\zeta)\big)\zeta\Big)\, d\theta\, \big(1 + g_1(\zeta)\big)\zeta + w_0\big(g_1(\zeta)\zeta\big) \right],$$
$$h_q(\zeta) = q(\zeta) - 1.$$
We have h1(0) = h_q(0) = −1 < 0 and h1(ζ) → +∞, h_q(ζ) → +∞ as ζ → r0⁻. By the intermediate value theorem, both functions h1 and h_q have zeros in (0, r0). Call r1 and r_q the smallest such zeros in (0, r0) of the functions h1 and h_q, respectively. Further, we define functions g2 and h2 on [0, r_q) as follows:
$$g_2(\zeta) = \left[ 1 + \frac{2\int_0^1 \Delta\big(\theta\, g_1(\zeta)\zeta\big)\, d\theta}{1 - q(\zeta)} \right] g_1(\zeta), \qquad h_2(\zeta) = g_2(\zeta) - 1.$$
Again, h2(0) = −1 < 0 and h2(ζ) → +∞ as ζ → r_q⁻. Let r2 be the minimal zero of h2 in (0, r_q). Finally, we define the convergence radius r by
$$r = \min\{r_1, r_2\}.$$
Then, for each ζ ∈ [0, r),
$$0 \le p(\zeta) < 1,$$
$$0 \le w_0(\zeta) < 1,$$
$$0 \le g_1(\zeta) < 1,$$
$$0 \le q(\zeta) < 1,$$
and
$$0 \le g_2(\zeta) < 1.$$
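Once w0, w, Δ0, Δ, and w1 are specified, the radii r0, r1, r_q, r2, and r can be computed numerically. The sketch below follows the definitions above; the function name, the quadrature routine, and the root-bracketing tolerances are our own implementation choices, not part of the paper. The radii reported for Examples 1–4 in Section 3 should be reproducible with it up to rounding.

```python
# Sketch: compute r0, r1, r_q, r2 and r = min(r1, r2) for given w0, w, D0, D, w1.
# Assumes each bracket contains a single sign change and that p(upper) > 1.
from scipy.integrate import quad
from scipy.optimize import brentq

def convergence_radii(w0, w, D0, D, w1, upper=1.0):
    I = lambda f, z: quad(lambda t: f(t * z), 0.0, 1.0)[0]           # integral of f(theta*z) over [0,1]

    p  = lambda z: w0(z) + w1(z) * z * I(D0, z)
    r0 = brentq(lambda z: p(z) - 1.0, 1e-12, upper)                  # minimal positive solution of p = 1

    g1 = lambda z: (quad(lambda t: w((1 - t) * z), 0, 1)[0] / (1 - w0(z))
                    + I(D, z) * I(D0, z) * w1(z) * z / ((1 - w0(z)) * (1 - p(z))))
    q  = lambda z: 0.5 * (w0(z) + 4 * I(D0, z) * w1(z) * z
                          + quad(lambda t: D(0.5 * t * (1 + g1(z)) * z), 0, 1)[0] * (1 + g1(z)) * z
                          + w0(g1(z) * z))
    g2 = lambda z: (1 + 2 * I(D, g1(z) * z) / (1 - q(z))) * g1(z)

    r1 = brentq(lambda z: g1(z) - 1.0, 1e-12, r0 * (1 - 1e-10))      # smallest zero of h1 in (0, r0)
    rq = brentq(lambda z: q(z)  - 1.0, 1e-12, r0 * (1 - 1e-10))      # smallest zero of h_q in (0, r0)
    r2 = brentq(lambda z: g2(z) - 1.0, 1e-12, rq * (1 - 1e-10))      # smallest zero of h2 in (0, r_q)
    return r0, r1, rq, r2, min(r1, r2)
```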
Let U(λ, ρ) and Ū(λ, ρ) denote, respectively, the open and closed balls in T1 with center λ ∈ T1 and radius ρ > 0.
The local convergence analysis of solver (2) is based on conditions ( A ) :
(A1)
Γ : K ⊆ T1 → T2 is a Fréchet-differentiable operator.
(A2)
Δ0, Δ, w0, w, w1 : [0, ∞) → [0, ∞) with w0(0) = w(0) = 0 are non-decreasing continuous functions.
(A3)
There exists a zero α ∈ K of Γ such that for every μ ∈ K
$$\Gamma(\alpha) = 0, \qquad \Gamma'(\alpha)^{-1} \in L(T_2, T_1)$$
and
$$\|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu) - \Gamma'(\alpha)\big)\| \le w_0(\|\mu - \alpha\|).$$
Set K0 := K ∩ U(α, r0).
(A4)
$$\|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu) - \Gamma'(\nu)\big)\| \le w(\|\mu - \nu\|),$$
$$\|\Gamma'(\mu)\| \le \Delta_0(\|\mu - \alpha\|),$$
$$\|\Gamma'(\alpha)^{-1}\Gamma'(\mu)\| \le \Delta(\|\mu - \alpha\|),$$
$$\|\Gamma'(\alpha)^{-1}Q(\mu, \nu)\| \le w_1(\|\mu - \alpha\|), \quad \text{for every } \mu, \nu \in K_0,$$
and
$$\bar{U}(\alpha, r) \subseteq K,$$
where Q : K × K → L(T1, T2).
Then, we present the main local convergence result.
Theorem 1.
Under the conditions (A), the sequence {μ_τ} generated for μ0 ∈ U(α, r) \ {α} by solver (2) is well defined, remains in U(α, r) for all τ = 0, 1, 2, …, and converges to α, so that
$$\|\nu_\tau - \alpha\| \le g_1(\|\mu_\tau - \alpha\|)\,\|\mu_\tau - \alpha\| \le \|\mu_\tau - \alpha\| < r$$
and
$$\|\mu_{\tau+1} - \alpha\| \le g_2(\|\mu_\tau - \alpha\|)\,\|\mu_\tau - \alpha\| \le \|\mu_\tau - \alpha\|.$$
Furthermore, if
$$\int_0^1 w_0(\theta R)\, d\theta < 1, \quad \text{for } R \ge r,$$
then α is the unique root of Γ(μ) = 0 in K1 := K ∩ Ū(α, R).
Proof. 
We use mathematical induction to show that estimates (19)–(21) hold, that the iterates remain in U(α, r), and that they converge to the required zero α. Using the hypothesis μ0 ∈ U(α, r) \ {α}, (5)–(7) and (13), we obtain
$$\|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu_0) - \Gamma'(\alpha)\big)\| \le w_0(\|\mu_0 - \alpha\|) \le w_0(r) < 1.$$
It follows from expression (22) and the Banach lemma on invertible operators [1,2] that Γ'(μ0)^{-1} ∈ L(T2, T1), so that ν0 and μ1 are well defined, and
$$\|\Gamma'(\mu_0)^{-1}\Gamma'(\alpha)\| \le \frac{1}{1 - w_0(\|\mu_0 - \alpha\|)}.$$
To show that ν0 exists, it suffices to show that A0^{-1} ∈ L(T2, T1). Using (5), (6), (8), (13) and (15), we get in turn that
$$\|\Gamma'(\alpha)^{-1}\big(A_0 - \Gamma'(\alpha)\big)\| \le \|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu_0) - \Gamma'(\alpha)\big)\| + \|\Gamma'(\alpha)^{-1}Q(\mu_0)\|\,\|\Gamma(\mu_0)\|$$
$$\le w_0(\|\mu_0 - \alpha\|) + \int_0^1 \Delta_0\big(\theta\|\mu_0 - \alpha\|\big)\, d\theta\; w_1(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\| = p(\|\mu_0 - \alpha\|) < p(r) < 1,$$
so A0^{-1} ∈ L(T2, T1) is well defined and
$$\|A_0^{-1}\Gamma'(\alpha)\| \le \frac{1}{1 - p(\|\mu_0 - \alpha\|)}.$$
By the definition of A 0 and the first substep of (2), we can write
$$\nu_0 - \alpha = \mu_0 - \alpha - \Gamma'(\mu_0)^{-1}\Gamma(\mu_0) + \big(\Gamma'(\mu_0)^{-1} - A_0^{-1}\big)\Gamma(\mu_0)$$
$$= \Gamma'(\mu_0)^{-1}\Gamma'(\alpha)\int_0^1 \Gamma'(\alpha)^{-1}\Big[\Gamma'(\mu_0) - \Gamma'\big(\alpha + \theta(\mu_0 - \alpha)\big)\Big](\mu_0 - \alpha)\, d\theta$$
$$\quad + \Gamma'(\mu_0)^{-1}\Gamma'(\alpha)\,\Gamma'(\alpha)^{-1}\big(A_0 - \Gamma'(\mu_0)\big)\,A_0^{-1}\Gamma'(\alpha)\,\Gamma'(\alpha)^{-1}\Gamma(\mu_0).$$
We also have by (16)
$$\Gamma(\mu_0) = \Gamma(\mu_0) - \Gamma(\alpha) = \int_0^1 \Gamma'\big(\alpha + \theta(\mu_0 - \alpha)\big)\, d\theta\, (\mu_0 - \alpha),$$
so
$$\|\Gamma'(\alpha)^{-1}\Gamma(\mu_0)\| = \left\| \int_0^1 \Gamma'(\alpha)^{-1}\Gamma'\big(\alpha + \theta(\mu_0 - \alpha)\big)\, d\theta\, (\mu_0 - \alpha) \right\| \le \int_0^1 \Delta\big(\theta\|\mu_0 - \alpha\|\big)\, d\theta\, \|\mu_0 - \alpha\|.$$
In view of (2), (5), (6), (9), (14), (15), (22) and (24), we obtain
$$\|\nu_0 - \alpha\| \le \|\Gamma'(\mu_0)^{-1}\Gamma'(\alpha)\| \left\| \int_0^1 \Gamma'(\alpha)^{-1}\Big[\Gamma'\big(\alpha + \theta(\mu_0 - \alpha)\big) - \Gamma'(\mu_0)\Big](\mu_0 - \alpha)\, d\theta \right\|$$
$$\quad + \|\Gamma'(\mu_0)^{-1}\Gamma'(\alpha)\|\,\|\Gamma'(\alpha)^{-1}\big(A_0 - \Gamma'(\mu_0)\big)\|\,\|A_0^{-1}\Gamma'(\alpha)\|\,\|\Gamma'(\alpha)^{-1}\Gamma(\mu_0)\|$$
$$\le \frac{\int_0^1 w\big((1-\theta)\|\mu_0 - \alpha\|\big)\, d\theta\, \|\mu_0 - \alpha\|}{1 - w_0(\|\mu_0 - \alpha\|)} + \frac{\int_0^1 \Delta_0\big(\theta\|\mu_0 - \alpha\|\big)\, d\theta \int_0^1 \Delta\big(\theta\|\mu_0 - \alpha\|\big)\, d\theta\; w_1(\|\mu_0 - \alpha\|)\, \|\mu_0 - \alpha\|^2}{\big(1 - w_0(\|\mu_0 - \alpha\|)\big)\big(1 - p(\|\mu_0 - \alpha\|)\big)}$$
$$= g_1(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\| \le \|\mu_0 - \alpha\| < r,$$
which shows (19) for τ = 0 and ν0 ∈ U(α, r).
Next, we have to prove that B0^{-1} ∈ L(T2, T1). By (5), (6), (10), (13), (15), (16) and (28), we get
$$\|(2\Gamma'(\alpha))^{-1}\big(B_0 - 2\Gamma'(\alpha)\big)\| \le \frac{1}{2}\Big[ \|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu_0) - \Gamma'(\alpha)\big)\| + 4\|\Gamma'(\alpha)^{-1}Q(\mu_0)\Gamma(\mu_0)\|$$
$$\qquad + 2\big\|\Gamma'(\alpha)^{-1}\Gamma\big(\tfrac{\mu_0 + \nu_0}{2}\big)\big\| + \|\Gamma'(\alpha)^{-1}\big(\Gamma'(\nu_0) - \Gamma'(\alpha)\big)\| \Big]$$
$$\le \frac{1}{2}\Big[ w_0(\|\mu_0 - \alpha\|) + 4\int_0^1 \Delta_0\big(\theta\|\mu_0 - \alpha\|\big)\, d\theta\; w_1(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\|$$
$$\qquad + \int_0^1 \Delta\Big(\tfrac{\theta}{2}\big(\|\mu_0 - \alpha\| + \|\nu_0 - \alpha\|\big)\Big)\, d\theta\, \big(\|\mu_0 - \alpha\| + \|\nu_0 - \alpha\|\big) + w_0(\|\nu_0 - \alpha\|) \Big]$$
$$\le q(\|\mu_0 - \alpha\|) < q(r) < 1.$$
Hence, B0^{-1} ∈ L(T2, T1), so the second substep of solver (2) is well defined, and
$$\|B_0^{-1}\Gamma'(\alpha)\| \le \frac{1}{2\big(1 - q(\|\mu_0 - \alpha\|)\big)}.$$
Then, by the last substep of solver (2), (5), (6), (11), (15), (28) and (30), we have in turn that
$$\|\mu_1 - \alpha\| \le \|\nu_0 - \alpha\| + 4\,\|B_0^{-1}\Gamma'(\alpha)\|\,\|\Gamma'(\alpha)^{-1}\Gamma(\nu_0)\|$$
$$\le g_1(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\| + \frac{2\int_0^1 \Delta\big(\theta\|\nu_0 - \alpha\|\big)\, d\theta\; g_1(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\|}{1 - q(\|\mu_0 - \alpha\|)}$$
$$= g_2(\|\mu_0 - \alpha\|)\,\|\mu_0 - \alpha\| \le \|\mu_0 - \alpha\| < r,$$
which shows (20) for τ = 0 and μ1 ∈ U(α, r). By replacing μ0, ν0, μ1 with μ_σ, ν_σ, μ_{σ+1} in the preceding estimates, we obtain (19) and (20). Then, in view of the estimate
$$\|\mu_{\sigma+1} - \alpha\| \le c\,\|\mu_\sigma - \alpha\| < r, \qquad c = g_2(\|\mu_0 - \alpha\|) \in [0, 1),$$
we obtain lim_{σ→∞} μ_σ = α and μ_{σ+1} ∈ U(α, r). Finally, we show the uniqueness of the solution. Suppose ν* ∈ K1 satisfies Γ(ν*) = 0, and set T = ∫₀¹ Γ'(α + θ(α − ν*)) dθ.
By adopting (9) and (16), we yield
$$\|\Gamma'(\alpha)^{-1}\big(T - \Gamma'(\alpha)\big)\| \le \int_0^1 w_0\big(\theta\|\nu^* - \alpha\|\big)\, d\theta \le \int_0^1 w_0(\theta R)\, d\theta < 1.$$
So, T is invertible, and in view of
$$0 = \Gamma(\alpha) - \Gamma(\nu^*) = T(\alpha - \nu^*),$$
we conclude that α = ν*. ☐
Remark 1.
(a) 
It is straightforward from the expression (14) that we can drop the hypothesis (16) and replace Δ by
$$\Delta(\zeta) = 1 + w_0(\zeta) \quad \text{or} \quad \Delta(\zeta) = 1 + w_0(r_0),$$
since,
$$\|\Gamma'(\alpha)^{-1}\Gamma'(\mu)\| \le \|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu) - \Gamma'(\alpha)\big)\| + \|\Gamma'(\alpha)^{-1}\Gamma'(\alpha)\| \le 1 + w_0(\|\mu - \alpha\|) \le 1 + w_0(r_0) \quad \text{for } \|\mu - \alpha\| \le r_0.$$
(b) 
We can choose
$$r_0 = w_0^{-1}(1),$$
instead of (5) provided the function w 0 is strictly increasing.
(c) 
If w0, w, Δ are constant functions, then we have
$$r_1 = \frac{2}{2w_0 + w}$$
and
$$r \le r_1.$$
Here r1 is the convergence radius of Newton's solver
$$\mu_{\tau+1} = \mu_\tau - \Gamma'(\mu_\tau)^{-1}\Gamma(\mu_\tau).$$
Rheinboldt [22] and Traub [5] suggested instead the convergence radius
$$r_{TR} = \frac{2}{3w_1},$$
and Argyros [1,2] the radius
$$r_A = \frac{2}{2w_0 + w_1},$$
where w1 is the Lipschitz parameter for (10) on K. Hence, we have
$$w \le w_1, \qquad w_0 \le w_1,$$
so
$$r_{TR} \le r_A \le r_1$$
and
$$\frac{r_{TR}}{r_A} \to \frac{1}{3} \quad \text{as} \quad \frac{w_0}{w_1} \to 0.$$
The convergence radius q̄ suggested by Dennis and Schnabel [1] is smaller than the radius r_DS:
$$\bar{q} < r_{DS} = \frac{1}{2w_1} < r_{TR}.$$
However, q̄ cannot be computed from the Lipschitz conditions alone. (A short numerical illustration of these radius comparisons is given after this remark.)
(d) 
By adopting conditions on the ninth-order derivative of the operator Γ, Shah et al. [20] established the convergence order of solver (2). In contrast, we assume hypotheses only on the first-order derivative of Γ. To estimate the computational order of convergence (COC), we adopt expressions (3) and (4).
(e) 
Assume that Γ satisfies the autonomous differential equation [1,2]
$$\Gamma'(\mu) = P\big(\Gamma(\mu)\big),$$
where P is a given continuous operator. Then, since Γ'(α) = P(Γ(α)) = P(0), our results apply without actual knowledge of α. For example, take Γ(μ) = e^μ − 1; then we can select P(μ) = μ + 1.
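As a quick numerical illustration of the radius comparisons in Remark 1(c), the following sketch uses arbitrary illustrative constants satisfying w0 ≤ w ≤ w1 (they are not taken from the paper):

```python
# Illustrative constants with w0 <= w <= w1, chosen arbitrarily.
w0, w, w1 = 0.7, 0.9, 1.0
r1   = 2.0 / (2.0 * w0 + w)    # radius r1 from Remark 1(c), constant case
r_A  = 2.0 / (2.0 * w0 + w1)   # Argyros radius
r_TR = 2.0 / (3.0 * w1)        # Rheinboldt/Traub radius
print(r_TR <= r_A <= r1, r_TR, r_A, r1)   # True 0.666... 0.833... 0.869...
```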

3. Numerical Experimentation

Here, we illustrate the theoretical results obtained in Section 2. We choose Q = I in the following examples.
Example 1.
Let T1 = T2 = H with H = C[0, 1]. We study the mixed Hammerstein-type equation [6,23], defined by
$$\mu(s) = \int_0^1 G(s, \zeta)\left(\mu(\zeta)^{3/2} + \frac{\mu(\zeta)^2}{2}\right) d\zeta,$$
where
$$G(s, \zeta) = \begin{cases} (1 - s)\zeta, & \zeta \le s, \\ s(1 - \zeta), & s \le \zeta, \end{cases}$$
defined on [0, 1] × [0, 1]. The solution α(s) = 0 is a zero of (1), where Γ : H → H is given by
$$\Gamma(\mu)(s) = \mu(s) - \int_0^1 G(s, \zeta)\left(\mu(\zeta)^{3/2} + \frac{\mu(\zeta)^2}{2}\right) d\zeta.$$
Note that
$$\left\| \int_0^1 G(s, \zeta)\, d\zeta \right\| \le \frac{1}{8}.$$
Then, we have that
$$\Gamma'(\mu)\nu(s) = \nu(s) - \int_0^1 G(s, \zeta)\left(\frac{3}{2}\mu(\zeta)^{1/2} + \mu(\zeta)\right)\nu(\zeta)\, d\zeta,$$
and since Γ'(α(s)) = I,
$$\|\Gamma'(\alpha)^{-1}\big(\Gamma'(\mu) - \Gamma'(\nu)\big)\| \le \frac{1}{8}\left(\frac{3}{2}\|\mu - \nu\|^{1/2} + \|\mu - \nu\|\right).$$
Therefore, we can choose
$$w_0(\zeta) = w(\zeta) = \frac{1}{8}\left(\frac{3}{2}\zeta^{1/2} + \zeta\right).$$
Hence, by Remark 1(a), we can set
$$\Delta_0(\zeta) = \Delta(\zeta) = 1 + w_0(\zeta) \quad \text{and} \quad w_1(\zeta) = 1.$$
However, the theorems in [20] cannot be utilized to solve this problem because Γ' is not Lipschitz, whereas our theorems can. We obtain the following radii for Example 1:
$$r_1 = 0.321768, \qquad r_q = 0.284919, \qquad r_2 = 0.119079,$$
so
$$r = 0.119079.$$
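With the convergence_radii sketch from Section 2, these choices should reproduce the radii above up to rounding and quadrature error:

```python
# Example 1 inputs for the convergence_radii sketch of Section 2.
w0 = w = lambda z: (1.5 * z**0.5 + z) / 8.0
D0 = D  = lambda z: 1.0 + w0(z)
w1      = lambda z: 1.0
print(convergence_radii(w0, w, D0, D, w1))   # r1 ~ 0.3218, r_q ~ 0.2849, r2 ~ 0.1191
```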
Example 2.
Consider T1 = T2 = R³ and K = Ū(0, 1). Then, for w = (μ, ν, λ)^T, define a function Γ : K → R³ as follows:
$$\Gamma(w) = \left(e^{\mu} - 1,\; \frac{e - 1}{2}\nu^2 + \nu,\; \lambda\right)^T.$$
Then, we obtain
$$\Gamma'(w) = \begin{pmatrix} e^{\mu} & 0 & 0 \\ 0 & (e - 1)\nu + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Hence, for α = (0, 0, 0)^T we can choose w0(ζ) = (e − 1)ζ, w(ζ) = e^{1/(e−1)} ζ, Δ0(ζ) = Δ(ζ) = e^{1/(e−1)}, and w1(ζ) = 1. Adopting these functions and parameters, we obtain the following radii for Example 2:
$$r_1 = 0.121854, \qquad r_q = 0.134127, \qquad r_2 = 0.0370321,$$
so
$$r = 0.0370321.$$
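Similarly, reusing the Section 2 sketch for Example 2 (math.e denotes Euler's number):

```python
import math
c = math.exp(1.0 / (math.e - 1.0))                     # e^{1/(e-1)} ~ 1.7896
print(convergence_radii(lambda z: (math.e - 1) * z,    # w0
                        lambda z: c * z,               # w
                        lambda z: c, lambda z: c,      # Delta0, Delta
                        lambda z: 1.0))                # w1
```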
Example 3.
Let us choose T1 = T2 = H = C[0, 1], equipped with the max norm, and K = Ū(0, 1). Define the function Γ on K by
$$\Gamma(\varphi)(\mu) = \varphi(\mu) - 5\int_0^1 \mu\,\theta\, \varphi(\theta)^3\, d\theta,$$
which yields
$$\Gamma'(\varphi)(\xi)(\mu) = \xi(\mu) - 15\int_0^1 \mu\,\theta\, \varphi(\theta)^2\, \xi(\theta)\, d\theta, \quad \text{for each } \xi \in K.$$
Then, we have w0(ζ) = 7.5ζ, w(ζ) = 15ζ, Δ0(ζ) = Δ(ζ) = 2, and w1(ζ) = 1. We obtain the following radii for Example 3:
$$r_1 = 0.0453881, \qquad r_q = 0.0587713, \qquad r_2 = 0.0133343,$$
so
$$r = 0.0133343.$$
Example 4.
Consider the academic problem from the introduction. We can choose w0(ζ) = w(ζ) = 96.662907ζ, Δ0(ζ) = 6, Δ(ζ) = 2, and w1(ζ) = 1/3. Adopting these functions and parameters, we obtain the following radii for Example 4:
$$r_1 = 0.00641476, \qquad r_q = 0.00741294, \qquad r_2 = 0.00247724,$$
so
$$r = 0.00247724.$$

4. Concluding Assertions

We first generalized solver (2) from functions on the real line to Banach space valued operators. Then, we presented a local convergence analysis in this setting using generalized-continuity conditions. Our analysis uses only the first derivative, which is the only derivative appearing in the solver. In the special case of the real line, earlier analyses used derivatives up to order seven. Notice that these high-order derivatives do not appear in solver (2), and requiring them limits the applicability of the solver, as we saw in the introduction. Hence, the applicability of solver (2) has been significantly extended. Numerical examples and applications complete the paper.

Author Contributions

Both authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-274-130-1440). The authors, therefore, acknowledge with thanks DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
  3. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear system. J. Comput. Appl. Math. 2012, 252, 86–94. [Google Scholar] [CrossRef]
  4. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for system of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  6. Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algorithms 2015, 70, 377–392. [Google Scholar] [CrossRef] [Green Version]
  7. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  8. Henrici, P. Applied and Computational Complex Analysis; Wiley and Sons: New York, NY, USA, 1974; Volume 1. [Google Scholar]
  9. Aberth, O. Iteration Methods for Finding all Zeros of a Polynomial Simultaneously. Math. Comput. 1973, 27, 339–344. [Google Scholar] [CrossRef]
  10. Lázaro, M.; Martín, P.; Agüero, A.; Ferrer, I. The Polynomial Pivots as Initial Values for a New Root-Finding Iterative Method. J. Appl. Math. 2015, 2015, 413816. [Google Scholar] [CrossRef] [Green Version]
  11. Pan, V.Y. Optimal and nearly optimal algorithms for approximating polynomial zeros. Comput. Math. Appl. 1996, 3, 97–138. [Google Scholar] [CrossRef] [Green Version]
  12. Pan, V.Y. Solving a polynomial equation: Some history and recent progress. SIAM Rev. 1997, 39, 187–220. [Google Scholar] [CrossRef]
  13. Pan, V.Y. Univariate polynomials: Nearly optimal algorithms for numerical factorization and root-finding. J. Symb. Comput. 2002, 33, 701–733. [Google Scholar] [CrossRef] [Green Version]
  14. Pan, V.Y.; Zheng, A.L. New progress in real and complex polynomial root-finding. Comput. Math. Appl. 2011, 61, 1305–1334. [Google Scholar] [CrossRef] [Green Version]
  15. Kyurkchiev, N.V. Initial Approximations and Root Finding Methods; Wiley: New York, NY, USA, 1998. [Google Scholar]
  16. Henrici, P.; Watkins, B.O. Finding zeros of a polynomial by the Q-D algorithm. Commun. ACM 1965, 8, 570–574. [Google Scholar] [CrossRef]
  17. Hubbard, J.; Schleicher, D.; Sutherland, S. How to find all roots of complex polynomials by Newton’s method. Invent. Math. 2001, 146, 1–33. [Google Scholar] [CrossRef]
  18. Petković, M.; Ilić, S.; Tričković, S. A family of simultaneous zero-finding methods. Comput. Math. Appl. 1997, 34, 49–59. [Google Scholar] [CrossRef] [Green Version]
  19. Petković, M.S.; Petković, L.D.; Herceg, D.D. Point estimation of a family of simultaneous zero-finding methods. Comput. Math. Appl. 1998, 36, 1–12. [Google Scholar] [CrossRef] [Green Version]
  20. Shah, F.A.; Noor, M.A.; Shafiq, M.A. Some generalized recurrence relations and iterative methods for nonlinear equations by using decomposition techniques. Appl. Math. Comput. 2015, 251, 378–386. [Google Scholar]
  21. Kou, J. A third-order modification of Newton method for systems of nonlinear equations. Appl. Math. Comput. 2007, 191, 117–121. [Google Scholar]
  22. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Ctr. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
  23. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
  24. Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef] [Green Version]
  25. Petkovic, M.S.; Neta, B.; Petkovic, L.; Džunič, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: New York, NY, USA, 2013. [Google Scholar]
  26. Amat, S.; Busquier, S.; Plaza, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef] [Green Version]
  27. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequationes Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
  28. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
  29. Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative methods for nonlinear equations. SeMA 2015, 71, 39–55. [Google Scholar]
  30. Argyros, I.K.; George, S. Local convergence of some higher-order Newton-like method with frozen derivative. SeMa 2015, 70, 47–59. [Google Scholar] [CrossRef]
  31. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  32. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  33. Ezquerro, J.A.; Hernández, M.A. A uniparametric halley type iteration with free second derivative. Int. J. Pure Appl. Math. 2003, 6, 99–110. [Google Scholar]
  34. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef] [Green Version]
