Article

Unpredictable Oscillations for Hopfield-Type Neural Networks with Delayed and Advanced Arguments

by Marat Akhmet 1, Duygu Aruğaslan Çinçin 2, Madina Tleubergenova 3,4,* and Zakhira Nugayeva 3,4

1 Department of Mathematics, Middle East Technical University, Ankara 06800, Turkey
2 Department of Mathematics, Süleyman Demirel University, Isparta 32260, Turkey
3 Department of Mathematics, K. Zhubanov Aktobe Regional University, Aktobe 030000, Kazakhstan
4 Institute of Information and Computational Technologies CS MES RK, Almaty 050000, Kazakhstan
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(5), 571; https://doi.org/10.3390/math9050571
Submission received: 9 February 2021 / Revised: 2 March 2021 / Accepted: 4 March 2021 / Published: 7 March 2021

Abstract

This is the first time that the method for the investigation of unpredictable solutions of differential equations has been extended to unpredictable oscillations of neural networks with a generalized piecewise constant argument, which is both delayed and advanced. The existence and exponential stability of the unique unpredictable oscillation are proven. According to the theory, the presence of unpredictable oscillations is strong evidence for Poincaré chaos. Consequently, the paper is a contribution to chaos applications in neuroscience. The model is inspired by chaotic time-varying stimuli, which allow studying the distribution of chaotic signals in neural networks. Unpredictable inputs create an excitation wave of neurons that transmit chaotic signals. The technique of analysis includes the ideas used for differential equations with a piecewise constant argument. The results are illustrated by examples and simulations, which are carried out in MATLAB Simulink to demonstrate the simplicity of the diagrammatic approaches.

1. Introduction

There are hybrid neural networks, which are neither purely continuous-time nor purely discrete-time; among them are dynamical systems with impulses and models with piecewise constant arguments [1,2,3,4,5,6,7,8,9,10]. In recent years, the dynamics of Hopfield-type neural networks have been studied and developed by many authors using impulsive differential equations [11,12,13,14,15] and differential equations with a piecewise constant argument [16,17,18,19]. In this paper, a new model of Hopfield-type neural networks with an unpredictable input-output, as well as a delayed and advanced generalized piecewise constant argument, is proposed. Hopfield-type neural networks are effective in adaptive pattern recognition, vision, and image processing [20,21,22]. Differential equations with a piecewise constant argument describing the Hopfield neural networks may "memorize" the values of the phase variable at certain moments of time and utilize them throughout the interval until the next such moment [5,6,7,8,9,10,16,17,18,19,23,24,25,26,27,28]. Neural networks comprised of chaotically oscillating elements store and transmit information in almost the same way as nerve cells in the brain. It is known that unpredictable oscillations cause chaotic behavior [29,30,31,32,33,34,35,36,37,38,39,40,41]. Therefore, their presence is necessary for the study of chaotic dynamics in neural networks.
The novelty of our results should be considered with respect to oscillations, chaos, and modeling for neural networks. Periodic and almost periodic oscillations were discussed intensively in [16,17,18,19,23,24,25,26,27,42,43,44,45]. The most recently developed type is the unpredictable oscillation, which was introduced and studied in [31,32,33,34,35,36,37,38,39]. This is the first time unpredictable oscillations have been considered for neural networks with a generalized-type piecewise constant argument. The argument can be both delayed and advanced, and consequently, it provides rich opportunities for the investigation and application of neural networks.
It is known that oscillations and periodic motions are frequently observed in the activities of the neurons in the brain. Recent developments in the field of neural networks have led to an increased interest in the complexity of the dynamics. Oscillations and chaos in neural networks are an active research topic and have stimulated the interest of many scientists [46,47,48,49,50]. They occur in a neural network system due to the properties of single neurons [46,50,51] and synaptic connections among neurons [52,53]. The neural networks in the present research display unpredictable oscillations and chaos. The unpredictable function was introduced in [29] and is based on the dynamics of unpredictable points and Poincaré chaos [30]. More precisely, the function is an unpredictable point of the Bebutov dynamics, and consequently, it is a member of the chaotic set [31]. The notion of the unpredictable point extends the frontiers of the classical theory of dynamical systems, and the unpredictable function poses new problems on the existence of unpredictable oscillations for the theory of differential equations [29,30,31,32,33,34,35,36]. These studies have been identified as major contributing factors for the emergence of new types of sophisticated motion. Significant results have been obtained for unpredictable oscillations of Hopfield-type neural networks, shunting inhibitory cellular neural networks, and inertial neural networks [37,38,39].
To the best of our knowledge, there have been very few results on the dynamical behavior of Hopfield-type neural networks with piecewise constant arguments [16,17,18,19,26,27]. In the present paper, we expand them by considering piecewise constant arguments of the generalized type [5,6,7,8,9,10,16,17,18,19,23,24,25,26,27,28,43,44] and by using the theory of unpredictable functions. We improve on previous methods by considering unpredictable inputs, which allow studying the distribution of chaotic signals in neural networks.

2. Preliminaries

Denote by $\mathbb{R}$, $\mathbb{N}$, and $\mathbb{Z}$ the sets of all real numbers, natural numbers, and integers, respectively. Introduce a norm for a vector $u = (u_1, \ldots, u_m)$, $u_i \in \mathbb{R}$, $i = 1, \ldots, m$, as $\|u\| = \max_{1 \le i \le m} |u_i|$, where $|\cdot|$ is the absolute value. Correspondingly, for a square matrix $A = (a_{ij})_{m \times m}$, the norm $\|A\| = \max_{1 \le i \le m} \sum_{j=1}^{m} |a_{ij}|$ is utilized.
We fix two real-valued sequences $\theta_i$, $\xi_i$, $i \in \mathbb{Z}$, such that $\theta_i < \theta_{i+1}$ and $\theta_i \le \xi_i \le \theta_{i+1}$ for all $i \in \mathbb{Z}$, with $|\theta_i| \to \infty$ as $|i| \to \infty$. It is assumed that there exists a positive number $\theta$ such that $\theta_{k+1} - \theta_k \le \theta$ for all integers $k$.
The main subject under investigation in this paper is the following Hopfield-type neural network system with a piecewise constant argument:
$$x_i'(t) = -a_i x_i(t) + \sum_{j=1}^{m} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{m} c_{ij} g_j(x_j(\gamma(t))) + \vartheta_i(t), \qquad (1)$$
where $t, x_i \in \mathbb{R}$, $i = 1, 2, \ldots, m$, and $\gamma(t) = \xi_k$ if $\theta_k \le t < \theta_{k+1}$, $k \in \mathbb{Z}$:
  • $a_i > 0$ is the rate with which the unit $i$ self-regulates or resets its potential when isolated from other units and inputs;
  • $m$ is the number of neurons in the network;
  • $x_i(t)$ is the state of the $i$th unit at time $t$;
  • $f_j$, $g_j$ are the activation functions of the incoming potentials of the unit $j$;
  • $b_{ij}$, $c_{ij}$ are the synaptic connection weights of the unit $j$ on the unit $i$;
  • $\vartheta_i(t)$ is the time-varying stimulus, corresponding to the external input from outside the network to the unit $i$.
Throughout this paper, we assume that the parameters $b_{ij}$ and $c_{ij}$ are real and that the activation functions $f_j, g_j : \mathbb{R} \to \mathbb{R}$, $j = 1, 2, \ldots, m$, are continuous. Moreover, suppose that there exist positive constants $\lambda$ and $\bar{\lambda}$ such that the inequality $\lambda \le a_i \le \bar{\lambda}$ holds for each $i = 1, 2, \ldots, m$.
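Although the simulations of Section 4 are built in Simulink, the argument function itself is easy to realize directly in MATLAB code. The following sketch is our own illustration rather than part of the paper; the function name and the representation of the sequences as finite vectors are assumptions made for the example:

```matlab
function g = gamma_pca(t, theta, xi)
% gamma_pca  Piecewise constant argument of generalized type:
%            returns xi(k) for the unique k with theta(k) <= t < theta(k+1).
% theta : strictly increasing vector of switching moments theta_k
% xi    : vector satisfying theta(k) <= xi(k) <= theta(k+1)
k = find(theta <= t, 1, 'last');   % index of the interval containing t
if isempty(k) || k >= numel(theta)
    error('t lies outside the range covered by theta');
end
g = xi(k);
end
```

For instance, with theta = 0:10 and xi = theta(1:end-1) + 0.5, the call gamma_pca(3.2, theta, xi) returns 3.5, an advanced value of the argument, while gamma_pca(3.7, theta, xi) returns the same 3.5 as a delayed value; this is exactly the alternately advanced and delayed character of $\gamma(t)$.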
We present system (1) in the following vector form:
$$x'(t) = -A x(t) + B f(x(t)) + C g(x(\gamma(t))) + \vartheta(t), \qquad (2)$$
where $x = \mathrm{colon}(x_1, x_2, \ldots, x_m)$ is the neuron state vector, $f(x) = \mathrm{colon}(f_1(x_1), f_2(x_2), \ldots, f_m(x_m))$ and $g(x) = \mathrm{colon}(g_1(x_1), g_2(x_2), \ldots, g_m(x_m))$ are the activations, and $\vartheta = \mathrm{colon}(\vartheta_1, \vartheta_2, \ldots, \vartheta_m)$ is the input vector. Moreover, $A = \mathrm{diag}(a_1, a_2, \ldots, a_m)$, $B = (b_{ij})_{m \times m}$, and $C = (c_{ij})_{m \times m}$ are matrices.
As the usual activations for continuous-time neural network dynamics, the following sigmoidal functions are considered [45]:
$$f(\sigma) = \tanh(\sigma) = \frac{e^{\sigma} - e^{-\sigma}}{e^{\sigma} + e^{-\sigma}}, \qquad f(\sigma) = \frac{2}{\pi} \arctan\left(\frac{2\sigma}{\pi}\right).$$
They are used in neural networks as activation functions since they amplify weak signals without becoming saturated by strong ones. The activation function and the output function are jointly referred to as transfer functions. While the activation function determines the total signal a neuron receives, the transfer function translates the input signals to the output signals.
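As a quick visual check, both sigmoids can be plotted in a few lines of MATLAB. This small sketch is ours, with the arctan-type function written exactly as reconstructed above:

```matlab
% The two sigmoidal transfer functions considered in the text.
sigma = linspace(-6, 6, 400);
plot(sigma, tanh(sigma), sigma, (2/pi)*atan(2*sigma/pi));
legend('tanh(\sigma)', '(2/\pi)arctan(2\sigma/\pi)'); xlabel('\sigma');
```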
The block diagram of the Hopfield-type neural network system with a piecewise constant argument is shown in Figure 1, and the symbols for the diagram are described in Table 1.
Definition 1
([29]). A uniformly continuous and bounded function $v : \mathbb{R} \to \mathbb{R}^m$ is unpredictable if there exist positive numbers $\epsilon_0$, $\delta$ and sequences $t_n$, $u_n$, both of which diverge to infinity, such that $v(t + t_n) \to v(t)$ as $n \to \infty$ uniformly on compact subsets of $\mathbb{R}$ and $\|v(t + t_n) - v(t)\| \ge \epsilon_0$ for each $t \in [u_n - \delta, u_n + \delta]$ and $n \in \mathbb{N}$.

3. Main Results

Let $\Sigma_0$ denote the space of $m$-dimensional vector-functions $\varphi : \mathbb{R} \to \mathbb{R}^m$, $\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_m)$, with the norm $\|\varphi\|_1 = \sup_{t \in \mathbb{R}} \|\varphi(t)\|$. The functions of this space are assumed to satisfy the following properties:
(A1) they are uniformly continuous;
(A2) there exists a number $H > 0$ such that $\|\varphi\|_1 < H$ for each function $\varphi$;
(A3) there exists a sequence $t_n$ that diverges to infinity such that $\varphi(t + t_n) \to \varphi(t)$ uniformly on each closed and bounded interval of the real axis for each function $\varphi$.
The following conditions on the system (2) are assumed:
(C1) $\|f(u) - f(v)\| \le L \|u - v\|$ and $\|g(u) - g(v)\| \le \bar{L} \|u - v\|$ for all $u, v \in \mathbb{R}^m$, where $L$, $\bar{L}$ are positive constants;
(C2) there exist positive numbers $m_f$, $m_g$ such that $\sup_{\|x\| < H} \|f(x)\| \le m_f$ and $\sup_{\|x\| < H} \|g(x)\| \le m_g$;
(C3) $\vartheta$ is a function from the space $\Sigma_0$, and there exists a positive number $m_\vartheta$ such that $\sup_{t \in \mathbb{R}} \|\vartheta(t)\| \le m_\vartheta$;
(C4) $\|B\| m_f + \|C\| m_g + m_\vartheta < H \lambda$;
(C5) $\|B\| L + \|C\| \bar{L} < \lambda$;
(C6) $-\lambda + \|B\| L + K \|C\| \bar{L} < 0$, where
$$K = \Big[ 1 - \theta \big[ (\bar{\lambda} + \|B\| L)(1 + \|C\| \bar{L}\,\theta)\, e^{(\bar{\lambda} + \|B\| L)\theta} + \|C\| \bar{L} \big] \Big]^{-1};$$
(C7) $\theta \big[ (\bar{\lambda} + \|B\| L)(1 + \|C\| \bar{L}\,\theta)\, e^{(\bar{\lambda} + \|B\| L)\theta} + \|C\| \bar{L} \big] < 1$;
(C8) there exists a sequence $\eta_n$, with $\eta_n \to \infty$ as $n \to \infty$, such that $\theta_{k-\eta_n} + t_n - \theta_k \to 0$ and $\xi_{k-\eta_n} + t_n - \xi_k \to 0$ as $n \to \infty$ on each finite interval of integers, where $t_n$ is the sequence given in Definition 1.
Lemma 1
([10]). A function $x(t) = (x_1(t), \ldots, x_m(t))$ is a bounded solution of Equation (1) if and only if it is a solution of the following integral equation:
$$x(t) = \int_{-\infty}^{t} e^{-A(t-s)} \big[ B f(x(s)) + C g(x(\gamma(s))) + \vartheta(s) \big] \, ds. \qquad (3)$$
Let us introduce the operator $\Pi$ on $\Sigma_0$ such that:
$$\Pi\varphi(t) = \int_{-\infty}^{t} e^{-A(t-s)} \big[ B f(\varphi(s)) + C g(\varphi(\gamma(s))) + \vartheta(s) \big] \, ds.$$
Lemma 2.
$\Pi \Sigma_0 \subseteq \Sigma_0$.
Proof. 
Let us evaluate the derivative of $\Pi\varphi(t)$ with respect to the time variable $t$. We have:
$$\frac{d\,\Pi\varphi(t)}{dt} = B f(\varphi(t)) + C g(\varphi(\gamma(t))) + \vartheta(t) - A \int_{-\infty}^{t} e^{-A(t-s)} \big[ B f(\varphi(s)) + C g(\varphi(\gamma(s))) + \vartheta(s) \big] \, ds.$$
Hence, we can find for all $t \in \mathbb{R}$ that:
$$\Big\| \frac{d\,\Pi\varphi(t)}{dt} \Big\| \le \|B\| \|f(\varphi(t))\| + \|C\| \|g(\varphi(\gamma(t)))\| + \|\vartheta(t)\| + \bar{\lambda} \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| \|f(\varphi(s))\| + \|C\| \|g(\varphi(\gamma(s)))\| + \|\vartheta(s)\| \big] ds \le \|B\| m_f + \|C\| m_g + m_\vartheta + (\bar{\lambda}/\lambda) \big[ \|B\| m_f + \|C\| m_g + m_\vartheta \big] = (1 + \bar{\lambda}/\lambda) \big[ \|B\| m_f + \|C\| m_g + m_\vartheta \big].$$
Since the derivative of $\Pi\varphi(t)$ is bounded, $\Pi\varphi$ is uniformly continuous. This means that $\Pi\varphi$ satisfies the property (A1).
Moreover, we have for $\varphi \in \Sigma_0$ that:
$$\|\Pi\varphi(t)\| \le \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| \|f(\varphi(s))\| + \|C\| \|g(\varphi(\gamma(s)))\| + \|\vartheta(s)\| \big] ds \le \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| m_f + \|C\| m_g + m_\vartheta \big] ds \le \lambda^{-1} \big[ \|B\| m_f + \|C\| m_g + m_\vartheta \big].$$
The last inequality together with the condition (C4) implies that $\|\Pi\varphi\|_1 < H$. Thus, $\Pi\varphi$ satisfies the property (A2).
Now, we need to check the last property, (A3), for $\Pi\varphi$. In other words, we have to verify that there exists a sequence $t_n$ that diverges to infinity such that $\Pi\varphi(t+t_n) \to \Pi\varphi(t)$ for each $\Pi\varphi \in \Sigma_0$, uniformly on each closed and bounded interval of the real axis. Fix an arbitrary positive number $\varepsilon$ and a closed interval $[a, b]$, where $a, b \in \mathbb{R}$ with $a < b$. It is enough to show that $\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| < \varepsilon$ for sufficiently large $n$ and $t \in [a, b]$. We choose two numbers $c < a$ and $\epsilon > 0$ such that:
$$2\lambda^{-1} \big[ \|B\| L H + \|C\| \bar{L} H + m_\vartheta \big] e^{-\lambda(a-c)} < \varepsilon/3, \qquad (4)$$
$$\epsilon \lambda^{-1} \big[ 1 + \|B\| L \big] < \varepsilon/3, \qquad (5)$$
$$2\lambda^{-1} \big[ (p+1)\epsilon + pH \big] (1 - e^{-\lambda\theta}) \|C\| \bar{L} < \varepsilon/3. \qquad (6)$$
Take $n$ large enough such that $\|\varphi(t+t_n) - \varphi(t)\| < \epsilon$ and $\|\vartheta(t+t_n) - \vartheta(t)\| < \epsilon$ on $[c, b]$. Then, for $\varphi \in \Sigma_0$, by writing:
$$\Pi\varphi(t+t_n) - \Pi\varphi(t) = \int_{-\infty}^{t+t_n} e^{-A(t+t_n-s)} \big[ B f(\varphi(s)) + C g(\varphi(\gamma(s))) + \vartheta(s) \big] ds - \int_{-\infty}^{t} e^{-A(t-s)} \big[ B f(\varphi(s)) + C g(\varphi(\gamma(s))) + \vartheta(s) \big] ds = \int_{-\infty}^{t} e^{-A(t-s)} \big[ B [f(\varphi(s+t_n)) - f(\varphi(s))] + C [g(\varphi(\gamma(s+t_n))) - g(\varphi(\gamma(s)))] + \vartheta(s+t_n) - \vartheta(s) \big] ds,$$
one can see that:
$$\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| \le \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|\varphi(s+t_n) - \varphi(s)\| + \|C\| \bar{L} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| + \|\vartheta(s+t_n) - \vartheta(s)\| \big] ds$$
is valid. If we divide the last integral into two parts, over $(-\infty, c]$ and $[c, t]$, we get for $t \in [a, b]$ that:
$$\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| \le \int_{-\infty}^{c} e^{-\lambda(t-s)} \big[ \|B\| L \|\varphi(s+t_n) - \varphi(s)\| + \|C\| \bar{L} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| + \|\vartheta(s+t_n) - \vartheta(s)\| \big] ds + \int_{c}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|\varphi(s+t_n) - \varphi(s)\| + \|C\| \bar{L} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| + \|\vartheta(s+t_n) - \vartheta(s)\| \big] ds \le 2\lambda^{-1} \big[ \|B\| L H + \|C\| \bar{L} H + m_\vartheta \big] e^{-\lambda(a-c)} + \lambda^{-1} \big[ 1 + \|B\| L \big] \epsilon + \|C\| \bar{L} \int_{c}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds.$$
We need to find an upper bound for the last integral. For this purpose, we shall evaluate it by dividing the interval of integration into subintervals as follows. For a fixed $t \in [a, b]$, we assume without loss of generality that $\theta_i \le \theta_{i-\eta_n} + t_n$ and $\theta_{i-\eta_n} + t_n = c < \theta_{i+1} < \theta_{i+2} < \cdots < \theta_{i+p} \le \theta_{i+p-\eta_n} + t_n \le t < \theta_{i+p+1}$, so that there exist exactly $p$ discontinuity moments in the interval $[c, t]$.
Let us denote:
$$I = \int_{c}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds.$$
We shall need the following representation of the last integral:
$$I = \int_{c}^{\theta_{i+1}} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \int_{\theta_{i+1}}^{\theta_{i+1-\eta_n}+t_n} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \int_{\theta_{i+1-\eta_n}+t_n}^{\theta_{i+2}} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \int_{\theta_{i+2}}^{\theta_{i+2-\eta_n}+t_n} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \int_{\theta_{i+2-\eta_n}+t_n}^{\theta_{i+3}} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \cdots + \int_{\theta_{i+p-\eta_n}+t_n}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds = \sum_{k=i}^{i+p-1} \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \sum_{k=i}^{i+p-1} \int_{\theta_{k+1}}^{\theta_{k+1-\eta_n}+t_n} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds + \int_{\theta_{i+p-\eta_n}+t_n}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| ds.$$
Now, if we define the integrals in the last expression as:
$$A_k = \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds$$
and:
$$B_k = \int_{\theta_{k+1}}^{\theta_{k+1-\eta_n}+t_n} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds,$$
where $k = i, i+1, \ldots, i+p-1$, then we can write:
$$I = \sum_{k=i}^{i+p-1} A_k + \sum_{k=i}^{i+p-1} B_k + \int_{\theta_{i+p-\eta_n}+t_n}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds.$$
For $t \in [\theta_{k-\eta_n}+t_n, \theta_{k+1})$, we have $\gamma(t) = \xi_k$, and the condition (C8) gives $\gamma(t+t_n) = \xi_{k+\eta_n}$, $k = i, i+1, \ldots, i+p-1$. Hence, we obtain:
$$A_k = \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \|\varphi(\xi_{k+\eta_n}) - \varphi(\xi_k)\| ds = \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \|\varphi(\xi_k + t_n + o(1)) - \varphi(\xi_k)\| ds \le \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \big[ \|\varphi(\xi_k + t_n) - \varphi(\xi_k)\| + \|\varphi(\xi_k + t_n + o(1)) - \varphi(\xi_k + t_n)\| \big] ds \le \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \big[ \epsilon + \|\varphi(\xi_k + t_n + o(1)) - \varphi(\xi_k + t_n)\| \big] ds.$$
Since the function $\varphi$ is uniformly continuous, for large $n$ and $\epsilon > 0$, we can find a $\rho > 0$ such that $\|\varphi(\xi_k + t_n + o(1)) - \varphi(\xi_k + t_n)\| < \epsilon$ if $|\xi_{k+\eta_n} - \xi_k - t_n| < \rho$. As a result of this discussion, we conclude that:
$$A_k \le 2\epsilon \int_{\theta_{k-\eta_n}+t_n}^{\theta_{k+1}} e^{-\lambda(t-s)} \, ds \le 2\epsilon \lambda^{-1} (1 - e^{-\lambda\theta}).$$
Moreover, we have by the condition (C8) that:
$$B_k \le 2H \int_{\theta_{k+1}}^{\theta_{k+1-\eta_n}+t_n} e^{-\lambda(t-s)} \, ds \le 2H \lambda^{-1} (1 - e^{-\lambda\theta}).$$
Applying a similar idea as used for the integral $A_k$, we get:
$$\int_{\theta_{i+p-\eta_n}+t_n}^{t} e^{-\lambda(t-s)} \|\varphi(\gamma(s+t_n)) - \varphi(\gamma(s))\| \, ds \le 2\epsilon \lambda^{-1} (1 - e^{-\lambda\theta}).$$
Thus, it is true that:
$$I \le 2(p+1)\epsilon\lambda^{-1}(1 - e^{-\lambda\theta}) + 2pH\lambda^{-1}(1 - e^{-\lambda\theta}) = 2\lambda^{-1} \big[ (p+1)\epsilon + pH \big] (1 - e^{-\lambda\theta}).$$
As a result of these evaluations, it follows that:
$$\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| \le 2\lambda^{-1} \big[ \|B\| L H + \|C\| \bar{L} H + m_\vartheta \big] e^{-\lambda(a-c)} + \epsilon\lambda^{-1} \big[ 1 + \|B\| L \big] + 2\lambda^{-1} \big[ (p+1)\epsilon + pH \big] (1 - e^{-\lambda\theta}) \|C\| \bar{L}$$
for all $t \in [a, b]$. Hence, the inequalities (4)–(6) give that $\|\Pi\varphi(t+t_n) - \Pi\varphi(t)\| < \varepsilon$ for $t \in [a, b]$. Thus, the function $\Pi\varphi$ satisfies the property (A3). As a result, the operator $\Pi$ is invariant in $\Sigma_0$. □
Lemma 3.
The operator $\Pi$ is a contraction on $\Sigma_0$.
Proof. 
Let the functions $\varphi$ and $\psi$ belong to the space $\Sigma_0$. We obtain for all $t \in \mathbb{R}$ that:
$$\|\Pi\varphi(t) - \Pi\psi(t)\| \le \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|\varphi(s) - \psi(s)\| + \|C\| \bar{L} \|\varphi(\gamma(s)) - \psi(\gamma(s))\| \big] ds \le \int_{-\infty}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|\varphi - \psi\|_1 + \|C\| \bar{L} \|\varphi - \psi\|_1 \big] ds \le \lambda^{-1} \big[ \|B\| L + \|C\| \bar{L} \big] \|\varphi - \psi\|_1.$$
Then, it is true that:
$$\|\Pi\varphi - \Pi\psi\|_1 \le \lambda^{-1} \big[ \|B\| L + \|C\| \bar{L} \big] \|\varphi - \psi\|_1.$$
Consequently, the condition (C5) implies that the operator $\Pi : \Sigma_0 \to \Sigma_0$ is contractive. The lemma is proven. □
The following assertion is needed in the proof of the stability of the solution.
Lemma 4
([10]). Assume that the conditions (C1) and (C7) are fulfilled and $z(t)$ is a continuous function with $\|z\|_1 < H$. If $w(t)$ is a solution of:
$$w'(t) = -A w(t) + B \big[ f(w(t) + z(t)) - f(z(t)) \big] + C \big[ g(w(\gamma(t)) + z(\gamma(t))) - g(z(\gamma(t))) \big], \qquad (7)$$
then the following inequality:
$$\|w(\gamma(t))\| \le K \|w(t)\| \qquad (8)$$
holds for all $t \in \mathbb{R}$, where $K$ is as defined in (C6).
Proof. 
First, we fix an integer $i$ such that $t \in [\theta_i, \theta_{i+1})$ and then consider two alternative cases: (a) $\theta_i \le \xi_i \le t < \theta_{i+1}$ and (b) $\theta_i \le t < \xi_i < \theta_{i+1}$.
In case (a), for $t \ge \xi_i$, we have:
$$\|w(t)\| \le \|w(\xi_i)\| + \int_{\xi_i}^{t} \big[ \|A\| \|w(s)\| + \|B\| L \|w(s)\| + \|C\| \bar{L} \|w(\xi_i)\| \big] ds \le \|w(\xi_i)\| + \int_{\xi_i}^{t} \big[ \bar{\lambda} \|w(s)\| + \|B\| L \|w(s)\| + \|C\| \bar{L} \|w(\xi_i)\| \big] ds \le \|w(\xi_i)\| (1 + \|C\| \bar{L} \theta) + \int_{\xi_i}^{t} \big[ \bar{\lambda} + \|B\| L \big] \|w(s)\| \, ds.$$
The Gronwall–Bellman lemma yields that:
$$\|w(t)\| \le \|w(\xi_i)\| (1 + \|C\| \bar{L} \theta)\, e^{(\bar{\lambda} + \|B\| L)\theta}.$$
Moreover, for $t \in [\theta_i, \theta_{i+1})$, we have:
$$\|w(\xi_i)\| \le \|w(t)\| + \int_{\xi_i}^{t} \big[ \|A\| \|w(s)\| + \|B\| L \|w(s)\| + \|C\| \bar{L} \|w(\xi_i)\| \big] ds \le \|w(t)\| + \int_{\xi_i}^{t} \big[ (\bar{\lambda} + \|B\| L) \|w(s)\| + \|C\| \bar{L} \|w(\xi_i)\| \big] ds \le \|w(t)\| + \int_{\xi_i}^{t} \big[ (\bar{\lambda} + \|B\| L)(1 + \|C\| \bar{L} \theta) e^{(\bar{\lambda} + \|B\| L)\theta} \|w(\xi_i)\| + \|C\| \bar{L} \|w(\xi_i)\| \big] ds \le \|w(t)\| + \theta \big[ (\bar{\lambda} + \|B\| L)(1 + \|C\| \bar{L} \theta) e^{(\bar{\lambda} + \|B\| L)\theta} + \|C\| \bar{L} \big] \|w(\xi_i)\|.$$
Consequently, it follows from the condition (C7) that $\|w(\xi_i)\| \le K \|w(t)\|$ for $t \in [\theta_i, \theta_{i+1})$, $i \in \mathbb{Z}$. Therefore, (8) holds for all $\theta_i \le \xi_i \le t < \theta_{i+1}$, $i \in \mathbb{Z}$.
The assertion for case (b) $\theta_i \le t < \xi_i < \theta_{i+1}$, $i \in \mathbb{Z}$, can be proven in the same way.
Thus, one can conclude that (8) holds for all $t \in \mathbb{R}$. The lemma is proven. □
Theorem 1.
Assume that the conditions (C1)–(C8) hold true. If the function $\vartheta$ is unpredictable, then the system (1) has a unique exponentially stable unpredictable solution.
Proof. 
First, we show that $\Sigma_0$ is a complete space. Let $\phi_k(t)$ be a Cauchy sequence in the space $\Sigma_0$ with limit $\phi(t)$ on $\mathbb{R}$. It can easily be shown that the limit function $\phi(t)$ is uniformly continuous and bounded; hence, it satisfies the properties (A1) and (A2). It remains only to show that $\phi(t)$ satisfies the property (A3). Consider a closed and bounded interval $I \subset \mathbb{R}$. We have:
$$\|\phi(t+t_n) - \phi(t)\| \le \|\phi(t+t_n) - \phi_k(t+t_n)\| + \|\phi_k(t+t_n) - \phi_k(t)\| + \|\phi_k(t) - \phi(t)\|.$$
If one takes sufficiently large $n$ and $k$ such that each term on the right-hand side of the last inequality is less than $\varepsilon/3$ for a small enough $\varepsilon > 0$ and $t \in I$, then the inequality $\|\phi(t+t_n) - \phi(t)\| < \varepsilon$ is satisfied on $I$. This implies that the sequence $\phi(t+t_n)$ converges uniformly to $\phi(t)$ on $I$. Thus, the space $\Sigma_0$ is complete. Since the operator $\Pi$ is invariant and contractive in $\Sigma_0$, according to Lemmas 2 and 3, respectively, it follows from the contraction mapping theorem that the operator $\Pi$ has a unique fixed point $z(t) \in \Sigma_0$, which is the unique solution of the neural network system (1) in $\Sigma_0$. Hence, the uniqueness of the solution is shown.
Next, we verify that this solution is unpredictable. We can find a positive number $\kappa$ and $l, k \in \mathbb{N}$ such that the following inequalities:
$$\kappa < \delta, \qquad (9)$$
$$\kappa \Big[ -(\bar{\lambda} + L\|B\|)(1/l + 2/k) - 2\bar{L}\|C\| + 1/2 \Big] \ge \frac{3}{2l} \qquad (10)$$
and:
$$\|z(t+s) - z(t)\| < \epsilon_0 \min\{1/k,\, 1/(4l)\}, \quad t \in \mathbb{R}, \ |s| < \kappa, \qquad (11)$$
are satisfied. Suppose that the numbers $\kappa$, $l$, $k$ and $n \in \mathbb{N}$ are fixed.
Denote:
$$\Delta = \|z(u_n + t_n) - z(u_n)\|$$
and consider the two cases: (i) $\Delta \ge \epsilon_0/l$ and (ii) $\Delta < \epsilon_0/l$.
(i) If $\Delta \ge \epsilon_0/l$ holds, we have:
$$\|z(t+t_n) - z(t)\| \ge \|z(u_n+t_n) - z(u_n)\| - \|z(u_n) - z(t)\| - \|z(t+t_n) - z(u_n+t_n)\| > \epsilon_0/l - \epsilon_0/(4l) - \epsilon_0/(4l) = \epsilon_0/(2l)$$
for $t \in [u_n - \kappa, u_n + \kappa]$, $n \in \mathbb{N}$.
(ii) If $\Delta < \epsilon_0/l$ is true, it follows from (11) that:
$$\|z(t+t_n) - z(t)\| \le \|z(u_n+t_n) - z(u_n)\| + \|z(u_n) - z(t)\| + \|z(t+t_n) - z(u_n+t_n)\| < \epsilon_0/l + \epsilon_0/k + \epsilon_0/k = (1/l + 2/k)\epsilon_0$$
for $t \in [u_n, u_n + \kappa]$. We can see that:
$$z(t) = z(u_n) + \int_{u_n}^{t} \big[ -A z(s) + B f(z(s)) + C g(z(\gamma(s))) + \vartheta(s) \big] ds$$
and:
$$z(t+t_n) = z(u_n+t_n) + \int_{u_n}^{t} \big[ -A z(s+t_n) + B f(z(s+t_n)) + C g(z(\gamma(s+t_n))) + \vartheta(s+t_n) \big] ds.$$
Subtracting the first equation from the second one, we get:
$$z(t+t_n) - z(t) = z(u_n+t_n) - z(u_n) - \int_{u_n}^{t} A \big[ z(s+t_n) - z(s) \big] ds + \int_{u_n}^{t} B \big[ f(z(s+t_n)) - f(z(s)) \big] ds + \int_{u_n}^{t} C \big[ g(z(\gamma(s+t_n))) - g(z(\gamma(s))) \big] ds + \int_{u_n}^{t} \big[ \vartheta(s+t_n) - \vartheta(s) \big] ds.$$
Therefore, we have that:
$$\|z(t+t_n) - z(t)\| \ge -\|z(u_n+t_n) - z(u_n)\| - \int_{u_n}^{t} \bar{\lambda} \|z(s+t_n) - z(s)\| ds - \int_{u_n}^{t} \|B\| \|f(z(s+t_n)) - f(z(s))\| ds - \int_{u_n}^{t} \|C\| \|g(z(\gamma(s+t_n))) - g(z(\gamma(s)))\| ds + \int_{u_n}^{t} \|\vartheta(s+t_n) - \vartheta(s)\| ds \ge -\epsilon_0/l - \bar{\lambda}\kappa(1/l + 2/k)\epsilon_0 - \|B\| L \kappa (1/l + 2/k)\epsilon_0 - \|C\| \bar{L} \int_{u_n}^{t} \|z(\gamma(s+t_n)) - z(\gamma(s))\| ds + \frac{\kappa}{2}\epsilon_0$$
for $t \in [u_n + \kappa/2, u_n + \kappa]$.
For a fixed $t \in [u_n + \kappa/2, u_n + \kappa]$, we can take $\kappa$ sufficiently small so that $\theta_{i-\eta_n} + t_n \le u_n < u_n + \kappa/2 \le t \le u_n + \kappa < \theta_{i+1}$ for some $i \in \mathbb{Z}$. Hence, $\gamma(t) = \xi_i$ for $t \in [u_n + \kappa/2, u_n + \kappa]$, which together with the condition (C8) implies that $\gamma(t+t_n) = \xi_{i+\eta_n}$. Since $z(t) \in \Sigma_0$, the function $z$ is uniformly continuous. Using this fact, for $\epsilon_0 > 0$ and for large $n$, we can find a $\rho > 0$ such that:
$$\int_{u_n}^{t} \|z(\gamma(s+t_n)) - z(\gamma(s))\| ds = \int_{u_n}^{t} \|z(\xi_{i+\eta_n}) - z(\xi_i)\| ds \le \int_{u_n}^{t} \|z(\xi_i + t_n) - z(\xi_i)\| ds + \int_{u_n}^{t} \|z(\xi_i + t_n + o(1)) - z(\xi_i + t_n)\| ds \le 2\kappa\epsilon_0,$$
if $|\xi_{i+\eta_n} - \xi_i - t_n| < \rho$.
Finally, we have by the inequality (10) that:
$$\|z(t+t_n) - z(t)\| \ge -\epsilon_0/l - \bar{\lambda}(1/l + 2/k)\kappa\epsilon_0 - \|B\| L (1/l + 2/k)\kappa\epsilon_0 - 2\|C\| \bar{L} \kappa\epsilon_0 + \frac{\kappa}{2}\epsilon_0 \ge -\epsilon_0/l + \frac{3\epsilon_0}{2l} = \frac{\epsilon_0}{2l}.$$
Based on the inequalities obtained in cases (i) and (ii), we see that the solution $z(t)$ is unpredictable with the sequence $\bar{u}_n = u_n + 3\kappa/4$ and the number $\bar{\delta} = \kappa/4$.
Lastly, let us consider the stability of the solution $z(t)$.
Denote $w(t) = y(t) - z(t)$, where $y(t) = \mathrm{colon}(y_1(t), y_2(t), \ldots, y_m(t))$ is another solution of the neural network system (1) with a piecewise constant argument of the generalized type. Then, $w(t) = \mathrm{colon}(w_1(t), w_2(t), \ldots, w_m(t))$ is a solution of (7).
We have that:
$$\|w(t)\| \le e^{-\lambda(t-t_0)} \|w(t_0)\| + \int_{t_0}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|w(s)\| + \|C\| \bar{L} \|w(\gamma(s))\| \big] ds. \qquad (12)$$
By applying the inequality (8) to (12), we obtain:
$$\|w(t)\| \le e^{-\lambda(t-t_0)} \|w(t_0)\| + \int_{t_0}^{t} e^{-\lambda(t-s)} \big[ \|B\| L \|w(s)\| + \|C\| \bar{L} K \|w(s)\| \big] ds.$$
Hence, we find that:
$$\|w(t)\| \le e^{-\lambda(t-t_0)} \|w(t_0)\| + \int_{t_0}^{t} e^{-\lambda(t-s)} \big( \|B\| L + K \|C\| \bar{L} \big) \|w(s)\| \, ds.$$
The last inequality can be written as follows:
$$e^{\lambda t} \|w(t)\| \le e^{\lambda t_0} \|w(t_0)\| + \big( \|B\| L + K \|C\| \bar{L} \big) \int_{t_0}^{t} e^{\lambda s} \|w(s)\| \, ds.$$
Applying the Gronwall–Bellman lemma to the last inequality leads to:
$$\|w(t)\| \le \|w(t_0)\|\, e^{(-\lambda + \|B\| L + K \|C\| \bar{L})(t - t_0)}.$$
In other words, we have:
$$\|y(t) - z(t)\| \le \|y(t_0) - z(t_0)\|\, e^{(-\lambda + \|B\| L + K \|C\| \bar{L})(t - t_0)}.$$
Now, based on the condition (C6), we conclude that the solution $z(t)$ of (1) is uniformly exponentially stable. The theorem is proven. □

4. Examples and Numerical Simulations

We present two examples in this section. First, we construct an example of an unpredictable function by means of the logistic map considered in [29]. Then, we make use of this function in the second example, which deals with a Hopfield-type neural network system.
Example 1.
Consider the following discrete logistic map:
$$\chi_{i+1} = \mu \chi_i (1 - \chi_i), \qquad (13)$$
where $i \in \mathbb{Z}$. We know that if $\mu \in (0, 4]$, then the iterations of this map belong to the interval $[0, 1]$ [54]. Moreover, if $\mu \in [3 + (2/3)^{1/2}, 4]$, Equation (13) has an unpredictable solution. Let $\Psi_i$, $i \in \mathbb{Z}$, denote an unpredictable solution of (13) for $\mu = 3.93$. There exist a positive number $\varepsilon_0$ and sequences $p_n$, $q_n$, both of which diverge to infinity, such that $|\Psi_{i+p_n} - \Psi_i| \to 0$ as $n \to \infty$ for each $i$ in bounded intervals of integers and $|\Psi_{p_n+q_n} - \Psi_{q_n}| \ge \varepsilon_0$ for each $n \in \mathbb{N}$.
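A finite stretch of such an orbit is easy to generate numerically. The following MATLAB sketch is our own illustration, not the authors' code; the seed 0.4 is an arbitrary choice, since the initial value of the genuinely unpredictable solution $\Psi_i$ is unknown:

```matlab
% Iterations of the logistic map, Equation (13), with mu = 3.93.
% The computed orbit only approximates the unpredictable solution Psi_i,
% whose exact initial value is not available.
mu = 3.93;
N = 60;               % number of iterations to generate
Psi = zeros(1, N);
Psi(1) = 0.4;         % arbitrary seed in (0, 1)
for i = 1:N-1
    Psi(i+1) = mu * Psi(i) * (1 - Psi(i));
end
```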
Consider the following integral:
$$\Theta(t) = \int_{-\infty}^{t} e^{-4(t-s)} \Omega(s) \, ds, \quad t \in \mathbb{R},$$
where $\Omega(t) = \Psi_i$ for $t \in [i, i+1)$, $i \in \mathbb{Z}$. It is worth noting that $\Theta(t)$ is bounded on the whole real axis, with $\sup_{t \in \mathbb{R}} |\Theta(t)| \le 1/4$. In [37], it was proven that the function $\Theta(t)$ is an unpredictable function.
Since we do not know the initial value of the function $\Theta(t)$, its direct simulation is impossible, and we are not able to visualize it. For this reason, we represent the function $\Theta(t)$ as follows:
$$\Theta(t) = \int_{-\infty}^{t} e^{-4(t-s)} \Omega(s) \, ds = e^{-4t}\Theta_0 + \int_{0}^{t} e^{-4(t-s)} \Omega(s) \, ds, \qquad (14)$$
where $\Theta_0 = \int_{-\infty}^{0} e^{4s} \Omega(s) \, ds$.
That is why we simulate a function $\Phi(t)$ that approaches the function $\Theta(t)$ as time increases. Let us determine:
$$\Phi(t) = e^{-4t}\Phi_0 + \int_{0}^{t} e^{-4(t-s)} \Omega(s) \, ds, \qquad (15)$$
where $\Phi_0$ is a fixed number, which is not necessarily equal to $\Theta_0$. Subtracting equality (15) from equality (14), we obtain $\Theta(t) - \Phi(t) = e^{-4t}(\Theta_0 - \Phi_0)$, $t \ge 0$. The last equation demonstrates that the difference $\Theta(t) - \Phi(t)$ diminishes exponentially. This means that the function $\Phi(t)$ tends to the unpredictable function $\Theta(t)$ exponentially, i.e., the graphs of these functions approach each other as time increases.
The functions $\Phi(t)$ and $\Theta(t)$ are solutions of the differential equation:
$$\Phi'(t) = -4\Phi(t) + \Omega(t),$$
and instead of the curve describing the unpredictable solution $\Theta(t)$, we can take the graph of $\Phi(t)$, which approximates it asymptotically. In Figure 2, we depict the graph defined by the initial value $\Phi(0) = 0.45$.
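Continuing the sketch above, $\Phi(t)$ can be computed exactly step by step, because $\Omega(t)$ is constant on every interval $[i, i+1)$ and the linear equation $\Phi' = -4\Phi + c$ admits a closed-form update. This is our own reconstruction of the experiment, with the grid and horizon chosen arbitrarily:

```matlab
% Simulate Phi on [0, N-1], reusing the orbit Psi from the previous sketch.
% Omega(t) = Psi(i+1) on [i, i+1); the index shift is due to MATLAB's
% 1-based arrays. Each step uses the exact variation-of-constants formula.
dt = 0.01;
t = 0:dt:N-1;
Phi = zeros(size(t));
Phi(1) = 0.45;                 % initial value used for Figure 2
for j = 2:numel(t)
    i = floor(t(j-1)) + 1;     % interval [i-1, i) containing the current step
    Phi(j) = Phi(j-1)*exp(-4*dt) + Psi(i)*(1 - exp(-4*dt))/4;
end
plot(t, Phi); xlabel('t'); ylabel('\Phi(t)');   % qualitative analogue of Figure 2
```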
Example 2.
Consider the following Hopfield-type neural network:
$$x_i'(t) = -a_i x_i(t) + \sum_{j=1}^{3} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{3} c_{ij} g_j(x_j(\gamma(t))) + \vartheta_i(t), \qquad (16)$$
where $a_1 = 0.5$, $a_2 = 0.2$, $a_3 = 0.25$, $f_i(x_i(t)) = 0.1\tanh(x_i(t)/8)$, $g_i(x_i(t)) = 0.05\tanh(x_i(t)/6)$, $i = 1, 2, 3$,
$$\begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} = \begin{pmatrix} 0.1 & 0.2 & 0.5 \\ 0.3 & 0.1 & 0.2 \\ 0.2 & 0.1 & 0.3 \end{pmatrix}, \quad \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix} = \begin{pmatrix} 0.1 & 0.2 & 0.1 \\ 0.2 & 0.2 & 0.2 \\ 0.1 & 0.3 & 0.1 \end{pmatrix},$$
and the time-varying stimulus is:
$$\begin{pmatrix} \vartheta_1(t) \\ \vartheta_2(t) \\ \vartheta_3(t) \end{pmatrix} = \begin{pmatrix} 24\,\Theta(t) - 0.04 \\ 48\,\Theta^3(t) + 0.05 \\ 58\,\Theta^3(t) + 0.03 \end{pmatrix}.$$
Here, $\Theta(t)$ is the unpredictable function mentioned in Example 1.
Let the argument function $\gamma(t) = \xi_k$ be defined by the sequences $\theta_k = k$ and $\xi_k = \frac{2k+1}{2} + \Psi_k$, $k \in \mathbb{Z}$.
We can see that the conditions (C1)–(C8) are valid for the neural network (16) with $\lambda = 0.2$, $\bar{\lambda} = 0.5$, $L = 0.0125$, $\bar{L} = 0.0083$, $m_f = 0.1$, $m_g = 0.05$, and moreover, $m_\vartheta = 6.04$ and $K = 1.1648$. If we take $H = 31$, then the system (16) satisfies all conditions of Theorem 1. Therefore, (16) has a unique exponentially stable unpredictable solution $x(t)$.
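The checkable inequalities among these conditions can be confirmed in a few lines of MATLAB. This verification script is ours, not the authors'; in particular, the value of K below follows our reconstruction of the formula in (C6), so it serves only as a consistency check:

```matlab
% Verify the checkable conditions of Theorem 1 for system (16).
lam = 0.2; lamBar = 0.5;                % lambda <= a_i <= lambdaBar
L = 0.1/8; Lbar = 0.05/6;               % Lipschitz constants of f and g
mf = 0.1; mg = 0.05; mv = 6.04;         % bounds from (C2) and (C3)
H = 31; theta = 1;                      % theta_k = k gives theta = 1
B = [0.1 0.2 0.5; 0.3 0.1 0.2; 0.2 0.1 0.3];
C = [0.1 0.2 0.1; 0.2 0.2 0.2; 0.1 0.3 0.1];
nB = norm(B, inf); nC = norm(C, inf);   % maximum absolute row sums, as in Section 2
C4 = nB*mf + nC*mg + mv < H*lam         % (C4): 6.15 < 6.2
C5 = nB*L + nC*Lbar < lam               % (C5): 0.015 < 0.2
q  = theta*((lamBar + nB*L)*(1 + nC*Lbar*theta)*exp((lamBar + nB*L)*theta) + nC*Lbar);
C7 = q < 1                              % (C7): q is about 0.859
K  = 1/(1 - q);                         % our reading of the formula in (C6)
C6 = -lam + nB*L + K*nC*Lbar < 0        % (C6) holds for any K < 38
```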
Since the initial value is not known precisely, it is not possible to simulate the unpredictable solution $x(t)$ itself. For this reason, we consider another solution $\psi(t) = (\psi_1(t), \psi_2(t), \psi_3(t))$, which starts at the point $\psi(0) = (12.4956, 0.7828, 12.1987)$.
The graph of the function $\psi(t)$ approaches the unpredictable solution $x(t)$ of Equation (16) as time increases. That is, instead of the curve describing the unpredictable solution, one can consider the graph of $\psi(t)$. We present the coordinates of the solution $\psi(t)$ in Figure 3. Moreover, Figure 4 shows the trajectory of the solution.
Further, we describe a circuit implementation of the proposed Hopfield-type neural network (16) using MATLAB Simulink. The Simulink model of the Hopfield-type neural network is depicted in Figure 5, and the symbols are described in Table 2.
In the block diagram, we took the hyperbolic tangent as the sigmoid functions $f$ and $g$. To implement the block diagram, we used the transfer function "tansig" from the MATLAB Simulink library. The inputs $\vartheta_1(t)$, $\vartheta_2(t)$, $\vartheta_3(t)$ are unpredictable functions.
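For readers who prefer a script to the Simulink diagram, the following crude Euler time-stepping sketch reproduces the qualitative behavior of system (16). It is our own illustration under the assumptions of the earlier snippets (logistic seed, $\Phi$-type approximation of $\Theta(t)$); in particular, when the argument $\gamma(t)$ is advanced, the state at $\xi_k$ is replaced by the latest computed value, so the script is only a qualitative stand-in for the diagrammatic model:

```matlab
% Euler simulation of system (16) with the Phi-type input of Example 1.
mu = 3.93; N = 120; Psi = zeros(1, N); Psi(1) = 0.4;
for i = 1:N-1, Psi(i+1) = mu*Psi(i)*(1 - Psi(i)); end
a = [0.5; 0.2; 0.25];
B = [0.1 0.2 0.5; 0.3 0.1 0.2; 0.2 0.1 0.3];
C = [0.1 0.2 0.1; 0.2 0.2 0.2; 0.1 0.3 0.1];
f = @(x) 0.1*tanh(x/8); g = @(x) 0.05*tanh(x/6);
dt = 0.005; t = 0:dt:N-2; x = zeros(3, numel(t));
x(:,1) = [12.4956; 0.7828; 12.1987];        % psi(0) from the text
Th = 0;                                     % Phi-type approximation of Theta(t)
for j = 1:numel(t)-1
    k = floor(t(j));                        % theta_k = k, so gamma(t) = xi_k on [k, k+1)
    xi = (2*k + 1)/2 + Psi(k+1);            % xi_k as reconstructed in the text
    jxi = min(max(round(xi/dt) + 1, 1), numel(t));
    xg = x(:, min(jxi, j));                 % advanced values replaced by latest state
    Th = Th*exp(-4*dt) + Psi(k+1)*(1 - exp(-4*dt))/4;
    v = [24*Th - 0.04; 48*Th^3 + 0.05; 58*Th^3 + 0.03];
    x(:, j+1) = x(:, j) + dt*(-a.*x(:, j) + B*f(x(:, j)) + C*g(xg) + v);
end
plot(t, x); xlabel('t');                    % compare qualitatively with Figure 3
```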

Author Contributions

M.A., conceptualization, methodology, and investigation; D.A.Ç., conceptualization, investigation, and writing—review and editing; M.T., investigation, supervision, and writing—review and editing; Z.N., software, investigation, and writing—original draft. All authors read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their sincere gratitude to the referees for the helpful criticism and valuable suggestions. M. Tleubergenova was supported by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP08955400). M. Akhmet was supported by 2247—National Leading Researchers Program of TÜBİTAK, Turkey, N 1199B472000670. M. Akhmet and D. Aruğaslan Çinçin were supported by TÜBİTAK, the Scientific and Technological Research Council of Turkey, under the project 118F161. Z. Nugayeva was supported by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP08856170). M. Tleubergenova and Z. Nugayeva were supported by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP09258737).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akhmet, M. Principles of Discontinuous Dynamical Systems; Springer: New York, NY, USA, 2010.
  2. Wiener, J. Generalized Solutions of Functional Differential Equations; World Scientific: Singapore, 1993.
  3. Lakshmikantham, V.; Bainov, D.D.; Simeonov, P.S. Theory of Impulsive Differential Equations; World Scientific: Singapore, 1989.
  4. Gopalsamy, K. Stability and Oscillation in Delay Differential Equations of Population Dynamics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992.
  5. Pinto, M. Asymptotic equivalence of nonlinear and quasi linear differential equations with piecewise constant arguments. Math. Comput. Model. 2009, 49, 1750–1758.
  6. Coronel, A.; Maulén, C.; Pinto, M.; Sepúlveda, D. Dichotomies and asymptotic equivalence in alternately advanced and delayed differential systems. J. Math. Anal. Appl. 2017, 450, 1434–1458.
  7. Huang, Z.; Wang, X.; Xia, Y. A topological approach to the existence of solutions for nonlinear differential equations with piecewise constant argument. Chaos Solitons Fractals 2009, 39, 1121–1131.
  8. Akhmet, M.U. Stability of differential equations with piecewise constant arguments of generalized type. Nonlinear Anal. 2008, 68, 794–803.
  9. Akhmet, M.U.; Aruğaslan, D. Lyapunov-Razumikhin method for differential equations with piecewise constant argument. Discret. Contin. Dyn. Syst. 2009, 25, 457–466.
  10. Akhmet, M. Nonlinear Hybrid Continuous/Discrete-Time Models; Atlantis Press: Paris, France, 2011.
  11. Gopalsamy, K. Stability of artificial neural networks with impulses. Appl. Math. Comput. 2004, 154, 783–813.
  12. Guan, Z.; Chen, G. On delayed impulsive Hopfield neural networks. Neural Netw. 1999, 12, 273–280.
  13. Xu, D.; Yang, Z. Impulsive delay differential inequality and stability of neural networks. J. Math. Anal. Appl. 2005, 305, 107–120.
  14. Mohamad, S. Exponential stability in Hopfield-type neural networks with impulses. Chaos Solitons Fractals 2007, 32, 456–467.
  15. Li, Y.; Lu, L. Global exponential stability and existence of periodic solution of Hopfield-type neural networks with impulses. Phys. Lett. A 2004, 333, 62–71.
  16. Akhmet, M.; Yilmaz, E. Hopfield-type neural network system with piecewise constant argument. Int. J. Qual. Theory Differ. Equ. Appl. 2009, 3, 8–14.
  17. Wan, L.; Wu, A. Stabilization control of generalized type neural networks with piecewise constant argument. J. Nonlinear Sci. Appl. 2016, 9, 3580–3599.
  18. Pinto, M.; Sepúlveda, D.; Torres, R. Exponential periodic attractor of impulsive Hopfield-type neural network system with piecewise constant argument. Electron. J. Qual. Theory Differ. Equ. 2018, 34, 1–28.
  19. Akhmet, M.; Yilmaz, E. Impulsive Hopfield-type neural network system with piecewise constant argument. Nonlinear Anal. Real World Appl. 2010, 11, 2584–2593.
  20. Pajares, G. A Hopfield neural network for image change detection. IEEE Trans. Neural Netw. 2006, 17, 1250–1264.
  21. Ramya, C.; Kavitha, G.; Shreedhara, K.S. Recalling of images using Hopfield neural network model. arXiv 2011, arXiv:1105.0332.
  22. Soni, N.; Sharma, E.K.; Kapoor, A. Application of Hopfield neural network for facial image recognition. IJRTE 2019, 8, 3101–3105.
  23. Akhmet, M.; Yilmaz, E. Neural Networks with Discontinuous/Impact Activations; Springer: New York, NY, USA, 2014.
  24. Akhmet, M.; Aruğaslan, D.; Cengiz, N. Exponential stability of periodic solutions of recurrent neural networks with functional dependence on piecewise constant argument. Turk. J. Math. 2018, 42, 272–292.
  25. Akhmet, M.U. Integral manifolds of differential equations with piecewise constant argument of generalized type. Nonlinear Anal. 2007, 66, 367–383.
  26. Chiu, K.S.; Pinto, M.; Jeng, J. Existence and global convergence of periodic solutions in recurrent neural network models with a general piecewise alternately advanced and retarded argument. Acta Appl. Math. 2014, 133, 133–152.
  27. Torres, R.; Pinto, M.; Castillo, S.; Kostić, M. Uniform approximation of impulsive Hopfield cellular neural networks by piecewise constant arguments on [τ,∞). Acta Appl. Math. 2021, 171, 8.
  28. Akhmet, M.U. On the reduction principle for differential equations with piecewise constant argument of generalized type. J. Math. Anal. Appl. 2007, 336, 646–663.
  29. Akhmet, M.; Fen, M.O. Poincaré chaos and unpredictable functions. Commun. Nonlinear Sci. Numer. Simulat. 2017, 48, 85–94.
  30. Akhmet, M.; Fen, M.O. Unpredictable points and chaos. Commun. Nonlinear Sci. Numer. Simulat. 2016, 40, 1–5.
  31. Akhmet, M.; Fen, M.O. Existence of unpredictable solutions and chaos. Turk. J. Math. 2017, 41, 254–266.
  32. Akhmet, M.; Fen, M.O. Non-autonomous equations with unpredictable solutions. Commun. Nonlinear Sci. Numer. Simulat. 2018, 59, 657–670.
  33. Akhmet, M.; Fen, M.O.; Tleubergenova, M.; Zhamanshin, A. Unpredictable solutions of linear differential and discrete equations. Turk. J. Math. 2019, 43, 2377–2389.
  34. Akhmet, M.; Tleubergenova, M.; Zhamanshin, A. Quasilinear differential equations with strongly unpredictable solutions. Carpathian J. Math. 2020, 36, 341–349.
  35. Akhmet, M.U.; Fen, M.O.; Alejaily, E.M. Dynamics with Chaos and Fractals; Springer: Cham, Switzerland, 2020.
  36. Akhmet, M.; Tleubergenova, M.; Fen, M.O.; Nugayeva, Z. Unpredictable solutions of linear impulsive systems. Mathematics 2020, 8, 1798.
  37. Akhmet, M.; Tleubergenova, M.; Nugayeva, Z. Strongly unpredictable oscillations of Hopfield-type neural networks. Mathematics 2020, 8, 1791.
  38. Akhmet, M.; Seilova, R.; Tleubergenova, M.; Zhamanshin, A. Shunting inhibitory cellular neural networks with strongly unpredictable oscillations. Commun. Nonlinear Sci. Numer. Simulat. 2020, 89, 105287.
  39. Akhmet, M.; Tleubergenova, M.; Zhamanshin, A. Inertial neural networks with unpredictable oscillations. Mathematics 2020, 8, 1797.
  40. Miller, A. Unpredictable points and stronger versions of Ruelle–Takens and Auslander–Yorke chaos. Topol. Appl. 2019, 253, 7–16.
  41. Thakur, R.; Das, R. Strongly Ruelle–Takens, strongly Auslander–Yorke and Poincaré chaos on semiflows. Commun. Nonlinear Sci. Numer. Simulat. 2019, 81, 105018.
  42. Akhmet, M.U. Almost Periodicity, Chaos, and Asymptotic Equivalence; Springer: New York, NY, USA, 2020.
  43. Yu, T.; Cao, D.; Liu, S.; Chen, H. Stability analysis of neural networks with periodic coefficients and piecewise constant arguments. J. Frankl. Inst. 2016, 353, 409–425.
  44. Xi, Q. Global exponential stability of Cohen–Grossberg neural networks with piecewise constant argument of generalized type and impulses. Neural Comput. 2016, 28, 229–255.
  45. Danciu, D. Qualitative behavior of the time delay Hopfield type neural networks with time varying stimulus. Ann. Univ. Craiova Ser. El. Eng. 2002, 26, 72–82.
  46. Guevara, M.R.; Glass, L.; Mackey, M.C.; Shrier, A. Chaos in neurobiology. IEEE Trans. Syst. Man Cybern. 1983, 13, 790–798.
  47. Derrida, B.; Meir, R. Chaotic behavior of a layered neural network. Phys. Rev. A 1988, 38, 3116–3119.
  48. Wang, L.; Pichler, E.E.; Ross, J. Oscillations and chaos in neural networks: An exactly solvable model. Proc. Natl. Acad. Sci. USA 1990, 87, 9467–9471.
  49. Landau, I.D.; Sompolinsky, H. Coherent chaos in a recurrent neural network with structured connectivity. PLoS Comput. Biol. 2018, 14, e1006309.
  50. Qu, J.; Wang, R.; Yan, C.; Du, Y. Oscillations and synchrony in a cortical neural network. Cogn. Neurodyn. 2014, 8, 157–166.
  51. Muscinelli, S.P.; Gerstner, W.; Schwalger, T. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks. PLoS Comput. Biol. 2019, 15, e1007122.
  52. Penn, Y.; Segal, M.; Moses, E. Network synchronization in hippocampal neurons. Proc. Natl. Acad. Sci. USA 2016, 113, 3341–3346.
  53. Bel, A.; Rotstein, H.G. Membrane potential resonance in non-oscillatory neurons interacts with synaptic connectivity to produce network oscillations. J. Comput. Neurosci. 2019, 46, 169–195.
  54. Hale, J.; Koçak, H. Dynamics and Bifurcations; Springer: New York, NY, USA, 1991.
Figure 1. The block diagram for the Hopfield-type neural network system (1).
Figure 2. The graph of the function $\Phi(t)$, which exponentially approaches the unpredictable function $\Theta(t)$.
Figure 3. The coordinates of the function $\psi(t)$, which exponentially converge to the coordinates of the unpredictable solution $x(t)$.
Figure 4. The trajectory of the function $\psi(t)$.
Figure 5. The block diagram for System (16).
Table 1. Characteristics of elements of the block diagram in Figure 1.

  • Integrator block
  • Sum block
  • Gain blocks, with values $A$, $B$, $C$
  • Transfer function block, with nonlinear functions $f$ and $g$
  • MATLAB function block, with the piecewise constant function $\gamma(t)$
  • $\vartheta(t)$: input function
  • $x(t)$: output function
Table 2. Characteristics of elements of the block diagram in Figure 5.

  • Integrator block
  • Sum block
  • Gain blocks, with the values $a_i$, $b_{ij}$, $c_{ij}$, $i, j = 1, 2, 3$
  • Transfer function block, with nonlinear functions $f$ and $g$
  • MATLAB function block, with the piecewise constant function $\gamma(t)$
  • Input function
  • Output function