Article

Learning High-Dimensional Chaos Based on an Echo State Network with Homotopy Transformation

1 School of Science, China University of Geosciences (Beijing), Beijing 100083, China
2 School of Urban Construction, Beijing City University, Beijing 101309, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(6), 894; https://doi.org/10.3390/math13060894
Submission received: 25 January 2025 / Revised: 1 March 2025 / Accepted: 4 March 2025 / Published: 7 March 2025

Abstract

Learning high-dimensional chaos is a complex and challenging problem because of chaos' sensitive dependence on initial conditions. Based on an echo state network (ESN), we introduce homotopy transformation from topological theory to learn high-dimensional chaos. While maintaining the basic topological properties, our model obtains the key features of chaos for learning through continuous transformation between different activation functions, achieving an optimal balance between nonlinearity and linearity that enhances the generalization capability of the model. In the experimental part, we choose the Lorenz system, Mackey–Glass (MG) system, and Kuramoto–Sivashinsky (KS) system as examples, and we verify the superiority of our model by comparing it with other models. For some systems, the prediction error can be reduced by two orders of magnitude. The results show that adding homotopy transformation can improve the modeling of complex spatiotemporal chaotic systems, demonstrating the potential application of the model in dynamic time series analysis.

1. Introduction

In recent years, machine learning technologies have been widely applied to a variety of tasks, such as speech recognition, medical diagnosis, autonomous driving, image encryption, and recommendation systems [1,2,3,4]. Chaos control has always been a focus of nonlinear research, and using machine learning to address this problem has gradually become a trend [5,6,7,8]. We note that usually only finite time series data from certain dynamic processes are available; learning only from the data itself is therefore called “model-free” learning. The most commonly used method for model-free learning from dynamic time series is delayed coordinate embedding, which is well established [9,10,11,12,13].
However, delayed coordinate embedding is complex, and its results often fail to meet the accuracy required in practice. In 2004, the ESN proposed by Jaeger and Haas, published in Science, achieved impressive results in “model-free” chaotic learning tasks [14]. Many researchers have subsequently applied ESNs to various chaotic learning tasks. For example, Pathak et al. used reservoir computing to perform model-free estimates of the state evolution of chaotic systems and their Lyapunov exponents [15,16]. Moreover, an ESN can also infer unmeasured state variables from a limited set of continuously measured variables [17]. An ESN is very different from a traditional neural network: an ESN only needs to train the output weights, which overcomes the problems of vanishing and exploding gradients that arise when a traditional neural network applies gradient descent to all weight matrices [18]. Therefore, in the following years, many results using ESNs have emerged [19]. For instance, adaptive reservoir computing can capture critical transitions in dynamical systems; this network has been successful in predicting critical transitions in various low-dimensional dynamical systems or high-dimensional systems with simple parameter structures [20]. Moreover, data-informed reservoir computing, which relies solely on data to enhance prediction accuracy, not only effectively reduces computational costs but also minimizes the cumbersome hyperparameter optimization process in reservoir computing [21].
The above results show that echo state networks can be effectively applied to chaos prediction tasks, and our goal is to achieve long-term and accurate predictions. However, chaotic systems are extremely sensitive to initial conditions, which makes long-term predictions more challenging. In the ESN structure, nonlinear activation can simulate the nonlinear relationship of the chaotic system, model data characteristics, and solve complex problems [22,23], so it is very important for the completion of the task.
The update process of the reservoir state largely depends on the activation function [24,25]. The activation function is a function of the network input, the previous state, and the feedback output. According to the reservoir update equation, the network input plays a crucial role in determining the reservoir state update. Different learning tasks involve distinct input characteristics, necessitating different reservoir update methods. However, in traditional ESN models, regardless of the characteristics of the input data, the activation function usually remains unchanged, typically using fixed nonlinear functions such as tanh or sigmoid [26]. Additionally, when noise or interference in the training set increases, the generalization ability of the ESN may decrease [27]. To overcome the shortcomings of traditional single activation functions, in recent years, the double activation function echo state network (DAF-ESN) [28], the echo state network activation function based on bistable stochastic resonance (SR-ESN) [29], and the deep echo state network with multiple activation functions (MAF-DESN) have been proposed [30]. By linearly combining activation functions, the resulting activation function varies as the coefficients change, providing greater flexibility and adaptability than single activation functions. This enhances the network’s expressive power, allowing the model to better adapt to complex learning tasks.
Recognizing this, in order to learn the key features of spatiotemporal chaotic systems, this paper introduces the homotopy transformation from topological theory and proposes a new chaotic prediction model, called the H-ESN. While maintaining basic topological properties, our model achieves an optimal balance between nonlinearity and linearity by continuously transforming between different activation functions and adjusting the homotopy parameter, thereby capturing the key features necessary for learning chaos. In the experimental part of this paper, our model is successfully applied to the following classical prototype systems in chaotic dynamics: the Lorenz system, the MG system, and the KS system, and it obtains the following positive results compared to other models.
  • With appropriately chosen parameters, the H-ESN can provide longer prediction times for various high-dimensional chaotic systems.
  • Under the same parameter conditions, the H-ESN demonstrates smaller prediction errors compared to other models when predicting different dimensions of chaotic systems.
  • Compared to traditional methods, the H-ESN exhibits significant advantages in chaotic prediction tasks, particularly in the estimation of the maximal Lyapunov exponent.
The remainder of this paper is organized as follows: Section 2 introduces the principles and methods of the ESN and H-ESN and provides the sufficient conditions for the H-ESN to satisfy the echo state property. Section 3 discusses the application of the H-ESN to three chaotic system examples and compares its performance with other models, achieving significant results. Section 4 summarizes our research findings and outlines future research directions.

2. Correlation Method

2.1. Echo State Network

The ESN, proposed by Jaeger and Haas, has a relatively simple structure and requires only a small number of parameter adjustments [31]. Compared to traditional neural networks, it trains just a portion of the network’s connection weights, specifically the output weights, while the input weights and the recurrent connections within the reservoir are randomly generated and remain fixed [32,33]. This simplification makes the learning process faster and more efficient: it significantly reduces computational costs and helps mitigate the vanishing gradient problem. Furthermore, when the number of reservoir nodes is large, the reservoir can represent a wide range of desired outputs. The ESN is a machine learning framework that has been shown to reproduce the chaotic attractors of dynamical systems, including their fractal dimensions and Lyapunov exponent spectra [34,35,36].
The operation of the ESN during the training phase is shown in Figure 1a; the D-dimensional input vector $\mathbf{u}(t) = [u_1(t), u_2(t), \ldots, u_D(t)]^T$ is mapped into the reservoir $\mathbf{R}$ with $D_r$ nodes via the input coupling $\mathbf{I}/\mathbf{R}$ and the input weight matrix $\mathbf{W}_{in} \in \mathbb{R}^{D_r \times D}$. The reservoir state evolves according to Equation (1)
$$\mathbf{r}(t+\Delta t) = (1-\alpha)\,\mathbf{r}(t) + \alpha \tanh\!\left(\mathbf{A}\,\mathbf{r}(t) + \mathbf{W}_{in}\,\mathbf{u}(t) + \xi\mathbf{e}\right), \tag{1}$$
for the vector $\mathbf{g} = [g_1, g_2, \ldots, g_{D_r}]^T$, the activation $\tanh(\mathbf{g})$ is applied elementwise, i.e., $\tanh(\mathbf{g}) = (\tanh(g_1), \tanh(g_2), \ldots, \tanh(g_{D_r}))^T$, where $\tanh(x)$ is the hyperbolic tangent function. $\alpha$ is the leakage rate, which adjusts the update speed of the reservoir states and affects the dynamic characteristics of the system. $\xi\mathbf{e}$ is the bias term, where $\mathbf{e} = (1, 1, \ldots, 1)^T \in \mathbb{R}^{D_r \times 1}$; it introduces an additional degree of freedom to the reservoir, allowing the network to better fit complex nonlinear dynamic systems. In addition to $\Delta t$, $\alpha$, $\xi$, and $\mathbf{W}_{in}$ in Equation (1), the reservoir dynamics also depend on the spectral radius $\rho$ and sparsity $d$ of the matrix $\mathbf{A} \in \mathbb{R}^{D_r \times D_r}$. The spectral radius $\rho$ determines the stability and memory capacity of the reservoir, while the sparsity $d$ influences the computational efficiency and dynamic diversity of the reservoir by controlling the proportion of non-zero elements in $\mathbf{A}$. In Equation (2), $\mathbf{W}_{out} \in \mathbb{R}^{D_v \times D_r}$ is applied linearly to the $D_r$-dimensional vector $\mathbf{r}(t)$ to obtain the output vector $\mathbf{v}(t) \in \mathbb{R}^{D_v \times 1}$ as follows
$$\mathbf{v}(t+\Delta t) = \mathbf{W}_{out}\,\mathbf{r}(t+\Delta t). \tag{2}$$
Generally, the output $\mathbf{v}(t)$ obtained in Figure 1a is expected to approximate the desired output $\mathbf{v}_d(t)$. During the training phase $-T \le t \le 0$, the data for $\mathbf{u}(t)$ and $\mathbf{v}_d(t)$ are already known. The output weight matrix $\mathbf{W}_{out}$ is determined by minimizing Equation (3) using ridge regression
$$\sum_{-T \le t \le 0} \left\| \mathbf{W}_{out}\,\mathbf{r}(t) - \mathbf{v}_d(t) \right\|^2 + \beta \left\| \mathbf{W}_{out} \right\|^2, \tag{3}$$
where $\beta > 0$ is the penalty parameter, $\|\cdot\|^2$ is the sum of squares of each element, and the minimizer of the loss function is derived as follows
$$\left\| \mathbf{W}_{out}\mathbf{r}(t) - \mathbf{v}_d(t) \right\|^2 + \beta\left\|\mathbf{W}_{out}\right\|^2 = \left(\mathbf{W}_{out}\mathbf{r}(t) - \mathbf{v}_d(t)\right)^T \left(\mathbf{W}_{out}\mathbf{r}(t) - \mathbf{v}_d(t)\right) + \beta\,\mathbf{W}_{out}^T\mathbf{W}_{out}, \tag{4}$$

$$\begin{aligned} &\frac{\partial}{\partial \mathbf{W}_{out}}\left[\left(\mathbf{W}_{out}\mathbf{r} - \mathbf{v}_d\right)^T\left(\mathbf{W}_{out}\mathbf{r} - \mathbf{v}_d\right) + \beta\,\mathbf{W}_{out}^T\mathbf{W}_{out}\right] = 0, \\ &\frac{\partial}{\partial \mathbf{W}_{out}}\left[\mathbf{r}^T\mathbf{W}_{out}^T\mathbf{W}_{out}\mathbf{r} - 2\,\mathbf{r}^T\mathbf{W}_{out}^T\mathbf{v}_d + \mathbf{v}_d^T\mathbf{v}_d + \beta\,\mathbf{W}_{out}^T\mathbf{W}_{out}\right] = 0, \\ &2\,\mathbf{W}_{out}\mathbf{r}\mathbf{r}^T - 2\,\mathbf{v}_d\mathbf{r}^T + 2\beta\,\mathbf{W}_{out} = 0, \\ &\mathbf{W}_{out}\left(\mathbf{r}\mathbf{r}^T + \beta\mathbf{I}\right) = \mathbf{v}_d\mathbf{r}^T, \\ &\mathbf{W}_{out} = \mathbf{v}_d\mathbf{r}^T\left(\mathbf{r}\mathbf{r}^T + \beta\mathbf{I}\right)^{-1}. \end{aligned} \tag{5}$$
After completing the training phase $-T \le t \le 0$ and obtaining the output weight matrix, the system enters the prediction phase $t > 0$. The desired behavior is for the expected output to equal the next input, i.e., $\mathbf{v}_d(t+\Delta t) = \mathbf{u}(t+\Delta t)$; having obtained the output weight matrix, at $t = 0$ the system transitions from Figure 1a to Figure 1b and operates autonomously according to the following formula
$$\mathbf{r}(t+\Delta t) = (1-\alpha)\,\mathbf{r}(t) + \alpha \tanh\!\left(\mathbf{A}\,\mathbf{r}(t) + \mathbf{W}_{in}\mathbf{W}_{out}\,\mathbf{r}(t) + \xi\mathbf{e}\right). \tag{6}$$
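To make Equations (1)–(6) concrete, a minimal NumPy sketch of the training pipeline follows. The shapes and hyperparameter names mirror the notation of this section, but the specifics (reservoir density, sampling ranges of the random matrices) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_reservoir(Dr, D, rho=0.7, density=0.02):
    """Random sparse reservoir matrix A rescaled to spectral radius rho, plus a
    uniform input matrix W_in; shapes follow Section 2.1 (A: Dr x Dr, W_in: Dr x D)."""
    A = rng.uniform(-1.0, 1.0, (Dr, Dr)) * (rng.random((Dr, Dr)) < density)
    A *= rho / np.abs(np.linalg.eigvals(A)).max()   # enforce spectral radius rho
    W_in = rng.uniform(-0.5, 0.5, (Dr, D))
    return A, W_in

def run_reservoir(A, W_in, U, alpha=1.0, xi=0.0):
    """Drive the reservoir with the input sequence U (T x D) via Equation (1);
    returns the state matrix R (T x Dr)."""
    R = np.zeros((len(U), A.shape[0]))
    r = np.zeros(A.shape[0])
    for t, u in enumerate(U):
        r = (1 - alpha) * r + alpha * np.tanh(A @ r + W_in @ u + xi)
        R[t] = r
    return R

def train_readout(R, Vd, beta=1e-6):
    """Closed-form ridge regression, Equation (5):
    W_out = v_d r^T (r r^T + beta I)^{-1}, with R holding the states row-wise."""
    return Vd.T @ R @ np.linalg.inv(R.T @ R + beta * np.eye(R.shape[1]))
```

Prediction then follows Equation (6) by feeding $\mathbf{W}_{out}\mathbf{r}(t)$ back as the next input, as sketched in Section 2.2.1.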

2.2. Echo State Network Based on Homotopy Transformation (H-ESN)

2.2.1. Introduction to H-ESN

The activation function is the nonlinear function in a neuron; introducing nonlinear characteristics into artificial neural networks improves their expressive ability [37]. Without a nonlinear function, a network can only form linear combinations, which makes the activation function crucial in neural networks.
This section introduces an ESN model based on homotopy theory. The concept of homotopy originated in topological theory and has been applied to recovery algorithms. The core of the homotopy method lies in the idea of continuous change along a homotopy path.
Theorem 1
([38]). Let $f, g: X \to Y$ be continuous functions between topological spaces. A homotopy from f to g is a continuous function $F: X \times [0,1] \to Y$ such that, for all $x \in X$, $F(x, 0) = f(x)$ and $F(x, 1) = g(x)$. If such a homotopy exists between f and g, then f is said to be homotopic to g, denoted $f \simeq g$. Since F is continuous in t, the transformation from f to g is continuous; that is, F traces a path from one function to the other.
Theorem 2
([38]). Let C be a convex set; then for any $x, y \in C$ and $\theta \in [0,1]$, $\theta x + (1-\theta)y \in C$. Consider $F(x, \theta) = \theta f(x) + (1-\theta) g(x)$. Because C is convex, for any $f(x), g(x) \in C$ and any $\theta \in [0,1]$, $\theta f(x) + (1-\theta)g(x) \in C$; hence F is a function from $X \times [0,1]$ to C, and F is a homotopy.
On the premise of maintaining the basic topological properties, our model can obtain the key features of chaos for learning through a continuous homotopy transformation between different activation functions. The activation function of our model is shown below
$$F(x, \theta) = (1-\theta)\,\frac{e^x - e^{-x}}{e^x + e^{-x}} + \theta x, \tag{7}$$
in which $\theta$ is the homotopy parameter. Figure 2 shows the graph of $F(x, \theta)$ as the function transitions from $\tanh$ to $x$ for different values of $\theta$.
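In code, the homotopy activation of Equation (7) is a one-liner; the sketch below (illustrative, with $\theta$ as a keyword argument) makes explicit that $\theta = 0$ recovers $\tanh$ and $\theta = 1$ recovers the identity, the two endpoints of the homotopy path.

```python
import numpy as np

def homotopy_activation(x, theta=0.7):
    """Equation (7): F(x, theta) = (1 - theta) * tanh(x) + theta * x.
    theta = 0 recovers tanh; theta = 1 recovers the identity map."""
    return (1.0 - theta) * np.tanh(x) + theta * x

# Sanity checks at the endpoints of the homotopy path:
x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(homotopy_activation(x, 0.0), np.tanh(x))   # endpoint f = tanh
assert np.allclose(homotopy_activation(x, 1.0), x)            # endpoint g = identity
```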
To provide a detailed explanation of the H-ESN training and prediction process, the specific flow of the algorithm is presented below (Algorithm 1), and its time and space complexity are summarized in Table 1.
Algorithm 1 H-ESN standard algorithm process.
Require: Observed data $\mathbf{u}(t)$; the dimensions of the input data, reservoir weight matrix, and predicted data are $D \times 1$, $D_r \times D_r$, and $D_v \times 1$, respectively.
Ensure: Predicted data $\mathbf{v}(t)$.
1: Fix a random seed and generate the input weight matrix $\mathbf{W}_{in}$ and reservoir weight matrix $\mathbf{A}$.
2: Set the spectral radius $\rho$ and sparsity $d$ of $\mathbf{A}$.
3: Determine the homotopy parameter $\theta$.
4: for $t = -T$ to $0$ do
5:     Update the reservoir state $\mathbf{r}(t)$ using Equation (1) with the activation function replaced by $F(x, \theta)$.
6: end for
7: Obtain the output weight matrix $\mathbf{W}_{out}$ by minimizing Equation (3).
8: for $t = 0$ to $T_{test}$ do
9:     Use Equation (6) with the activation function replaced by $F(x, \theta)$ to predict the output.
10: end for
11: Obtain the H-ESN prediction results.
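A sketch of the closed-loop prediction stage (steps 8–10) follows, reusing the helpers from Section 2.1 and the homotopy activation above; the feedback wiring, with $\mathbf{v}(t) = \mathbf{W}_{out}\mathbf{r}(t)$ fed back as the next input, is Equation (6), while the helper names themselves are assumptions of this sketch.

```python
def hesn_predict(A, W_in, W_out, r0, n_steps, alpha=1.0, theta=0.7, xi=0.0):
    """Closed-loop prediction (steps 8-10 of Algorithm 1): Equation (6) with
    tanh replaced by F(x, theta); the network's own output is fed back."""
    r, outputs = r0.copy(), []
    for _ in range(n_steps):
        u = W_out @ r                               # v(t) = W_out r(t), fed back as input
        r = (1 - alpha) * r + alpha * homotopy_activation(A @ r + W_in @ u + xi, theta)
        outputs.append(W_out @ r)
    return np.array(outputs)
```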

2.2.2. Echo State Property of the H-ESN

The Echo State Property (ESP) is the core theoretical foundation of the ESN, ensuring that the dynamic behavior of the reservoir has good stability and predictability [39,40]. A sufficient condition for the H-ESN to exhibit the echo state property is given below.
Assumption 1.
The nonlinear activation function $f: X \to Y$ is a function from set X to set Y. There exists a constant $L \ge 0$ such that for any two points $x_1, x_2 \in X$, the Lipschitz condition is satisfied:
$$\left\| f(u, x_1) - f(u, x_2) \right\| \le L \left\| x_1 - x_2 \right\|,$$
where L is a constant, typically taken as 1.
Theorem 3.
If the following conditions are satisfied for the H-ESN model:
(1) $F(x) = (1-\theta)\tanh(x) + \theta x$, where the parameter $\theta$ satisfies $\theta \in [0, 1]$;
(2) the spectral radius $\rho_A$ of the internal weight matrix $\mathbf{A}$ of the reservoir satisfies $\rho_A < 1$;
then the H-ESN model has the ESP.
Proof. 
Consider two arbitrary initial states $\mathbf{r}_1(0)$ and $\mathbf{r}_2(0)$ and the same input sequence $\mathbf{u}(n)$, and define the state difference $\Delta\mathbf{r}(n) = \mathbf{r}_1(n) - \mathbf{r}_2(n)$; the ESP requires $\Delta\mathbf{r}(n) \to 0$ as $n \to \infty$. According to the state update equation,
$$\mathbf{r}(n+1) = (1-\alpha)\,\mathbf{r}(n) + \alpha\big[(1-\theta)\tanh\!\left(\mathbf{A}\mathbf{r}(n) + \mathbf{W}_{in}\mathbf{u}(n) + \xi\mathbf{e}\right) + \theta\left(\mathbf{A}\mathbf{r}(n) + \mathbf{W}_{in}\mathbf{u}(n) + \xi\mathbf{e}\right)\big],$$
$$\Delta\mathbf{r}(n+1) = (1-\alpha)\,\Delta\mathbf{r}(n) + \alpha\big[(1-\theta)\big(\tanh\!\left(\mathbf{A}\mathbf{r}_1(n) + \mathbf{W}_{in}\mathbf{u}(n) + \xi\mathbf{e}\right) - \tanh\!\left(\mathbf{A}\mathbf{r}_2(n) + \mathbf{W}_{in}\mathbf{u}(n) + \xi\mathbf{e}\right)\big) + \theta\,\mathbf{A}\Delta\mathbf{r}(n)\big].$$
The tanh function is Lipschitz continuous, with a Lipschitz constant of 1.
$$\begin{aligned} \left\|\Delta\mathbf{r}(n+1)\right\| &\le (1-\alpha)\left\|\Delta\mathbf{r}(n)\right\| + \alpha\big[(1-\theta)\left\|\mathbf{A}\Delta\mathbf{r}(n)\right\| + \theta\left\|\mathbf{A}\Delta\mathbf{r}(n)\right\|\big] \\ &= (1-\alpha)\left\|\Delta\mathbf{r}(n)\right\| + \alpha\left\|\mathbf{A}\Delta\mathbf{r}(n)\right\| \\ &\le (1-\alpha)\left\|\Delta\mathbf{r}(n)\right\| + \alpha\rho_A\left\|\Delta\mathbf{r}(n)\right\| \\ &< (1-\alpha)\left\|\Delta\mathbf{r}(n)\right\| + \alpha\left\|\Delta\mathbf{r}(n)\right\| = \left\|\Delta\mathbf{r}(n)\right\|, \end{aligned}$$
so $\|\Delta\mathbf{r}(n+1)\| < \|\Delta\mathbf{r}(n)\|$: the state difference contracts at every step and the influence of the initial condition vanishes.
The ESN activated by homotopy transformation Equation (7) is a new chaotic prediction method proposed in this paper. In Section 3, we show the effect of predicting chaotic Lorenz, MG, and KS systems based on the new ESN model (H-ESN). □
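A quick numerical sanity check of the contraction argument above (a sketch reusing the earlier helpers, not part of the paper) drives two reservoirs from different initial states with the same input sequence and confirms that $\|\Delta\mathbf{r}(n)\|$ shrinks:

```python
Dr, D = 200, 3
A, W_in = make_reservoir(Dr, D, rho=0.9)            # condition (2): rho_A < 1
theta, alpha = 0.7, 1.0                             # condition (1): theta in [0, 1]
r1 = rng.standard_normal(Dr)                        # two different initial states
r2 = rng.standard_normal(Dr)
for _ in range(200):
    u = rng.standard_normal(D)                      # identical input for both copies
    r1 = (1 - alpha) * r1 + alpha * homotopy_activation(A @ r1 + W_in @ u, theta)
    r2 = (1 - alpha) * r2 + alpha * homotopy_activation(A @ r2 + W_in @ u, theta)
print(np.linalg.norm(r1 - r2))                      # -> ~0: initial-state difference dies out
```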

3. Results

We will provide three examples, the Lorenz, Mackey–Glass, and Kuramoto–Sivashinsky systems, to illustrate the advantages of using the H-ESN in predicting chaotic systems.

3.1. Lorenz System

The Lorenz system, proposed by Edward Lorenz in 1963 [41], is a three-dimensional nonlinear dynamical system originally designed to study atmospheric convection. As a fundamental model in chaos theory, it is known for its simplicity and complex dynamics. The system’s differential equations are as follows
$$\frac{dx}{dt} = -ax + ay, \qquad \frac{dy}{dt} = bx - y - xz, \qquad \frac{dz}{dt} = -cz + xy, \tag{8}$$
where $a = 10$, $b = 28$, and $c = 8/3$. The system variables $x, y, z$ are known, and the input $\mathbf{u}(t) = (x(t), y(t), z(t))^T$ is used to obtain the output weight matrix $\mathbf{W}_{out}$ through training. Afterward, the system enters the prediction phase for $t > 0$. Taking into account the symmetry of the Lorenz equations, Equation (2) is modified to $\mathbf{v}_d(t) = \mathbf{W}_{out}\tilde{\mathbf{r}}$, where $\tilde{\mathbf{r}}$ is a vector of dimension $D_r$ whose components equal those of $\mathbf{r}$ except that half of them are squared, $\tilde{r}_i = r_i^2$. Based on this, we compare the H-ESN with other commonly used ESN models; the results are illustrated in Figure 3, with the parameters shown in Table 2. Additionally, the accurate prediction data lengths for the three models on the three variables of the Lorenz system are presented in Table 3.
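Training data for this experiment can be generated along the following lines (a sketch using SciPy's `solve_ivp` with the parameters of Equation (8) and the $\Delta t = 0.02$ of Table 2; the integration horizon, initial condition, and transient length are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, a=10.0, b=28.0, c=8.0 / 3.0):
    """Right-hand side of Equation (8)."""
    x, y, z = s
    return [-a * x + a * y, b * x - y - x * z, -c * z + x * y]

dt = 0.02                                           # sampling step, Table 2
t_eval = np.arange(0.0, 120.0, dt)
sol = solve_ivp(lorenz, (0.0, t_eval[-1]), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
U = sol.y.T[500:]                                   # drop the transient; rows are u(t)
```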
According to Figure 3 and Table 3, in the initial stages, all three models—Deep ESN, ESN, and H-ESN—can achieve relatively accurate predictions for the three variables of the Lorenz system. However, as the number of data points increases, the prediction trajectory of the Deep ESN deviates first from the true values, with the purple dotted line diverging from the blue solid line. This is because we selected a Deep ESN with three layers, each containing 100 nodes, which results in weaker nonlinear modeling capability compared to an ESN with a single reservoir (300 nodes). Later, the predicted trajectory of the ESN also starts to deviate from the true state with an increase in data points, with the green dotted line moving away from the blue solid line. In contrast, H-ESN demonstrates a significant advantage in prediction duration compared to the other two models, achieving accurate predictions for approximately 500 data points for the three variables of the Lorenz system. For comparison, we computed the mean squared error (MSE) values of the three models for the three variables of the Lorenz system at different prediction lengths in Table 4.
As shown in Table 4, the MSE between the predicted values at 300, 350, 400, 450, and 500 data points and the true values of the three variables of the Lorenz system were calculated separately. Additionally, in Table 5, the average MSE percentage improvement of the H-ESN over the ESN for the three variables of the Lorenz system was calculated based on Equation (9). It can be concluded that the MSE value for the H-ESN model is minimal at different prediction stages, indicating that the model proposed in this paper achieves the best performance in this prediction task.
$$\frac{\mathrm{MSE}_{\mathrm{ESN}} - \mathrm{MSE}_{\mathrm{H\text{-}ESN}}}{\mathrm{MSE}_{\mathrm{ESN}}} \times 100\%. \tag{9}$$
In chaotic prediction tasks, the focus is on the duration and accuracy of predictions. Effective prediction time (EPT) is an important metric for evaluating the performance of time series prediction models. It refers to the limited period during which accurate predictions can be made in chaotic scenarios; this period is finite because chaotic systems are extremely sensitive to initial conditions, leading to significant uncertainty in long-term predictions. In this paper, the prediction error is measured as $\|\mathbf{u}(t) - \mathbf{v}(t)\|$, and the prediction is considered invalid at the first time t at which this error exceeds a set threshold $\epsilon$ ($\epsilon$ is a given error); the EPT is that time.
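Computed from a predicted trajectory, the EPT is simply the first time index at which the error norm crosses $\epsilon$; a minimal sketch (the default threshold value is an assumption):

```python
import numpy as np

def effective_prediction_time(u_true, v_pred, eps=1.0):
    """First step t at which ||u(t) - v(t)|| exceeds eps; returns the full
    trajectory length if the threshold is never crossed."""
    err = np.linalg.norm(u_true - v_pred, axis=1)
    crossed = np.nonzero(err > eps)[0]
    return int(crossed[0]) if crossed.size else len(err)
```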
The parameter $\theta$ is a very important hyperparameter, and its selection significantly affects the system’s predictive performance. Figure 4 shows the EPT for the three variables under different values of $\theta$. Overall, when $\theta$ is small, the EPT tends to decrease because the nonlinearity is too strong, making it difficult for the network to train or generalize effectively; when $\theta$ is large, the EPT also tends to decrease because excessive linearization prevents the network from capturing the key dynamics of the chaotic system. In the intermediate region around $\theta = 0.65$, the EPT increases and reaches its maximum, as the balance between nonlinearity and linearity is optimized. To ensure that all three variables have a long prediction duration, a $\theta$ value of approximately 0.7 is recommended.
The H-ESN introduces a linear component ( θ x ) through homotopy transformation, finding the optimal balance between nonlinearity and linearity. However, the value of θ generally varies for different chaotic systems. Currently, we primarily determine the value of θ through grid search or empirical tuning. While effective, this method can be computationally expensive when dealing with high-dimensional or complex systems. Finding the optimal value of θ quickly and efficiently is a major challenge faced by the H-ESN.

3.2. Mackey–Glass Equation

The Mackey–Glass (MG) equation is a delay differential equation commonly used to model complex dynamic behaviors in biological and physical systems with time delays, especially in biology and ecology [42]. Its standard form is as follows
$$\frac{dx}{dt} = \frac{\beta\, x(t-\tau)}{1 + x(t-\tau)^n} - \gamma x(t), \tag{10}$$
where β = 0.2, γ = 0.1, τ = 17, and n = 10. The above equation is numerically solved using the Euler method to obtain the chaotic time series of the MG system. The first 2000 data points are used as the training set, and the next 1000 data points as the testing set. Figure 5 shows a comparison of the prediction performance of the ESN and H-ESN on the MG time series, with the parameters listed in Table 6.
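The MG series can be produced with a plain Euler scheme for the delay equation (10), storing the trajectory so that $x(t-\tau)$ can be read back; a sketch follows, where the step size matches the $\Delta t = 0.1$ of Table 6 and the constant pre-history $x(t) = x_0$ for $t \le 0$ is an assumption.

```python
import numpy as np

def mackey_glass(n_steps, dt=0.1, beta=0.2, gamma=0.1, tau=17.0, n=10.0, x0=1.2):
    """Euler integration of Equation (10); the delayed value x(t - tau) is read
    from the stored trajectory (constant history x0 before t = 0)."""
    delay = int(round(tau / dt))
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(n_steps - 1):
        x_tau = x[t - delay] if t >= delay else x0
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x

series = mackey_glass(3000)   # first 2000 points for training, next 1000 for testing
```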
As shown in Figure 5, both models can make good short-term predictions for the MG time series. The ESN can accurately predict 533 data points, but it fails to accurately predict the peak information in the time interval between 533 and 800 time steps. On the other hand, the H-ESN can predict for 1000 time steps and is capable of capturing the peak information effectively, indicating that the H-ESN has a clear advantage in predicting the MG time series.
The hyperparameters of the ESN have a significant impact on the prediction performance for chaotic systems. The following analysis examines how the MSE between the predicted and true values changes with the spectral radius $\rho$ over the first 500 time steps (Figure 7). Overall, as the spectral radius increases, the MSE decreases, and the H-ESN shows higher prediction accuracy than the other two models. For small values of $\rho$, the prediction accuracy already reaches $1 \times 10^{-4}$, and the MSE reaches $1 \times 10^{-7}$ when $\rho = 1.3$.
In addition to the spectral radius $\rho$, the number of reservoir nodes $D_r$ also plays a crucial role in chaotic prediction. The size of $D_r$ directly influences the complexity of the state space that the network can represent; generally speaking, the larger the number of reservoir nodes, the more dynamic and complex patterns the network can capture. Figure 6 illustrates how the MSE between the predicted and true values changes with $D_r$ for two different spectral radii. It can be observed that when $\rho = 1.2$ and $D_r = 950$, the MSE is minimized, and the prediction accuracy reaches $10^{-9}$. Furthermore, Figure 6 shows that for $D_r = 300$, the MSE corresponding to $\rho = 1.2$ is smaller than that corresponding to $\rho = 1.25$, the opposite of the trend observed for the H-ESN in Figure 7. This indicates that the reservoir size has a significant impact on the prediction ability of the H-ESN.

3.3. Kuramoto–Sivashinsky Equations

Now consider a modified version of the Kuramoto–Sivashinsky (KS) system defined by the following partial differential equation [43]
$$y_t = -y\,y_x - \left[1 + \mu \cos\!\left(\frac{2\pi x}{\lambda}\right)\right] y_{xx} - y_{xxxx}. \tag{11}$$
If $\mu = 0$, this equation reduces to the standard KS equation; if $\mu \neq 0$, the cosine term makes the equation spatially inhomogeneous. We will focus on the case $\mu = 0$ below.
We take into consideration the fact that the KS system has periodic boundary conditions on $0 \le x < L$, that is, $y(x + L, t) = y(x, t)$, and the KS equation is numerically integrated on a uniformly spaced grid of size Q. The simulated data consist of Q time series with time step $\Delta t$, represented by the vector $\mathbf{u}(t) = (y_1(t), y_2(t), \ldots, y_Q(t))^T$, where $y_i(t) = y(i\Delta x, t)$ and $\Delta x = L/Q$.
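The paper does not specify the integrator; one standard choice for stiff PDEs of this kind is the pseudospectral ETDRK4 scheme of Kassam and Trefethen. A sketch for the $\mu = 0$ case follows, with $Q = 60$ grid points as in Table 8; the domain length L, step size, and initial condition are assumptions.

```python
import numpy as np

def ks_simulate(L=60.0, Q=60, dt=0.25, n_steps=20000, seed=1):
    """Pseudospectral ETDRK4 integration of the KS equation (mu = 0),
    y_t = -y y_x - y_xx - y_xxxx, with periodic boundary conditions on [0, L)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(Q, d=L / Q)       # angular wavenumbers
    Lk = k**2 - k**4                                  # linear operator in Fourier space
    E, E2 = np.exp(dt * Lk), np.exp(dt * Lk / 2)
    # ETDRK4 coefficients via complex contour integral (Kassam & Trefethen, 2005)
    M = 16
    r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    LR = dt * Lk[:, None] + r[None, :]
    Qc = dt * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
    f1 = dt * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
    f2 = dt * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
    f3 = dt * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))
    g = -0.5j * k                                     # nonlinear term: -0.5 d/dx (y^2)
    v = np.fft.fft(0.01 * rng.standard_normal(Q))     # small random initial condition
    traj = np.empty((n_steps, Q))
    for i in range(n_steps):
        Nv = g * np.fft.fft(np.real(np.fft.ifft(v))**2)
        a = E2 * v + Qc * Nv
        Na = g * np.fft.fft(np.real(np.fft.ifft(a))**2)
        b = E2 * v + Qc * Na
        Nb = g * np.fft.fft(np.real(np.fft.ifft(b))**2)
        c = E2 * a + Qc * (2 * Nb - Nv)
        Nc = g * np.fft.fft(np.real(np.fft.ifft(c))**2)
        v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3
        traj[i] = np.real(np.fft.ifft(v))
    return traj                                       # rows are u(t) = (y_1, ..., y_Q)
```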
Considering that the Kuramoto–Sivashinsky equation exhibits high-dimensional spatiotemporal chaos and a certain symmetry, we modify Equation (2) by analogy with the Lorenz system. After the training stage $-T \le t \le 0$, the system uses Tikhonov regularization (ridge regression) to obtain $\mathbf{W}_{out}$. Once the output parameters are determined, the system enters the prediction stage $t > 0$ and evolves autonomously according to Figure 1b.
As shown in Figure 8, the ESN model achieves a prediction of 7 $\Lambda_{max}t$ for the KS system, while the H-ESN model can predict up to 12 $\Lambda_{max}t$, almost twice the duration achieved by the ESN model. In terms of prediction accuracy, the error panel of the H-ESN model is close to 0 during the early prediction stages. For comparison, we have plotted the root mean square error (RMSE) of both models at different prediction dimensions Q in Figure 9. The RMSE values of the H-ESN are consistently below 0.15, with relatively small fluctuations. In contrast, the RMSE values of the ESN model are mostly above 0.15, with a significant difference between the maximum and minimum values. Therefore, the H-ESN model is more accurate and more stable in its predictions.
The most important characteristic of chaotic dynamics is extreme sensitivity to initial conditions: long-term predictions of the system’s state are impossible, as even the smallest errors amplify exponentially, quickly eroding predictive capability. In predicting chaotic systems, it is therefore necessary not only to optimize the hyperparameters to extend the effective prediction time of the variables, but also to evaluate prediction quality in terms of the system’s inherent chaotic characteristics. The maximum Lyapunov exponent ($\Lambda_{max}$) is a key metric for measuring the chaotic nature of dynamic systems; it quantifies the rate of divergence of nearby trajectories in phase space. Comparing the $\Lambda_{max}$ of the KS system, as shown in Table 7, the $\Lambda_{max}$ estimated from the predicted data of the H-ESN model is closest to the true $\Lambda_{max}$ of the KS system, with a difference of 0.0001, whereas the $\Lambda_{max}$ obtained from the predicted data of the ESN model deviates from the true value by 0.0017. This difference indicates that the H-ESN model captures the chaotic characteristics and sensitivity of the system more faithfully. Particularly at longer time scales, the H-ESN reflects the system’s dynamical behavior more accurately, highlighting its advantages in modeling complex dynamical systems.
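The paper does not state how $\Lambda_{max}$ is estimated from data; one common choice is the nearest-neighbor divergence method of Rosenstein et al., sketched below under assumed embedding parameters (the function name, defaults, and the use of a scalar series are all illustrative).

```python
import numpy as np
from scipy.spatial.distance import cdist

def max_lyapunov(series, emb_dim=8, lag=1, min_sep=100, horizon=200, dt=0.25):
    """Rosenstein-style estimate of the maximal Lyapunov exponent from a scalar
    time series: delay-embed, pair each point with its nearest (temporally
    separated) neighbor, and fit the slope of the mean log divergence curve."""
    n = len(series) - (emb_dim - 1) * lag
    X = np.column_stack([series[i * lag:i * lag + n] for i in range(emb_dim)])
    usable = n - horizon
    d = cdist(X[:usable], X[:usable])               # O(usable^2): keep the series short
    idx = np.arange(usable)
    d[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf  # exclude temporal neighbors
    nn = d.argmin(axis=1)
    div = np.empty(horizon)
    for j in range(horizon):                        # mean log separation after j steps
        dist = np.linalg.norm(X[idx + j] - X[nn + j], axis=1)
        div[j] = np.mean(np.log(dist[dist > 0]))
    return np.polyfit(np.arange(horizon) * dt, div, 1)[0]  # slope ~ Lambda_max
```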
Chaotic systems are often disturbed by noise in practical applications, which can significantly reduce the performance of prediction models. To verify the robustness of the H-ESN under noisy conditions, we added Gaussian noise with varying intensities (noise levels of 0.01, 0.02, and 0.03) to the KS system, simulating real-world measurement errors. The experiment was conducted with the parameters listed in Table 8, aiming to evaluate the performance of H-ESN under noise conditions.
As shown in Figure 10, when the Gaussian noise intensity is 0.01, the H-ESN can still maintain good predictive capability, with a prediction duration reaching 6 Λ m a x . However, as the noise intensity increases, the predictive capability of the H-ESN gradually declines. This indicates that in low-noise environments, the H-ESN can effectively handle noise interference and maintain high prediction accuracy; however, under high-noise conditions, the impact of noise on model performance becomes more pronounced.

4. Summary and Future Directions

4.1. Summary

A trained ESN can approximate the ergodic properties of its real system, and an ESN based on homotopy theory has demonstrated better performance for short-term predictions in chaotic systems. As shown in Section 3 with the Lorenz and MG systems, the choice of parameters has a significant impact on prediction performance. Once the parameters are properly selected, our method achieves longer prediction durations and better accuracy. For high-dimensional spatiotemporal chaotic systems, the H-ESN has demonstrated better performance in chaotic prediction compared to the ESN, doubling the prediction duration and providing more precise estimates of the Lyapunov exponent. Moreover, the H-ESN exhibits a certain degree of robustness in low-noise environments, where it retains reliable prediction capabilities even under mild noise interference. This resilience not only highlights its practical applicability but also ensures the preservation of chaotic system dynamics, making it a promising tool for real-world scenarios where complete noise elimination is challenging.

4.2. Future Directions

From a broader perspective, this paper reveals that echo state networks based on homotopy theory (H-ESN) represent a highly fruitful and detailed research direction in the field of chaotic system measurement data. However, despite its promising potential, the current H-ESN framework still exhibits certain limitations that need to be addressed.
  • Computational inefficiency arises in parameter optimization, particularly in selecting the homotopy parameter θ .
  • H-ESN exhibits certain limitations under high-noise conditions, with prediction accuracy gradually decreasing as noise intensity increases.
  • Reservoir design varies by task, but the lack of universal guidelines makes selecting the right structure and parameters challenging.
Future research can focus on integrating noise reduction techniques or robust optimization methods to further enhance the performance of the H-ESN in high-noise environments. Additionally, exploring more efficient and direct approaches for selecting the homotopy parameter θ is crucial for improving the model’s adaptability and reducing computational costs. At the same time, developing adaptive design methods and establishing a theoretical framework can provide systematic guidance for the selection of reservoir structures and parameters in the H-ESN, thereby further enhancing its performance and generalizability across different application scenarios. These research directions not only hold significant theoretical value but also provide new insights for solving practical engineering problems.

Author Contributions

Conceptualization, S.W. and F.G.; methodology, S.W. and Y.L.; validation, S.W., F.G. and Y.L.; formal analysis, F.G. and Y.L.; investigation, H.L.; writing—original draft preparation, S.W. and F.G.; writing—review and editing, Y.L. and H.L.; visualization, S.W.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities (Grant No. 2652023054) and the 2024 Graduate Innovation Fund Project of China University of Geosciences, Beijing, China (Grant No. YB2024YC044).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hinton, G.; Deng, L.; Yu, D.; Mohamed, A.; Jaitly, N. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  2. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  3. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef]
  4. Vismaya, V.S.; Muni, S.S.; Panda, A.K.; Mondal, B. Degn-Harrison map: Dynamical and network behaviours with applications in image encryption. Chaos Solit. Fractals 2025, 192, 115987. [Google Scholar]
  5. Khan, A.Q.; Maqbool, A.; Alharbi, T.D. Bifurcations and chaos control in a discrete Rosenzweig-Macarthur prey-predator model. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 033111. [Google Scholar] [CrossRef]
  6. Farman, M.; Jamil, K.; Xu, C.; Nisar, K.S.; Amjad, A. Fractional order forestry resource conservation model featuring chaos control and simulations for toxin activity and human-caused fire through modified ABC operator. Math. Comput. Simul. 2025, 227, 282–302. [Google Scholar] [CrossRef]
  7. Zhai, H.; Sands, T. Controlling chaos in Van Der Pol dynamics using signal-encoded deep learning. Mathematics 2022, 10, 453. [Google Scholar] [CrossRef]
  8. Kennedy, C.; Crowdis, T.; Hu, H.; Vaidyanathan, S.; Zhang, H.-K. Data-driven learning of chaotic dynamical systems using Discrete-Temporal Sobolev Networks. Neural Netw. 2024, 173, 106152. [Google Scholar] [CrossRef]
  9. Bradley, E.; Kantz, H. Nonlinear time-series analysis revisited. Chaos Interdiscip. J. Nonlinear Sci. 2015, 25, 097610. [Google Scholar] [CrossRef]
  10. Young, C.D.; Graham, M.D. Deep learning delay coordinate dynamics for chaotic attractors from partial observable data. Phys. Rev. E 2023, 107, 034215. [Google Scholar] [CrossRef]
  11. Datseris, G.; Parlitz, U. Delay Coordinates, in Nonlinear Dynamics: A Concise Introduction Interlaced with Code; Springer International Publishing: Cham, Switzerland, 2022; pp. 89–103. [Google Scholar]
  12. Brandstater, A.; Swinney, H.L. Strange attractors in weakly turbulent Couette-Taylor flow. Phys. Rev. A 1987, 35, 2207. [Google Scholar] [CrossRef] [PubMed]
  13. Peng, H.; Wang, W.; Chen, P.; Liu, R. DEFM: Delay-embedding-based forecast machine for time series forecasting by spatiotemporal information transformation. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 043112. [Google Scholar] [CrossRef]
  14. Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [PubMed]
  15. Pathak, J.; Lu, Z.; Hunt, B.R.; Girvan, M.; Ott, E. Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data. Chaos Interdiscip. J. Nonlinear Sci. 2017, 27, 121102. [Google Scholar] [CrossRef] [PubMed]
  16. Hart, J.D. Attractor reconstruction with reservoir computers: The effect of the reservoir’s conditional Lyapunov exponents on faithful attractor reconstruction. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 043123. [Google Scholar] [CrossRef]
  17. Lu, Z.; Pathak, J.; Hunt, B.; Girvan, M.; Brockett, R.; Ott, E. Reservoir observers: Model-free inference of unmeasured variables in chaotic systems. Chaos Interdiscip. J. Nonlinear Sci. 2017, 27, 041102. [Google Scholar] [CrossRef]
  18. Ozturk, M.C.; Xu, D.; Principe, J.C. Analysis and design of echo state networks. Neural Comput. 2007, 19, 111–138. [Google Scholar] [CrossRef]
  19. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149. [Google Scholar] [CrossRef]
  20. Panahi, S.; Lai, Y.C. Adaptable reservoir computing: A paradigm for model-free data-driven prediction of critical transitions in nonlinear dynamical systems. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 051501. [Google Scholar] [CrossRef]
  21. Köster, F.; Patel, D.; Wikner, A.; Jaurigue, L.; Lüdge, K. Data-informed reservoir computing for efficient time-series prediction. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 073109. [Google Scholar] [CrossRef]
  22. Bonas, M.; Datta, A.; Wikle, C.K.; Boone, E.L.; Alamri, F.S.; Hari, B.V.; Kavila, I.; Simmons, S.J.; Jarvis, S.M.; Burr, W.S.; et al. Assessing predictability of environmental time series with statistical and machine learning models. Environmetrics 2025, 36, e2864. [Google Scholar] [CrossRef]
  23. Yadav, M.; Sinha, S.; Stender, M. Evolution beats random chance: Performance-dependent network evolution for enhanced computational capacity. Phys. Rev. E 2025, 111, 014320. [Google Scholar] [CrossRef] [PubMed]
  24. Dubey, S.R.; Singh, S.K.; Chaudhuri, B.B. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing 2022, 503, 92–108. [Google Scholar] [CrossRef]
  25. Yu, D.; Cao, F. Construction and approximation rate for feedforward neural network operators with sigmoidal functions. J. Comput. Appl. Math. 2025, 453, 116150. [Google Scholar] [CrossRef]
  26. Gong, Y.; Lun, S.; Li, M.; Lu, X. An echo state network model with the protein structure for time series prediction. Appl. Soft Comput. 2024, 153, 111257. [Google Scholar] [CrossRef]
  27. Xie, M.; Wang, Q.; Yu, S. Time series prediction of ESN based on Chebyshev mapping and strongly connected topology. Neural Process. Lett. 2024, 56, 30. [Google Scholar] [CrossRef]
  28. Lun, S.-X.; Yao, X.-S.; Qi, H.-Y.; Hu, H.-F. A novel model of leaky integrator echo state network for time-series prediction. Neurocomputing 2015, 159, 58–66. [Google Scholar] [CrossRef]
  29. Liao, Z.; Wang, Z.; Yamahara, H.; Tabata, H. Echo state network activation function based on bistable stochastic resonance. Chaos Solit. Fractals 2021, 153, 111503. [Google Scholar] [CrossRef]
  30. Liao, Y.; Li, H. Deep echo state network with reservoirs of multiple activation functions for time-series prediction. Sādhanā 2019, 44, 146. [Google Scholar] [CrossRef]
  31. Sun, C.; Song, M.; Hong, S.; Li, H. A review of designs and applications of echo state networks. arXiv 2020, arXiv:2012.02974. [Google Scholar] [CrossRef]
  32. Sun, J.; Li, L.; Peng, H. Sequence Prediction and Classification of Echo State Networks. Mathematics 2023, 11, 4640. [Google Scholar] [CrossRef]
  33. González-Zapata, A.M.; Tlelo-Cuautle, E.; Ovilla-Martinez, B.; Cruz-Vega, I.; De la Fraga, L.G. Optimizing echo state networks for enhancing large prediction horizons of chaotic time series. Mathematics 2022, 10, 3886. [Google Scholar] [CrossRef]
  34. Lin, Z.F.; Liang, Y.M.; Zhao, J.L.; Feng, J.; Kapitaniak, T. Control of chaotic systems through reservoir computing. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 121101. [Google Scholar] [CrossRef]
  35. Li, Y.; Li, Y. Predicting chaotic time series and replicating chaotic attractors based on two novel echo state network models. Neurocomputing 2022, 491, 321–332. [Google Scholar] [CrossRef]
  36. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. 2002, 14, 2531–2560. [Google Scholar] [CrossRef]
  37. Faroughi, S.A.; Soltanmohammadi, R.; Datta, P.; Mahjour, S.K.; Faroughi, S. Physics-informed neural networks with periodic activation functions for solute transport in heterogeneous porous media. Mathematics 2023, 12, 63. [Google Scholar] [CrossRef]
  38. Arkowitz, M. Introduction to Homotopy Theory; Springer Science & Business Media: New York, NY, USA, 2011; pp. 3–7. [Google Scholar]
  39. Yildiz, I.B.; Jaeger, H.; Kiebel, S.J. Re-visiting the echo state property. Neural Netw. 2012, 35, 1–9. [Google Scholar] [CrossRef]
  40. Wang, B.; Lun, S.; Li, M.; Lu, X. Echo state network structure optimization algorithm based on correlation analysis. Appl. Soft Comput. 2024, 152, 111214. [Google Scholar] [CrossRef]
  41. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130–141. [Google Scholar] [CrossRef]
  42. Glass, L.; Mackey, M. Mackey-Glass equation. Scholarpedia 2010, 5, 6908. [Google Scholar] [CrossRef]
  43. Abadie, M.; Beck, P.; Parker, J.P.; Schneider, T.M. The topology of a chaotic attractor in the Kuramoto-Sivashinsky equation. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 013123. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Echo state network architecture: (a) training phase, and (b) testing phase. I/R and R/O denote the input-to-reservoir and reservoir-to-output couplers, respectively. R denotes the reservoir.
Figure 2. Transition of F(x, θ) from tanh to x under different values of θ.
Figure 3. Prediction results of the ESN, H-ESN, and DeepESN for each dimension of the Lorenz system. (a) Lorenz-x, (b) Lorenz-y, and (c) Lorenz-z.
Figure 4. EPT variation curves of the three dimensions of the Lorenz system with respect to θ, with blue for Lorenz-x, red for Lorenz-y, and green for Lorenz-z.
Figure 5. Comparison of the prediction results for the MG time series between the ESN and H-ESN; the upper panel shows the ESN predictions, and the lower panel shows the H-ESN predictions.
Figure 6. Prediction error curves of the H-ESN with ρ = 1.2 and ρ = 1.25 as functions of varying reservoir sizes D_r.
Figure 7. Variation curves of the prediction errors of the ESN, H-ESN, and DeepESN at different spectral radius ρ values.
Figure 8. Comparison of the prediction results for the KS system between the ESN and H-ESN: the left panel shows the ESN predictions, while the right panel shows the H-ESN predictions, where Λ_max t represents the Lyapunov time.
Figure 9. MSE plot of the predicted values and true values for different dimensions of the KS system using the ESN and H-ESN.
Figure 10. Comparison of prediction errors of the H-ESN under different Gaussian noise intensities.
Table 1. Time complexity and space complexity of the H-ESN algorithm.

Time complexity: O(d × D_r + (T + T_test) × D_r² + D_r³)
    Generating reservoir weight matrix A: O(d × D_r)
    Reservoir state update: O(T × D_r²)
    Output weight matrix: O(D_r³)
    Prediction phase: O(T_test × D_r²)
Space complexity: O((d + T + D_v) × D_r + D_v × T_test)
    Reservoir weight matrix A: O(d × D_r)
    State matrix: O(T × D_r)
    Output weight matrix: O(D_r × D_v)
    Storing prediction results: O(D_v × T_test)
Table 2. Parameters for the Lorenz system prediction task.

Parameter   Value     Parameter   Value
D_r         300       α           1
ρ           0.7       ξ           0
Δt          0.02      θ           0.7
Table 3. Accurate prediction data lengths for the three variables of the Lorenz system using the ESN, Deep-ESN, and H-ESN.

            ESN    Deep-ESN    H-ESN
Lorenz-x    378    262         521
Lorenz-y    382    260         522
Lorenz-z    393    273         530
Table 4. Comparison of MSE values for the three dimensions of the Lorenz system at different prediction lengths using the ESN, Deep ESN, and H-ESN.

                          MSE
                    300       350       400       450       500
Lorenz-x  ESN       0.80367   0.78542   1.0564    3.5907    3.6537
          Deep ESN  2.9228    3.0528    6.9675    25.3735   41.3386
          H-ESN     0.5315    0.51897   0.51815   0.69162   0.65461
Lorenz-y  ESN       1.8692    1.7847    2.6831    8.5373    8.3973
          Deep ESN  7.2936    7.019     12.0632   37.1467   55.8747
          H-ESN     1.2166    1.1686    1.1761    1.6599    1.5405
Lorenz-z  ESN       2.7321    2.6891    2.8106    12.8135   12.4176
          Deep ESN  10.4869   11.1452   27.9924   49.5561   55.6685
          H-ESN     1.8054    1.7662    1.7419    2.4817    2.3127
Table 5. MSE percentage improvement of the H-ESN over the ESN.

                    300      350      400      450      500
Lorenz-x  H-ESN     33.87%   33.92%   50.95%   80.74%   82.08%
Lorenz-y  H-ESN     34.91%   34.52%   56.17%   80.56%   81.65%
Lorenz-z  H-ESN     33.92%   34.32%   38.02%   80.63%   81.38%
Table 6. Parameters for the MG time series prediction task.

Parameter   Value     Parameter   Value
D_r         1000      α           0.7
ρ           0.9       ξ           0
Δt          0.1       θ           0.08
Table 7. Comparison of Λ_max for the ESN and H-ESN; Λ_max represents the maximum Lyapunov exponent.

             Actual KS   ESN      H-ESN
Λ_max        0.0471      0.0488   0.0470
Error (%)    —           3.61%    0.21%
Table 8. Parameters for the KS system prediction task.

Parameter   Value     Parameter   Value
D_r         5000      d           3
ρ           0.4       ξ           0
Q           60        θ           0.3

