Learning High-Dimensional Chaos Based on an Echo State Network with Homotopy Transformation
Figure 1. Echo state network architecture: (a) training phase and (b) testing phase. $\mathbf{I/R}$ and $\mathbf{R/O}$ denote the input-to-reservoir and reservoir-to-output couplers, respectively; $\mathbf{R}$ denotes the reservoir.
Figure 2. Transition of $F(x,\theta)$ from $\tanh$ to $x$ under different values of $\theta$.
Figure 3. Prediction results of the ESN, H-ESN, and DeepESN for each dimension of the Lorenz system: (a) Lorenz-x, (b) Lorenz-y, (c) Lorenz-z.
Figure 4. EPT variation curves of the three dimensions of the Lorenz system with respect to $\theta$ (blue: Lorenz-x; red: Lorenz-y; green: Lorenz-z).
Figure 5. Comparison of the prediction results for the MG time series between the ESN and H-ESN; the upper panel shows the ESN predictions and the lower panel the H-ESN predictions.
Figure 6. Prediction-error curves of the H-ESN with $\rho = 1.2$ and $\rho = 1.25$ as functions of the reservoir size $D_r$.
Figure 7. Prediction-error curves of the ESN, H-ESN, and DeepESN at different values of the spectral radius $\rho$.
Figure 8. Comparison of the prediction results for the KS system between the ESN (left panel) and the H-ESN (right panel), where $\Lambda_{\max} t$ denotes the Lyapunov time.
Figure 9. MSE between the predicted and true values for different dimensions of the KS system using the ESN and H-ESN.
Figure 10. Comparison of the prediction errors of the H-ESN under different Gaussian noise intensities.
Abstract
1. Introduction
- With appropriately chosen parameters, the H-ESN achieves longer prediction times on a range of high-dimensional chaotic systems.
- Under identical parameter settings, the H-ESN attains smaller prediction errors than competing models when predicting the individual dimensions of chaotic systems.
- Compared with traditional methods, the H-ESN offers clear advantages in chaotic prediction tasks, particularly in estimating the maximal Lyapunov exponent.
2. Related Methods
2.1. Echo State Network
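As a reference point, the standard ESN in this line of work evolves a reservoir state $\mathbf{r}$ through a fixed random recurrent matrix $\mathbf{A}$ and a fixed input coupler $\mathbf{W}_{\mathrm{in}}$, and only the linear readout $\mathbf{W}_{\mathrm{out}}$ is trained. The notation below is a common convention in the reservoir-computing literature and is assumed rather than quoted from this paper:

```latex
% Standard ESN update and linear readout (notation assumed):
\mathbf{r}(t+\Delta t) = \tanh\!\big(\mathbf{A}\,\mathbf{r}(t) + \mathbf{W}_{\mathrm{in}}\,\mathbf{u}(t)\big),
\qquad
\mathbf{v}(t+\Delta t) = \mathbf{W}_{\mathrm{out}}\,\mathbf{r}(t+\Delta t).
```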
2.2. Echo State Network Based on Homotopy Transformation (H-ESN)
2.2.1. Introduction to H-ESN
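Figure 2 shows $F(x,\theta)$ sweeping from $\tanh$ at $\theta = 0$ to the identity map $x$ at $\theta = 1$, which is consistent with the straight-line homotopy between the two functions. The explicit form below is an assumption based on that caption, not a quotation from the paper:

```latex
% Assumed homotopy activation: straight-line interpolation between
% tanh (theta = 0) and the identity map (theta = 1).
F(x, \theta) = (1 - \theta)\tanh(x) + \theta x, \qquad \theta \in [0, 1].
```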
Algorithm 1. Standard H-ESN algorithm process (training and closed-loop prediction).
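A minimal Python sketch of Algorithm 1 under the assumptions above. The homotopy activation, the hyperparameter names (`d_r`, `rho`, `sigma`, `theta`, `beta`), and the ridge-regression readout are illustrative choices, not the authors' exact implementation:

```python
# Minimal sketch of the H-ESN training/prediction loop (Algorithm 1).
# F(x, theta) = (1 - theta)*tanh(x) + theta*x is assumed from Figure 2.
import numpy as np

rng = np.random.default_rng(0)

def homotopy_activation(x, theta):
    """Straight-line homotopy between tanh (theta=0) and identity (theta=1)."""
    return (1.0 - theta) * np.tanh(x) + theta * x

def make_reservoir(d_r, rho, density=0.02):
    """Sparse random reservoir matrix rescaled to spectral radius rho."""
    A = rng.uniform(-1.0, 1.0, size=(d_r, d_r))
    A[rng.random((d_r, d_r)) > density] = 0.0
    return A * (rho / np.abs(np.linalg.eigvals(A)).max())

def train_h_esn(U, Y, d_r=300, rho=0.9, sigma=0.5, theta=0.7, beta=1e-6):
    """Teacher-forced state collection plus ridge-regression readout.

    U: (T, d_in) input series; Y: (T, d_out) next-step targets.
    """
    A = make_reservoir(d_r, rho)
    W_in = sigma * rng.uniform(-1.0, 1.0, size=(d_r, U.shape[1]))
    R = np.zeros((U.shape[0], d_r))
    r = np.zeros(d_r)
    for t in range(U.shape[0]):
        r = homotopy_activation(A @ r + W_in @ U[t], theta)
        R[t] = r
    W_out = Y.T @ R @ np.linalg.inv(R.T @ R + beta * np.eye(d_r))
    return A, W_in, W_out, r

def predict(A, W_in, W_out, r, theta, n_steps):
    """Closed-loop prediction: each output is fed back as the next input."""
    preds = []
    for _ in range(n_steps):
        v = W_out @ r
        preds.append(v)
        r = homotopy_activation(A @ r + W_in @ v, theta)
    return np.array(preds)
```

Training collects teacher-forced reservoir states; prediction then runs the network autonomously, feeding each output back as the next input, which is the usual setup for chaotic forecasting.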
2.2.2. Echo State Property of the H-ESN
- (1) the homotopy parameter $\theta$ satisfies $\theta \in [0, 1)$;
- (2) the spectral radius of the internal weight matrix $\mathbf{A}$ of the reservoir satisfies $\rho(\mathbf{A}) < 1$ (a numerical check of the spectral-radius scaling is sketched below);
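As a quick illustration of condition (2), the snippet below rescales a random reservoir matrix to a target spectral radius and verifies the result; it is a sanity check, not part of the authors' procedure:

```python
# Illustrative check that rescaling a random matrix attains the requested
# spectral radius, as used in the echo state property condition.
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(-1.0, 1.0, size=(200, 200))
rho_target = 0.9
A *= rho_target / np.abs(np.linalg.eigvals(A)).max()
print(np.isclose(np.abs(np.linalg.eigvals(A)).max(), rho_target))  # True
```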
3. Results
3.1. Lorenz System
3.2. Mackey–Glass Equation
3.3. Kuramoto–Sivashinsky Equations
4. Summary and Future Directions
4.1. Summary
4.2. Future Directions
- Parameter optimization is computationally inefficient, particularly the selection of the homotopy parameter $\theta$.
- The H-ESN shows limitations under high-noise conditions: its prediction accuracy degrades as the noise intensity increases.
- Reservoir design varies by task, and the lack of universal guidelines makes selecting an appropriate structure and parameters challenging.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Time and space complexity of the H-ESN, stated for a dense reservoir with $T$ training steps, $T_p$ prediction steps, reservoir size $D_r$, and output dimension $D_{\mathrm{out}}$ (the asymptotic entries are the standard costs for these operations):

| | Operation | Complexity |
|---|---|---|
| Time | Generating reservoir weight matrix $\mathbf{A}$ | $O(D_r^2)$ |
| | Reservoir state update | $O(T D_r^2)$ |
| | Output weight matrix (ridge regression) | $O(T D_r^2 + D_r^3)$ |
| | Prediction phase | $O(T_p D_r^2)$ |
| Space | Reservoir weight matrix $\mathbf{A}$ | $O(D_r^2)$ |
| | State matrix | $O(T D_r)$ |
| | Output weight matrix | $O(D_{\mathrm{out}} D_r)$ |
| | Storing prediction results | $O(T_p D_{\mathrm{out}})$ |
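The dominant training cost in the table comes from the readout solve. Assuming the usual ridge-regression (Tikhonov) formulation, with collected state matrix $\mathbf{R} \in \mathbb{R}^{T \times D_r}$ and target matrix $\mathbf{Y}$:

```latex
% Ridge-regression readout; beta is the regularization coefficient.
\mathbf{W}_{\mathrm{out}}
  = \mathbf{Y}^{\top}\mathbf{R}\,\big(\mathbf{R}^{\top}\mathbf{R} + \beta\,\mathbf{I}\big)^{-1}.
```

Forming and inverting the $D_r \times D_r$ Gram matrix costs $O(T D_r^{2} + D_r^{3})$ time, matching the training entry above.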
Parameter settings for the Lorenz experiment:

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| $D_r$ | 300 | | 1 |
| | 0.7 | | 0 |
| $\Delta t$ | 0.02 | | 0.7 |
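For reference, the Lorenz system of Section 3.1 is the classical Lorenz-63 model; the standard chaotic parameter choice ($\sigma = 10$, $r = 28$, $b = 8/3$) is assumed here, as only the time step $\Delta t = 0.02$ is recoverable from the table above:

```latex
\dot{x} = \sigma (y - x), \qquad
\dot{y} = x (r - z) - y, \qquad
\dot{z} = x y - b z.
```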
| | ESN | DeepESN | H-ESN |
|---|---|---|---|
| Lorenz-x | 378 | 262 | 521 |
| Lorenz-y | 382 | 260 | 522 |
| Lorenz-z | 393 | 273 | 530 |
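The values above are reported in time steps. A common way to compute such a prediction-time metric (the paper's exact threshold definition is assumed here, not quoted) is the first step at which the normalized prediction error exceeds a tolerance:

```python
# Illustrative effective-prediction-time (EPT) computation for a 1-D series:
# the number of steps before the normalized error first exceeds eps.
import numpy as np

def effective_prediction_time(pred, true, eps=0.05):
    err = np.abs(pred - true) / (np.abs(true).max() + 1e-12)
    bad = np.nonzero(err > eps)[0]
    return bad[0] if bad.size else len(pred)
```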
MSE of each model at different reservoir sizes $D_r$:

| Dimension | Model | 300 | 350 | 400 | 450 | 500 |
|---|---|---|---|---|---|---|
| Lorenz-x | ESN | 0.80367 | 0.78542 | 1.0564 | 3.5907 | 3.6537 |
| | DeepESN | 2.9228 | 3.0528 | 6.9675 | 25.3735 | 41.3386 |
| | H-ESN | 0.5315 | 0.51897 | 0.51815 | 0.69162 | 0.65461 |
| Lorenz-y | ESN | 1.8692 | 1.7847 | 2.6831 | 8.5373 | 8.3973 |
| | DeepESN | 7.2936 | 7.019 | 12.0632 | 37.1467 | 55.8747 |
| | H-ESN | 1.2166 | 1.1686 | 1.1761 | 1.6599 | 1.5405 |
| Lorenz-z | ESN | 2.7321 | 2.6891 | 2.8106 | 12.8135 | 12.4176 |
| | DeepESN | 10.4869 | 11.1452 | 27.9924 | 49.5561 | 55.6685 |
| | H-ESN | 1.8054 | 1.7662 | 1.7419 | 2.4817 | 2.3127 |
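The MSE entries are the standard mean squared error between the predicted values $\hat{v}_i$ and the true values $v_i$ over the $N$-step prediction horizon:

```latex
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \big( \hat{v}_i - v_i \big)^{2}.
```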
Relative MSE reduction of the H-ESN compared with the ESN at different reservoir sizes $D_r$:

| Dimension | Model | 300 | 350 | 400 | 450 | 500 |
|---|---|---|---|---|---|---|
| Lorenz-x | H-ESN | 33.87% | 33.92% | 50.95% | 80.74% | 82.08% |
| Lorenz-y | H-ESN | 34.91% | 34.52% | 56.17% | 80.56% | 81.65% |
| Lorenz-z | H-ESN | 33.92% | 34.32% | 38.02% | 80.63% | 81.38% |
Parameter settings for the MG experiment:

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| $D_r$ | 1000 | | 0.7 |
| | 0.9 | | 0 |
| $\Delta t$ | 0.1 | | 0.08 |
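The MG benchmark is generated from the Mackey–Glass delay differential equation $\dot{x}(t) = a\,x(t-\tau)/(1 + x(t-\tau)^n) - b\,x(t)$. A minimal Euler integrator is sketched below with the classical chaotic setting ($a = 0.2$, $b = 0.1$, $n = 10$, $\tau = 17$); these constants are the common benchmark values and are assumed here rather than taken from the paper:

```python
# Euler integration of the Mackey-Glass delay differential equation,
# using the classical chaotic parameter set (assumed, illustrative).
import numpy as np

def mackey_glass(n_steps, dt=0.1, tau=17.0, a=0.2, b=0.1, n=10):
    lag = int(round(tau / dt))
    x = np.zeros(n_steps + lag)
    x[:lag] = 1.2                      # constant initial history
    for t in range(lag, n_steps + lag - 1):
        x_tau = x[t - lag]
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau**n) - b * x[t])
    return x[lag:]
```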
| | Actual KS | ESN | H-ESN |
|---|---|---|---|
| $\Lambda_{\max}$ | 0.0471 | 0.0488 | 0.0470 |
| Error (%) | | 3.61% | 0.21% |
Parameter settings for the KS experiment:

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| $D_r$ | 5000 | $d$ | 3 |
| | 0.4 | | 0 |
| $Q$ | 60 | | 0.3 |
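The KS system of Section 3.3 is the Kuramoto–Sivashinsky PDE on a periodic domain; reading the domain length $Q = 60$ from the table above (a common benchmark size for this equation) gives:

```latex
% Kuramoto-Sivashinsky equation with periodic boundary conditions;
% the domain length Q = 60 is read from the parameter table.
y_t = -y\,y_x - y_{xx} - y_{xxxx}, \qquad x \in [0, Q), \quad Q = 60.
```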
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).