Article

Adaptive Non-Stationary Fuzzy Time Series Forecasting with Bayesian Networks

School of Control Science and Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1628; https://doi.org/10.3390/s25051628
Submission received: 16 December 2024 / Revised: 23 February 2025 / Accepted: 4 March 2025 / Published: 6 March 2025
(This article belongs to the Section Intelligent Sensors)
Figure 1. The flow chart of the proposed model.
Figure 2. Original and first-order differenced time series for seventeen datasets. The top panel depicts the original time series data; the lower panel shows the first-order differenced time series.
Figure 3. Error scatter plots produced by the proposed model for (a) BTC–USD, (b) Dow Jones, (c) ETH–USD, (d) EUR–GBP, (e) EUR–USD, (f) GBP–USD, (g) NASDAQ, (h) SP500_a, and (i) TAIEX time series.
Figure 4. Error distribution histograms produced by the proposed model for (a) BTC–USD, (b) Dow Jones, (c) ETH–USD, (d) EUR–GBP, (e) EUR–USD, (f) GBP–USD, (g) NASDAQ, (h) SP500_a, and (i) TAIEX time series.
Figure 5. Prediction intervals yielded by the proposed model and IE-BN-PWFTS for (a) TAIEX and (b) EUR–USD time series.
Figure 6. Error scatter plots produced by the proposed model for (a) Sunspot, (b) MG, (c) SP500_b, (d) Radio, (e) Lake, (f) CO2, (g) Milk, and (h) DJ time series.
Figure 7. Error distribution histograms produced by the proposed model for (a) Sunspot, (b) MG, (c) SP500_b, (d) Radio, (e) Lake, (f) CO2, (g) Milk, and (h) DJ time series.

Abstract

Despite its interpretability and excellence in time series forecasting, the fuzzy time series forecasting model (FTSFM) faces significant challenges when handling non-stationary time series. This paper proposes a novel hybrid non-stationary FTSFM that integrates time-variant FTSFM, Bayesian network (BN), and non-stationary fuzzy sets. We first apply first-order differencing to extract the fluctuation information of the time series while reducing non-stationarity. A novel time-variant FTSFM updating method is proposed to effectively merge historical knowledge with new observations, enhancing model stability while maintaining sensitivity to time series changes. The updating of fuzzy sets is achieved by incorporating non-stationary fuzzy sets and prediction residuals. Based on updated fuzzy sets, the system reconstructs fuzzy logical relationship groups by combining historical and new data. This approach implements dynamic quantitative modeling of fuzzy relationships between historical and predicted moments, integrating valuable historical temporal fuzzy patterns with emerging temporal fuzzy characteristics. This paper further develops an adaptive BN structure learning method with an adaptive scoring function to update temporal dependence relationships between any two moments while building upon existing dependence relationships. Experimental results indicate that the proposed model significantly outperforms benchmark algorithms.

1. Introduction

The advancement of modern sensor technology has facilitated the proliferation of automated data acquisition systems across diverse fields, resulting in the generation of extensive temporal observations. These data, structured as time series, document the dynamic evolution of system states, establishing a fundamental basis for predictive analysis and decision support. Numerous time series forecasting methods have been proposed [1,2]. The superior interpretability of fuzzy time series forecasting models (FTSFMs) has led to their widespread application in various fields, including financial markets [3] and wind energy [4]. FTSFMs utilize fuzzy sets to model systematic uncertainties in data, specifically imprecision and vagueness. Ref. [5] introduced the FTSFM to address time series forecasting problems. The model first fuzzifies precise original data using fuzzy sets. Next, it employs max–min composition operations on the fuzzified data to build a fuzzy relation matrix that indicates the relationships between various time points. Using historical data and the fuzzy relationship matrix, the model generates fuzzy forecast values, which are converted into precise ones. Ref. [6] improved the model of [5] by replacing the complex max–min operations with simpler arithmetic operations.
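The max–min composition step of [5] can be sketched in a few lines. The following Python snippet is an illustrative sketch only: the relation matrix `R` and fuzzified state `f_prev` would in practice be derived from the fuzzified training data, and all names are hypothetical.

```python
def max_min_composition(f, R):
    """Max-min composition: (f o R)_j = max_i min(f_i, R_ij)."""
    n_cols = len(R[0])
    return [max(min(f[i], R[i][j]) for i in range(len(f)))
            for j in range(n_cols)]

# toy fuzzified state at t-1 and a toy fuzzy relation matrix
f_prev = [0.5, 1.0]
R = [[0.2, 0.8],
     [0.6, 0.3]]
f_next = max_min_composition(f_prev, R)  # fuzzy forecast at t
```

The resulting vector `f_next` is the fuzzy forecast, which a full model would then defuzzify into a precise value.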
Traditional FTSFMs are optimized for stationary time series, assuming the data are generated by a fixed, albeit unknown, process. In practical applications, however, the generation process often changes over time, and the data produced by such a changing process are called a non-stationary time series. The probabilistic properties of non-stationary time series change irregularly over time [7]. Developing effective fuzzy time series forecasting methodologies for such non-stationary data is therefore a critical research challenge.
FTSFMs struggle to keep pace with the dynamic variations of non-stationary time series, which degrades forecast accuracy. To address this challenge, researchers have employed two primary approaches for reducing non-stationarity: differencing operations and decomposition methods [8,9]. Ref. [10] demonstrated that training with first-order differenced data improves prediction accuracy compared with raw time series data. Ref. [11] proposed an intuitionistic FTSFM using the percentage of first-order differenced data between consecutive time intervals. Additionally, empirical mode decomposition methods can transform raw data into relatively more stationary multivariate time series [12,13]. These methods decompose the original time series into multiple subsequences known as intrinsic mode functions, which possess stronger stationarity than the original non-stationary time series. However, these subsequences may incorporate future information, leading to the so-called look-ahead bias [14]. The presence of look-ahead bias can severely distort experimental results. While these stationarity transformation approaches can partially mitigate non-stationarity, they cannot completely eliminate the dynamic nature inherent in time series, necessitating fundamental improvements to FTSFMs to better accommodate temporal variations.
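First-order differencing and its inverse can be sketched as follows; the function names are illustrative, not from any of the cited works:

```python
import numpy as np

def difference(series):
    """First-order differencing: d_t = y_t - y_{t-1}."""
    series = np.asarray(series, dtype=float)
    return series[1:] - series[:-1]

def undifference(diffs, y0):
    """Invert the differencing given the initial value y0."""
    return np.concatenate(([y0], y0 + np.cumsum(diffs)))

y = [100.0, 102.0, 101.0, 105.0]
d = difference(y)                 # variations between adjacent points
restored = undifference(d, y[0])  # recovers the original series
```

Note that, unlike decomposition into intrinsic mode functions, each `d[t]` depends only on past and present observations, so differencing introduces no look-ahead bias.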
Several improved FTSFMs have been developed to adapt to irregular changes in non-stationary time series and reduce data non-stationarity. Ref. [5] defined the time-variant FTSFM: if the fuzzy relation matrix changes over time, it is called a time-variant FTSFM. Ref. [15] further explained the detailed implementation steps for the time-variant FTSFM. For each prediction, a fuzzy relation matrix is built using historical data from a preceding period, enabling the fuzzy relationships to change over time. The length of the historical period is referred to as the window base. Many researchers have explored time-variant FTSFMs and combined them with the aforementioned time series stabilization methods. Ref. [16] utilized differenced time series for time-variant fuzzy time series forecasting. Ref. [17] advanced the work of [16] by improving outlier handling, data fuzzification, and window base determination. Ref. [18] proposed a novel time-variant FTSFM for differenced seasonal data with a systematic search algorithm for the window base. Ref. [19] presented a time-variant FTSFM incorporating a sliding window approach [20], where the fuzzy relation matrix changes as the window slides. A propositional linear temporal logic formula is proposed to analyze the data trend in the window, thereby supporting forecasting. These time-variant FTSFMs aim to construct predictive models that can adapt to the characteristics of the latest period of data. However, neglecting the valuable information in previously trained models may result in reduced prediction quality. Since new data evolves from historical data, fully utilizing previously trained models becomes crucial for improving prediction accuracy on incoming data.
FTSFMs quantify the imprecision and vagueness within time series using fuzzy set theory. It is challenging for fixed fuzzy sets to accommodate dynamic changes in non-stationary time series. Ref. [21] proposed the definition of the non-stationary fuzzy set for dynamically adjusting fuzzy sets. A non-stationary fuzzy set is created by integrating a basic fuzzy set with a perturbation function, where the parameters of the membership function change according to the values of the perturbation function at different time points. Ref. [22] applied non-stationary fuzzy sets to FTSFMs to predict non-stationary time series with trends and scale changes, with interpolation functions serving as perturbation functions. Ref. [23] introduced a non-stationary fuzzy time series (NSFTS) forecasting model capable of handling non-stationary and heteroskedastic time series. The NSFTS model utilizes a residual-based perturbation function to adaptively adjust the membership function parameters of the basic fuzzy set, reflecting changes in the non-stationary time series. The numerical forecast is calculated by combining the midpoints of the right-hand side (RHS) of each matched rule and the membership grades of the observations. Non-stationary fuzzy sets-based forecasting models perform well in short-term non-stationary time series forecasting. However, the unchanging fuzzy relationships limit their performance in long-term non-stationary time series forecasting.
Ref. [24] proposed a time-varying FTSFM that incorporates non-stationary fuzzy sets to improve the accuracy of wind power predictions. The model handles time series variability by dividing the series into segments, each with unique membership and partition functions. Adjustments to membership function parameters employ non-stationary fuzzy set methods to suit non-stationary time series. The model retrains using the latest data window when the most recent prediction error surpasses a predefined threshold to reduce computational requirements. While [24] offers computational efficiency, its strategy of maintaining only the fuzzy relationships from the latest data window may result in the loss of valuable temporal patterns embedded in historical relationships, potentially limiting the model’s ability to capture long-term temporal relationships.
Apart from adaptively adjusting fuzzy sets in the fuzzification stage, adaptive methods have been employed to enhance other aspects of FTSFMs, thereby better accommodating the dynamic nature of non-stationary time series. Ref. [25] proposed an adaptive method that automatically modifies the order of the FTSFM based on prediction accuracy for forecasting various data. Ref. [26] applied the adaptive expectation model [27,28] to optimize the forecast outcomes of a trend-weighted FTSFM following the initial forecasts from the FTSFM. The adaptive expectation model adjusts the forecast value using the difference between it and the observation at the previous time point. Ref. [29] utilized a modified adaptive expectation model with adaptive parameters to enhance forecasting performance. Changes in the adaptive expectation model parameters indicate stock fluctuation and oscillation.
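The adaptive expectation adjustment described above can be sketched as follows. The parameter name `alpha` and the exact sign convention vary across the cited works; this is one common form, not the exact formula of [26] or [29]:

```python
def adaptive_expectation(forecast, prev_observation, alpha=0.5):
    """Adjust the raw forecast toward the latest observation by a
    fraction alpha of the gap between them (alpha is illustrative)."""
    return forecast + alpha * (prev_observation - forecast)

# a raw forecast of 10.0 with the previous observation at 12.0
adjusted = adaptive_expectation(10.0, 12.0, alpha=0.5)
```

In the adaptive variants, `alpha` itself is tuned from data so that its changes track fluctuation and oscillation in the series.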
Ref. [30] introduced the Bayesian network (BN) concept. A BN represents knowledge through a probabilistic graph, with nodes denoting random variables and directed edges indicating the dependence relationships between variables. The strength of dependence between two variables is represented by their conditional probability distributions (CPDs) within a BN. The initial task when constructing a BN is to establish the BN structure that depicts the dependence relationships among variables. These relationships serve to model the causal interactions within the system. The BN structure can be manually set based on domain knowledge. In certain situations, the dependence relationships among variables are unknown and need to be inferred from data. Due to the advantages of modeling dependence relationships and handling statistical uncertainty in complex systems, several studies have applied BNs to time series forecasting. Ref. [31] initially applied BN structure learning to determine the dependence relationships in the price-earnings ratio at various time points, representing these as a BN structure. The CPDs of the time points were determined through BN parameter learning. Given historical observations at previous time points, the BN conducts probabilistic inference to generate the predicted values. Ref. [32] leveraged domain knowledge to construct the BN structure after determining the set of variables. The prediction phase initially utilized a BN to forecast the stochastic vehicular speed, followed by error compensation performed by a backpropagation neural network. Ref. [33] utilized Bayesian networks to discover direct and indirect dependence relationships across various time points in time series. Potential temporal patterns were modeled by integrating the BN structure with fuzzy logic relationships (FLRs). 
The study developed fuzzy empirical probability-weighted fuzzy logical relationship groups (FLRGs) to model statistical and systematic uncertainties, fully accounting for both relationships. In the above BN-based time series forecasting model, once dependence relationships are set based on all training data, they remain unchanged. This restriction reduces the model’s flexibility, which is necessary for effective time series forecasting in many situations. Ref. [34] proposed a method to construct BNs at each time point using data from a preceding period. With this approach, we can intuitively observe changes in the causal relationships within the system. Experimental results from the U.S. and Chinese stock markets indicate that the BN structure remains stable in the short term but changes over the long term. It shows that a fixed BN alone is inadequate for capturing the changing characteristics of the time series. In other words, changes in the dependencies within the BN structure can reflect the diversity of causal relationships in time series. Therefore, to enhance the BN’s ability to model complex relationships in non-stationary time series, it is necessary to develop BN structure learning methods that dynamically change based on input data.
In this study, we present a new hybrid FTSFM to enhance the accuracy of non-stationary time series forecasting. The proposed method begins by performing first-order differencing on the raw time series data. This differencing operation reduces non-stationarity while extracting information, producing a variation time series that captures fluctuations between adjacent time points. We establish the initial FTSFM using the training set of the variation time series. BN and fuzzy logical relationships (FLRs) represent the data’s temporal patterns. The BN structure visually illustrates the dependence relationships between different time points in the variation time series. At the same time, FLRs capture the fuzzy relationships between historical and forecasting moments after fuzzifying the variation time series. Uncertainty in the variation of time series is quantitatively described using FLRGs weighted by fuzzy empirical probabilities, which aggregate the membership values of corresponding FLRs within each FLRG. During the forecasting phase, we employ a sliding window approach, dividing the entire prediction dataset into multiple forecasting windows. The model remains unchanged within each window. The decision to update the existing model is based on its forecasting performance in the previous window. If no update is required, predictions are generated using the existing model; otherwise, the model is updated before predicting. When model updates are required, the proposed method employs a comprehensive updating mechanism: utilizing the training data for the existing model as old data and the actual observations from all prediction windows since the last model update as new data. The proposed method adjusts the parameters of non-stationary fuzzy sets using prediction residuals of the new data, achieving smooth transitions of fuzzy sets to respond to dynamic changes in the variation time series. 
The adaptive BN structure learning method employs a novel adaptive structure scoring function using old and new data, enhancing structural adaptability to new data while preserving valuable information from the dependence relationships in the existing BN. The model then reconstructs fuzzy empirical probability-weighted FLRGs using the updated BN and non-stationary fuzzy sets. After completing the model update, the framework generates predictions using the updated FLRGs and BN. The main contributions of this study are as follows:
1.
We propose a novel hybrid FTSFM that integrates time-variant FTSFM, BN, and non-stationary fuzzy sets. The traditional time-variant FTSFM update strategy handles the dynamic update of fuzzy relationships. BN structure learning captures adaptive changes in temporal dependence relationships between specific time points. Non-stationary fuzzy sets address irregular changes in data imprecision. This multi-dimensional modeling strategy significantly enhances the model’s adaptability and forecasting accuracy for non-stationary time series.
2.
We develop an adaptive BN structure updating method with a novel dynamic scoring mechanism. The proposed method enables continuous refinement of temporal dependence relationships while preserving crucial historical patterns, thereby achieving an optimal balance between stability and adaptability in temporal relationship modeling.
3.
We introduce a novel non-stationary fuzzy set approach that enhances existing methods through an innovative residual-based perturbation mechanism. This perturbation function enables each fuzzy set to share the impact of prediction residuals through distinct displacement degrees, facilitating smooth transitions of fuzzy sets. It ensures the model’s sensitivity to changes in the vagueness of non-stationary time series while enhancing its stability.
The remaining sections of this paper are structured as follows. In Section 2, we provide a detailed explanation of FTSFMs and BNs serving as the basis for the proposed algorithm. Section 3 provides an in-depth description of the proposed FTSFMs with BNs in non-stationary environments. Experimental result analyses are presented in Section 4. Section 5 concludes the paper.

2. Preliminaries

In this section, basic definitions of FTSFMs, non-stationary fuzzy sets, and BNs are briefly presented.

2.1. Basic Concepts of Fuzzy Time Series Model

Let U be the universe of discourse. A fuzzy set A on U is expressed as
$$A = \sum_{u \in U} \mu_A(u)/u,$$
where $\mu_A$ denotes the membership function of $A$, $\mu_A: U \to [0,1]$; $\mu_A(u) \in [0,1]$ is the membership grade of $u \in U$. Letting the parameters of $\mu_A$ be $p_1, \ldots, p_m$, we can write $\mu_A(u)$ as $\mu_A(u, p_1, \ldots, p_m)$.
A triangular fuzzy set $A$ takes the triangular function as its underlying membership function. Denote the lower, midpoint, and upper values of the triangle as $a$, $b$, and $c$, respectively. The membership function is defined as
$$\mu_A(u, a, b, c) = \begin{cases} (u-a)/(b-a), & a \le u \le b, \\ (c-u)/(c-b), & b \le u \le c, \\ 0, & \text{otherwise}. \end{cases}$$
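The triangular membership function defined above is a minimal sketch to implement directly (guards against degenerate intervals are added for safety and are not part of the definition):

```python
def triangular_membership(u, a, b, c):
    """Triangular membership with lower a, midpoint b, upper c."""
    if a <= u <= b:
        return (u - a) / (b - a) if b > a else 1.0
    if b <= u <= c:
        return (c - u) / (c - b) if c > b else 1.0
    return 0.0  # outside the support [a, c]
```

For example, with `(a, b, c) = (0, 5, 10)`, membership is 1 at the midpoint, 0.5 halfway up either slope, and 0 outside the support.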
Suppose a time series $Y = \{y_t \mid t \in T\}$ is given with $y_t \in \mathbb{R}$. Fuzzy sets $A_1, \ldots, A_I$ are defined on the universe of discourse $U$. The membership grades of $y_t$ in these fuzzy sets form the fuzzified datum $f_t = [\mu_{A_1}(y_t), \ldots, \mu_{A_I}(y_t)]$ at the moment $t$. $F$ is the fuzzy time series defined on $Y$, given by the collection of the $f_t$.
When $f_t$ results from $f_{t-h}, \ldots, f_{t-1}$, FLRs represent the fuzzy relationship between the antecedent moments $t-h, \ldots, t-1$ and the consequent moment $t$. An FLR for $f_{t-h}, \ldots, f_t$ has the format $A_{i_{t-h}}, \ldots, A_{i_{t-1}} \rightarrow A_{i_t}$, where the membership grade of $y_{t-k}$ in the fuzzy set $A_{i_{t-k}}$ $(k = 0, \ldots, h)$ is greater than zero. Each combination of $\{f_{t-k}\}_{k=h}^{0}$ can yield multiple FLRs, given that each $f_{t-k}$ may comprise multiple elements with non-zero membership grades. FLRs that share the same left-hand side constitute an FLRG, denoted $A_{i_{t-h}}, \ldots, A_{i_{t-1}} \rightarrow A_{k_1}, A_{k_2}, \ldots$
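The extraction of FLRs and their grouping into FLRGs can be sketched as follows, assuming each fuzzified datum is a vector of membership grades and fuzzy sets are identified by index; all names are illustrative:

```python
from collections import defaultdict
from itertools import product

def build_flrgs(fuzzified, order):
    """Group order-h FLRs by their left-hand side.
    fuzzified[t] is the grade vector [mu_A1(y_t), ..., mu_AI(y_t)]."""
    flrgs = defaultdict(set)
    for t in range(order, len(fuzzified)):
        # fuzzy sets with non-zero membership at each antecedent moment
        active = [
            [i for i, mu in enumerate(fuzzified[t - k]) if mu > 0]
            for k in range(order, 0, -1)
        ]
        consequents = [i for i, mu in enumerate(fuzzified[t]) if mu > 0]
        # each combination of active antecedent sets yields one FLR
        for lhs in product(*active):
            flrgs[lhs].update(consequents)
    return flrgs

# toy fuzzified series over three fuzzy sets A_0, A_1, A_2
F = [[1.0, 0.2, 0.0], [0.0, 0.8, 0.3], [0.0, 0.1, 0.9]]
groups = build_flrgs(F, order=1)
```

Because `F[0]` has non-zero grades on both `A_0` and `A_1`, the single pair of moments `(0, 1)` yields two FLRs, illustrating how one combination of fuzzified data can produce multiple FLRs.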

2.2. Non-Stationary Fuzzy Set

Ref. [21] introduced non-stationary fuzzy sets with the help of the non-stationary membership function and the perturbation function. The non-stationary membership function reflects the temporal variability present in membership functions. The perturbation function calculates the dynamic component of function parameters when the membership function changes. Non-stationary fuzzy sets reflect data change through positional shifts, changes in width, and noise-induced variations in the membership grade.
A non-stationary fuzzy set $\dot{A}$ is denoted as follows:
$$\dot{A} = \int_{t \in T} \left[ \int_{u \in U} \mu_{\dot{A}}(t, u)/u \right] \Big/ \, t,$$
where $T$ contains a series of time points and $\mu_{\dot{A}}(t, u): T \times U \to [0,1]$ is the non-stationary membership function. $\mu_{\dot{A}}(t, u)$ changes over time within the interval $T$ and can be expressed as
$$\mu_{\dot{A}}(t, u) = \mu_A\big(u, p_1(t), \ldots, p_m(t)\big),$$
where $p_i(t) = p_i + c_i b_i(t)$, with a time-variant perturbation function $b_i(t)$ and a constant $c_i$, for $i = 1, \ldots, m$.
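A minimal sketch of a non-stationary membership function under these definitions is shown below, assuming a triangular base set and a sinusoidal perturbation chosen purely for illustration (the paper's own perturbation functions are residual-based):

```python
import math

def triangular(u, a, b, c):
    """Base triangular membership function."""
    if a <= u <= b:
        return (u - a) / (b - a) if b > a else 1.0
    if b <= u <= c:
        return (c - u) / (c - b) if c > b else 1.0
    return 0.0

def nonstationary_membership(u, t, params=(0.0, 5.0, 10.0),
                             coeffs=(1.0, 1.0, 1.0),
                             perturb=lambda t: math.sin(0.1 * t)):
    """mu'(t, u) = mu(u, p_1 + c_1 b(t), ..., p_m + c_m b(t)).
    With equal coefficients the whole triangle drifts rigidly over time."""
    a, b, c = (p + ci * perturb(t) for p, ci in zip(params, coeffs))
    return triangular(u, a, b, c)
```

Unequal coefficients `c_i` would change the width of the set as well as its position, matching the positional shifts and width changes mentioned above.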

2.3. Bayesian Network

Bayesian networks have demonstrated remarkable effectiveness for complex data-analysis problems [35]. The BN is a member of the family of probabilistic graphical models. BNs can represent the latent patterns within data by encoding a set of variables and their dependencies in the form of a directed acyclic graph. A BN includes $N_v$ nodes representing random variables, whose values are possible observations of the variables. BNs visually represent conditional independence through directed acyclic graphs. A BN also utilizes an adjacency matrix $G$ representing the edges between variables, with $G_{ij} = 1$ indicating a directed dependence relationship from the $i$-th node to the $j$-th node. The CPD $P(X_i \mid Pa_i)$ for the $i$-th node represents the strength of the dependence between the $i$-th node and its parent nodes $Pa_i$. $P(X_i = x_i \mid Pa_i = pa_j)$ denotes the probability that the $i$-th node takes the value $x_i$ given the $j$-th set of observations $pa_j$ of the parent nodes $Pa_i$. The directed acyclic graph of a BN factorizes the joint probability distribution over the variables $X = \{X_1, \ldots, X_{N_v}\}$:
$$P(X_1, \ldots, X_{N_v}) = \prod_{i=1}^{N_v} P(X_i \mid Pa_i).$$
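This factorization can be made concrete with a toy discrete BN. The data-structure choices below (parents as index tuples, CPDs as dictionaries) are illustrative, not from the paper:

```python
def joint_probability(assignment, parents, cpds):
    """P(X_1, ..., X_n) = prod_i P(X_i | Pa_i) for a discrete BN.
    parents[i] -- tuple of parent node indices of node i
    cpds[i]    -- dict mapping (x_i, parent_value_tuple) -> probability"""
    p = 1.0
    for i, x in enumerate(assignment):
        pa = tuple(assignment[j] for j in parents[i])
        p *= cpds[i][(x, pa)]
    return p

# two-node chain A -> B, so P(A, B) = P(A) * P(B | A)
parents = {0: (), 1: (0,)}
cpds = {
    0: {(0, ()): 0.6, (1, ()): 0.4},
    1: {(0, (0,)): 0.9, (1, (0,)): 0.1,
        (0, (1,)): 0.2, (1, (1,)): 0.8},
}
p = joint_probability((1, 1), parents, cpds)  # P(A=1) * P(B=1 | A=1)
```
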
Representing dependence relationships between variables in a BN requires acquiring the directed acyclic graph structure through learning methods, typically achieved using BN structure learning techniques. Apart from structure learning, BN learning also includes parameter learning. Parameter learning involves the determination of CPD parameters, while structure learning aims at generating the adjacency matrix to discover dependence relationships between variables. Structure and parameter learning are interdependent, as parameter learning needs the BN structure to identify the parents of each node before computing CPDs. Moreover, parameter learning is essential for evaluating the matching degree of a candidate network structure and the data.
Data-driven BN structure learning methods principally fall into two categories: constraint-based algorithms and score-based algorithms. The core idea of the latter is to explore the space of all potential directed acyclic graphs using a search strategy to select the optimal graph based on the values yielded from a scoring function on the gathered data. The present research employs a score-based structural learning method using the hill-climbing search method and Bayesian information criterion (BIC) as the score function [36]. The BIC scoring function offers the benefit of decomposability and clear intuitiveness. The BIC score approximates the marginal likelihood function as follows:
$$BIC(G \mid D) = \log P(G \mid D) - f(G \mid D),$$
where $\log P(G \mid D)$ is the log-likelihood of the graph structure $G$ given the dataset $D$. The CPD parameters are determined by the maximum likelihood estimation algorithm. The penalty term $f(G \mid D) = (N_{params}/2) \log N_D$ helps prevent overfitting, where $N_{params}$ represents the count of parameters in all CPDs within the BN and the dataset $D$ contains $N_D$ data instances. The BIC function for a BN can be factorized as
$$BIC(G \mid D) = \sum_i BIC(X_i \mid Pa_i, D) = \sum_i \big[ \log P(X_i \mid Pa_i, D) - f(X_i \mid D) \big],$$
where $BIC(X_i \mid Pa_i, D)$ is the BIC score of the variable $X_i$ and $f(X_i \mid D) = (N^i_{params}/2) \log N_D$.
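A minimal per-node BIC computation under these definitions might look as follows, using maximum-likelihood counts. The helper name is hypothetical, and the $(r_i - 1) \cdot q_i$ free-parameter count for discrete CPDs is the standard convention, though the paper does not spell it out:

```python
import math
from collections import Counter

def node_bic(data, child, parents, cardinality):
    """BIC(X_i | Pa_i, D) = log-likelihood - (k/2) * log N_D, with MLE
    parameters and k = (r_i - 1) * q_i free parameters, where r_i is the
    child's cardinality and q_i the number of parent configurations."""
    n = len(data)
    joint = Counter((row[child], tuple(row[p] for p in parents))
                    for row in data)
    pa_counts = Counter(tuple(row[p] for p in parents) for row in data)
    # MLE: P(x | pa) = count(x, pa) / count(pa)
    loglik = sum(cnt * math.log(cnt / pa_counts[pa])
                 for (x, pa), cnt in joint.items())
    q = max(len(pa_counts), 1)
    k = (cardinality[child] - 1) * q
    return loglik - k / 2 * math.log(n)
```

Because the score decomposes per node, hill climbing only has to recompute `node_bic` for the nodes whose parent sets an edge operation changes.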
The hill-climbing method is a widely used search algorithm. The search starts with an initial model, which can either be an empty graph or a specific graph. Each search iteration produces candidate models derived from a single modification of the current model. When applying the hill-climbing algorithm to BN structure learning, the model with superior performance is preserved by comparing the scores of each candidate model and the current model via the scoring function. Operations such as adding, deleting, or reversing an edge generate candidate models during each iteration. As the BIC score function is decomposable, the comparison between candidate models and the current model can focus solely on the score of their dissimilar segments. Combining domain knowledge with a data-driven learning method is a common practice in BN learning. This paper models the BN structure by blending temporal adjacency relationships as domain knowledge with raw time-series data. A detailed description of the hill climbing and BIC-based BN structure learning method is introduced in [33].
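One iteration of the hill-climbing search enumerates candidate graphs obtained from a single edge modification of the current model. A simplified sketch over adjacency matrices is given below; it deliberately omits the acyclicity check that a real implementation must perform before accepting a candidate:

```python
from itertools import permutations

def neighbors(adj):
    """Yield candidate adjacency matrices reachable from `adj` by one
    edge operation: add, delete, or reverse a directed edge."""
    n = len(adj)
    for i, j in permutations(range(n), 2):
        if adj[i][j]:
            deleted = [row[:] for row in adj]
            deleted[i][j] = 0
            yield deleted                    # delete i -> j
            reversed_ = [row[:] for row in deleted]
            reversed_[j][i] = 1
            yield reversed_                  # reverse i -> j
        elif not adj[j][i]:
            added = [row[:] for row in adj]
            added[i][j] = 1
            yield added                      # add i -> j
```

The search scores each candidate with the (decomposable) BIC function, keeps the best-scoring acyclic one if it improves on the current model, and stops when no modification helps.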

3. Proposed Method

This section introduces a novel FTSFM, abbreviated as TV-NS-BN-PWFTS (Time-Variant Non-Stationary Bayesian Network-based Probabilistic Weighted Fuzzy Time Series), which incorporates adaptive structure learning of BN and non-stationary fuzzy set into the time-variant FTSFM based on the fundamental concepts mentioned earlier.
The proposed methodology consists of two primary phases: initial model construction during training (Section 3.1), followed by a dynamic forecasting process utilizing a sliding window approach (Section 3.2). During the forecasting phase, the model continuously monitors prediction residuals from the most recent window. When these residuals exceed a preset threshold value, the model updates using data from existing available prediction windows before generating predictions for the current window. Figure 1 depicts the workflow of the proposed model. The following are descriptions of the various stages of the proposed method.
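The sliding-window control flow described above can be sketched schematically. This is not the paper's exact update algorithm (which refits non-stationary fuzzy sets, FLRGs, and the BN from old plus new data); the `NaiveModel` stand-in and all names are illustrative:

```python
class NaiveModel:
    """Stand-in model: predicts the most recent observation. In the real
    method, update() would refit fuzzy sets, FLRGs, and the BN."""
    def __init__(self):
        self.updates = 0
    def predict(self, history):
        return history[-1] if history else 0.0
    def update(self, new_data):
        self.updates += 1

def sliding_window_forecast(series, model, window, threshold):
    """The model stays fixed inside each window and is refit only when
    the previous window's mean absolute error exceeds the threshold."""
    predictions, new_data = [], []
    last_window_error = 0.0
    for start in range(0, len(series) - window, window):
        if last_window_error > threshold:
            model.update(new_data)   # refit with data seen since last update
            new_data = []
        block = series[start:start + window]
        block_preds = [model.predict(series[:start + t])
                       for t in range(len(block))]
        predictions.extend(block_preds)
        new_data.extend(block)
        last_window_error = sum(abs(p - o)
                                for p, o in zip(block_preds, block)) / len(block)
    return predictions

model = NaiveModel()
preds = sliding_window_forecast([float(v) for v in range(1, 11)],
                                model, window=3, threshold=0.0)
```

With a zero threshold every imperfect window triggers an update, while a large threshold keeps the initial model fixed throughout, illustrating the stability/adaptability trade-off the threshold controls.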

3.1. Training Procedure

Traditional FTSFMs focus on capturing the intrinsic patterns in time series by establishing fuzzy relationships and using fuzzy sets and membership degrees to qualitatively and quantitatively describe systematic uncertainty. The Bayesian network-based probabilistic weighted fuzzy time series (BN-PWFTS) model proposed by [33] models temporal patterns in time series by combining BN and FLRs. BN fully considers direct and indirect dependence relationships between different time points, thus incorporating additional information about potential patterns beyond fuzzy relationships, providing more information for modeling the uncertainty of future time series values. To model systematic and statistical uncertainties, BN-PWFTS defines BN-based probabilistic weighted FLRGs. The probabilistic weights of elements in an FLRG are calculated as fuzzy empirical conditional probabilities using dependence relationships and membership degrees. The weights assigned to the antecedent and consequent components of FLRGs are determined by systematically incorporating the fuzzy empirical conditional probabilities of their constituent elements, leveraging the established dependency relationships. Although BN-PWFTS makes predictions by modeling interrelationships and uncertainties in time series, it lacks the ability to adapt to non-stationary changes in the time series. To address this limitation, we propose an enhanced model, TV-NS-BN-PWFTS, which not only preserves the advantages of BN-PWFTS but also effectively handles non-stationarity in time series data. In order to develop an effective initial model that can adequately capture both the intricate patterns and underlying uncertainties within time series data, we have adopted the training methodology from the fuzzy time series forecasting model proposed by [33]. 
In the training procedure, first-order differencing is conducted to decrease the non-stationarity of the time series, producing a differenced time series that captures variations between consecutive observations. Subsequently, the BN-PWFTS training process is employed to capture the intricate temporal relationships and uncertainties in the differenced time series. Algorithm 1 contains a detailed description of the training procedure.
Algorithm 1 Training procedure
Input: $Y^{tr}_t$ — the original value at time point $t$ in the training dataset; $\omega$ — the order of the FTSFM.
Output: $FG$ — BN-based probabilistic weighted fuzzy logical relationship groups (BN-PWFLRGs); $B$ — the trained BN; $\bar{A}$ — fuzzy sets; $Z$ — partition functions.
// Non-stationary time series stabilization
1: Compute the variation of the time series between adjacent time points, $YD_t = Y^{tr}_t - Y^{tr}_{t-1}$.
// Fuzzy set construction
2: Define $U$ as the universe of discourse of $YD$ and split $U$ into $I$ equal-length intervals $\{U_i\}$ with midpoints $\{m_i\}$. The fuzzy sets $\bar{A} = \{A_i\}$ are built on $U$ with membership functions $\mu_{A_i}(\cdot)$, where $1 \le i \le I$.
// Time series fuzzification
3: Generate the fuzzified time series $F = \{f_t\} = \{[\mu_{A_1}(YD_t), \ldots, \mu_{A_I}(YD_t)]\}$, where $0 \le \mu_{A_i}(YD_t) \le 1$.
// Fuzzy relationship modeling
4: Generate FLRs from the fuzzified data in the format $A_{i_{t-\omega}}, \ldots, A_{i_{t-1}} \rightarrow A_{i_t}$. Multiple FLRs may be generated for a given set of time points.
5: Generate FLRGs $FG = \{FG_{A_i^{lhs}}\} = \{A_{i_{t-\omega}}, \ldots, A_{i_{t-1}} \rightarrow A_{k_1}, A_{k_2}, \ldots\}$ by gathering all FLRs with the same left-hand side (LHS) $A_i^{lhs} = A_{i_{t-\omega}}, \ldots, A_{i_{t-1}}$.
// BN structure learning for dependence relationship modeling
6: Transform $YD$ into an $(\omega+1)$-variate dataset $C = \{c_1, c_2, \ldots, c_{T-1-\omega}\}$, where $c_t = [YD_{t-\omega}, \ldots, YD_t]$ represents the observations of $\omega$ historical time points and a prediction moment $t$.
7: Taking the observations of the $\omega+1$ moments as variable values, construct the dependence relationships between these moments as a BN $B$ using a hill-climbing algorithm with the BIC-based (Equation (6)) BN structure learning method.
// BN-based fuzzy empirical probability calculation
8: for $0 \le k \le \omega$ do
9:   Compute the fuzzy empirical conditional probability $P(A_i^{t-k} \mid A_j^{Pa_{t-k}})$ of moment $t-k$ in an FLRG, determined as
$$P(A_i^{t-k} \mid A_j^{Pa_{t-k}}) = \frac{\sum_{YD_{t-k},\, pa_{t-k}} \mu_{A_i}(YD_{t-k})\, \mu_{A_j^{Pa_{t-k}}}(pa_{t-k})}{\sum_i \sum_{YD_{t-k},\, pa_{t-k}} \mu_{A_i}(YD_{t-k})\, \mu_{A_j^{Pa_{t-k}}}(pa_{t-k})},$$
where $A_j^{Pa_{t-k}}$ is the $j$-th fuzzy set group for the parent moments of $t-k$ identified by the learned $B$, $pa_{t-k}$ is the set of observations in $YD$ for the parent moments, and $\mu_{A_j^{Pa_{t-k}}}(pa_{t-k})$ denotes the product of the membership degrees of the parent moments, $\prod_{s \in Pa_{t-k}} \mu_{A_{i_s}}(pa_s)$.
10: end for
11: Compute the fuzzy empirical probability of the LHS of an FLRG with $A_i^{lhs}$ from the fuzzy empirical conditional probabilities of the historical time points according to Equations (5) and (7):
$$P(A_i^{lhs}) = \prod_{k=\omega}^{1} P(A_i^{t-k} \mid A_j^{Pa_{t-k}}).$$
12: Compute the fuzzy empirical probabilities of the RHS of an FLRG with $A_i^{lhs}$ according to the Bayes rule:
$$P(A_i^t \mid A_i^{lhs}) = \frac{P(A_i^t, A_i^{lhs})}{P(A_i^{lhs})}.$$
13: Calculate the partition functions $Z = \{Z_{A_i^{lhs}}\}$:
$$Z_{A_i^{lhs}} = \sum_{YD_{t-\omega}, \ldots, YD_{t-1} \in U} \; \prod_{k=\omega}^{1} \mu_{A_i^{t-k}}(YD_{t-k}).$$
14: Assign the weights of the LHS and RHS to construct the BN-PWFLRG $FG_{A_i^{lhs}}$ in the format $P(A_i^{lhs}) \cdot A_i^{lhs} \rightarrow P(A_1^t \mid A_i^{lhs}) \cdot A_1^t, \ldots, P(A_I^t \mid A_i^{lhs}) \cdot A_I^t$.
15: return the set of BN-PWFLRGs $FG = \{FG_{A_i^{lhs}}\}$, the learned BN $B$, the fuzzy sets $\bar{A}$, and the partition functions $Z = \{Z_{A_i^{lhs}}\}$.
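The first five steps of the training procedure can be sketched in Python. This is a deliberately simplified illustration rather than the authors' implementation: the evenly spaced triangular fuzzy sets, the max-membership fuzzification, and the dictionary-based FLRG store are toy choices, and the BN-based probability weighting of the later steps is omitted.

```python
from collections import defaultdict

def difference(series):
    """Step 1: first-order differencing to reduce non-stationarity."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

def triangular_sets(low, high, n):
    """Step 2: n evenly spaced triangular fuzzy sets (a, b, c) over [low, high]."""
    step = (high - low) / (n - 1)
    return [(low + (i - 1) * step, low + i * step, low + (i + 1) * step)
            for i in range(n)]

def membership(x, abc):
    """Triangular membership degree of x in the fuzzy set (a, b, c)."""
    a, b, c = abc
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

def best_set(x, sets):
    """Step 3 (hardened to a crisp label): index of the set with maximal membership."""
    return max(range(len(sets)), key=lambda i: membership(x, sets[i]))

def build_flrgs(diffed, sets, order=1):
    """Steps 4-5: group fuzzy logical relationships by their left-hand side."""
    labels = [best_set(x, sets) for x in diffed]
    flrgs = defaultdict(list)
    for t in range(order, len(labels)):
        lhs = tuple(labels[t - order:t])
        flrgs[lhs].append(labels[t])
    return flrgs
```

In the full model, each right-hand-side list would carry the fuzzy empirical probabilities of Equations (7)–(9) instead of raw labels.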

3.2. Forecasting Procedure

In many non-stationary time series forecasting scenarios, newly arrived data often deviate from the patterns previously observed in the historical data, presenting significant challenges to traditional FTSFMs. To address this limitation, we propose a novel forecasting procedure that incorporates a dynamic updating mechanism. This approach allows the model to continuously adapt to the uncertainty and temporal patterns within new data, ensuring accurate predictions even when the underlying characteristics change over time. The following section first introduces the dynamic updating mechanism for the non-stationary fuzzy sets and the BN structure, followed by a detailed discussion of the complete forecasting procedure.

3.2.1. Non-Stationary Fuzzy Set Updating with New Perturbation Function

In non-stationary time series forecasting, traditional fuzzy sets struggle to adapt to dynamic changes in the data distribution. Ideally, if a model captured all of the information in the data, its prediction residuals would follow a standard normal distribution. Ref. [23] therefore designed perturbation functions based on the mean and variance of the residuals to adjust non-stationary fuzzy set parameters, driving the prediction residuals toward a standard normal distribution. However, uniform adjustment with the residual mean ignores the different contributions of individual fuzzy sets to the residuals. We propose an improved residual-based non-stationary fuzzy set perturbation function. The proposed non-stationary fuzzy sets are built on the first-order differenced time series instead of the original time series to capture additional non-stationary features. Unlike [23], our method updates the fuzzy sets only when the model's prediction performance for the current time period falls below a threshold. The proposed perturbation function employs a uniform distribution strategy to assign the residual mean to each fuzzy set, enabling gradual adjustments from the original position toward the newly arrived data. This strategy retains the valuable historical information within the existing fuzzy sets while promptly capturing the time series' evolving characteristics.
This paper designs a new non-stationary fuzzy set parameter adjustment method based on the triangular membership function in Equation (2). For any non-stationary fuzzy set $A_i$ built on the differenced time series, its perturbed membership function is expressed as $\mu_{A_i}(YD_t, p(a_i, b_i, c_i, d_i, s_i))$, where the perturbation function $p(a_i, b_i, c_i, d_i, s_i)$ is defined as:
$$p(a_i, b_i, c_i, d_i, s_i) = \{a_i + d_i - s_i/2,\; b_i + d_i,\; c_i + d_i + s_i/2\},$$
where $d_i$ and $s_i$ denote the displacement and scaling factors of fuzzy set $A_i$, respectively, both computed from the prediction residuals. Let the residual mean be $\bar{E}$ and the variance be $\sigma_E$. The displacement parameter $d_i$ for fuzzy set migration based on the residual statistics is calculated as follows:
$$d_i = \begin{cases} \dfrac{i \cdot \bar{E}}{I} + \dfrac{i \cdot 2\sigma_E}{I - 1} - \sigma_E, & \text{if } \bar{E} \ge 0, \\[1ex] \dfrac{(I - i) \cdot \bar{E}}{I} + \dfrac{i \cdot 2\sigma_E}{I - 1} - \sigma_E, & \text{if } \bar{E} < 0. \end{cases}$$
This design enables the fuzzy sets to progressively approach the zero-mean residual direction through $i \cdot \bar{E}/I$ as $i$ increases, while incorporating boundary scaling information via $i \cdot 2\sigma_E/(I-1) - \sigma_E$. The coverage range of the fuzzy sets is modulated by the scaling parameter
$$s_i = |d_{i-1} - d_{i+1}|,$$
which ensures smooth transitions between adjacent fuzzy sets. The proposed membership function parameter adjustment mechanism enables the fuzzy sets to dynamically adapt to the non-stationary characteristics of the time series.
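The perturbation computation can be sketched as follows, under stated assumptions: the residual spread $\sigma_E$ is taken as the population standard deviation, fuzzy sets are indexed $i = 1, \ldots, I$, and the boundary cases $d_0$ and $d_{I+1}$ in the scaling factor (which the definition leaves open) are clamped to the nearest existing displacement.

```python
import statistics

def displacement(i, I, e_mean, e_std):
    """Displacement factor d_i: sets with larger index i move further
    toward the zero-mean residual direction (Equation (12))."""
    scale = i * 2 * e_std / (I - 1) - e_std
    if e_mean >= 0:
        return i * e_mean / I + scale
    return (I - i) * e_mean / I + scale

def perturb(params, residuals):
    """Apply p(a_i, b_i, c_i, d_i, s_i) to every triangular fuzzy set;
    `params` is a list of (a_i, b_i, c_i) triples."""
    I = len(params)
    e_mean = statistics.mean(residuals)
    e_std = statistics.pstdev(residuals)   # assumption: population std dev
    d = [displacement(i, I, e_mean, e_std) for i in range(1, I + 1)]
    # s_i = |d_{i-1} - d_{i+1}|, clamped at the edges (assumption).
    s = [abs(d[max(i - 1, 0)] - d[min(i + 1, I - 1)]) for i in range(I)]
    return [(a + d[i] - s[i] / 2, b + d[i], c + d[i] + s[i] / 2)
            for i, (a, b, c) in enumerate(params)]
```

With zero-mean, zero-variance residuals the displacements vanish and the fuzzy sets are left untouched, matching the intuition that a well-calibrated model needs no perturbation.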

3.2.2. BN Structure Adaptive Updating

Existing FTSFMs exhibit significant limitations when handling non-stationary time series: they either employ fixed FLRs [23] or exclusively utilize the latest data for constructing FLRs. Both strategies fail to achieve effective long-term prediction due to their inability to fully leverage the crucial temporal pattern information embedded in historical data. To overcome this limitation, we propose a novel time-variant FTSFM updating strategy. Once the initial FTSFM is established during training, the model undergoes dynamic updates at irregular intervals during the prediction phase. We assume that all data used to train the current model are old data D o l d , and the actual values of the periods predicted by the current model are new data D n e w . The model update regarding temporal patterns includes two parts: (1) After updating non-stationary fuzzy sets, FLRs and FLRGs are reconstructed using both D o l d and D n e w , retaining historical temporal patterns to some extent while capturing pattern changes in new data. (2) The BN structure, containing dependence relationships for each time point, is updated through a hill-climbing-based structure learning method with an adaptive BIC. This method adaptively adjusts the learning process to balance the influence of new and old data. Additionally, updating based on the existing BN structure ensures the preservation of valuable historical dependence information. The BN updating process is summarized in Algorithm 2.
Algorithm 2 BN structure adaptive updating
Input: $D_{old}$ — the data used to train the current FTSFM; $D_{new}$ — real observations of the forecasting sub-windows predicted by the current FTSFM; $B$ — the current BN; $\eta$ — the weighting parameter in the adaptive BIC; $\omega$ — the order of the FTSFM.
Output: the updated BN $B^*$.
1: Initialize variables $X = \{X_1, \ldots, X_{\omega+1}\}$ representing the time points in an FLRG
2: $B^* \leftarrow B$
3: Compute the score $S_{old}$ of the initial BN structure $B$ with the adaptive BIC score function $BIC_a(\cdot)$:
$$S_{old} = BIC_a(B \mid D_{old}, D_{new}, \eta) = \eta \left[ \log P(B \mid D_{old}) - \frac{N_{params}}{2} \log |D_{old}| \right] + (1 - \eta) \left[ \log P(B \mid D_{new}) - \frac{N_{params}}{2} \log |D_{new}| \right]$$
4: while not converged do
5:   for each possible edge operation $op \in \{\text{add}, \text{delete}, \text{reverse}\}$ do
6:     for each possible edge $e = X_i \rightarrow X_j$ for operation $op$ do
7:       $B' \leftarrow$ apply $op$ with edge $e$ in $B$
8:       if $B'$ is acyclic then
9:         $\Delta_{op}^{e} \leftarrow BIC_a(B' \mid D_{old}, D_{new}, \eta) - BIC_a(B \mid D_{old}, D_{new}, \eta)$
10:        if $\Delta_{op}^{e} > 0$ then
11:          $S_{best} \leftarrow BIC_a(B' \mid D_{old}, D_{new}, \eta)$
12:          $B_{best} \leftarrow B'$
13:        end if
14:      end if
15:    end for
16:  end for
17:  if $S_{best} > S_{old}$ then
18:    $B^* \leftarrow B_{best}$
19:    $S_{old} \leftarrow S_{best}$
20:  else
21:    return $B^*$
22:  end if
23: end while
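The hill-climbing search of Algorithm 2 can be sketched generically. This illustration abstracts away the BN machinery the paper uses (pgmpy): a structure is reduced to a set of directed edges over the $\omega + 1$ time-point nodes, `score` is a placeholder for the adaptive score $BIC_a$, acyclicity is checked by a simple DFS, and each iteration keeps the best acyclic single-edge change that improves the score.

```python
from itertools import permutations

def is_acyclic(nodes, edges):
    """DFS cycle check on the directed graph given by `edges`."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    state = {n: 0 for n in nodes}          # 0 = new, 1 = on stack, 2 = done
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and not dfs(v)):
                return False
        state[u] = 2
        return True
    return all(dfs(n) for n in nodes if state[n] == 0)

def hill_climb(nodes, score, edges=frozenset()):
    """Greedy structure search: try add/delete/reverse for every edge and
    keep the best acyclic change that improves `score(edges)`."""
    best, s_old = set(edges), score(edges)
    while True:
        s_best, b_best = s_old, None
        for u, v in permutations(nodes, 2):
            if (u, v) in best:
                candidates = [best - {(u, v)},                 # delete
                              best - {(u, v)} | {(v, u)}]      # reverse
            else:
                candidates = [best | {(u, v)}]                 # add
            for cand in candidates:
                if is_acyclic(nodes, cand) and score(cand) > s_best:
                    s_best, b_best = score(cand), cand
        if b_best is None:
            return best                     # converged: no improving move
        best, s_old = b_best, s_best
```

A real adaptive score would take the form `eta * bic(edges, D_old) + (1 - eta) * bic(edges, D_new)`, plugging in the penalized log-likelihoods of Algorithm 2.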

3.2.3. Integrated Forecasting Framework

The forecasting process employs a dynamic prediction framework that adaptively updates the model based on prediction residuals and historical data. The model’s performance in the previous sub-window is evaluated using the mean absolute scaled error (MASE) before each new forecasting sub-window. The MASE metric is defined as follows:
$$\mathrm{MASE} = \frac{\frac{1}{l_r} \sum_{t=1}^{l_r} \left| Y_t - \hat{Y}_t \right|}{\frac{1}{l_r - 1} \sum_{t=2}^{l_r} \left| Y_t - Y_{t-1} \right|},$$
where Y t is the actual value, Y ^ t is the predicted value at time point t, and l r is the length of the prediction sub-window. MASE offers a scale-independent measure of prediction accuracy. MASE < 1 indicates better performance compared with the naive approach of using the previous observation as the prediction. A lower MASE value indicates better predictive performance of the model. The model is updated when the MASE surpasses a predefined threshold θ , which signifies that the current model’s prediction accuracy on the latest prediction window is unsatisfactory. The forecasting procedure is detailed in Algorithm 3, including the numerical prediction generation process in Algorithm 4.
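The update trigger follows directly from the definition; a minimal sketch, where `theta` mirrors the threshold $\theta$:

```python
def mase(actual, predicted):
    """Mean absolute scaled error over one prediction sub-window."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    naive = sum(abs(actual[t] - actual[t - 1]) for t in range(1, n)) / (n - 1)
    return mae / naive

def needs_update(actual, predicted, theta=1.0):
    """Trigger a model update when the sub-window MASE reaches theta."""
    return mase(actual, predicted) >= theta
```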
Algorithm 3 Forecasting procedure
Input: $Y_t$ — the original value at time point $t$ in the testing dataset; $B$ — the initial trained BN; $FG$ — BN-based probabilistic weighted fuzzy logical relationship groups (BN-PWFLRGs); $\omega$ — the order of the FTSFM; $\theta$ — the threshold for model update; $l_o$ — the length of the old data memory window; $l_n$ — the length of the new data memory window; $l_r$ — the length of the prediction sub-window; $\eta$ — the weighting parameter in the adaptive BIC; $\bar{A}$ — fuzzy sets; $Z$ — partition functions; $Y^{tr}$ — the training dataset.
Output: $\hat{Y}$ — all predicted values.
1: Initialize the old data memory window $W_o$ with the last $l_o$ samples of $Y^{tr}$
2: Initialize the new data memory window $W_n$, the prediction result memory window $W_p$, and the true value memory window $W_p^{true}$ as $\emptyset$
3: for $t = 1$ to $N_{te}$ do
// Check whether the model needs updating before forecasting the initial point of each forecasting sub-window
4:   if $t = 1$ then
5:     Generate the prediction $\hat{Y}_t$ from $Y_{t-\omega}, \ldots, Y_{t-1}$, $Z$, $\bar{A}$, and $FG$ (Algorithm 4)
6:     $W_p \leftarrow W_p \cup \{\hat{Y}_t\}$
7:   end if
8:   if $t \bmod l_r = 1$ and $t > 1$ then
9:     $W_p^{true} \leftarrow W_p^{true} \cup \{Y_{t-1}\}$, $W_n \leftarrow W_n \cup \{Y_{t-1}\}$
10:    if $\mathrm{MASE}(W_p^{true}, W_p) \ge \theta$ then
11:      $D_{old} \leftarrow W_o$, $D_{new} \leftarrow W_n$
12:      Adaptively update the BN $B$ with $D_{old}$, $D_{new}$, $\omega$, and $\eta$ to obtain the updated BN $B^*$ (Algorithm 2) and set $B \leftarrow B^*$
13:      Update the non-stationary fuzzy sets $\bar{A}$ with the perturbation function (Equations (11)–(13))
14:      Reconstruct the BN-PWFLRGs $FG = \{FG_{A_i^{lhs}}\}$ in the format $P(A_i^{lhs}) \cdot A_i^{lhs} \rightarrow P(A_1^t \mid A_i^{lhs}) \cdot A_1^t, \ldots, P(A_I^t \mid A_i^{lhs}) \cdot A_I^t$ from $D_{old}$, $D_{new}$, the updated $\bar{A}$, and $B$ (Equations (7)–(9))
15:      Recalculate the partition functions $Z$ based on $D_{old}$, $D_{new}$, the updated $\bar{A}$, and $B$ (Equation (10))
16:      Generate the prediction $\hat{Y}_t$ from $Y_{t-\omega}, \ldots, Y_{t-1}$, $Z$, $\bar{A}$, and $FG$ (Algorithm 4)
17:      $W_o \leftarrow$ the last $l_o$ samples of $W_o \cup W_n$; $W_n \leftarrow \emptyset$; $W_p^{true} \leftarrow \emptyset$; $W_p \leftarrow \{\hat{Y}_t\}$
18:    else
19:      Generate the prediction $\hat{Y}_t$ from $Y_{t-\omega}, \ldots, Y_{t-1}$, $Z$, $\bar{A}$, and $FG$ (Algorithm 4)
20:      $W_p \leftarrow W_p \cup \{\hat{Y}_t\}$
21:    end if
22:  else
23:    Generate the prediction $\hat{Y}_t$ from $Y_{t-\omega}, \ldots, Y_{t-1}$, $Z$, $\bar{A}$, and $FG$ (Algorithm 4)
24:    $W_n \leftarrow W_n \cup \{Y_{t-1}\}$, $W_p \leftarrow W_p \cup \{\hat{Y}_t\}$, $W_p^{true} \leftarrow W_p^{true} \cup \{Y_{t-1}\}$
25:  end if
26: end for
27: return $\hat{Y} = \{\hat{Y}_1, \ldots, \hat{Y}_{N_{te}}\}$.
Algorithm 4 Generate prediction for time point t
Input: $Y_{t-\omega}, \ldots, Y_{t-1}$ — historical data; $\bar{A}$ — fuzzy sets; $FG$ — BN-based probabilistic weighted fuzzy logical relationship groups (BN-PWFLRGs); $Z$ — partition functions.
Output: $\hat{Y}_t$ — predicted value.
1: Generate $YD_{t-l} = Y_{t-l} - Y_{t-l-1}$ ($l = 1, \ldots, \omega$) by first-order differencing.
2: Fuzzify $YD_{t-l}$ into $F_{t-l} = \{A_i^{t-l} \mid \mu_{A_i}(YD_{t-l}) > 0\}$ based on $\bar{A}$, $l = 1, \ldots, \omega$.
3: Construct each possible combination of $F_{t-\omega}, \ldots, F_{t-1}$, denoted by $A_i^{lhs} = \{A_i^{t-\omega}, \ldots, A_i^{t-1}\}$, as the LHS of an FLRG.
4: For each $A_i^{lhs}$, locate the BN-PWFLRG $FG_{A_i^{lhs}} \in FG$ that has the same LHS $A_i^{lhs}$ as the active FLRG.
5: Calculate the expectation $E(mp_{A_i^{lhs}})$ of the midpoints of the fuzzy sets on the RHS of $FG_{A_i^{lhs}}$ according to Equations (8) and (9):
$$E(mp_{A_i^{lhs}}) = \sum_{j}^{I} P(A_j \mid A_i^{lhs}) \cdot mp_j.$$
6: Calculate the prediction based on all active FLRGs:
$$\widehat{YD}_t = \sum_{i^{lhs}} \frac{P(YD_{t-\omega}, \ldots, YD_{t-1} \mid A_i^{lhs})}{\sum_{i^{lhs}} P(YD_{t-\omega}, \ldots, YD_{t-1} \mid A_i^{lhs})} \, E(mp_{A_i^{lhs}}),$$
where $P(YD_{t-\omega}, \ldots, YD_{t-1} \mid A_i^{lhs}) = P(A_i^{lhs})\, \mu_{A_i^{lhs}}(YD_{t-\omega}, \ldots, YD_{t-1}) / Z_{A_i^{lhs}}$ according to Equation (10), and $\mu_{A_i^{lhs}}(YD_{t-\omega}, \ldots, YD_{t-1})$ is the product of the membership degrees of $YD_{t-\omega}, \ldots, YD_{t-1}$ on the fuzzy sets in $A_i^{lhs}$.
7: Apply inverse differencing to obtain the final numerical prediction $\hat{Y}_t \leftarrow Y_{t-1} + \widehat{YD}_t$.
8: return $\hat{Y}_t$.
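The final defuzzification and inverse-differencing steps reduce to a weighted average. A minimal sketch, assuming each active FLRG has already been collapsed to a pair of its (unnormalized) LHS probability and RHS midpoint expectation, eliding the fuzzification and BN machinery:

```python
def forecast_step(prev_value, active_groups):
    """Weighted defuzzification over the active FLRGs followed by
    inverse first-order differencing.  `active_groups` is a list of
    (lhs_probability, rhs_expected_midpoint) pairs."""
    total = sum(p for p, _ in active_groups)
    yd_hat = sum(p * e for p, e in active_groups) / total
    return prev_value + yd_hat   # inverse differencing: Y_hat = Y_{t-1} + YD_hat
```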

4. Experiments

4.1. Experimental Design

This section verifies the superiority of the proposed model in forecasting non-stationary time series. First, an overview of the datasets and evaluation metrics is provided [23,37,38,39]. Next, we benchmark the proposed model against various non-stationary FTSFMs and state-of-the-art forecasting models in batch mode. All software was executed on a Windows 11 desktop machine with an Intel Core i5-13400F processor and 16 GB of DDR4 RAM, running Python 3.9.18.
The forecasting capability of the proposed model has been tested on different time series. The first group consists of nine time series, TAIEX, SP500a (the daily averages of the S&P 500 stock index in [37], whereas SP500b is the daily open data of the S&P 500 in [38]), NASDAQ, Dow Jones, BTC–USD, ETH–USD, EUR–GBP, EUR–USD, and GBP–USD, used for comparison with existing non-stationary fuzzy time series forecasting models. The second group includes eight classical time series datasets from various domains (Sunspot, MG, SP500b, Radio, Lake, CO2, Milk, and DJ) for comparison with state-of-the-art batch learning models.
Table 1 summarizes the details of each time series. Figure 2 shows the original and first-order differenced versions of the seventeen time series. Figure 2 demonstrates that all time series, excluding MG and CO2, display varying trends and heteroscedasticity. While first-order differencing effectively reduces trend non-stationarity in these time series, heteroscedastic characteristics persist. To assess the stationarity properties, we conducted the Augmented Dickey–Fuller (ADF) test and Levene's test on both the original and the first-order differenced series. Test results are presented in Table 2. The ADF test examines the presence of unit roots, with the null hypothesis $H_0$ indicating non-stationarity (presence of a unit root) and the alternative hypothesis $H_1$ suggesting stationarity (absence of a unit root) at a significance level $\alpha = 0.05$. Additionally, we applied Levene's test to evaluate variance homogeneity, where $H_0$ represents homoscedasticity (equal variances) and $H_1$ indicates heteroscedasticity (unequal variances) at a significance level $\alpha = 0.05$.
The ADF test results reveal that all original time series except MG fail to reject the null hypothesis $H_0$, confirming their non-stationarity. After first-order differencing, all time series reject $H_0$, showing that differencing effectively mitigates non-stationarity. According to Levene's test results, heteroscedasticity persists in both the original and the differenced series for all datasets except MG. This analysis shows that while differencing effectively mitigates trend non-stationarity, variance instability remains a significant characteristic of most time series. These findings highlight the complex nature of non-stationarity in time series.
The forecasting performance is quantified using root mean squared error (RMSE), mean absolute percentage error (MAPE), and Theil’s U statistic (U) [25,40,41]. RMSE calculates the divergence between the predicted and actual values. MAPE measures a scale-independent error, allowing direct comparison across datasets. Theil’s U statistic evaluates the forecasting performance of a model compared with the naive method. These metrics are presented as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N_{te}} \sum_{t=1}^{N_{te}} \left( y_t - \hat{y}_t \right)^2}$$
$$\mathrm{MAPE} = \frac{1}{N_{te}} \sum_{t=1}^{N_{te}} \left| \frac{y_t - \hat{y}_t}{y_t} \right| \times 100$$
$$U = \frac{\sqrt{\frac{1}{N_{te}} \sum_{t=1}^{N_{te}} \left( y_t - \hat{y}_t \right)^2}}{\sqrt{\frac{1}{N_{te}} \sum_{t=1}^{N_{te}} \left( y_t - y_{t-1} \right)^2}}.$$
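The three metrics translate directly into Python. One simplification in this sketch: the naive term of Theil's U sums over the $N_{te} - 1$ in-window differences rather than assuming an observation before the first test point.

```python
from math import sqrt

def rmse(y, y_hat):
    """Root mean squared error between actual and predicted values."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def mape(y, y_hat):
    """Mean absolute percentage error (scale-independent)."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, y_hat))

def theil_u(y, y_hat):
    """Ratio of the model's squared error to that of the naive
    last-value forecast; U < 1 means the model beats the naive method."""
    num = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    den = sum((y[t] - y[t - 1]) ** 2 for t in range(1, len(y)))
    return sqrt(num / den)
```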

4.2. Comparison with Non-Stationary Fuzzy Time Series Forecasting Models

In this section, we conduct comprehensive experiments to evaluate the performance of the proposed TV-NS-BN-PWFTS model against other state-of-the-art non-stationary fuzzy time series forecasting methods. The benchmark FTSFMs include two time-variant FTSFMs [23] (TV-PWFTS and TV-BN-PWFTS) that utilize PWFTS [10] and BN-PWFTS [33] as internal methods, respectively. Incremental ensemble approaches [23] are also applied to construct non-stationary FTSFMs by combining with PWFTS or BN-PWFTS, yielding IE-PWFTS and IE-BN-PWFTS. NSFTS [23] is also employed; it keeps the fuzzy relationships constant while using residual-based non-stationary fuzzy sets. (Source code for NSFTS, TV-PWFTS, and IE-PWFTS is available at https://github.com/PYFTS/NSFTS (accessed on 31 November 2024). TV-BN-PWFTS, IE-BN-PWFTS, and TV-NS-BN-PWFTS are implemented using the pyFTS library (https://github.com/PYFTS (accessed on 31 November 2024)) for the time-variant framework and pgmpy (https://github.com/pgmpy (accessed on 31 November 2024)) for the Bayesian network components.) For the first group of datasets, the first 10% of the data is used for training and the remaining 90% for testing. We implemented a grid search to identify the optimal parameters for the benchmark FTSFMs. For the proposed TV-NS-BN-PWFTS method, we conducted systematic parameter optimization within the following ranges: FTSFM order $\omega \in \{2, 3\}$, number of fuzzy sets from three to fourteen, old data window length $l_o \in \{100, 300\}$, new data window length $l_n \in \{50, 100\}$, prediction sub-window length $l_r \in \{10, 30\}$, model update threshold $\theta \in \{0.25, 1\}$, and BN adaptive learning weighting parameter $\eta \in \{0.25, 0.75\}$.
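The grid search over these ranges can be organized as an exhaustive sweep. A sketch, where the dictionary keys are illustrative names for the paper's parameters and `evaluate` stands in for any routine that trains the model with one configuration and returns its validation error:

```python
from itertools import product

# Hypothetical search space mirroring the ranges reported above.
SEARCH_SPACE = {
    "order": [2, 3],
    "n_fuzzy_sets": list(range(3, 15)),
    "l_o": [100, 300],
    "l_n": [50, 100],
    "l_r": [10, 30],
    "theta": [0.25, 1.0],
    "eta": [0.25, 0.75],
}

def grid_search(evaluate, space=SEARCH_SPACE):
    """Evaluate every parameter combination and return the
    configuration with the lowest validation error."""
    keys = list(space)
    best_params, best_err = None, float("inf")
    for values in product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        err = evaluate(params)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```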
Table 3, Table 4 and Table 5 present the comparative prediction performance of six FTSFMs on nine non-stationary time series. Experimental results reveal that the proposed TV-NS-BN-PWFTS achieves superior performance across the majority of datasets. Compared with other BN-PWFTS-based models (IE-BN-PWFTS and TV-BN-PWFTS), the dynamic historical information integration mechanism in TV-NS-BN-PWFTS effectively enhances the FTSFM’s adaptability to the dynamic characteristics of time series. The superior performance of IE-BN-PWFTS over TV-BN-PWFTS further validates the necessity of extracting useful information from historical data for prediction enhancement. Our model’s superior performance over NSFTS reveals that merely adjusting fuzzy set parameters is insufficient to comprehensively capture statistical characteristic changes in time series, reflecting the complex nature of time series non-stationarity. Although TV-NS-BN-PWFTS has slightly higher MAPE and U-values (less than 1.5% difference) compared with IE-BN-PWFTS on the Dow Jones dataset, it retains the optimal RMSE value. This indicates that TV-NS-BN-PWFTS still maintains a highly competitive overall forecasting performance.
Figure 3 presents the prediction residuals generated by the proposed model across the nine datasets using their respective test subsets. The error values predominantly cluster around zero, indicating minimal overall prediction bias. The absence of periodic patterns or trending behaviors in the residuals suggests that the model effectively captures the dynamic characteristics of the time series. Figure 4 illustrates the residual distribution histograms and their corresponding density curves for the nine time series. The residuals predominantly exhibit the characteristics of a standard normal distribution, validating the model's effectiveness. While minor deviations from normality were detected in the EUR–GBP, NASDAQ, TAIEX, and EUR–USD datasets, the model maintains robust reliability and stability overall.
Furthermore, to enable a more thorough evaluation of TV-NS-BN-PWFTS's forecasting capabilities, we conducted analyses on the TAIEX, NASDAQ, Dow Jones, and SP500 datasets spanning 2017 to 2022, with each fiscal year treated as an independent time series. The experimental data were sourced from [42]. For comparability, we adopted their experimental configuration, in which the initial 22-week period constituted the training dataset and predictions were made on a weekly basis. The proposed model was benchmarked against several established time-variant fuzzy forecasting models, including NSFTS, the dynamic evolving neural-fuzzy inference system (DENFIS) [43], and the phase-cum-time variant fuzzy time series model (PTVFTS) [42]. The comparative results in terms of RMSE and MAPE, presented in Table 6, demonstrate the significant advantages of TV-NS-BN-PWFTS over the other contemporary fuzzy forecasting models. The model achieved optimal performance in both RMSE and MAPE across sixteen of the twenty-four annual prediction tasks, with an average performance improvement of over 30.50% on those sixteen datasets. While PTVFTS outperformed TV-NS-BN-PWFTS in six specific years (SP500-2021, TAIEX-2017, NASDAQ-2018, SP500-2018, NASDAQ-2021, and Dow Jones-2018), TV-NS-BN-PWFTS maintained consistently high prediction accuracy across the majority of forecasting tasks, and in those six datasets its performance gaps were relatively minor: 20.16% in RMSE and 24.58% in MAPE.
To evaluate the capability of TV-NS-BN-PWFTS in quantifying prediction uncertainty, we extended TV-NS-BN-PWFTS to construct prediction intervals. Specifically, TV-NS-BN-PWFTS adopts the interval prediction methodology proposed by [10], utilizing fuzzy empirical probability-weighted FLRGs to compute prediction intervals. Two representative non-stationary financial time series, TAIEX and EUR–USD, were selected for experimental analysis. For clarity of presentation, we conducted a detailed analysis of the last 100 observations in the test set. As illustrated in Figure 5, the prediction intervals generated by TV-NS-BN-PWFTS achieved higher true value coverage rates compared with IE-BN-PWFTS, validating the superiority and reliability of our proposed model in non-stationary time series forecasting.

4.3. Comparison with Batch Learning Models

In this section, the proposed model is compared with various outstanding batch learners in the second dataset group to evaluate its predictive performance. We compared the proposed method with classic predictive models such as the multiresolution autoregressive model (MAR), the autoregressive model (AR), the adaptive network-based fuzzy inference system (ANFIS), and the artificial neural network (ANN). Additionally, deep neural network-based models, such as temporal convolutional networks (TCNs), recurrent neural networks (RNNs), the long short-term memory (LSTM) network, and gated recurrent unit (GRU), are included in the comparison. The fuzzy cognitive map (FCM) models integrated with wavelet transform (Wavelet-HFCM [39]) or convolutional neural network (CNN-FCM [44]) are also included in the comparison. Wavelet-HFCM utilizes the redundant wavelet transform to decompose non-stationary series into multivariate time series. HFCM models the latent relationships within these time series and predicts these time series. CNN-FCM applies FCM to learn the relationships between series decomposed by TCN. A regression model then predicts the next observation based on the FCM output. The fuzzy-probabilistic predictive models PWFTS [10] and BN-PWFTS [33] are also employed. This experiment adopts the dataset division scheme in [44]. Considering the characteristics of the second dataset group, the search range for the model order is set from three to thirteen to achieve optimal predictive performance. The optimal results for PWFTS and BN-PWFTS were determined using the grid search method. The results for other benchmark methods were obtained from [39,44].
Table 7 indicates that TV-NS-BN-PWFTS achieves the best forecasting performance on four out of eight datasets while maintaining competitive performance just behind BN-PWFTS on CO2, Lake, and Milk datasets. Compared with BN-PWFTS, TV-NS-BN-PWFTS exhibits stronger adaptability to dynamic changes in non-stationary time series due to its model update strategy driven by phased prediction performance. For datasets like DJ, where significant differences exist between training and testing data distributions, the model updating mechanism of TV-NS-BN-PWFTS has notable advantages. The dynamic adjustment mechanism may introduce extra fluctuations when the time series has strong periodic characteristics, such as CO2 and Milk datasets. TV-NS-BN-PWFTS surpasses FCM-based methods on seven out of eight datasets except for MG. The superior performance over Wavelet-HFCM and CNN-FCM can be attributed to its ability to capture the dynamic changes in temporal patterns of non-stationary time series, which FCM methods with fixed causal relationships cannot achieve. TV-NS-BN-PWFTS’s weaker performance on the MG dataset may be due to the absence of non-stationarity, which prevents the update mechanism of TV-NS-BN-PWFTS from fully exhibiting its advantages. This indirectly proves that the model is more suitable for handling complex time series with non-stationary characteristics.
Figure 6 demonstrates the prediction residuals obtained from the proposed model for eight datasets when benchmarked against batch learning approaches. The balanced distribution of positive and negative errors reveals the model’s ability to generate unbiased predictions without systematic overestimation or underestimation. The absence of temporal patterns in the residual scatter plots confirms the model’s effectiveness in capturing time-varying characteristics of the data. Figure 7 presents the residual distribution characteristics across eight datasets. The SP500b, DJ, and Lake datasets exhibit highly symmetric normal distributions. The density curves for the Sunspot, CO2, and MG datasets display varying degrees of skewness, while the Milk and Radio datasets demonstrate bimodal distribution patterns, potentially attributable to limited sample sizes and inherent data fluctuations.
To further validate the performance of the proposed model, we conducted comparative experiments using NASDAQ daily closing prices from 2001 to 2012 [13]. The model was benchmarked against various FTSFMs with data stationary operations and classical time series forecasting models. The comparison methods include an FTSFM with an improved sparrow search algorithm and complete ensemble empirical mode decomposition with adaptive noises (CEEMDAN-ISSA-FTS) [13], the FTSFM based on fuzzy c-means clustering and the empirical mode decomposition method (EMD-FC-FTS) [45] utilizing empirical mode decomposition, Wavelet-HFCM [39] employing wavelet transform, prophet [46] based on time series additive decomposition, and traditional Chen’s FTSFM (Chen) [6] and ARIMA using the differencing method. As shown in Table 8, TV-NS-BN-PWFTS outperforms these methods in both RMSE and MAPE metrics, demonstrating its significant advantages over traditional feature extraction-based time series prediction models and further validating its effectiveness in handling non-stationary time series forecasting tasks.

4.4. Comparison Considering Multiple Time Series Together

To comprehensively evaluate model performance across all datasets, we conducted the Friedman test [47] and the post-hoc Holm test [48] for non-parametric statistical analysis. The results in Table 9 display TV-NS-BN-PWFTS’s superior performance with an average ranking of 1.06 across nine datasets, considerably outperforming other non-stationary FTSFMs. The Friedman test results include a test statistic z-value of 42.7070 and a p-value of 4.2364 × 10−8, indicating significant performance differences among the FTSFMs at the 5% significance level.
The Holm test results in Table 10 further reveal that TV-NS-BN-PWFTS has statistically significant performance advantages over IE-PWFTS, TV-BN-PWFTS, TV-PWFTS, and NSFTS. In contrast, the performance comparison between TV-NS-BN-PWFTS and IE-BN-PWFTS shows no statistically significant difference.
Statistical comparison with batch learning models through Friedman and Holm tests demonstrates TV-NS-BN-PWFTS’s effectiveness. According to Table 11, TV-NS-BN-PWFTS achieves the top average rank of 2.25 compared with twelve batch learners, notably excelling in predicting time series with complex dynamic characteristics. Friedman test results, displaying a test statistic z-value of 47.7997 and a p-value of 3.3867 × 10−6, reveal significant performance differences among models at the 5% significance level. The Holm test in Table 12 confirms the statistically significant difference of TV-NS-BN-PWFTS over AR, TCN, ANN, GRU, MAR, LSTM, and ANFIS at the 95% confidence level. Despite showing non-significant differences with RNN, CNN-FCM, PWFTS, Wavelet-HFCM, and BN-PWFTS, TV-NS-BN-PWFTS still demonstrates superior overall predictive accuracy according to the RMSE metric.
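For reference, the Holm step-down correction used in the post-hoc analysis can be implemented in a few lines; this is a generic sketch, not tied to the paper's specific p-values:

```python
def holm_correction(p_values, alpha=0.05):
    """Holm step-down procedure: compare the k-th smallest p-value
    against alpha / (m - k); once a test fails to reject, all remaining
    (larger) p-values are also not rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject
```

Holm's procedure controls the family-wise error rate while being uniformly more powerful than a plain Bonferroni correction, which is why it is the standard companion to the Friedman test in forecasting comparisons.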

4.5. Ablation Study

In this section, we conduct ablation experiments to validate the effectiveness of core modules. We first introduce three variants of TV-NS-BN-PWFTS. TV-BN-PWFTS represents an FTSFM that employs BN-PWFTS as its core module with a traditional time-variant updating strategy [15]. This variant only utilizes recent data to construct a new BN-PWFTS for prediction, disregarding useful historical information. TV-BN-PWFTS (NSFS) replaces traditional fuzzy sets with our proposed non-stationary fuzzy sets to demonstrate their effectiveness. TV-BN-PWFTS (adaptive) substitutes the time-variant updating strategy [15] with our dynamic updating approach to verify its efficacy in capturing temporal patterns. Table 13 presents the results across nine time series using three metrics: RMSE, MAPE, and U. The results show that both variant methods incorporating either non-stationary fuzzy sets or model updating modules outperform the baseline TV-BN-PWFTS. The superior performance of TV-BN-PWFTS (adaptive) over TV-BN-PWFTS (NSFS) indicates that dynamic changes in temporal patterns significantly impact the long-term prediction of non-stationary time series. TV-NS-BN-PWFTS offers a comprehensive approach to handling non-stationarity in time series. It not only addresses the dynamic changes in vagueness through non-stationary fuzzy sets but also captures the evolution of temporal relationships, thereby providing more accurate and comprehensive forecasting results.

5. Conclusions

To address the negative impact of non-stationarity on FTSFM forecasting performance, we propose a novel hybrid FTSFM that effectively captures heteroscedasticity and trend changes inherent in time series. It employs a novel dynamic updating scheme that effectively incorporates historical information with new data. First-order differencing reduces time series non-stationarity while extracting time series variation information. Dynamic adjustment of non-stationary fuzzy set parameters based on residuals enables precise modeling of local changes in time series variation. Once old and new data are fuzzified, the model rebuilds fuzzy empirical probability-based FLRGs, enabling dynamic updates of fuzzy relationships. We use adaptive BN structure learning to model dependence relationships dynamically between time points in FLRGs. The updates of the BN and FLRGs reflect changes in temporal relationships within the time series. The proposed hybrid FTSFM successfully integrates historical knowledge preservation with dynamic adaptation according to new data, enhancing FTSFMs’ ability to handle non-stationary time series more effectively. Experimental results show that the proposed model outperforms existing non-stationary FTSFMs and batch learning models. Hypothesis tests verify the reliability of the proposed model.
The proposed model substantially improves the forecasting accuracy of FTSFM for non-stationary time series. However, several issues deserve further investigation. While the current model handles univariate non-stationary time series forecasting, future work will extend it to multivariate non-stationary prediction. The present study is also limited to triangular membership functions for constructing non-stationary fuzzy sets; to broaden the model's applicability, the behavior of other membership functions, such as Gaussian and trapezoidal functions, in non-stationary environments warrants in-depth study. Because of non-stationarity, quantifying prediction uncertainty is essential, and we plan to incorporate confidence interval estimation into the forecasting framework to better assess prediction reliability. Finally, the integration of fuzzy reasoning and neural network fitting will be explored, which may combine the continuous-time modeling advantages of NARX models while preserving the interpretability of fuzzy systems.
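As a hedged illustration of the membership-function families mentioned above, the sketch below implements triangular, Gaussian, and trapezoidal memberships together with a simple non-stationary perturbation that shifts a set's parameters over time. The linear drift is an illustrative choice only, not the paper's perturbation function.

```python
import math

def triangular(x, a, b, c):
    # Triangular membership with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gaussian(x, mu, sigma):
    # Gaussian membership centered at mu.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def trapezoidal(x, a, b, c, d):
    # Trapezoidal membership: flat top between b and c.
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def nonstationary(mf, params, shift_rate):
    # Non-stationary fuzzy set: all parameters drift linearly with t.
    def mu(x, t):
        return mf(x, *[p + shift_rate * t for p in params])
    return mu

ns_tri = nonstationary(triangular, (1.0, 2.0, 3.0), shift_rate=0.1)
print(ns_tri(2.0, 0))   # membership at the initial peak -> 1.0
print(ns_tri(2.0, 10))  # the peak has drifted to 3.0 -> 0.0
```

Swapping `triangular` for `gaussian` or `trapezoidal` in the `nonstationary` wrapper is all that the proposed extension would require at the membership level; the open question raised above is how each shape interacts with residual-driven parameter adjustment.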

Author Contributions

Conceptualization, B.W.; Data curation, B.W.; Formal analysis, B.W.; Investigation, B.W.; Methodology, B.W.; Software, B.W. and X.L.; Visualization, B.W.; Writing, original draft, B.W.; Project administration, B.W. and X.L.; Writing, review and editing, B.W. and X.L.; Resources, X.L.; Supervision, X.L.; Funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62076049.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets’ sources are fully annotated in the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, J.; He, Z.; Geng, T.; Huang, F.; Gong, P.; Yi, P.; Peng, J. State Causality and Adaptive Covariance Decomposition Based Time Series Forecasting. Sensors 2023, 23, 809. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, J.; Guan, A.; Cheng, S. Double Decomposition and Fuzzy Cognitive Graph-Based Prediction of Non-Stationary Time Series. Sensors 2024, 24, 7272. [Google Scholar] [CrossRef] [PubMed]
  3. Tavares, T.H.B.D.C.; Ferreira, B.P.; Mendes, E.M.A.M. Fuzzy Time Series Model Based on Red–Black Trees for Stock Index Forecasting. Appl. Soft Comput. 2022, 127, 109323. [Google Scholar] [CrossRef]
  4. Wang, J.; Li, H.; Wang, Y.; Lu, H. A Hesitant Fuzzy Wind Speed Forecasting System with Novel Defuzzification Method and Multi-Objective Optimization Algorithm. Expert Syst. Appl. 2021, 168, 114364. [Google Scholar] [CrossRef]
  5. Song, Q.; Chissom, B. Fuzzy Time Series and Its Model. Fuzzy Sets Syst. 1993, 54, 269–277. [Google Scholar] [CrossRef]
  6. Chen, S.M. Forecasting Enrollments Based on Fuzzy Time Series. Fuzzy Sets Syst. 1996, 81, 311–319. [Google Scholar] [CrossRef]
  7. Hyndman, R.J.; Athanasopoulos, G. Forecasting: Principles and Practice, 3rd ed.; OTexts: Melbourne, Australia, 2024. [Google Scholar]
  8. Tsay, R.S. Analysis of Financial Time Series, 2nd ed.; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  9. Bose, M.; Mali, K. Designing Fuzzy Time Series Forecasting Models: A Survey. Int. J. Approx. Reason. 2019, 111, 78–99. [Google Scholar] [CrossRef]
  10. De Lima Silva, P.C.; Sadaei, H.J.; Ballini, R.; Guimarães, F.G. Probabilistic Forecasting with Fuzzy Time Series. IEEE Trans. Fuzzy Syst. 2020, 28, 1771–1784. [Google Scholar] [CrossRef]
  11. Dixit, A.; Jain, S. Intuitionistic Fuzzy Time Series Forecasting Method for Non-Stationary Time Series Data with Suitable Number of Clusters and Different Window Size for Fuzzy Rule Generation. Inf. Sci. 2023, 623, 132–145. [Google Scholar] [CrossRef]
  12. Liang, X.; Wu, Z. Forecasting Tourist Arrivals Using Dual Decomposition Strategy and an Improved Fuzzy Time Series Method. Neural Comput. Appl. 2022, 35, 1–23. [Google Scholar] [CrossRef]
  13. Xian, S.; Lei, H.; Chen, K.; Li, Z. A Novel Fuzzy Time Series Model Based on Improved Sparrow Search Algorithm and CEEMDAN. Appl. Intell. 2023, 53, 11300–11327. [Google Scholar] [CrossRef]
  14. Carnelossi Furlaneto, D.; Oliveira, L.S.; Menotti, D.; Cavalcanti, G.D. Bias Effect on Predicting Market Trends with EMD. Expert Syst. Appl. 2017, 82, 19–26. [Google Scholar] [CrossRef]
  15. Song, Q.; Chissom, B.S. Forecasting Enrollments with Fuzzy Time Series—Part II. Fuzzy Sets Syst. 1994, 62, 1–8. [Google Scholar] [CrossRef]
  16. Hwang, J.-R.; Chen, S.-M.; Lee, C.-H. Handling Forecasting Problems Using Fuzzy Time Series. Fuzzy Sets Syst. 1998, 100, 217–228. [Google Scholar] [CrossRef]
  17. Liu, H.T.; Wei, N.C.; Yang, C.G. Improved Time-Variant Fuzzy Time Series Forecast. Fuzzy Optim. Decis. Mak. 2009, 8, 45–65. [Google Scholar] [CrossRef]
  18. Liu, H.T.; Wei, M.L. An Improved Fuzzy Forecasting Method for Seasonal Time Series. Expert Syst. Appl. 2010, 37, 6310–6318. [Google Scholar] [CrossRef]
  19. Huo, X.; Hao, K.; Chen, L.; Tang, X.s.; Wang, T.; Cai, X. A Dynamic Soft Sensor of Industrial Fuzzy Time Series with Propositional Linear Temporal Logic. Expert Syst. Appl. 2022, 201, 117176. [Google Scholar] [CrossRef]
  20. Yun, U.; Lee, G.; Yoon, E. Advanced Approach of Sliding Window Based Erasable Pattern Mining with List Structure of Industrial Fields. Inf. Sci. 2019, 494, 37–59. [Google Scholar] [CrossRef]
  21. Garibaldi, J.M.; Jaroszewski, M.; Musikasuwan, S. Nonstationary Fuzzy Sets. IEEE Trans. Fuzzy Syst. 2008, 16, 1072–1086. [Google Scholar] [CrossRef]
  22. Alves, M.A.; De Lima e Silva, P.C.; Severiano, C.A., Jr.; Vieira, G.L.; Guimarães, F.G.; Sadaei, H.J. An Extension of Nonstationary Fuzzy Sets to Heteroskedastic Fuzzy Time Series. In Proceedings of the ESANN: 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 25–27 April 2018. [Google Scholar]
  23. De Lima e Silva, P.C.; Severiano, C.A.; Alves, M.A.; Silva, R.; Weiss Cohen, M.; Guimarães, F.G. Forecasting in Non-Stationary Environments with Fuzzy Time Series. Appl. Soft Comput. 2020, 97, 106825. [Google Scholar] [CrossRef]
  24. Shi, X.; Wang, J.; Zhang, B. A Fuzzy Time Series Forecasting Model with Both Accuracy and Interpretability Is Used to Forecast Wind Power. Appl. Energy 2024, 353, 122015. [Google Scholar] [CrossRef]
  25. Wong, W.-K.; Bai, E.; Chu, A.W. Adaptive Time-Variant Models for Fuzzy-Time-Series Forecasting. IEEE Trans. Syst. Man. Cybern. Part 2010, 40, 1531–1542. [Google Scholar] [CrossRef] [PubMed]
  26. Cheng, C.; Chen, T.; Teoh, H.; Chiang, C. Fuzzy Time-Series Based on Adaptive Expectation Model for TAIEX Forecasting. Expert Syst. Appl. 2008, 34, 1126–1132. [Google Scholar] [CrossRef]
  27. Kmenta, J. Elements of Econometrics, 2nd ed.; University of Michigan Press: Ann Arbor, MI, USA, 1997. [Google Scholar] [CrossRef]
  28. Yao, Y.; Zhao, Y.; Li, Y. A Volatility Model Based on Adaptive Expectations: An Improvement on the Rational Expectations Model. Int. Rev. Financ. Anal. 2022, 82, 102202. [Google Scholar] [CrossRef]
  29. Cheng, C.H.; Yang, J.H. Fuzzy Time-Series Model Based on Rough Set Rule Induction for Forecasting Stock Price. Neurocomputing 2018, 302, 33–45. [Google Scholar] [CrossRef]
  30. Pearl, J. Probabilistic Reasoning in Intelligent Systems; Elsevier: Amsterdam, The Netherlands, 1988. [Google Scholar] [CrossRef]
  31. Zuo, Y.; Kita, E. Stock Price Forecast Using Bayesian Network. Expert Syst. Appl. 2012, 39, 6729–6737. [Google Scholar] [CrossRef]
  32. Wang, L.; Cui, Y.; Zhang, F.; Coskun, S.; Liu, K.; Li, G. Stochastic Speed Prediction for Connected Vehicles Using Improved Bayesian Networks with Back Propagation. Sci. China Technol. Sci. 2022, 65, 1524–1536. [Google Scholar] [CrossRef]
  33. Wang, B.; Liu, X.; Chi, M.; Li, Y. Bayesian Network Based Probabilistic Weighted High-Order Fuzzy Time Series Forecasting. Expert Syst. Appl. 2024, 237, 121430. [Google Scholar] [CrossRef]
  34. Wang, L.; Wang, Z.; Zhao, S.; Tan, S. Stock Market Trend Prediction Using Dynamical Bayesian Factor Graph. Expert Syst. Appl. 2015, 42, 6267–6275. [Google Scholar] [CrossRef]
  35. Heckerman, D. A Tutorial on Learning with Bayesian Networks. Learn. Graph. Model. 1998, 301–354. [Google Scholar]
  36. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  37. De Lima Silva, P.C. NSFTS Data [Dataset]. 2020. Available online: https://github.com/PYFTS/NSFTS (accessed on 31 November 2024).
  38. Yang, S.; Liu, J. Wavelet-HFCM Data [Dataset]. 2018. Available online: https://github.com/yangysc/Wavelet-HFCM (accessed on 31 November 2024).
  39. Yang, S.; Liu, J. Time-Series Forecasting Based on High-Order Fuzzy Cognitive Maps and Wavelet Transform. IEEE Trans. Fuzzy Syst. 2018, 26, 3391–3402. [Google Scholar] [CrossRef]
  40. Theil, H. Applied Economic Forecasting; Rand McNally: Chicago, IL, USA, 1966. [Google Scholar]
  41. Hyndman, R.J.; Koehler, A.B. Another Look at Measures of Forecast Accuracy. Int. J. Forecast. 2006, 22, 679–688. [Google Scholar] [CrossRef]
  42. Saleena, A.J.; John, C.J.; Lincy, G.R.M. A Phase-Cum-Time Variant Fuzzy Time Series Model for Forecasting Non-Stationary Time Series and Its Application to the Stock Market. IEEE Access 2024, 12, 188373–188385. [Google Scholar] [CrossRef]
  43. Kasabov, N.; Song, Q. DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction. IEEE Trans. Fuzzy Syst. 2002, 10, 144–154. [Google Scholar] [CrossRef]
  44. Liu, P.; Liu, J.; Wu, K. CNN-FCM: System Modeling Promotes Stability of Deep Learning in Time Series Prediction. Knowl.-Based Syst. 2020, 203, 106081. [Google Scholar] [CrossRef]
  45. Ferreira, M.V.D.S.; Rios, R.; Mello, R.; Rios, T.N. Using fuzzy clustering to address imprecision and uncertainty present in deterministic components of time series. Appl. Soft Comput. 2021, 113, 108011. [Google Scholar] [CrossRef]
  46. Taylor, S.J.; Letham, B. Forecasting at scale. Am. Stat. 2018, 72, 37–45. [Google Scholar] [CrossRef]
  47. Friedman, M. A Comparison of Alternative Tests of Significance for the Problem of m Rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  48. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
Figure 1. The flow chart of the proposed model.
Figure 2. Original and first-order differenced time series for seventeen datasets. The top panel depicts the original time series data. The lower panel shows the first-order differenced time series.
Figure 3. Error scatter plot produced by the proposed model for (a) BTC–USD time series, (b) Dow Jones time series, (c) ETH–USD time series, (d) EUR–GBP time series, (e) EUR–USD time series, (f) GBP–USD time series, (g) NASDAQ time series, (h) SP500a time series, (i) TAIEX time series.
Figure 4. Error distribution histogram produced by the proposed model for (a) BTC–USD time series, (b) Dow Jones time series, (c) ETH–USD time series, (d) EUR–GBP time series, (e) EUR–USD time series, (f) GBP–USD time series, (g) NASDAQ time series, (h) SP500a time series, (i) TAIEX time series.
Figure 5. Prediction intervals yielded by the proposed model and IE-BN-PWFTS for (a) TAIEX time series and (b) EUR–USD time series.
Figure 6. Error scatter plot produced by the proposed model for (a) Sunspot time series, (b) MG time series, (c) SP500b time series, (d) Radio time series, (e) Lake time series, (f) CO2 time series, (g) Milk time series, (h) DJ time series.
Figure 7. Error distribution histogram produced by the proposed model for (a) Sunspot time series, (b) MG time series, (c) SP500b time series, (d) Radio time series, (e) Lake time series, (f) CO2 time series, (g) Milk time series, (h) DJ time series.
Table 1. Descriptions of seventeen time series.
Dataset | Description | Number
TAIEX [37] | Daily averages of open, high, low, and close prices for the Taiwan Stock Exchange Capitalization Weighted Stock Index | 4000
SP500a [37] | Daily averages of open, high, low, and close prices for the S&P 500 stock index | 4000
NASDAQ [37] | Daily averages of open, high, low, and close prices for the National Association of Securities Dealers Automated Quotations Composite Index | 4000
Dow Jones [37] | Daily averages of the Dow Jones Industrial Index's open, high, low, and close prices | 4000
BTC–USD [37] | Daily cryptocurrency exchange rates for Bitcoin quoted in US Dollars | 2968
ETH–USD [37] | Daily cryptocurrency exchange rates for Ethereum quoted in US Dollars | 1121
EUR–GBP [37] | FOREX data, including daily average quotations for Euro to Great British Pound | 5000
EUR–USD [37] | FOREX data, including daily average quotations for Euro to US Dollar | 5000
GBP–USD [37] | FOREX data, including daily average quotations for Great British Pound to US Dollar | 5000
Sunspot [38] | Yearly sunspot count | 288
MG [38] | Obtained by solving a first-order nonlinear differential-delay equation via the fourth-order Runge–Kutta algorithm | 1000
SP500b [38] | Daily open prices of the S&P 500 stock index | 251
Milk [38] | Milk production in pounds on a monthly basis | 168
DJ [38] | Monthly close prices for the Dow Jones industrial index | 291
Radio [38] | Highest permitted radio frequency for broadcasting in Washington, DC, USA | 240
CO2 [38] | CO2 measurements at Mauna Loa | 192
Lake [38] | Monthly level of Lake Erie | 680
Table 2. Stationarity evaluation based on Augmented Dickey–Fuller test and Levene’s test.
Dataset | ADF p (original) | ADF result | Levene p (original) | Levene result | ADF p (differenced) | ADF result | Levene p (differenced) | Levene result
TAIEX | 0.1175 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
SP500a | 0.7733 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
NASDAQ | 0.9841 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
Dow Jones | 0.8189 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
BTC–USD | 0.6710 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
ETH–USD | 0.3546 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
EUR–GBP | 0.4537 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
EUR–USD | 0.3579 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
GBP–USD | 0.7022 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
Sunspot | 0.1462 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
MG | 0.0000 | H0 Rejected | 0.9164 | H0 Accepted | 0.0000 | H0 Rejected | 0.9018 | H0 Accepted
SP500b | 0.8298 | H0 Accepted | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected | 0.0000 | H0 Rejected
Milk | 0.6274 | H0 Accepted | 0.0102 | H0 Rejected | 0.0301 | H0 Rejected | 0.0110 | H0 Rejected
DJ | 0.3550 | H0 Accepted | 0.0023 | H0 Rejected | 0.0000 | H0 Rejected | 0.0028 | H0 Rejected
Radio | 0.2491 | H0 Accepted | 0.0001 | H0 Rejected | 0.0102 | H0 Rejected | 0.0000 | H0 Rejected
CO2 | 0.9964 | H0 Accepted | 0.0000 | H0 Rejected | 0.0001 | H0 Rejected | 0.0000 | H0 Rejected
Lake | 0.1109 | H0 Accepted | 0.0135 | H0 Rejected | 0.0000 | H0 Rejected | 0.0098 | H0 Rejected
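The heteroscedasticity side of Table 2 is typically obtained with off-the-shelf routines (for example, `scipy.stats.levene` and the ADF test in statsmodels). For completeness, a minimal pure-Python sketch of the mean-based Levene W statistic for two groups, such as the first and second half of a series, is shown below; the corresponding p-value additionally requires the F distribution and is omitted here.

```python
def levene_w(*groups):
    # Mean-based Levene statistic: a one-way ANOVA on the absolute
    # deviations z_ij = |y_ij - mean(group_i)|. A large W suggests
    # unequal variances across groups (heteroscedasticity).
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    z = [[abs(y - sum(g) / len(g)) for y in g] for g in groups]
    z_bar_i = [sum(zi) / len(zi) for zi in z]
    z_bar = sum(sum(zi) for zi in z) / n_total
    between = sum(len(g) * (zb - z_bar) ** 2
                  for g, zb in zip(groups, z_bar_i))
    within = sum(sum((v - zb) ** 2 for v in zi)
                 for zi, zb in zip(z, z_bar_i))
    return (n_total - k) / (k - 1) * between / within

# Second group is ten times more dispersed than the first: W is large.
print(levene_w([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]))
```

In practice one would split each series into segments, compute W (or call `scipy.stats.levene` directly for the p-value), and reject equal variances when the p-value falls below the chosen significance level, as the 0.0000 entries in Table 2 do.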
Table 3. Comparison of the proposed method with other non-stationary fuzzy time series forecasting models in terms of root mean squared error (RMSE). The optimal value is represented in bold.
Dataset | TV-PWFTS | IE-PWFTS | IE-BN-PWFTS | TV-BN-PWFTS | NSFTS | TV-NS-BN-PWFTS
TAIEX | 123.9999 | 1018.5415 | 95.2133 | 137.1233 | 107.4994 | 92.9266
SP500a | 8.8415 | 42.5027 | 7.2578 | 13.2580 | 7.8307 | 7.1490
NASDAQ | 35.0900 | 202.5477 | 28.0960 | 43.5705 | 33.7277 | 27.5051
Dow Jones | 69.5462 | 284.9341 | 57.9956 | 104.4958 | 62.6613 | 57.7796
BTC–USD | 306.1626 | 1364.0944 | 142.1741 | 197.9182 | 151.4576 | 138.6654
ETH–USD | 44.5400 | 158.8328 | 18.8919 | 27.8365 | 19.3987 | 18.3194
EUR–USD | 0.0069 | 0.0190 | 0.0061 | 0.0117 | 0.0064 | 0.0060
EUR–GBP | 0.0035 | 0.0048 | 0.0031 | 0.0061 | 0.0032 | 0.0031
GBP–USD | 0.0083 | 0.0283 | 0.0072 | 0.0141 | 0.0092 | 0.0070
Table 4. Comparison of the proposed method with other non-stationary fuzzy time series forecasting models in terms of mean absolute percentage error. The optimal value is represented in bold.
Dataset | TV-PWFTS | IE-PWFTS | IE-BN-PWFTS | TV-BN-PWFTS | NSFTS | TV-NS-BN-PWFTS
TAIEX | 1.4122 | 10.7081 | 1.0428 | 1.5246 | 1.2096 | 1.0174
SP500a | 0.6130 | 1.9956 | 0.4914 | 0.9442 | 0.5505 | 0.4881
NASDAQ | 0.9000 | 4.0233 | 0.7558 | 1.1891 | 0.9791 | 0.7534
Dow Jones | 0.6054 | 1.6009 | 0.5049 | 0.9514 | 0.5755 | 0.5120
BTC–USD | 6.5868 | 36.2601 | 2.4370 | 3.7092 | 3.0203 | 2.5151
ETH–USD | 7.3500 | 37.0502 | 3.4305 | 4.8608 | 3.8407 | 3.4196
EUR–USD | 0.3892 | 0.9249 | 0.3425 | 0.6631 | 0.3642 | 0.3402
EUR–GBP | 0.3122 | 0.3230 | 0.2725 | 0.5499 | 0.2798 | 0.2696
GBP–USD | 0.3630 | 0.8986 | 0.3163 | 0.6325 | 0.3881 | 0.3114
Table 5. Comparison of the proposed method with other non-stationary fuzzy time series forecasting models in terms of Theil’s U statistic. The optimal value is represented in bold.
Dataset | TV-PWFTS | IE-PWFTS | IE-BN-PWFTS | TV-BN-PWFTS | NSFTS | TV-NS-BN-PWFTS
TAIEX | 1.3047 | 10.7687 | 1.0016 | 1.4497 | 1.1420 | 0.9870
SP500a | 1.1164 | 5.4013 | 0.9164 | 1.6742 | 1.0019 | 0.9144
NASDAQ | 1.2500 | 7.2834 | 1.0102 | 1.5615 | 1.2154 | 0.9913
Dow Jones | 1.1043 | 4.5522 | 0.9207 | 1.6689 | 1.0062 | 0.9275
BTC–USD | 1.9831 | 8.9213 | 0.9205 | 1.2916 | 0.9999 | 0.9151
ETH–USD | 2.0600 | 7.9795 | 0.9487 | 1.3920 | 1.0003 | 0.9432
EUR–USD | 1.1296 | 3.1134 | 0.9902 | 1.9032 | 1.0476 | 0.9845
EUR–GBP | 1.1054 | 1.4879 | 0.9669 | 1.8933 | 1.0055 | 0.9745
GBP–USD | 1.1247 | 3.8410 | 0.9789 | 1.9140 | 1.2580 | 0.9581
Table 6. Prediction performance of the models on NASDAQ, SP500, Dow Jones, and TAIEX 2017–2022 in terms of RMSE and MAPE. The optimal value is represented in bold.
Dataset | NSFTS (RMSE) | DENFIS (RMSE) | PTVFTS (RMSE) | TV-NS-BN-PWFTS (RMSE) | NSFTS (MAPE) | DENFIS (MAPE) | PTVFTS (MAPE) | TV-NS-BN-PWFTS (MAPE)
NASDAQ-2017 | 108.7284 | 239.1259 | 48.1924 | 36.8277 | 1.3742 | 2.9002 | 0.5577 | 0.4102
NASDAQ-2018 | 176.4645 | 632.8676 | 115.4398 | 136.8919 | 1.8796 | 6.9452 | 1.1441 | 1.5356
NASDAQ-2019 | 142.2725 | 443.8779 | 81.5571 | 42.4718 | 1.4696 | 4.2853 | 0.7909 | 0.3875
NASDAQ-2020 | 268.2565 | 869.2766 | 199.9467 | 96.4991 | 1.9631 | 5.9934 | 1.4795 | 0.5843
NASDAQ-2021 | 295.9192 | 679.2459 | 159.6753 | 199.3805 | 1.5915 | 3.6026 | 0.8272 | 1.0269
NASDAQ-2022 | 284.4377 | 481.7911 | 238.0055 | 204.9993 | 2.1808 | 3.5995 | 1.6000 | 1.3231
SP500-2017 | 24.8333 | 78.8404 | 13.0605 | 9.9889 | 0.7286 | 2.4440 | 0.3908 | 0.2895
SP500-2018 | 51.3791 | 187.3747 | 31.8340 | 41.7365 | 1.5853 | 5.4565 | 0.8731 | 1.2067
SP500-2019 | 52.9810 | 125.7938 | 24.2265 | 12.6155 | 1.5685 | 3.3177 | 0.6088 | 0.3087
SP500-2020 | 74.2154 | 202.9605 | 60.1607 | 24.4032 | 1.8372 | 4.7154 | 1.4894 | 0.5327
SP500-2021 | 87.8825 | 190.3767 | 40.0989 | 44.8478 | 1.6162 | 3.3508 | 0.7340 | 0.7523
SP500-2022 | 83.9869 | 185.1545 | 64.8664 | 58.3521 | 1.7117 | 3.9391 | 1.2873 | 1.0746
Dow Jones-2017 | 155.6373 | 1099.0488 | 105.9790 | 101.4949 | 0.4887 | 3.7595 | 0.3769 | 0.3240
Dow Jones-2018 | 460.0347 | 1473.7939 | 300.8507 | 402.6199 | 1.4994 | 4.5950 | 0.8755 | 1.2875
Dow Jones-2019 | 512.6081 | 632.8676 | 234.7223 | 125.2802 | 1.7099 | 6.9452 | 0.6574 | 0.3313
Dow Jones-2020 | 606.3734 | 885.0938 | 515.7433 | 249.5781 | 1.7313 | 2.6101 | 1.5478 | 0.6366
Dow Jones-2021 | 610.8571 | 1023.9746 | 283.7141 | 308.0270 | 1.5315 | 2.3345 | 0.6558 | 0.6414
Dow Jones-2022 | 587.7988 | 2435.7356 | 445.4243 | 393.2805 | 1.4833 | 6.1544 | 1.1168 | 0.8883
TAIEX-2017 | 172.0256 | 218.0463 | 64.5821 | 65.0468 | 1.4130 | 1.6605 | 0.4914 | 0.4982
TAIEX-2018 | 163.9537 | 623.0989 | 109.0077 | 91.3104 | 1.1107 | 4.6290 | 0.7166 | 0.7262
TAIEX-2019 | 179.5368 | 514.1494 | 77.9204 | 67.5768 | 1.3398 | 3.6155 | 0.5358 | 0.4666
TAIEX-2020 | 218.2937 | 961.8291 | 145.8409 | 116.1398 | 1.3034 | 5.7339 | 0.9152 | 0.6687
TAIEX-2021 | 221.4625 | 679.2459 | 159.8782 | 100.8645 | 1.0444 | 3.6026 | 0.7338 | 0.4578
TAIEX-2022 | 323.6182 | 1006.9976 | 233.6213 | 174.7879 | 1.8401 | 5.8642 | 1.2132 | 0.9504
Table 7. Comparison of the proposed method with batch learning models in terms of RMSE. The optimal value is represented in bold.
Method | CO2 | DJ | Lake | MG | Milk | Radio | SP500b | Sunspot
RNN | 1.4190 | 26.2320 | 0.3740 | 0.0010 | 29.2530 | 0.6130 | 27.8960 | 19.2920
ANFIS | 0.9100 | 27.5260 | 0.4580 | 0.0010 | 9.5780 | 0.6510 | 14.9350 | 22.7530
LSTM | 2.1600 | 26.9360 | 0.3840 | 0.0010 | 32.7430 | 0.5900 | 46.2660 | 19.0060
ANN | 1.6950 | 28.5320 | 0.4020 | 0.0050 | 27.1130 | 0.6520 | 17.6960 | 19.9010
AR | 1.3500 | 29.8220 | 0.6380 | 0.0350 | 57.7170 | 0.9020 | 17.8970 | 35.2620
MAR | 0.8120 | 26.7330 | 0.3900 | 0.0020 | 37.8380 | 0.6620 | 16.0410 | 19.1860
GRU | 1.5610 | 25.2110 | 0.3850 | 0.0010 | 36.0940 | 0.8320 | 20.4070 | 19.4080
TCN | 3.1200 | 25.2140 | 0.4090 | 0.0010 | 33.8580 | 0.6020 | 51.2670 | 22.4490
Wavelet-HFCM | 0.5600 | 23.1590 | 0.3770 | 0.0040 | 8.2580 | 0.5470 | 16.1050 | 18.9160
CNN-FCM | 0.7310 | 25.1900 | 0.3910 | 0.0010 | 30.4740 | 0.5670 | 20.8160 | 17.9490
PWFTS | 0.4884 | 22.6454 | 0.3816 | 0.0050 | 8.3004 | 0.3705 | 11.6922 | 23.6950
BN-PWFTS | 0.3412 | 22.9275 | 0.3663 | 0.0013 | 6.0392 | 0.3290 | 11.7978 | 18.8784
TV-NS-BN-PWFTS | 0.3757 | 22.5617 | 0.3692 | 0.0018 | 6.1244 | 0.3289 | 11.5956 | 17.5088
Table 8. Results of the proposed model on NASDAQ 2001-2012 in terms of RMSE and MAPE. The optimal value is represented in bold.
Metric | CEEMDAN-ISSA-FTS | Chen | ARIMA | Prophet | EMD-FC-FTS | Wavelet-HFCM | TV-NS-BN-PWFTS
RMSE | 45.8600 | 80.4000 | 97.6200 | 120.0600 | 126.9500 | 74.4300 | 29.3285
MAPE | 1.2200 | 2.3300 | 2.7300 | 3.4800 | 3.4700 | 2.1200 | 0.7574
Table 9. Rankings of non-stationary fuzzy time series forecasting models across nine datasets. The optimal value is represented in bold.
Dataset | TV-PWFTS | IE-PWFTS | IE-BN-PWFTS | TV-BN-PWFTS | NSFTS | TV-NS-BN-PWFTS
TAIEX | 4.00 | 6.00 | 2.00 | 5.00 | 3.00 | 1.00
SP500a | 4.00 | 6.00 | 2.00 | 5.00 | 3.00 | 1.00
NASDAQ | 4.00 | 6.00 | 2.00 | 5.00 | 3.00 | 1.00
Dow Jones | 4.00 | 6.00 | 2.00 | 5.00 | 3.00 | 1.00
BTC–USD | 5.00 | 6.00 | 2.00 | 4.00 | 3.00 | 1.00
ETH–USD | 5.00 | 6.00 | 2.00 | 4.00 | 3.00 | 1.00
EUR–USD | 4.00 | 6.00 | 2.00 | 5.00 | 3.00 | 1.00
EUR–GBP | 4.00 | 5.00 | 1.50 | 6.00 | 3.00 | 1.50
GBP–USD | 3.00 | 6.00 | 2.00 | 5.00 | 4.00 | 1.00
Average | 4.11 | 5.89 | 1.94 | 4.89 | 3.11 | 1.06
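The rankings in Table 9 follow the usual Friedman-test preparation: for each dataset, the models are ranked by error (rank 1 is best, ties receive the mean of the tied ranks), and the per-dataset ranks are then averaged per model. A small self-contained sketch, using hypothetical error values rather than the paper's, is given below.

```python
def average_ranks_row(errors):
    # Rank one row of errors: 1 = smallest, ties get the mean rank.
    n = len(errors)
    order = sorted(range(n), key=lambda i: errors[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and errors[order[j + 1]] == errors[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mean_ranks(error_table):
    # error_table: one row per dataset, one column per model.
    rows = [average_ranks_row(row) for row in error_table]
    m = len(error_table[0])
    return [sum(r[c] for r in rows) / len(rows) for c in range(m)]

# Hypothetical RMSEs for 3 models on 2 datasets.
print(mean_ranks([[0.30, 0.10, 0.20],
                  [0.50, 0.50, 0.40]]))  # -> [2.75, 1.75, 1.5]
```

The 1.50 entries for EUR–GBP in Table 9 are exactly this tie-averaging rule: two models share ranks 1 and 2, so each receives 1.5.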
Table 10. Holm test results of non-stationary fuzzy time series forecasting models.
No. | Comparison | z-Value | p-Value
1 | TV-NS-BN-PWFTS vs. IE-PWFTS | 5.4805 | 0.0000
2 | TV-NS-BN-PWFTS vs. TV-BN-PWFTS | 4.3466 | 0.0000
3 | TV-NS-BN-PWFTS vs. TV-PWFTS | 3.4647 | 0.0027
4 | TV-NS-BN-PWFTS vs. NSFTS | 2.3308 | 0.0198
5 | TV-NS-BN-PWFTS vs. IE-BN-PWFTS | 1.0079 | 0.9405
Table 11. Rankings of the proposed method and batch learning models across eight datasets. The optimal value is represented in bold.
Method | CO2 | DJ | Lake | MG | Milk | Radio | SP500b | Sunspot | Average
RNN | 9.00 | 8.00 | 3.00 | 3.50 | 7.00 | 8.00 | 11.00 | 7.00 | 7.06
ANFIS | 7.00 | 11.00 | 12.00 | 3.50 | 5.00 | 9.00 | 4.00 | 11.00 | 7.81
LSTM | 12.00 | 10.00 | 6.00 | 3.50 | 9.00 | 6.00 | 12.00 | 5.00 | 7.94
ANN | 11.00 | 12.00 | 10.00 | 11.50 | 6.00 | 10.00 | 7.00 | 9.00 | 9.56
AR | 8.00 | 13.00 | 13.00 | 13.00 | 13.00 | 13.00 | 8.00 | 13.00 | 11.75
MAR | 6.00 | 9.00 | 8.00 | 9.00 | 12.00 | 11.00 | 5.00 | 6.00 | 8.25
GRU | 10.00 | 6.00 | 7.00 | 3.50 | 11.00 | 12.00 | 9.00 | 8.00 | 8.31
TCN | 13.00 | 7.00 | 11.00 | 3.50 | 10.00 | 7.00 | 13.00 | 10.00 | 9.31
Wavelet-HFCM | 4.00 | 4.00 | 4.00 | 10.00 | 3.00 | 4.00 | 6.00 | 4.00 | 4.88
CNN-FCM | 5.00 | 5.00 | 9.00 | 3.50 | 8.00 | 5.00 | 10.00 | 2.00 | 5.94
PWFTS | 3.00 | 2.00 | 5.00 | 11.50 | 4.00 | 3.00 | 2.00 | 12.00 | 5.31
BN-PWFTS | 1.00 | 3.00 | 1.00 | 7.00 | 1.00 | 2.00 | 3.00 | 3.00 | 2.62
TV-NS-BN-PWFTS | 2.00 | 1.00 | 2.00 | 8.00 | 2.00 | 1.00 | 1.00 | 1.00 | 2.25
Table 12. Holm test results of the proposed method and batch learning models.
No. | Comparison | z-Value | p-Value
1 | TV-NS-BN-PWFTS vs. AR | 4.8787 | 0.0000
2 | TV-NS-BN-PWFTS vs. TCN | 3.6270 | 0.0014
3 | TV-NS-BN-PWFTS vs. ANN | 3.7554 | 0.0016
4 | TV-NS-BN-PWFTS vs. GRU | 3.1134 | 0.0111
5 | TV-NS-BN-PWFTS vs. MAR | 3.0813 | 0.0144
6 | TV-NS-BN-PWFTS vs. LSTM | 2.9208 | 0.0349
7 | TV-NS-BN-PWFTS vs. ANFIS | 2.8566 | 0.0471
8 | TV-NS-BN-PWFTS vs. RNN | 2.4715 | 0.1615
9 | TV-NS-BN-PWFTS vs. CNN-FCM | 1.8937 | 0.1748
10 | TV-NS-BN-PWFTS vs. PWFTS | 1.5728 | 0.2316
11 | TV-NS-BN-PWFTS vs. Wavelet-HFCM | 1.3481 | 0.7105
12 | TV-NS-BN-PWFTS vs. BN-PWFTS | 0.1926 | 0.8473
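The Holm comparisons in Tables 10 and 12 apply a step-down correction: hypotheses are tested in order of increasing p-value against the thresholds alpha/m, alpha/(m-1), and so on, stopping at the first failure. A minimal sketch of the rejection rule follows; the z-values themselves come from the Friedman post-hoc statistic and are not computed here.

```python
def holm_reject(p_values, alpha=0.05):
    # Step-down Holm procedure: visit p-values in ascending order,
    # compare the i-th smallest against alpha / (m - i), and stop
    # at the first non-rejection. Returns decisions in input order.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for step, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break  # all larger p-values are retained as well
    return reject

# The smallest p passes 0.05/4 = 0.0125; the next fails 0.05/3, so
# the procedure stops and only one hypothesis is rejected.
print(holm_reject([0.04, 0.01, 0.03, 0.02]))  # -> [False, True, False, False]
```

This stopping rule is why a comparison with a moderate p-value can survive in a Holm table even when its raw p-value would fall below 0.05 on its own.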
Table 13. Ablation study in terms of RMSE, MAPE, and U.
Metric | Model | TAIEX | SP500a | NASDAQ | Dow Jones | BTC–USD | ETH–USD | EUR–USD | EUR–GBP | GBP–USD
RMSE | TV-BN-PWFTS | 317.2940 | 18.8346 | 44.6259 | 144.3049 | 344.3751 | 48.5997 | 0.0343 | 0.0134 | 0.0312
RMSE | TV-BN-PWFTS (NSFS) | 94.7724 | 7.7763 | 27.8757 | 62.1150 | 156.4451 | 19.4565 | 0.0061 | 0.0032 | 0.0074
RMSE | TV-BN-PWFTS (adaptive) | 93.8905 | 7.1645 | 28.2721 | 56.5443 | 141.0029 | 20.6079 | 0.0060 | 0.0031 | 0.0071
RMSE | TV-NS-BN-PWFTS | 92.9266 | 7.1490 | 27.5051 | 57.7796 | 138.6654 | 18.3194 | 0.0060 | 0.0031 | 0.0070
MAPE | TV-BN-PWFTS | 3.5785 | 1.3729 | 1.2272 | 1.2636 | 6.8562 | 9.2859 | 2.1052 | 1.2701 | 1.3969
MAPE | TV-BN-PWFTS (NSFS) | 1.0378 | 0.5341 | 0.7658 | 0.5523 | 2.6283 | 3.5073 | 0.3450 | 0.2760 | 0.3203
MAPE | TV-BN-PWFTS (adaptive) | 1.0215 | 0.4828 | 0.7659 | 0.5012 | 2.5094 | 3.4801 | 0.3409 | 0.2728 | 0.3160
MAPE | TV-NS-BN-PWFTS | 1.0174 | 0.4881 | 0.7534 | 0.5120 | 2.5151 | 3.4196 | 0.3402 | 0.2696 | 0.3114
U | TV-BN-PWFTS | 3.3514 | 2.3902 | 1.6019 | 2.2860 | 2.2215 | 2.4291 | 5.6060 | 4.1973 | 4.2130
U | TV-BN-PWFTS (NSFS) | 1.0065 | 0.9947 | 1.0047 | 0.9972 | 1.0325 | 1.0018 | 1.0040 | 1.0042 | 1.0021
U | TV-BN-PWFTS (adaptive) | 0.9972 | 0.9164 | 1.0190 | 0.9077 | 0.9306 | 1.0611 | 0.9873 | 0.9741 | 0.9714
U | TV-NS-BN-PWFTS | 0.9870 | 0.9144 | 0.9913 | 0.9275 | 0.9151 | 0.9432 | 0.9845 | 0.9745 | 0.9581
The bold values indicate the best performance for each metric on each dataset.

Share and Cite

MDPI and ACS Style

Wang, B.; Liu, X. Adaptive Non-Stationary Fuzzy Time Series Forecasting with Bayesian Networks. Sensors 2025, 25, 1628. https://doi.org/10.3390/s25051628


