Evaluating Predictive Accuracy of Regression Models with First-Order Autoregressive Disturbances: A Comparative Approach Using Artificial Neural Networks and Classical Estimators
Figure 1. Graph of error (e_t) against (e_{t−1}) showing autocorrelation.
Figure 2. Actual and predicted number of people employed (100% testing).
Figure 3. Graph of error (e_t) against (e_{t−1}) showing autocorrelation.
Figure 4. Actual and predicted MPG (average miles per gallon) (100% testing), where ML and REML denote the maximum likelihood and restricted maximum likelihood estimators.
Figure 5. Graph of error (e_t) against (e_{t−1}) showing autocorrelation.
Figure 6. Actual and predicted GDP data (100% testing), where ML and REML denote the maximum likelihood and restricted maximum likelihood estimators.
Abstract
1. Introduction
2. Methodology
2.1. Linear Regression Model Setup
The model is y = Xβ + ε, where:
- y: n × 1 vector of observed response variables;
- X: n × p matrix of predictor variables (with n observations and p predictors, including the intercept term);
- β: p × 1 vector of unknown regression coefficients;
- ε: n × 1 vector of independent and identically distributed error terms with mean zero and variance σ².
2.1.1. Ordinary Least Squares (OLS) Estimation
2.1.2. Prediction and Residuals
2.1.3. Sum of Squared Errors (SSE)
Differentiation of SSE with Respect to β:
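The displayed equations did not survive extraction; the following is a standard reconstruction consistent with the notation defined above.

```latex
\mathrm{SSE}(\beta) = (y - X\beta)^{\top}(y - X\beta), \qquad
\frac{\partial\,\mathrm{SSE}}{\partial \beta} = -2X^{\top}(y - X\beta) = 0
\;\Longrightarrow\; X^{\top}X\hat{\beta} = X^{\top}y
\;\Longrightarrow\; \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y
```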
2.1.4. Properties of the OLS Estimator
- Unbiasedness: E(β̂) = β;
- Variance of β̂: Var(β̂) = σ²(XᵀX)⁻¹;
- Best Linear Unbiased Estimator (BLUE): under the Gauss–Markov assumptions, β̂ has the smallest variance among all linear unbiased estimators. (A numerical check follows this list.)
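As a minimal sketch (not the authors' code), the closed-form estimator can be computed directly and checked against R's lm(); the data frame `df` and its columns Y, X1, X2 are hypothetical placeholders.

```r
# Minimal sketch: closed-form OLS versus lm() on simulated data.
set.seed(1)
df <- data.frame(X1 = rnorm(50), X2 = rnorm(50))
df$Y <- 2 + 1.5 * df$X1 - 0.8 * df$X2 + rnorm(50)

X <- as.matrix(cbind(Intercept = 1, df[, c("X1", "X2")]))
y <- df$Y

beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y   # (X'X)^{-1} X'y
resid    <- y - X %*% beta_hat                 # residuals
sse      <- sum(resid^2)                       # sum of squared errors

cbind(closed_form = as.vector(beta_hat),
      lm_fit      = coef(lm(Y ~ X1 + X2, data = df)))
```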
2.1.5. Autocorrelation Detection
The Durbin–Watson statistic is d = Σ_{t=2}^{n} (e_t − e_{t−1})² / Σ_{t=1}^{n} e_t², where:
- e_t represents the residuals from the OLS fit;
- d ranges from 0 to 4, with d ≈ 2 indicating no autocorrelation.
Interpretation of the Durbin–Watson Statistic
- d ≈ 2 suggests no autocorrelation;
- d < 2 indicates positive autocorrelation;
- d > 2 indicates negative autocorrelation.
Against the tabulated critical bounds d_L (lower) and d_U (upper):
- If d < d_L, there is strong evidence of positive autocorrelation;
- If d > 4 − d_L, there is strong evidence of negative autocorrelation;
- If d falls between d_L and d_U (or between 4 − d_U and 4 − d_L), the evidence is inconclusive. (A usage sketch with the lmtest package follows this list.)
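A minimal usage sketch with the lmtest package (the same package used in Appendix A), continuing the hypothetical `df` from the earlier sketch:

```r
# Sketch: Durbin-Watson test on an OLS fit.
library(lmtest)
fit <- lm(Y ~ X1 + X2, data = df)       # `df` from the previous sketch
dwtest(fit, alternative = "two.sided")  # d near 2 => no first-order autocorrelation
```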
In the autoregressive error scheme e_t = ρ_1 e_{t−1} + … + ρ_p e_{t−p} + u_t:
- ρ_1, …, ρ_p are parameters to be estimated;
- u_t is a white noise error term.
A complementary portmanteau check is the Ljung–Box statistic, Q = n(n + 2) Σ_{k=1}^{h} r_k² / (n − k), where:
- r_k is the sample autocorrelation at lag k;
- n is the number of observations;
- h is the number of lags being tested. (A usage sketch follows.)
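A sketch of the corresponding test in base R, assuming the OLS fit `fit` from the sketch above; stats::Box.test implements the Ljung–Box statistic:

```r
# Sketch: Ljung-Box test on the OLS residuals up to lag h = 4.
res <- residuals(fit)
Box.test(res, lag = 4, type = "Ljung-Box")  # small p-value => autocorrelation
```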
2.2. Implications of AR(1) in Linear Regression
2.2.1. Cochrane–Orcutt Procedure
Step 1: Initial OLS Fit
- X: matrix of predictors;
- y: vector of observed responses.
Step 2: Estimate Autocorrelation
The autocorrelation coefficient is estimated from the OLS residuals as ρ̂ = Σ_{t=2}^{n} e_t e_{t−1} / Σ_{t=2}^{n} e_{t−1}² (Equation (2)), where:
- e_t: residual at time t;
- e_{t−1}: residual at time t − 1.
Step 3: Model Transformation
The transformed model is y_t − ρ̂ y_{t−1} = (x_t − ρ̂ x_{t−1})ᵀβ + u_t, where:
- y_t and y_{t−1}: current and previous observed response values;
- x_t and x_{t−1}: current and previous values of the predictors;
- u_t: transformed error term, now assumed to be uncorrelated.
Step 4: Iteration Until Convergence
- Recompute β̂ for the transformed model using OLS;
- Recalculate the residuals e_t for the updated model and estimate ρ̂ again using Equation (2);
- Update the transformation and repeat until the change in ρ̂ between successive iterations is below a pre-defined threshold, indicating convergence. (A minimal sketch of this loop follows the list.)
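A hand-rolled version of the iteration, as a sketch only (the paper's actual computation uses the orcutt package; see Appendix A). It reuses the hypothetical `df` from the earlier sketches.

```r
# Sketch: manual Cochrane-Orcutt iteration on the toy data `df`.
y <- df$Y
X <- cbind(1, df$X1, df$X2)                   # design matrix with intercept column
n <- length(y)
beta <- coef(lm(Y ~ X1 + X2, data = df))      # Step 1: initial OLS fit
rho <- 0; rho_old <- Inf
while (abs(rho - rho_old) > 1e-8) {
  rho_old <- rho
  e    <- as.vector(y - X %*% beta)           # residuals at current beta
  rho  <- sum(e[-1] * e[-n]) / sum(e[-n]^2)   # Step 2: estimate rho (Equation (2))
  ystar <- y[-1] - rho * y[-n]                # Step 3: quasi-difference the data
  Xstar <- X[-1, ] - rho * X[-n, ]            # intercept column becomes 1 - rho
  beta  <- solve(t(Xstar) %*% Xstar, t(Xstar) %*% ystar)  # Step 4: re-fit by OLS
}
round(c(rho = rho, beta), 4)
```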
2.2.2. Prais–Winsten Transformation
Step 1: First-Observation Transformation
The first observation is rescaled as y₁* = √(1 − ρ²) y₁ and x₁* = √(1 − ρ²) x₁, where:
- y₁: the observed response at t = 1;
- x₁: the predictors at t = 1;
- u₁*: the transformed error term, now scaled to account for ρ.
Step 2: Transformation of Remaining Observations
For t = 2, …, n the transformation is the same quasi-differencing as in Cochrane–Orcutt, y_t* = y_t − ρ y_{t−1} and x_t* = x_t − ρ x_{t−1}, where:
- y_t and y_{t−1}: current and previous observed response values;
- x_t and x_{t−1}: current and previous predictor values;
- u_t: transformed error term, now assumed to be uncorrelated.
Step 3: Iterative Estimation of ρ and β
- Using the initial OLS estimates, calculate the residuals e_t and compute an initial estimate of ρ;
- Transform y and X using the current ρ̂ value and re-estimate β with OLS on the transformed model;
- Update ρ̂ using the residuals from the transformed model and iterate until ρ̂ stabilizes between iterations, signaling convergence. (A sketch of the transformation follows this list.)
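The transformation differs from Cochrane–Orcutt only in retaining the rescaled first observation. A sketch reusing `y`, `X`, and the final `rho` from the Cochrane–Orcutt sketch above:

```r
# Sketch: one Prais-Winsten pass; the first row is rescaled instead of dropped.
pw_transform <- function(y, X, rho) {
  ystar <- c(sqrt(1 - rho^2) * y[1], y[-1] - rho * y[-length(y)])
  Xstar <- rbind(sqrt(1 - rho^2) * X[1, ], X[-1, ] - rho * X[-nrow(X), ])
  list(y = ystar, X = Xstar)
}
tr <- pw_transform(y, X, rho)
beta_pw <- solve(t(tr$X) %*% tr$X, t(tr$X) %*% tr$y)  # GLS via transformed OLS
```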
2.2.3. Maximum Likelihood Estimation (MLE)
- Ω: variance–covariance matrix of the error terms, accounting for autocorrelation;
- Ω⁻¹: inverse of the variance–covariance matrix, incorporating ρ into the model structure.
Log Likelihood Function
The log likelihood is maximized blockwise, iterating to convergence (a gls sketch follows this list):
1. Maximization with Respect to β;
2. Maximization with Respect to σ²;
3. Maximization with Respect to ρ.
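In practice the paper fits this model with nlme::gls (see Appendix A); a minimal sketch on the hypothetical toy data `df`:

```r
# Sketch: ML estimation of the AR(1) regression via nlme::gls.
library(nlme)
mle_fit <- gls(Y ~ X1 + X2, data = df,
               correlation = corAR1(form = ~ 1),  # AR(1) error structure
               method = "ML")
summary(mle_fit)  # reports beta, the residual variance, and the estimated Phi (rho)
```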
2.2.4. Restricted Maximum Likelihood (RMLE)
Log Restricted Likelihood Derivation
Estimating β, σ², and ρ with RMLE:
- Estimate σ²: the RMLE estimator for σ² is similar to the MLE estimator, with adjustments in the degrees of freedom;
- Iteration for Convergence: as with MLE, RMLE involves iterative estimation of ρ, typically through numerical methods. By adjusting ρ and recalculating β̂ and σ̂² at each iteration, convergence is achieved once the parameters stabilize. (A REML sketch follows.)
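The REML variant under the same assumptions, a sketch only:

```r
# Sketch: the REML variant only changes the method argument.
reml_fit <- gls(Y ~ X1 + X2, data = df,
                correlation = corAR1(form = ~ 1),
                method = "REML")
intervals(reml_fit)  # confidence intervals under the REML fit
```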
2.2.5. Artificial Neural Network (ANN) Model
Feedforward Equation
The output of a neuron is a = f(Σᵢ wᵢ xᵢ + b), where:
- xᵢ: input features;
- wᵢ: weights associated with each input;
- b: bias term;
- f: activation function applied to the weighted sum.
Activation Function
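The extract does not preserve which activation the authors displayed here; as an illustration only, the logistic sigmoid, which is the default act.fct in the neuralnet package used in Appendix A:

```latex
f(z) = \frac{1}{1 + e^{-z}}, \qquad f'(z) = f(z)\left(1 - f(z)\right)
```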
Weight Update Rule (Backpropagation)
Each weight is adjusted as wᵢ ← wᵢ − η ∂L/∂wᵢ, where:
- η: learning rate, controlling the step size in each iteration;
- L: loss function to be minimized;
- ∂L/∂wᵢ: partial derivative of the loss with respect to weight wᵢ.
Mean Squared Error Loss
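A standard reconstruction of the loss in LaTeX, matching the MSE metric used in Section 2.3:

```latex
L = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}
```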
Ordinary Least Squares (OLS) on ANN Predictions
- X: matrix of predictor variables;
- ŷ: vector of predicted responses from the ANN. (A pipeline sketch follows this list.)
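A compact sketch of this pipeline on the hypothetical toy data `df` (the authors' full code is in Appendix A): scale the data to [0, 1], fit a small network, unscale its predictions, then regress them on the original predictors by OLS.

```r
# Sketch: fit a small ANN on min-max-scaled data, then OLS on its predictions.
library(neuralnet)
rng <- apply(df, 2, range)                     # per-column min (row 1) and max (row 2)
sc  <- as.data.frame(scale(df, center = rng[1, ], scale = rng[2, ] - rng[1, ]))
set.seed(1)
nn <- neuralnet(Y ~ X1 + X2, data = sc, hidden = 1, linear.output = TRUE)
pred_scaled <- compute(nn, sc[, c("X1", "X2")])$net.result
pred <- pred_scaled * (rng[2, "Y"] - rng[1, "Y"]) + rng[1, "Y"]  # back to original units

Xd <- as.matrix(cbind(1, df$X1, df$X2))
beta_ann <- solve(t(Xd) %*% Xd, t(Xd) %*% pred)  # OLS on the ANN predictions
```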
2.3. Performance Metrics
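The displayed formulas were lost in extraction; the metrics computed in Appendix A correspond to:

```latex
\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2,\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\lvert y_t - \hat{y}_t\rvert,\qquad
\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left\lvert\frac{y_t - \hat{y}_t}{y_t}\right\rvert
```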
2.4. Data Sources and Descriptions
2.4.1. Dataset One: Longley Data
- Y: number of people employed (in thousands);
- X1: GNP implicit price deflator;
- X2: GNP (in millions of dollars);
- X3: number of people unemployed (in thousands);
- X4: number of people in the armed forces (in thousands);
- X5: non-institutionalized population over 16 years of age;
- X6: year (coded as 1 for 1959, 2 for 1960, up to 47 for 2005).
2.4.2. Dataset Two: MPG Data
- Y: MPG (miles per gallon);
- X1: SP (top speed, in miles per hour);
- X2: HP (engine horsepower);
- X3: WT (vehicle weight, in hundred pounds).
2.4.3. Dataset Three: GDP Data
- Y: Real Gross Domestic Product (in millions);
- X1: total tax revenue (in millions);
- X2: current exchange rate (percentage);
- X3: inflation rate (percentage);
- X4: external debt (in millions);
- X5: average tax revenue.
3. Results
3.1. Presentation of Results from the Longley Dataset
3.1.1. Test for Autocorrelation in Longley Data
3.1.2. Model Comparison Based on MSE, MAE, and MAPE
3.1.3. Comparison Between Actual and Predicted Values for Longley Data
3.2. Presentation of Results from the MPG Dataset
3.2.1. Test for Autocorrelation in MPG Data
3.2.2. Comparison Based on MAE, MSE, and MAPE for MPG Data
3.2.3. Comparison Between Actual and Predicted Values for MPG Data
3.3. Presentation of Results from the GDP Dataset
3.3.1. Test for Autocorrelation in GDP Data
3.3.2. Comparison Based on MAE, MSE, and MAPE
3.3.3. Comparison Between Actual and Predicted Values for GDP Data
3.4. Addressing Overfitting in Artificial Neural Networks
4. Conclusions
4.1. Summary of Findings
4.2. Recommendations
- For predictive tasks, especially in the presence of autocorrelation, the Artificial Neural Network should be considered over traditional linear regression models due to its demonstrated efficiency and robustness.
- Training the ANN with a large proportion of data improves its predictive precision, aligning with the recommendations from Smarra et al. (2020) on optimizing neural networks through extensive training.
- Special attention should be given to the design of the ANN architecture, particularly in selecting the appropriate number of hidden layers, as this can impact model accuracy and performance.
- In cases with limited data availability, the Cochrane–Orcutt method should be considered for improved efficiency, as it has shown effectiveness in handling small sample sizes with minimal standard errors.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. R Code
```r
# Clear workspace
rm(list = ls())

# Load required libraries
library(readxl)
library(orcutt)
library(lmtest)
library(prais)
library(nlme)
library(neuralnet)
library(matlib)

# Load and view dataset
data <- read_excel("C:/Users/Rauf/Desktop/DAta.xlsx", sheet = "loglinear_model")
View(data)
attach(data)

# Check for missing values
apply(data, 2, function(x) sum(is.na(x)))  # No missing values

# OLS Regression on the full dataset
linear.model <- lm(Y ~ ., data = data)
summary(linear.model)
res <- residuals(linear.model)

# Test for Autocorrelation
dwtest(linear.model, alternative = "two.sided")

# Visualize Autocorrelation
plot(res[-1], res[-nrow(data)], main = "Autocorrelation Detection", col = 3)
abline(lm(res[-nrow(data)] ~ res[-1]), col = 2)

# Split data into training (80%) and test (20%) sets
n <- round(0.8 * nrow(data))
train <- data[1:n, ]
test <- data[(n + 1):nrow(data), ]

# Ordinary Least Squares (OLS) Regression on training set
lm.fit <- lm(Y ~ ., data = train)
summary(lm.fit)
pred_train <- predict(lm.fit, train)
pred_test <- predict(lm.fit, test)

# Prais-Winsten Estimation
# (Note: newer releases of the prais package export prais_winsten() instead.)
prais_winsten_model <- prais.winsten(Y ~ X1 + X2 + X3, data = train)
prais_winsten_model

# Cochrane-Orcutt Estimation
cochrane_orcutt_model <- cochrane.orcutt(lm.fit)
summary(cochrane_orcutt_model)
predocc_train <- predict(cochrane_orcutt_model)

# Maximum Likelihood Estimation (MLE)
mle_model <- gls(Y ~ X1 + X2 + X3, data = train,
                 correlation = corAR1(form = ~1), method = "ML")
summary(mle_model)
pred_mle_train <- predict(mle_model, train)
pred_mle_test <- predict(mle_model, test)

# Restricted Maximum Likelihood Estimation (REML)
reml_model <- gls(Y ~ X1 + X2 + X3, data = train,
                  correlation = corARMA(p = 1), method = "REML")
summary(reml_model)
pred_reml_train <- predict(reml_model, train)
pred_reml_test <- predict(reml_model, test)

# Data preprocessing for Neural Network (ANN): min-max scaling
maxs <- apply(data, 2, max)
mins <- apply(data, 2, min)
scaled_data <- as.data.frame(scale(data, center = mins, scale = maxs - mins))
train_scaled <- scaled_data[1:n, ]
test_scaled <- scaled_data[(n + 1):nrow(data), ]

# Neural Network Model
set.seed(1)
f <- as.formula(paste("Y ~", paste(names(train_scaled)[-1], collapse = " + ")))
nn <- neuralnet(f, data = train_scaled, hidden = c(1), linear.output = TRUE,
                rep = 10, likelihood = TRUE)
plot(nn)

# Neural Network predictions (rescaled to the original units)
pr.nn_train <- compute(nn, train_scaled[, -1])$net.result *
  (max(data$Y) - min(data$Y)) + min(data$Y)
pr.nn_test <- compute(nn, test_scaled[, -1])$net.result *
  (max(data$Y) - min(data$Y)) + min(data$Y)

# OLS on ANN Predictions
X <- as.matrix(cbind(1, train[, -1]))
Y <- as.matrix(pr.nn_train)
beta_ann <- solve(t(X) %*% X) %*% t(X) %*% Y

# Calculate Standard Error for ANN OLS Parameters
s <- sum((train$Y - pr.nn_train)^2) / (n - ncol(X))
std_errors <- sqrt(diag(s * solve(t(X) %*% X)))

# Model Selection Metrics: MSE, MAE, MAPE
# MSE Calculation
mse_ols_test <- mean((test$Y - pred_test)^2)
mse_co_test <- mean((test$Y - predict(cochrane_orcutt_model, test))^2)
mse_pw_test <- mean((test$Y - predict(prais_winsten_model, test))^2)
mse_mle_test <- mean((test$Y - pred_mle_test)^2)
mse_reml_test <- mean((test$Y - pred_reml_test)^2)
mse_nn_test <- mean((test$Y - pr.nn_test)^2)

# MAE Calculation
mae_ols_test <- mean(abs(test$Y - pred_test))
mae_co_test <- mean(abs(test$Y - predict(cochrane_orcutt_model, test)))
mae_pw_test <- mean(abs(test$Y - predict(prais_winsten_model, test)))
mae_mle_test <- mean(abs(test$Y - pred_mle_test))
mae_reml_test <- mean(abs(test$Y - pred_reml_test))
mae_nn_test <- mean(abs(test$Y - pr.nn_test))

# MAPE Calculation
mape_ols_test <- mean(abs((test$Y - pred_test) / test$Y)) * 100
mape_co_test <- mean(abs((test$Y - predict(cochrane_orcutt_model, test)) / test$Y)) * 100
mape_pw_test <- mean(abs((test$Y - predict(prais_winsten_model, test)) / test$Y)) * 100
mape_mle_test <- mean(abs((test$Y - pred_mle_test) / test$Y)) * 100
mape_reml_test <- mean(abs((test$Y - pred_reml_test) / test$Y)) * 100
mape_nn_test <- mean(abs((test$Y - pr.nn_test) / test$Y)) * 100

# Compile Results
results <- data.frame(
  Model = c("OLS", "Cochrane-Orcutt", "Prais-Winsten", "MLE", "REML", "ANN"),
  MSE = c(mse_ols_test, mse_co_test, mse_pw_test, mse_mle_test, mse_reml_test, mse_nn_test),
  MAE = c(mae_ols_test, mae_co_test, mae_pw_test, mae_mle_test, mae_reml_test, mae_nn_test),
  MAPE = c(mape_ols_test, mape_co_test, mape_pw_test, mape_mle_test, mape_reml_test, mape_nn_test)
)
print(results)

# Unload libraries and detach dataset
detach(data)
detach("package:neuralnet", unload = TRUE)
detach("package:matlib", unload = TRUE)
detach("package:orcutt", unload = TRUE)
detach("package:nlme", unload = TRUE)
detach("package:prais", unload = TRUE)
detach("package:lmtest", unload = TRUE)
```
References
- Rauf, R.I.; Ifeyinwa, O.J.; Yahaya, H.U. Robustness test of selected estimators of linear regression with autocorrelated error term: A Monte Carlo simulation study. Asian J. Probab. Stat. 2021, 109, 102274.
- Rauf, R.I.; Hamidu, B.A.; Kikelomo, B.O.; Kayode, A.; Olusegun, A.O. Heteroscedasticity correction measures in stochastic frontier analysis. Ann. Univ. Oradea Econ. Sci. 2024, 33, 1–22. Available online: https://anale.steconomiceuoradea.ro/en/wp-content/uploads/2024/11/AUOES.July_.2024.18.pdf (accessed on 1 November 2024).
- Rauf, R.I.; Alabi, O.O.; Bello, H.A.; Bodunwa, O.K.; Ayinde, K. New Approach in Stochastic Frontier Analysis Estimation for Addressing Joint Assumption Violation of Heteroscedasticity and Multicollinearity. Asian J. Probab. Stat. 2024, 26, 9–26.
- Lu, J.; Peng, J.; Chen, J.; Sugeng, K.A. Prediction method of autoregressive moving average models for uncertain time series. Int. J. Gen. Syst. 2020, 49, 546–572.
- Farhi, L.; Yasir, A. Optimized intelligent auto-regressive neural network model (ARNN) for prediction of non-linear exogenous signals. Wirel. Pers. Commun. 2022, 124, 1151–1167.
- Rauf, R.I.; Ayinde, K.; Bello, H.A.; Bodunwa, O.K.; Alabi, O.O. Enhanced methods for multicollinearity mitigation in stochastic frontier analysis estimation. J. Niger. Soc. Phys. Sci. 2024, 6, 2091.
- López, G.; Arboleya, P. Short-term wind speed forecasting over complex terrain using linear regression models and multivariable LSTM and NARX networks in the Andes Mountains, Ecuador. Renew. Energy 2022, 183, 351–368.
- Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015; Available online: https://books.google.com.ng/books/about/Time_Series_Analysis.html?id=rNt5CgAAQBAJ&redir_esc=y (accessed on 1 November 2024).
- Kanaparthi, V. Robustness evaluation of LSTM-based deep learning models for Bitcoin price prediction in the presence of random disturbances. Int. J. Innov. Sci. Mod. Eng. (IJISME) 2024, 12, 14–23.
- Loossens, T.; Tuerlinckx, F.; Verdonck, S. A comparison of continuous and discrete time modeling of affective processes in terms of predictive accuracy. Sci. Rep. 2021, 11, 6218.
- Lara-Benítez, P.; Carranza-García, M.; Luna-Romera, J.M. Temporal Convolutional Networks Applied to Energy-Related Time Series Forecasting. Appl. Sci. 2020, 10, 2322.
- Maulik, R.; Lusch, B.; Balaprakash, P. Non-autoregressive time-series methods for stable parametric reduced-order models. Phys. Fluids 2020, 32, 087107.
- Beneventano, P.; Cheridito, P.; Graeber, R.; Jentzen, A.; Kuckuck, B. Deep neural network approximation theory for high-dimensional functions. arXiv 2021, arXiv:2112.14523.
- Smarra, F.; Di Girolamo, G.D.; De Iuliis, V.; Jain, A.; Mangharam, R.; D'Innocenzo, A. Data-driven switching modeling for MPC using regression trees and random forests. Nonlinear Anal. Hybrid Syst. 2020, 36, 100882.
- Ballestrín, J.; Polo, J.; Martín-Chivelet, N.; Barbero, J.; Carra, E.; Alonso-Montesinos, J.; Marzo, A. Soiling forecasting of solar plants: A combined heuristic approach and autoregressive model. Energy 2022, 239, 122442.
- Jeong, S.; Ghosal, S. Unified Bayesian theory of sparse linear regression with nuisance parameters. Electron. J. Stat. 2021, 15, 3040–3111.
- Kaur, J.; Parmar, K.S.; Singh, S. Autoregressive models in environmental forecasting time series: A theoretical and application review. Environ. Sci. Pollut. Res. 2023, 30, 19617–19641.
- Ayodele, B.V.; Mustapa, S.I.; Mohammad, N.; Shakeri, M. Long-term energy demand in Malaysia as a function of energy supply: A comparative analysis of non-linear autoregressive exogenous neural networks and multiple non-linear regression models. Energy Strategy Rev. 2021, 38, 100750.
- Le, C.M.; Li, T. Linear regression and its inference on noisy network-linked data. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2022, 84, 1851–1885.
Table 1. MSE comparison for the Longley data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 2.45 × 1010 | 2.40 × 1010 | 2.48 × 1010 | 2.40 × 1010 | 2.43 × 1010 | 2.36 × 108 |
60 | 40 | 711,851,329 | 458,678,299 | 1.11 × 109 | 466,112,028 | 3.46 × 108 | 3.15 × 108 |
40 | 60 | 645,534,989 | 624,725,975 | 666,879,097 | 623,752,954 | 5.43 × 108 | 4.78 × 108 |
20 | 80 | 3.35 × 109 | 4.20 × 109 | 5.54 × 109 | 4.16 × 109 | 6.04 × 109 | 5.82 × 107 |
100 | 100 | 1.48 × 108 | 1.49 × 108 | 1.49 × 108 | 1.49 × 108 | 1.52 × 108 | 1.57 × 107 |
Table 2. MAE comparison for the Longley data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 131,694.40 | 130,322.22 | 126,851.23 | 130,388.42 | 131,838.80 | 7315.73 |
60 | 40 | 12.56 | 8.14 | 17.85 | 8.30 | 4.90 | 3.94 |
40 | 60 | 16,764.69 | 16,202.92 | 17,293.96 | 16,175.35 | 13,584.43 | 8651.32 |
20 | 80 | 52,185.61 | 58,553.12 | 67,423.98 | 58,278.66 | 70,410.37 | 6725.66 |
100 | 100 | 6557.24 | 6644.13 | 6642.34 | 6635.23 | 7075.15 | 2277.74 |
Table 3. MAPE (%) comparison for the Longley data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 107.86 | 106.78 | 102.46 | 106.84 | 108.14 | 5.50 |
60 | 40 | 39.22 | 39.22 | 76.78 | 59.36 | 119.17 | 38.72 |
40 | 60 | 11.04 | 10.62 | 11.43 | 10.60 | 8.69 | 5.08 |
20 | 80 | 37.94 | 42.57 | 49.03 | 42.37 | 51.21 | 5.00 |
100 | 100 | 5.40 | 5.39 | 5.41 | 5.39 | 5.79 | 2.01 |
Table 4. MSE comparison for the MPG data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 1420.43 | 1912.43 | 2617.82 | 1898.51 | 2742.16 | 115.80 |
60 | 40 | 134.12 | 134.10 | 475.10 | 349.53 | 997.96 | 103.85 |
40 | 60 | 39.78 | 46.55 | 573.88 | 54.75 | 65.71 | 14.83 |
20 | 80 | 137.25 | 23.02 | 4.50 | 10.39 | 882.17 | 42.14 |
100 | 100 | 11.70 | 12.34 | 16.13 | 13.33 | 96.98 | 10.89 |
Table 5. MAE comparison for the MPG data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 25.52 | 28.64 | 30.86 | 28.41 | 44.66 | 8.82 |
60 | 40 | 8.65 | 8.65 | 19.12 | 12.56 | 30.18 | 8.87 |
40 | 60 | 5.48 | 6.19 | 22.29 | 6.85 | 7.40 | 3.08 |
20 | 80 | 8.83 | 3.39 | 1.85 | 2.32 | 29.08 | 5.73 |
100 | 100 | 2.57 | 7.13 | 2.49 | 2.60 | 7.81 | 2.46 |
Table 6. MAPE (%) comparison for the MPG data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 114.40 | 129.92 | 132.48 | 129.08 | 183.23 | 37.04 |
60 | 40 | 39.22 | 39.22 | 76.78 | 59.36 | 119.17 | 38.72 |
40 | 60 | 24.02 | 26.64 | 101.59 | 30.21 | 33.58 | 14.85 |
20 | 80 | 47.50 | 17.86 | 9.52 | 12.29 | 148.70 | 31.04 |
100 | 100 | 7.42 | 30.26 | 6.77 | 7.37 | 29.37 | 7.52 |
Table 7. MSE comparison for the GDP data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 4.58 × 1014 | 4.49 × 1014 | 4.38 × 1014 | 4.6 × 1014 | 4.49 × 1014 | 1.57 × 1011 |
60 | 40 | 2.26 × 1015 | 1.82 × 1015 | 4.8 × 1015 | 1.81 × 1015 | 1.68 × 1015 | 1.72 × 1010 |
40 | 60 | 1.57 × 1013 | 8.92 × 1012 | 6.65 × 1012 | 8.48 × 1012 | 7.24 × 1012 | 6.90 × 1010 |
20 | 80 | 1.56 × 1012 | 1.43 × 1011 | 2.05 × 1011 | 1.35 × 1011 | 1.28 × 1011 | 3.76 × 1010 |
100 | 100 | 5.99 × 109 | 3 × 1010 | 9.29 × 1016 | 3.11 × 1010 | 3.26 × 1010 | 1.05 × 109 |
Table 8. MAE comparison for the GDP data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 12,388,203 | 12,274,842 | 12,127,509 | 12,436,011 | 12,274,969 | 333,760.5 |
60 | 40 | 33,042,525 | 29,253,970 | 48,289,635 | 29,165,857 | 28,072,647 | 118,849.2 |
40 | 60 | 2,539,889 | 1,802,965 | 1,534,034 | 1,748,181 | 1,587,751 | 176,293.7 |
20 | 80 | 930,815.3 | 297,228 | 357,425.2 | 291,987.3 | 286,309 | 159,024.4 |
100 | 100 | 65,077.26 | 140,568.2 | 3.05 × 108 | 143,785.7 | 148,305.7 | 23,344.64 |
Table 9. MAPE (%) comparison for the GDP data across train/test splits.

% TEST | % TRAIN | OLS | PW | CO | MLE | RMLE | ANN |
---|---|---|---|---|---|---|---|
80 | 20 | 3389.55 | 3357.17 | 3314.10 | 3397.27 | 3357.24 | 102.77 |
60 | 40 | 7329.33 | 6444.82 | 10,652.83 | 6424.23 | 6170.01 | 35.98 |
40 | 60 | 520.46 | 371.15 | 315.01 | 358.71 | 322.05 | 35.06 |
20 | 80 | 173.81 | 51.32 | 66.57 | 50.54 | 49.72 | 27.44 |
100 | 100 | 275.19 | 775.69 | 942,400.3 | 795.06 | 823.31 | 81.66 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).