
A Comparison of Trend Estimators Under Heteroscedasticity

  • Conference paper
Artificial Intelligence and Soft Computing (ICAISC 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12854)


Abstract

Trend estimation, i.e. estimating or smoothing a nonlinear function without any independent variables, is an important task in various applications within signal and image processing, engineering, biomedicine, the analysis of economic time series, etc. We are interested in estimating the trend in the presence of heteroscedastic errors in the model. So far, there seem to be no available studies of the performance of robust neural networks or the taut string (stretched string) algorithm under heteroscedasticity. We consider here an Aitken-type model, analogous to known models for linear regression that take heteroscedasticity into account. Numerical studies with heteroscedastic data, possibly contaminated by outliers, yield improved results if the Aitken model is used. The results of robust neural networks turn out to be especially favorable in our examples. On the other hand, the taut string (and especially its robust \(L_1\)-version) is prone to overfitting and suffers from heteroscedasticity.
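The Aitken-type idea can be illustrated on simulated data: when the error variances are unequal, weighting each observation by its inverse error variance (as in generalized least squares) typically recovers the trend better than an unweighted fit. The following is a minimal sketch, not the authors' implementation; the polynomial basis, the variance function, and the assumption that the error variances are known are all illustrative choices made here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroscedastic data: a smooth trend plus noise whose
# standard deviation grows along the series (non-constant variance).
n = 200
t = np.linspace(0.0, 1.0, n)
trend = np.sin(2 * np.pi * t)
sigma = 0.1 + 0.9 * t                     # error s.d. increases with t
y = trend + rng.normal(0.0, sigma)

# Ordinary least squares on a polynomial basis ignores the unequal variances.
X = np.vander(t, 6, increasing=True)      # degree-5 polynomial basis
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Aitken-type (weighted/generalized) least squares: each observation is
# weighted by its inverse error variance, here assumed known.
w = 1.0 / sigma**2
Xw = X * w[:, None]                       # rows of X scaled by the weights
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Compare both fits against the true trend.
mse_ols = np.mean((X @ beta_ols - trend) ** 2)
mse_wls = np.mean((X @ beta_wls - trend) ** 2)
print(mse_ols, mse_wls)
```

In practice the error variances are unknown and must themselves be estimated, and robust variants are needed when outliers are present; the paper's numerical studies address exactly those settings.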





Acknowledgements

The research is supported by the projects GA19-05704S and GA18-23827S of the Czech Science Foundation. Jiří Tumpach and Patrik Janáček provided technical support.

Author information


Corresponding author

Correspondence to Petra Vidnerová.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Kalina, J., Vidnerová, P., Tichavský, J. (2021). A Comparison of Trend Estimators Under Heteroscedasticity. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2021. Lecture Notes in Computer Science (LNAI), vol 12854. Springer, Cham. https://doi.org/10.1007/978-3-030-87986-0_8


  • DOI: https://doi.org/10.1007/978-3-030-87986-0_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87985-3

  • Online ISBN: 978-3-030-87986-0

  • eBook Packages: Computer Science, Computer Science (R0)
