Some refinements of the standard autoassociative neural network

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

This article pursues four objectives for the standard autoassociative neural network: improving the training algorithm, determining a near-optimal number of nonlinear principal components (NLPCs), extracting meaningful NLPCs, and increasing the network's nonlinear, dynamic, and selective processing capability. Each objective is achieved independently through new refinements of the network structure and the training algorithm. In addition, three different topologies of the network are presented that make local nonlinear principal component analysis possible. The performance of all methods is evaluated on a stock price database, which demonstrates their efficiency in different situations. Finally, as illustrated in the last section, the proposed structures can easily be combined, making them efficient tools in a wide range of signal processing applications.
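For orientation, the "standard autoassociative neural network" the abstract refines is a bottleneck autoencoder: a mapping layer, a narrow bottleneck whose activations are the NLPCs, a demapping layer, and an output layer trained to reproduce the input. The sketch below is an illustrative reconstruction only, not the authors' code: the toy data, layer sizes, tanh activations, and plain gradient-descent training loop are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of a 1-D curve embedded in 3-D, so one NLPC suffices.
t = rng.uniform(-1, 1, (200, 1))
X = np.hstack([t, t**2, np.sin(np.pi * t)])
X = (X - X.mean(0)) / X.std(0)

d, h, k = X.shape[1], 8, 1          # input dim, mapping-layer width, # of NLPCs
W1 = rng.normal(0, 0.3, (d, h)); b1 = np.zeros(h)   # mapping layer
W2 = rng.normal(0, 0.3, (h, k)); b2 = np.zeros(k)   # bottleneck (NLPC layer)
W3 = rng.normal(0, 0.3, (k, h)); b3 = np.zeros(h)   # demapping layer
W4 = rng.normal(0, 0.3, (h, d)); b4 = np.zeros(d)   # output layer

lr = 0.05
for epoch in range(2000):
    # Forward: x -> tanh -> linear bottleneck -> tanh -> linear output
    H1 = np.tanh(X @ W1 + b1)
    Z  = H1 @ W2 + b2                # bottleneck activations = the NLPCs
    H2 = np.tanh(Z @ W3 + b3)
    Y  = H2 @ W4 + b4
    # Backward pass for mean-squared reconstruction error
    dY  = 2 * (Y - X) / len(X)
    dH2 = (dY @ W4.T) * (1 - H2**2)
    dZ  = dH2 @ W3.T
    dH1 = (dZ @ W2.T) * (1 - H1**2)
    W4 -= lr * H2.T @ dY;  b4 -= lr * dY.sum(0)
    W3 -= lr * Z.T @ dH2;  b3 -= lr * dH2.sum(0)
    W2 -= lr * H1.T @ dZ;  b2 -= lr * dZ.sum(0)
    W1 -= lr * X.T @ dH1;  b1 -= lr * dH1.sum(0)

mse = float(((Y - X) ** 2).mean())   # reconstruction error after training
```

Because the toy data lie on a one-dimensional curve, the single bottleneck unit can reconstruct them well below the error of a one-component linear PCA; the article's refinements target exactly this structure (choosing k, ordering the NLPCs, training the deep network reliably).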


Fig. 21



Author information

Correspondence to Behrooz Makki.


About this article

Cite this article

Makki, B., Hosseini, M.N. Some refinements of the standard autoassociative neural network. Neural Comput & Applic 22, 1461–1475 (2013). https://doi.org/10.1007/s00521-012-0825-5

