Abstract
This paper proposes a novel hierarchical self-organizing associative memory architecture for machine learning. The architecture is characterized by sparse, local interconnections, self-organizing processing elements (PEs), and probabilistic synaptic transmission. Each PE in the network dynamically estimates its output value from the observed input data distribution and remembers the statistical correlations among its inputs. Both feedforward and feedback signal propagation are used to transfer signals and form associations: feedforward processing discovers relationships in the input patterns, while feedback processing makes associations and predicts missing signal values. Classification and image-recovery applications demonstrate the effectiveness of the proposed memory for both hetero-associative and auto-associative learning.
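The abstract's core idea — a PE that learns the statistics of its inputs in a feedforward pass and uses feedback to predict a missing input — can be illustrated with a toy sketch. This is an illustrative assumption-laden example, not the authors' actual algorithm: the `ProcessingElement` class, its co-occurrence counting, and its argmax-based feedback rule are all hypothetical simplifications for a single PE with two binary inputs.

```python
import numpy as np

class ProcessingElement:
    """Toy self-organizing PE (illustrative sketch only, not the paper's
    exact method): tracks co-occurrence statistics of two binary inputs
    and uses them to fill in a missing input via feedback."""

    def __init__(self):
        # Joint counts of input pairs (a, b); Hebbian-style accumulation
        self.counts = np.zeros((2, 2))

    def feedforward(self, a, b):
        # Learn the joint input distribution and output the estimated
        # probability of the observed pair
        self.counts[a, b] += 1
        return self.counts[a, b] / self.counts.sum()

    def feedback(self, a=None, b=None):
        # Predict the missing input from the learned conditional counts
        if a is None:
            return int(np.argmax(self.counts[:, b]))
        return int(np.argmax(self.counts[a, :]))

pe = ProcessingElement()
for a, b in [(0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]:
    pe.feedforward(a, b)

# With correlated inputs, feedback recovers the missing half of a pair
print(pe.feedback(a=1))  # → 1
print(pe.feedback(b=0))  # → 0
```

In the paper's setting this statistical estimation would occur at every PE of a sparsely connected hierarchy, with feedback signals propagating down to reconstruct incomplete patterns (auto-association) or associate across modalities (hetero-association).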
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Starzyk, J.A., He, H., Li, Y. (2007). A Hierarchical Self-organizing Associative Memory for Machine Learning. In: Liu, D., Fei, S., Hou, ZG., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4491. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72383-7_49
DOI: https://doi.org/10.1007/978-3-540-72383-7_49
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72382-0
Online ISBN: 978-3-540-72383-7