Abstract
In previous works [1,2,3], the behavior of a fully connected single-layer Random Neural Network (RN) [4,5] was illustrated on a pattern-completion problem. We applied the gradient-descent learning algorithm introduced by Gelenbe [6,7] for recurrent RN networks. Recalling a training pattern from a corrupted version consists of a progressive retrieval process with an adaptive threshold. We have reduced the influence of the pattern geometry on performance by modifying the computation of the network state. The experimental results are now compared with those obtained with Hopfield's network. As learning times in such a model rapidly become prohibitive, we investigate the use of a single-layer network with local interactions between neurons. The influence of connectivity on the convergence of the learning algorithm and on the recognition rates is examined in particular.
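The network state referred to in the abstract is the vector of steady-state excitation probabilities of the RN model [4]. As an illustrative sketch only (variable names and the simple fixed-point solver are ours, not taken from the paper), each neuron's probability q_i satisfies q_i = lambda_plus_i / (r_i + lambda_minus_i), where the positive and negative signal arrival rates depend on the other neurons' q values:

```python
import numpy as np

def rnn_fixed_point(W_plus, W_minus, Lambda, lam, r, iters=200):
    """Iterate the steady-state equations of Gelenbe's Random Neural Network:
        q_i = lambda_plus_i / (r_i + lambda_minus_i)
    with lambda_plus_i  = Lambda_i + sum_j q_j * W_plus[j, i]   (excitatory input)
    and  lambda_minus_i = lam_i    + sum_j q_j * W_minus[j, i]  (inhibitory input).
    Lambda/lam are exogenous positive/negative arrival rates, r the firing rates.
    """
    n = len(Lambda)
    q = np.zeros(n)
    for _ in range(iters):
        lp = Lambda + q @ W_plus      # total positive-signal arrival rate
        lm = lam + q @ W_minus        # total negative-signal arrival rate
        q = np.minimum(lp / (r + lm), 1.0)  # q_i is a probability, so clip at 1
    return q

# Tiny 3-neuron example with arbitrary (hypothetical) rates and weights.
W_plus = np.array([[0.0, 0.2, 0.1], [0.1, 0.0, 0.2], [0.2, 0.1, 0.0]])
W_minus = np.array([[0.0, 0.1, 0.1], [0.1, 0.0, 0.1], [0.1, 0.1, 0.0]])
q = rnn_fixed_point(W_plus, W_minus,
                    Lambda=np.array([0.5, 0.3, 0.4]),
                    lam=np.array([0.1, 0.1, 0.1]),
                    r=np.array([1.0, 1.0, 1.0]))
```

In a pattern-completion setting, the q vector obtained from a corrupted input would then be thresholded to recover a binary pattern; the adaptive threshold scheme of the paper is not reproduced here.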
References
C. HUBERT, Autoassociative Memory of Schematic Images with the Random Neural Network Model using Gelenbe's Learning Algorithm, in E. Gelenbe, “Neural Networks: Advances and Applications II”, Elsevier, North-Holland, pp 199–214, 1992.
C. HUBERT, Supervised learning and retrieval of simple images with the Random Neural Network, Proc. of the International Symposium on Computer and Information Sciences (ISCIS VII), Antalya, Turkey, pp 295–302, 1992.
C. HUBERT, Apprentissage supervisé et rappel d'images simples avec le Réseau Neuronal Aléatoire, to appear in Comptes Rendus de l'Académie des Sciences, Paris, France, January 1993.
E. GELENBE, Random Neural Networks with Negative and Positive Signals and Product Form Solution, Neural Computation, Vol. 1, No. 4, pp 502–510, 1989.
E. GELENBE, Theory of the Random Neural Network Model, in E. Gelenbe, “Neural networks: advances and applications”, Elsevier, North-Holland, 1991.
E. GELENBE, Learning in the Recurrent Random Neural Network Model, in E. Gelenbe, “Neural Networks: Advances and Applications II”, Elsevier, North-Holland, 1992.
E. GELENBE, G-nets and Learning Recurrent Random Networks, Proc. of the International Conference on Artificial Neural Networks (ICANN-92), Brighton, UK, pp 943–946, 1992.
Y. LE CUN, Modèles connexionnistes de l'apprentissage, Ph.D Thesis, Paris 6 University, France, 1987.
D.O. HEBB, The organization of behavior, John Wiley and Sons, New York, 1949.
E. GELENBE, Stability of the random neural network model, Neural Computation, Vol. 2, No. 2, pp 239–247, 1990.
D.E. RUMELHART, G.E. HINTON and R.J. WILLIAMS, Learning internal representations by error propagation, in D.E. Rumelhart and J.L. McClelland, “Parallel Distributed Processing”, Vol. 1, Bradford Books and MIT Press, Cambridge, Massachusetts, pp 318–362, 1986.
J.J. HOPFIELD, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. of the National Academy of Sciences 81, USA, pp 3088–3092, 1984.
Copyright information
© 1993 Springer-Verlag Berlin Heidelberg
Cite this paper
Hubert, C. (1993). Design of fully and partially connected random neural networks for pattern completion. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_137
DOI: https://doi.org/10.1007/3-540-56798-4_137
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-56798-1
Online ISBN: 978-3-540-47741-9