Abstract
Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation, and control. The majority of today's applications use backpropagation-based feedforward ANNs. In this paper, two methods for learning a P-pattern, L-layer ANN on an n x n RMESH are presented. One requires O(nL) memory but is conceptually simpler to develop; the other uses a pipelined approach that reduces the memory requirement to O(L). Both algorithms take O(PL) time and are optimal for the RMESH architecture.
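For readers unfamiliar with the computation being parallelized, the sketch below is a minimal sequential version of backpropagation training for an L-layer feedforward network over P patterns, written in Python/NumPy and assuming sigmoid units with online (per-pattern) updates. It is not the paper's RMESH algorithm; it only illustrates the layer-by-layer matrix-vector products and outer-product weight updates that the two RMESH mappings distribute over the n x n mesh.

```python
# Minimal sequential backpropagation sketch (NumPy).  Assumptions: sigmoid
# activations, online per-pattern updates, squared-error loss.  This is a
# baseline illustration of the P-pattern, L-layer computation, not the
# paper's RMESH mapping.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_one_epoch(weights, patterns, targets, lr=0.1):
    """weights[l] is the matrix connecting layer l to layer l+1 (L matrices total)."""
    for x, t in zip(patterns, targets):          # loop over the P patterns
        # Forward pass: keep every layer's activation vector.
        acts = [x]
        for W in weights:
            acts.append(sigmoid(W @ acts[-1]))

        # Backward pass: output-layer error, then propagate layer by layer.
        delta = (acts[-1] - t) * acts[-1] * (1.0 - acts[-1])
        for l in range(len(weights) - 1, -1, -1):
            grad = np.outer(delta, acts[l])
            if l > 0:
                # Propagate the error through the not-yet-updated weights.
                delta = (weights[l].T @ delta) * acts[l] * (1.0 - acts[l])
            weights[l] -= lr * grad
    return weights
```

Each pattern touches every one of the L weight matrices once in the forward pass and once in the backward pass, so a training epoch is P x L layer computations; the abstract's O(PL) bound corresponds, presumably, to carrying out each layer's matrix-vector and outer-product work in constant time on the n x n RMESH.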
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Jenq, J.F., Ning Li, W. (1998). Artificial neural networks on reconfigurable meshes. In: Rolim, J. (eds) Parallel and Distributed Processing. IPPS 1998. Lecture Notes in Computer Science, vol 1388. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64359-1_693
DOI: https://doi.org/10.1007/3-540-64359-1_693
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64359-3
Online ISBN: 978-3-540-69756-5
eBook Packages: Springer Book Archive