Abstract
To reduce the lengthy training times of neural networks, we previously proposed Parallel Circuits (PCs), a biologically inspired architecture. Prior work has shown that this approach achieves substantial speed gains but fails to maintain generalization performance. To address this issue, and motivated by the way Dropout prevents node co-adaptation, we propose extending Dropout to the PC architecture. The paper provides multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results: error rates improve in most cases, while the speed advantage of the PC approach is maintained.
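The abstract names the core ingredients (independent parallel circuits plus Dropout) without detailing the fusion variants, so the sketch below is only a rough illustration, in PyTorch, of one plausible combination. Everything in it is an assumption rather than the authors' exact design: the name ParallelCircuitNet, the choice to give every circuit the full input, the concatenation merge, and the placement of Dropout inside each circuit (the paper itself evaluates several fusion approaches).

```python
import torch
import torch.nn as nn

class ParallelCircuitNet(nn.Module):
    """Hypothetical sketch of a parallel-circuit (PC) network: several
    small, independent hidden 'circuits' replace one monolithic hidden
    layer, and Dropout is applied inside each circuit (one of several
    possible fusion points)."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_circuits=4, p_drop=0.5):
        super().__init__()
        self.circuits = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.Sigmoid(),
                nn.Dropout(p=p_drop),  # assumed fusion point: dropout within each circuit
            )
            for _ in range(n_circuits)
        ])
        # Circuit outputs are concatenated and merged by one linear layer.
        self.merge = nn.Linear(n_circuits * hidden_dim, out_dim)

    def forward(self, x):
        # Each circuit sees the full input but shares no hidden weights
        # with the others, so the circuits can be computed independently.
        outs = [c(x) for c in self.circuits]
        return self.merge(torch.cat(outs, dim=1))

net = ParallelCircuitNet(in_dim=64, hidden_dim=16, out_dim=10)
logits = net(torch.randn(8, 64))  # batch of 8 -> shape (8, 10)
```

Because the circuits share no hidden weights, their forward and backward passes can be computed independently, which is the source of the speed advantage the abstract attributes to the PC approach.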
Cite this paper
Phan, K.T., Maul, T.H., Vu, T.T., Lai, W.K. (2016). Improving Neural Network Generalization by Combining Parallel Circuits with Dropout. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds.) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol. 9949. Springer, Cham. https://doi.org/10.1007/978-3-319-46675-0_63