
Improving Neural Network Generalization by Combining Parallel Circuits with Dropout

  • Conference paper
  • First Online:
  • Published in: Neural Information Processing (ICONIP 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9949)

Included in the conference series: International Conference on Neural Information Processing (ICONIP)

Abstract

In an attempt to reduce the lengthy training times of neural networks, we previously proposed Parallel Circuits (PCs), a biologically inspired architecture. Prior work has shown that this approach achieves sharp speed gains but fails to maintain generalization performance. To address this issue, and motivated by the way Dropout prevents node co-adaptation, this paper proposes an improvement that extends Dropout to the PC architecture. We provide multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results: error rates improve in most cases, whilst the speed advantage of the PC approach is maintained.
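
The abstract describes the idea only at a high level. As a rough illustration (not the authors' implementation), the sketch below wires standard inverted dropout into a layer built from independent parallel circuits. All names, dimensions, and the concatenation-based fusion are hypothetical choices made for this example.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class ParallelCircuitLayer:
    """Hypothetical sketch: a hidden layer split into independent
    'circuits'. Each circuit sees the full input but owns a disjoint
    slice of hidden units, with no weights crossing circuit boundaries."""

    def __init__(self, n_in, n_hidden_per_circuit, n_circuits):
        # One independent weight matrix and bias vector per circuit.
        self.weights = [rng.normal(0.0, 0.1, (n_in, n_hidden_per_circuit))
                        for _ in range(n_circuits)]
        self.biases = [np.zeros(n_hidden_per_circuit)
                       for _ in range(n_circuits)]

    def forward(self, x, drop_p=0.5, train=True):
        outs = []
        for W, b in zip(self.weights, self.biases):
            h = relu(x @ W + b)
            if train and drop_p > 0.0:
                # Inverted dropout inside each circuit: randomly zero
                # hidden units and rescale the survivors, so no extra
                # rescaling is needed at test time.
                mask = rng.random(h.shape) >= drop_p
                h = h * mask / (1.0 - drop_p)
            outs.append(h)
        # Fuse circuit outputs by concatenation; the paper studies a
        # variety of fusion approaches, and this is just one plausible one.
        return np.concatenate(outs, axis=-1)

# Toy usage: a batch of 2 examples, 8 inputs, 3 circuits of 4 units each.
layer = ParallelCircuitLayer(n_in=8, n_hidden_per_circuit=4, n_circuits=3)
x = rng.normal(size=(2, 8))
print(layer.forward(x, train=True).shape)   # (2, 12), dropout applied
print(layer.forward(x, train=False).shape)  # (2, 12), inference mode

Because each circuit has its own disjoint weights, dropout masks here act within circuits rather than across the whole hidden layer; whether the masks are shared or independent per circuit is one of the design choices such a combination has to settle.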

Author information

Corresponding author

Correspondence to Kien Tuong Phan.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Phan, K.T., Maul, T.H., Vu, T.T., Lai, W.K. (2016). Improving Neural Network Generalization by Combining Parallel Circuits with Dropout. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol. 9949. Springer, Cham. https://doi.org/10.1007/978-3-319-46675-0_63

  • DOI: https://doi.org/10.1007/978-3-319-46675-0_63

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46674-3

  • Online ISBN: 978-3-319-46675-0

  • eBook Packages: Computer Science, Computer Science (R0)
