
Accelerating Convolutional Neural Networks Using Fine-Tuned Backpropagation Progress

  • Conference paper
  • In: Database Systems for Advanced Applications (DASFAA 2017)
  • Part of the book series: Lecture Notes in Computer Science, volume 10179

Abstract

Many computer vision tasks have achieved state-of-the-art performance using convolutional neural networks (CNNs) [11], typically at the cost of massive computational complexity. A key problem is the slowness of the training process, which can consume much time, especially when computational resources are limited. This paper focuses on speeding up training through a fine-tuned backpropagation process. More specifically, we first train the CNN with standard backpropagation. Once the feature extraction layers have learned good features, we block standard backpropagation through those layers, so that the loss gradients propagate only between the fully connected layers. This not only saves time but also concentrates training on the classifier, achieving the same or better results than training with standard backpropagation throughout. Comprehensive experiments on JD (https://www.jd.com/) datasets demonstrate a significant reduction in computational time at the cost of negligible loss in accuracy.
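The schedule described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' code: a scalar "feature extractor" weight and a scalar "classifier" weight stand in for the convolutional and fully connected layers, the switch epoch `switch_epoch` is a hypothetical parameter, and plain SGD replaces the full training setup. After the switch, gradients are no longer applied to the feature weight, mirroring how the paper blocks backpropagation outside the fully connected layers.

```python
# Toy sketch of the fine-tuned backpropagation schedule: train both layers
# with standard backprop, then freeze the feature layer after switch_epoch
# and keep updating only the classifier layer.
def train(epochs, switch_epoch, lr=0.01):
    w_feat, w_cls = 0.5, 0.5                     # feature and classifier weights
    data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # toy targets: y = 2x
    for epoch in range(epochs):
        for x, y in data:
            h = w_feat * x                       # forward: feature layer
            pred = w_cls * h                     # forward: classifier layer
            err = pred - y                       # squared-error gradient factor
            g_cls = 2.0 * err * h                # dL/dw_cls
            g_feat = 2.0 * err * w_cls * x       # dL/dw_feat
            w_cls -= lr * g_cls                  # classifier always updated
            if epoch < switch_epoch:             # before the switch: full backprop
                w_feat -= lr * g_feat            # after: feature layer is frozen
    return w_feat, w_cls

w_feat, w_cls = train(epochs=300, switch_epoch=100)
print(w_feat, w_cls)  # their product approaches the target slope 2.0
```

Skipping the frozen layer's gradient computation is where the time saving comes from: in a real CNN the convolutional layers dominate the backward-pass cost, so blocking backpropagation there leaves only the cheap fully connected updates.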


References

  1. Arel, I., Rose, D.C., Karnowski, T.P.: Deep machine learning - a new frontier in artificial intelligence research [research frontier]. IEEE Comput. Intell. Mag. 5(4), 13–18 (2010)

  2. Bengio, Y.: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009)

  3. Bengio, Y.: Deep learning of representations: looking forward. In: Dediu, A.-H., Martín-Vide, C., Mitkov, R., Truthe, B. (eds.) SLSP 2013. LNCS (LNAI), vol. 7978, pp. 1–37. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39593-2_1

  4. Di, W., Wah, C., Bhardwaj, A., Piramuthu, R., Sundaresan, N.: Style finder: fine-grained clothing style detection and retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 8–13 (2013)

  5. Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 391–407. Springer, Heidelberg (2016). doi:10.1007/978-3-319-46475-6_25

  6. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: AISTATS 9, pp. 249–256 (2010)

  7. Hinton, G.E., Osindero, S., Teh, Y.-W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)

  8. Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866 (2014)

  9. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678. ACM (2014)

  10. Jiang, X., Pang, Y., Li, X., Pan, J.: Speed up deep neural network based pedestrian detection by sharing features across multi-scale models. Neurocomputing 185, 163–170 (2016)

  11. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

  12. Le Cun, B.B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Handwritten digit recognition with a back-propagation network. In: Advances in Neural Information Processing Systems. Citeseer (1990)

  13. Lebedev, V., Ganin, Y., Rakhuba, M., Oseledets, I., Lempitsky, V.: Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553 (2014)

  14. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  15. LeCun, Y., Kavukcuoglu, K., Farabet, C., et al.: Convolutional networks and applications in vision. In: ISCAS, pp. 253–256 (2010)

  16. Sánchez, J., Perronnin, F.: High-dimensional signature compression for large-scale image classification. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1665–1672. IEEE (2011)

  17. van Doorn, J.: Analysis of deep convolutional neural network architectures (2014)

  18. Wang, L., Yang, Y., Min, M.R., Chakradhar, S.: Accelerating deep neural network training with inconsistent stochastic gradient descent. arXiv preprint arXiv:1603.05544 (2016)


Acknowledgement

This work is supported by the National Natural Science Foundation of China (project no. 61300137), the Science and Technology Planning Project of Guangdong Province, China (no. 2013B010406004), the Tip-top Scientific and Technical Innovative Youth Talents of Guangdong Special Support Program (no. 2015TQ01X633), and the Science and Technology Planning Major Project of Guangdong Province (no. 2015A070711001).

Author information

Corresponding author

Correspondence to Yi Cai.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Li, Y., Chen, Z., Cai, Y., Huang, D., Li, Q. (2017). Accelerating Convolutional Neural Networks Using Fine-Tuned Backpropagation Progress. In: Bao, Z., Trajcevski, G., Chang, L., Hua, W. (eds) Database Systems for Advanced Applications. DASFAA 2017. Lecture Notes in Computer Science, vol 10179. Springer, Cham. https://doi.org/10.1007/978-3-319-55705-2_20

  • DOI: https://doi.org/10.1007/978-3-319-55705-2_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-55704-5

  • Online ISBN: 978-3-319-55705-2
