The Effect of Tensor Rank on CNN’s Performance

  • Conference paper
Artificial Intelligence Applications and Innovations (AIAI 2023)

Abstract

The goal of this work is to combine existing convolutional layers (CLs) to design a computationally efficient Convolutional Neural Network (CNN) for image classification tasks. The limitations of current CNNs in terms of memory requirements and computational cost have driven the demand for simpler architectures. This work investigates the use of two consecutive CLs with 1-D filters to replace a single layer of full-rank 2-D filters. First, we provide the mathematical formalism, derive the properties of the equivalent tensor, and calculate the rank of the tensor's slices in closed form. We apply this architecture, under several parameterizations, to the well-known AlexNet without transfer learning, and evaluate it on three different image classification tasks against the original architecture. Results show that for most parameterizations, the reduction in dimensionality lowers computational complexity and cost while maintaining equivalent, or even marginally better, classification accuracy.
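To make the idea concrete, below is a minimal sketch (not the authors' implementation) of replacing a single k x k convolutional layer with two consecutive layers of 1-D filters, written in PyTorch. The intermediate channel count `mid_channels`, the AlexNet-like layer sizes, and the padding choices are illustrative assumptions; the paper's exact parameterizations may differ.

```python
# Sketch: replace one k x k convolutional layer with two consecutive layers
# of 1-D filters, (k, 1) followed by (1, k). Illustrative only; the paper's
# exact parameterization of the 1-D layers may differ.
import torch
import torch.nn as nn


class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size,
                 mid_channels=None, padding=0):
        super().__init__()
        # mid_channels (assumed free parameter) controls the rank budget of
        # the equivalent 2-D filters produced by the composition.
        mid_channels = mid_channels or out_channels
        # Vertical 1-D filters: kernel (k, 1)
        self.vertical = nn.Conv2d(in_channels, mid_channels,
                                  kernel_size=(kernel_size, 1),
                                  padding=(padding, 0))
        # Horizontal 1-D filters: kernel (1, k)
        self.horizontal = nn.Conv2d(mid_channels, out_channels,
                                    kernel_size=(1, kernel_size),
                                    padding=(0, padding))

    def forward(self, x):
        return self.horizontal(self.vertical(x))


# Compare against a full-rank 2-D layer of AlexNet-like size (assumed sizes):
#   full 2-D parameters:  in * out * k * k        (plus biases)
#   two 1-D parameters:   in * mid * k + mid * out * k
full = nn.Conv2d(64, 192, kernel_size=5, padding=2)
sep = SeparableConv2d(64, 192, kernel_size=5, mid_channels=64, padding=2)
x = torch.randn(1, 64, 27, 27)
print(full(x).shape, sep(x).shape)  # both: torch.Size([1, 192, 27, 27])
print(sum(p.numel() for p in full.parameters()),   # ~307k parameters
      sum(p.numel() for p in sep.parameters()))    # ~82k parameters
```

Since each equivalent 2-D kernel of the composition is a sum of outer products of vertical and horizontal 1-D filters, the rank of each 2-D slice is bounded by the number of intermediate channels; this is the kind of quantity whose rank the paper derives in closed form.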

Acknowledgements

This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH - CREATE - INNOVATE (project code: DFVA Deep Football Video Analytics, T2EKΔK-04581).

Author information

Corresponding author

Correspondence to Konstantinos Delibasis.

Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper

Cite this paper

Vorgiazidou, E., Delibasis, K., Maglogiannis, I. (2023). The Effect of Tensor Rank on CNN’s Performance. In: Maglogiannis, I., Iliadis, L., MacIntyre, J., Dominguez, M. (eds) Artificial Intelligence Applications and Innovations. AIAI 2023. IFIP Advances in Information and Communication Technology, vol 675. Springer, Cham. https://doi.org/10.1007/978-3-031-34111-3_46

  • DOI: https://doi.org/10.1007/978-3-031-34111-3_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34110-6

  • Online ISBN: 978-3-031-34111-3

  • eBook Packages: Computer Science, Computer Science (R0)
