A New Human Factor Study in Developing Practical Vision-Based Applications with the Transformer-Based Deep Learning Model

  • Conference paper
Artificial Intelligence in HCI (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13336)

Abstract

The convolutional neural network is the deep learning architecture that has dominated most computer vision tasks for several years. Since 2020, however, the Transformer architecture has emerged as a challenger that many expect to replace convolutional neural networks in the near future. Unlike researchers, who explore every new possibility in search of improvements, most practitioners do not aim at achieving a new state-of-the-art model. This paper examines in detail how readily each of the two architecture families can be adopted by practitioners in actual applications. Major models of each architecture in each computer vision task are described and summarized with respect to task variety, availability, measured performance, and computational resources. In conclusion, this paper finds that the younger Transformer-based models are not inferior in terms of task variety, measured performance, or computational resources; rather, it is the problem of availability that makes Transformer-based models more difficult to use at this moment.
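
As a concrete illustration of the availability question raised in the abstract, the minimal sketch below loads a pretrained Vision Transformer from the Hugging Face hub (note 4 below). It assumes the transformers, torch, Pillow, and requests packages; the checkpoint name google/vit-base-patch16-224 and the sample image URL are illustrative choices, not artifacts of this paper.

    # A minimal sketch, not the paper's method: load a publicly available ViT
    # classification checkpoint from https://huggingface.co/ and run one image
    # through it. Requires transformers, torch, Pillow, and requests.
    import requests
    from PIL import Image
    from transformers import ViTImageProcessor, ViTForImageClassification

    # Illustrative checkpoint choice (assumption): ViT-Base with 16x16 patches,
    # fine-tuned for ImageNet-1k classification.
    ckpt = "google/vit-base-patch16-224"
    processor = ViTImageProcessor.from_pretrained(ckpt)
    model = ViTForImageClassification.from_pretrained(ckpt)

    # Any RGB image works; this COCO validation image is a common demo choice.
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    inputs = processor(images=image, return_tensors="pt")
    logits = model(**inputs).logits
    print(model.config.id2label[logits.argmax(-1).item()])

The loading step itself is short; whether a suitable pretrained checkpoint exists for a given vision task is the availability question the paper weighs.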

Notes

  1. Times per inference step were obtained from https://keras.io/api/applications/ on December 28, 2021 (a timing sketch follows these notes).

  2. This information refers to https://github.com/tensorflow/models/tree/master/research/object_detection/models on February 10, 2022.

  3. https://github.com/ultralytics/yolov5, accessed on December 30, 2021.

  4. https://huggingface.co/.
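
The "times per inference step" in note 1 can be approximated with a few lines of TensorFlow, as in the minimal sketch below; the choice of ResNet50, the batch size of 32, and the repetition count are assumptions and do not reproduce the exact benchmark setup behind the keras.io numbers.

    # A minimal timing sketch (assumptions: TensorFlow installed; ResNet50,
    # batch size 32, and 10 timed runs are arbitrary choices, not the
    # keras.io benchmark configuration).
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights="imagenet")
    # Random data is fine for timing purposes; accuracy is irrelevant here.
    batch = np.random.rand(32, 224, 224, 3).astype("float32")

    model.predict(batch)  # warm-up pass so one-time setup is not timed
    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch)
    per_step_ms = (time.perf_counter() - start) / runs * 1000.0
    print(f"time per inference step: {per_step_ms:.1f} ms per batch of 32")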

Author information

Corresponding author

Correspondence to Thitirat Siriborvornratanakul.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Siriborvornratanakul, T. (2022). A New Human Factor Study in Developing Practical Vision-Based Applications with the Transformer-Based Deep Learning Model. In: Degen, H., Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science (LNAI), vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_28

  • DOI: https://doi.org/10.1007/978-3-031-05643-7_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05642-0

  • Online ISBN: 978-3-031-05643-7

  • eBook Packages: Computer Science; Computer Science (R0)
