Abstract
The convolutional neural network is a deep learning architecture that has dominated most computer vision tasks for several years. Since 2020, however, the Transformer architecture has emerged as a challenger widely expected to replace convolutional neural networks in the near future. Unlike researchers, who explore every new possibility in search of improvements, most practitioners do not aim to achieve a new state-of-the-art model. This paper examines in detail how easily practitioners can use each of the two architecture types in real applications. Major models of each architecture in each computer vision task are described and summarized in terms of task variety, availability, reported performance, and computational resources. In conclusion, this paper finds that the younger Transformer-based models are not inferior in task variety, reported performance, or computational resources; rather, it is availability that currently makes Transformer-based models more difficult to use.
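To make the availability argument concrete, here is a minimal sketch (not code from the paper) contrasting how a practitioner might load a pretrained CNN versus a pretrained Vision Transformer. It assumes TensorFlow plus the third-party Hugging Face transformers package; the model names and preprocessing calls are illustrative choices, not the paper's experimental setup.

```python
import numpy as np
import tensorflow as tf
from transformers import TFViTForImageClassification

# CNN: a pretrained ResNet-50 ships with core Keras, one call away.
cnn = tf.keras.applications.ResNet50(weights="imagenet")

# Transformer: a comparable ViT is fetched from an external model hub
# (Hugging Face here) rather than from the core framework.
vit = TFViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

# The two families also expect different input conventions.
image = np.random.rand(1, 224, 224, 3).astype("float32")  # dummy image batch
cnn_logits = cnn(tf.keras.applications.resnet50.preprocess_input(image * 255.0))
vit_logits = vit(pixel_values=tf.transpose(image, [0, 3, 1, 2])).logits
print(cnn_logits.shape, vit_logits.shape)  # (1, 1000) for both
```

The asymmetry of the two loading paths, one built into the framework and one requiring an extra dependency and hub download, is the kind of practical friction the paper attributes to availability.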
Notes
1. Times per inference step were obtained from https://keras.io/api/applications/ on December 28, 2021; a rough local reproduction of such a measurement is sketched after this list.
2. This information refers to https://github.com/tensorflow/models/tree/master/research/object_detection/models, accessed on February 10, 2022.
3. https://github.com/ultralytics/yolov5, accessed on December 30, 2021.
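As referenced in note 1, the following is a rough sketch of how a per-step inference time like those published on keras.io can be reproduced locally. The model choice, batch size, and repeat count are illustrative assumptions, not the exact benchmarking protocol behind the quoted figures, and the result is strongly hardware-dependent.

```python
import time
import numpy as np
import tensorflow as tf

# Any keras.applications model works here; EfficientNetB0 is an arbitrary pick.
model = tf.keras.applications.EfficientNetB0(weights="imagenet")
batch = np.random.rand(32, 224, 224, 3).astype("float32")  # dummy batch

model.predict(batch)  # warm-up so one-time graph tracing is excluded

runs = 30
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch)
per_step_ms = (time.perf_counter() - start) / runs * 1000.0
print(f"~{per_step_ms:.1f} ms per inference step")
```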