Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition
Figure 1. The overall architecture of the proposed model.
Figure 2. The autoencoder structure of the proposed model.
Figure 3. Sample images of each class in the dataset.
Figure 4. Examples of the data augmentation techniques: (a) original images, (b) contrast adjustments, (c) rotations, and (d) zooming.
Figure 5. The confusion matrix of the proposed model on the test set.
Figure 6. Training and validation curves for accuracy and loss.
Abstract
1. Introduction
- We propose an intelligent method for Arabic SL recognition that utilizes a customized variant of the EfficientNetB3 model as the foundation for feature extraction. Our model incorporates stacked autoencoders to enable robust feature selection, ensuring the optimal mapping of input images. Through extensive experimentation using various CNN models, our approach demonstrates superior recognition capabilities for Arabic sign language. The integration of densely linked coding layers further enhances the model’s performance, facilitating the accurate and efficient recognition of Arabic SL gestures.
- We conducted an extensive review of the current state-of-the-art methods for Arabic sign language recognition, with a specific focus on CNN-based approaches recognized for their high-performance capabilities in this field. Our thorough analysis revealed that the proposed model surpasses existing methods, exhibiting superior performance and holding significant potential for real-world deployment, even under limited resource constraints. By offering both efficiency and accuracy, our model presents a compelling solution for effectively and accurately recognizing Arabic sign language in various practical applications.
- The superiority of our model is substantiated through comprehensive experimentation on the ArSL2018 benchmark dataset, where it outperforms state-of-the-art approaches, as confirmed by ablation studies. Our model exhibits lower false discovery rates and achieves higher identification accuracy, affirming its exceptional performance and efficacy in Arabic sign language recognition. Furthermore, the proposed model is deployable on resource-constrained devices and can be applied across different organizations.
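The backbone-plus-stacked-autoencoder pipeline described above can be illustrated with a minimal NumPy sketch. All layer sizes (a 1536-d pooled EfficientNetB3 feature vector compressed to a 128-d code) and the random initialization are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class DenseLayer:
    """A single fully connected layer with randomized weights."""
    def __init__(self, n_in, n_out):
        # Small random initialization; the paper's exact scheme may differ.
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)
    def forward(self, x):
        return relu(x @ self.W + self.b)

class StackedAutoencoder:
    """Encoder layers compress backbone features; a mirrored decoder reconstructs them."""
    def __init__(self, sizes):
        # e.g. sizes = [1536, 512, 128]: 1536-d backbone features -> 128-d code
        self.encoder = [DenseLayer(a, b) for a, b in zip(sizes, sizes[1:])]
        self.decoder = [DenseLayer(b, a) for a, b in zip(sizes, sizes[1:])][::-1]
    def encode(self, x):
        for layer in self.encoder:
            x = layer.forward(x)
        return x
    def forward(self, x):
        h = self.encode(x)
        for layer in self.decoder:
            h = layer.forward(h)
        return h

# Hypothetical batch of 4 pooled EfficientNetB3 feature vectors (1536-d).
features = rng.normal(size=(4, 1536))
sae = StackedAutoencoder([1536, 512, 128])
code = sae.encode(features)    # compact representation fed to the classifier
recon = sae.forward(features)  # reconstruction used for the autoencoder loss
print(code.shape, recon.shape)  # (4, 128) (4, 1536)
```

In a full pipeline the decoder is used only during pretraining; the 128-d code would feed the classification head.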
2. Related Work
- Many approaches in the field rely on conventional weight-initialization methods, leading to issues such as vanishing gradients and high computational complexity. These challenges hinder the overall accuracy and performance of Arabic sign language recognition.
- Despite previous efforts, the existing approaches have achieved only a restricted level of accuracy in recognizing Arabic sign language. This indicates the need for further advancements to attain more precise and reliable recognition results.
- The current approaches may lack robustness when dealing with complex hand gestures, varying lighting conditions, and occlusions. This limitation hampers their effectiveness in real-world scenarios where such challenges commonly occur.
- Another notable drawback is the high computational complexity associated with the existing methods, which can impede their practical deployment, particularly in resource-constrained environments.
3. The Proposed Model
3.1. EfficientNetB3: Backbone Architecture
3.2. Autoencoder
3.3. Weight Randomization
3.4. Technical Details of the Proposed Model
4. Experiments and Discussions
4.1. Dataset Description
4.2. Data Preprocessing
4.2.1. Data Augmentation
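The augmentation techniques illustrated in Figure 4 (contrast adjustment, rotation, and zooming) can be sketched in NumPy. The helpers below are simplified stand-ins, a 90° rotation rather than small-angle interpolation and a nearest-neighbor zoom, intended only to show that each variant preserves the input size:

```python
import numpy as np

rng = np.random.default_rng(1)

def adjust_contrast(img, factor):
    """Scale pixel deviations from the mean; factor > 1 raises contrast."""
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0, 255)

def zoom_center(img, zoom):
    """Center-crop by 1/zoom, then nearest-neighbor resize back to the original size."""
    h, w = img.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]

img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in grayscale sample
augmented = [
    adjust_contrast(img, 1.5),  # (b) contrast adjustment
    np.rot90(img),              # (c) rotation (90 degrees here for simplicity)
    zoom_center(img, 1.2),      # (d) zooming
]
for a in augmented:
    print(a.shape)  # every variant keeps the 64x64 input size
```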
4.2.2. Data Splitting
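A stratified split keeps each class's proportion identical across the training, validation, and test subsets. The 80/10/10 ratio below is an illustrative assumption, not necessarily the split used in the paper:

```python
import random

def stratified_split(samples, labels, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split per class so each subset preserves the class distribution."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    train, val, test = [], [], []
    for y, items in by_class.items():
        rng.shuffle(items)
        n_train = int(len(items) * ratios[0])
        n_val = int(len(items) * ratios[1])
        train += [(s, y) for s in items[:n_train]]
        val += [(s, y) for s in items[n_train:n_train + n_val]]
        test += [(s, y) for s in items[n_train + n_val:]]
    return train, val, test

# Toy example: 100 samples per class for three hypothetical sign classes.
samples = [f"img_{i}" for i in range(300)]
labels = [i // 100 for i in range(300)]
train, val, test = stratified_split(samples, labels)
print(len(train), len(val), len(test))  # 240 30 30
```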
4.3. Evaluation Metric
4.4. Model Evaluation
4.5. Comparative Analysis
4.6. Ablation Studies
5. Conclusions and Future Research Directions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Shukla, P.; Garg, A.; Sharma, K.; Mittal, A. A DTW and fourier descriptor based approach for Indian sign language recognition. In Proceedings of the 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 21–24 December 2015; pp. 113–118. [Google Scholar]
- Kushalnagar, R. Deafness and hearing loss. In Web Accessibility; Springer: Berlin/Heidelberg, Germany, 2019; pp. 35–47. [Google Scholar]
- Almasre, M.A.; Al-Nuaim, H. A comparison of Arabic sign language dynamic gesture recognition models. Heliyon 2020, 6, e03554. [Google Scholar] [CrossRef] [PubMed]
- Elons, A.S.; Abull-Ela, M.; Tolba, M.F. A proposed PCNN features quality optimization technique for pose-invariant 3D Arabic sign language recognition. Appl. Soft Comput. 2013, 13, 1646–1660. [Google Scholar] [CrossRef]
- Tharwat, A.; Gaber, T.; Hassanien, A.E.; Shahin, M.K.; Refaat, B. SIFT-based Arabic sign language recognition system. In Proceedings of the Afro-European Conference for Industrial Advancement; Springer: Berlin/Heidelberg, Germany, 2015; pp. 359–370. [Google Scholar]
- Shahin, A.; Almotairi, S. Automated Arabic sign language recognition system based on deep transfer learning. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2019, 19, 144–152. [Google Scholar]
- Bencherif, M.A.; Algabri, M.; Mekhtiche, M.A.; Faisal, M.; Alsulaiman, M.; Mathkour, H.; Al-Hammadi, M.; Ghaleb, H. Arabic sign language recognition system using 2D hands and body skeleton data. IEEE Access 2021, 9, 59612–59627. [Google Scholar] [CrossRef]
- Mustafa, M. A study on Arabic sign language recognition for differently abled using advanced machine learning classifiers. J. Ambient Intell. Humaniz. Comput. 2021, 12, 4101–4115. [Google Scholar] [CrossRef]
- Hisham, B.; Hamouda, A. Supervised learning classifiers for Arabic gestures recognition using Kinect V2. SN Appl. Sci. 2019, 1, 1–21. [Google Scholar] [CrossRef]
- Maraqa, M.; Al-Zboun, F.; Dhyabat, M.; Zitar, R.A. Recognition of Arabic sign language (ArSL) using recurrent neural networks. J. Intell. Learn. Syst. Appl. 2012, 4, 41–52. [Google Scholar] [CrossRef]
- Alzohairi, R.; Alghonaim, R.; Alshehri, W.; Aloqeely, S. Image based Arabic sign language recognition system. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 185–194. [Google Scholar] [CrossRef]
- Duwairi, R.M.; Halloush, Z.A. Automatic recognition of Arabic alphabets sign language using deep learning. Int. J. Electr. Comput. Eng. 2022, 12, 2996–3004. [Google Scholar] [CrossRef]
- Hu, Z.; Zhang, Y.; Xing, Y.; Zhao, Y.; Cao, D.; Lv, C. Toward human-centered automated driving: A novel spatial-temporal vision transformer-enabled head tracker. IEEE Veh. Technol. Mag. 2022, 17, 57–64. [Google Scholar] [CrossRef]
- Youssif, A.A.; Aboutabl, A.E.; Ali, H.H. Arabic sign language (ArSL) recognition system using HMM. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 45–51. [Google Scholar]
- Abdo, M.; Hamdy, A.; Salem, S.; Saad, E.M. Arabic alphabet and numbers sign language recognition. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 209–214. [Google Scholar]
- El-Bendary, N.; Zawbaa, H.M.; Daoud, M.S.; Hassanien, A.E.; Nakamatsu, K. Arslat: Arabic sign language alphabets translator. In Proceedings of the 2010 International Conference on Computer Information Systems and Industrial Management Applications (CISIM), Krakow, Poland, 8–10 October 2010; pp. 590–595. [Google Scholar]
- ElBadawy, M.; Elons, A.; Shedeed, H.A.; Tolba, M. Arabic sign language recognition with 3D convolutional neural networks. In Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017; pp. 66–71. [Google Scholar]
- Hayani, S.; Benaddy, M.; El Meslouhi, O.; Kardouchi, M. Arab sign language recognition with convolutional neural networks. In Proceedings of the 2019 International Conference of Computer Science and Renewable Energies (ICCSRE), Agadir, Morocco, 22–24 July 2019; pp. 1–4. [Google Scholar]
- Kayalibay, B.; Jensen, G.; van der Smagt, P. CNN-based segmentation of medical imaging data. arXiv 2017, arXiv:1701.03056. [Google Scholar]
- Hossain, M.S.; Muhammad, G. Emotion recognition using secure edge and cloud computing. Inf. Sci. 2019, 504, 589–601. [Google Scholar] [CrossRef]
- Kamruzzaman, M. E-crime management system for future smart city. In Data Processing Techniques and Applications for Cyber-Physical Systems (DPTA 2019); Springer: Berlin/Heidelberg, Germany, 2020; pp. 261–271. [Google Scholar]
- Oyedotun, O.K.; Khashman, A. Deep learning in vision-based static hand gesture recognition. Neural Comput. Appl. 2017, 28, 3941–3951. [Google Scholar] [CrossRef]
- Pigou, L.; Dieleman, S.; Kindermans, P.-J.; Schrauwen, B. Sign Language Recognition Using Convolutional Neural Networks; Springer International Publishing: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
- Hu, Z.; Hu, Y.; Liu, J.; Wu, B.; Han, D.; Kurfess, T. A CRNN module for hand pose estimation. Neurocomputing 2019, 333, 157–168. [Google Scholar] [CrossRef]
- Ahmed, S.; Islam, M.; Hassan, J.; Ahmed, M.U.; Ferdosi, B.J.; Saha, S.; Shopon, M. Hand sign to Bangla speech: A deep learning in vision based system for recognizing hand sign digits and generating Bangla speech. arXiv 2019, arXiv:1901.05613. [Google Scholar] [CrossRef]
- Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771. [Google Scholar] [CrossRef]
- Hu, Z.; Xing, Y.; Lv, C.; Hang, P.; Liu, J. Deep convolutional neural network-based Bernoulli heatmap for head pose estimation. Neurocomputing 2021, 436, 198–209. [Google Scholar] [CrossRef]
- Si, Y.; Chen, S.; Li, M.; Li, S.; Pei, Y.; Guo, X. Flexible strain sensors for wearable hand gesture recognition: From devices to systems. Adv. Intell. Syst. 2022, 4, 2100046. [Google Scholar] [CrossRef]
- Wang, H.; Zhang, Y.; Liu, C.; Liu, H. sEMG based hand gesture recognition with deformable convolutional network. Int. J. Mach. Learn. Cybern. 2022, 13, 1729–1738. [Google Scholar] [CrossRef]
- Alam, M.M.; Islam, M.T.; Rahman, S.M. Unified learning approach for egocentric hand gesture recognition and fingertip detection. Pattern Recognit. 2022, 121, 108200. [Google Scholar] [CrossRef]
- Chenyi, Y.; Yuqing, H.; Junyuan, Z.; Guorong, L. Lightweight neural network hand gesture recognition method for embedded platforms. High Power Laser Particle Beams 2022, 34, 031023. [Google Scholar]
- Joudaki, S.; Rehman, A. Dynamic hand gesture recognition of sign language using geometric features learning. Int. J. Comput. Vis. Robot. 2022, 12, 1–16. [Google Scholar] [CrossRef]
- Tubaiz, N.; Shanableh, T.; Assaleh, K. Glove-based continuous Arabic sign language recognition in user-dependent mode. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 526–533. [Google Scholar] [CrossRef]
- Al-Buraiky, S.M. Arabic Sign Language Recognition Using an Instrumented Glove; King Fahd University of Petroleum and Minerals: Dhahran, Saudi Arabia, 2004. [Google Scholar]
- Hu, Z.; Hu, Y.; Wu, B.; Liu, J.; Han, D.; Kurfess, T. Hand pose estimation with multi-scale network. Appl. Intell. 2018, 48, 2501–2515. [Google Scholar] [CrossRef]
- Halawani, S.M. Arabic sign language translation system on mobile devices. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2008, 8, 251–256. [Google Scholar]
- Mohandes, M.; Deriche, M.; Liu, J. Image-based and sensor-based approaches to Arabic sign language recognition. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 551–557. [Google Scholar] [CrossRef]
- Almasre, M.A.; Al-Nuaim, H. Comparison of four SVM classifiers used with depth sensors to recognize Arabic sign language words. Computers 2017, 6, 20. [Google Scholar] [CrossRef]
- Hu, Z.; Lv, C.; Hang, P.; Huang, C.; Xing, Y. Data-driven estimation of driver attention using calibration-free eye gaze and scene features. IEEE Trans. Ind. Electron. 2021, 69, 1800–1808. [Google Scholar] [CrossRef]
- Alawwad, R.A.; Bchir, O.; Ismail, M.M.B. Arabic Sign Language Recognition using Faster R-CNN. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 692–700. [Google Scholar] [CrossRef]
- Althagafi, A.; Alsubait, G.T.; Alqurash, T. ASLR: Arabic sign language recognition using convolutional neural networks. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2020, 20, 124–129. [Google Scholar]
- Zakariah, M.; Alotaibi, Y.A.; Koundal, D.; Guo, Y.; Mamun Elahi, M. Sign Language Recognition for Arabic Alphabets Using Transfer Learning Technique. Comput. Intell. Neurosci. 2022, 2022, 4567989. [Google Scholar] [CrossRef] [PubMed]
- Latif, G.; Mohammad, N.; AlKhalaf, R.; AlKhalaf, R.; Alghazo, J.; Khan, M. An automatic Arabic sign language recognition system based on deep CNN: An assistive system for the deaf and hard of hearing. Int. J. Comput. Digit. Syst. 2020, 9, 715–724. [Google Scholar] [CrossRef]
- Elsayed, E.K.; Fathy, D.R. Sign language semantic translation system using ontology and deep learning. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 141–147. [Google Scholar] [CrossRef]
- Alani, A.A.; Cosma, G. ArSL-CNN: A convolutional neural network for Arabic sign language gesture recognition. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 1096–1107. [Google Scholar] [CrossRef]
- Khan, Z.A.; Hussain, T.; Ullah, A.; Rho, S.; Lee, M.; Baik, S.W. Towards Efficient Electricity Forecasting in Residential and Commercial Buildings: A Novel Hybrid CNN with a LSTM-AE based Framework. Sensors 2020, 20, 1399. [Google Scholar] [CrossRef]
- Mishra, K.; Basu, S.; Maulik, U. Graft: A graph based time series data mining framework. Eng. Appl. Artif. Intell. 2022, 110, 104695. [Google Scholar] [CrossRef]
- Yar, H.; Hussain, T.; Agarwal, M.; Khan, Z.A.; Gupta, S.K.; Baik, S.W. Optimized Dual Fire Attention Network and Medium-Scale Fire Classification Benchmark. IEEE Trans. Image Process. 2022, 31, 6331–6343. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Yar, H.; Khan, Z.A.; Ullah, F.U.M.; Ullah, W.; Baik, S.W. A modified YOLOv5 architecture for efficient fire detection in smart cities. Expert Syst. Appl. 2023, 231, 120465. [Google Scholar] [CrossRef]
- Khan, S.U.; Khan, N.; Hussain, T.; Muhammad, K.; Hijji, M.; Del Ser, J.; Baik, S.W. Visual Appearance and Soft Biometrics Fusion for Person Re-identification using Deep Learning. IEEE J. Sel. Top. Signal Process. 2023, 17, 3. [Google Scholar] [CrossRef]
- Khan, S.U.; Haq, I.U.; Khan, N.; Ullah, A.; Muhammad, K.; Chen, H.; Baik, S.W.; de Albuquerque, V.H.C. Efficient Person Re-identification for IoT-Assisted Cyber-Physical Systems. IEEE Internet Things J. 2023. [Google Scholar] [CrossRef]
- Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient deep CNN-based fire detection and localization in video surveillance applications. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 1419–1434. [Google Scholar] [CrossRef]
- Avula, S.B.; Badri, S.J.; Reddy, G. A Novel forest fire detection system using fuzzy entropy optimized thresholding and STN-based CNN. In Proceedings of the 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 750–755. [Google Scholar]
- Bari, A.; Saini, T.; Kumar, A. Fire detection using deep transfer learning on surveillance videos. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; pp. 1061–1067. [Google Scholar]
- Khan, Z.A.; Hussain, T.; Baik, S.W. Boosting energy harvesting via deep learning-based renewable power generation prediction. J. King Saud Univ.-Sci. 2022, 34, 101815. [Google Scholar] [CrossRef]
- Pao, Y.-H.; Takefuji, Y. Functional-link net computing: Theory, system architecture, and functionalities. Computer 1992, 25, 76–79. [Google Scholar] [CrossRef]
- Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), Budapest, Hungary, 25–29 July 2004; pp. 985–990. [Google Scholar]
- Igelnik, B.; Pao, Y.-H. Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans. Neural Netw. 1995, 6, 1320–1329. [Google Scholar] [CrossRef]
- Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. Evolving deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 2019, 24, 394–407. [Google Scholar] [CrossRef]
- Cao, W.; Wang, X.; Ming, Z.; Gao, J. A review on neural networks with random weights. Neurocomputing 2018, 275, 278–287. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Huang, G.; Liu, Z.; Pleiss, G.; Van Der Maaten, L.; Weinberger, K. Convolutional networks with dense connectivity. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 44, 8704–8716. [Google Scholar] [CrossRef]
- Yar, H.; Hussain, T.; Khan, Z.A.; Koundal, D.; Lee, M.Y.; Baik, S.W. Vision sensor-based real-time fire detection in resource-constrained IoT environments. Comput. Intell. Neurosci. 2021, 2021, 5195508. [Google Scholar] [CrossRef] [PubMed]
Classes | Precision (%) | Recall (%) | F1-Score (%) | Support |
---|---|---|---|---|
ain | 100 | 100 | 100 | 174 |
al | 100 | 98.05 | 99.06 | 103 |
aleff | 100 | 100 | 100 | 141 |
bb | 99.31 | 98.63 | 98.96 | 146 |
dal | 98.50 | 100 | 99.24 | 132 |
dha | 98.33 | 95.16 | 96.72 | 124 |
dhad | 99.23 | 100 | 99.61 | 129 |
fa | 100 | 98.82 | 99.40 | 170 |
gaaf | 99.18 | 100 | 99.59 | 122 |
ghain | 98.81 | 100 | 99.40 | 167 |
ha | 100 | 98.33 | 99.15 | 120 |
haa | 98.07 | 100 | 99.02 | 102 |
jeem | 99.21 | 99.21 | 99.21 | 127 |
kaaf | 100 | 100 | 100 | 135 |
khaa | 100 | 100 | 100 | 89 |
la | 100 | 100 | 100 | 177 |
laam | 100 | 100 | 100 | 151 |
meem | 100 | 100 | 100 | 140 |
nun | 100 | 100 | 100 | 147 |
ra | 100 | 98.48 | 99.23 | 132 |
saad | 98.71 | 100 | 99.35 | 154 |
seen | 100 | 100 | 100 | 132 |
sheen | 100 | 100 | 100 | 124 |
ta | 96.93 | 99.37 | 98.13 | 159 |
taa | 98.00 | 99.32 | 98.65 | 148 |
thaa | 100 | 99.24 | 99.62 | 133 |
thal | 99.26 | 100 | 99.63 | 135 |
toot | 100 | 99.31 | 99.65 | 145 |
waw | 100 | 99.05 | 99.52 | 106 |
ya | 100 | 100 | 100 | 139 |
yaa | 100 | 100 | 100 | 105 |
zay | 100 | 99.13 | 99.56 | 116 |
Average Accuracy | 99.26 |
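The per-class F1-scores in the table above follow directly from precision and recall as their harmonic mean; a small helper, spot-checked against three rows of the table, shows the relationship:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall, all values in percent."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Spot-check rows of the per-class table (precision, recall in %).
print(round(f1_score(98.50, 100.0), 2))  # dal -> 99.24
print(round(f1_score(98.33, 95.16), 2))  # dha -> 96.72
print(round(f1_score(96.93, 99.37), 2))  # ta  -> 98.13
```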
Reference | Method | With Augmentation | Without Augmentation | Accuracy (%) |
---|---|---|---|---|
Alawwad et al. [40] | Deep learning using RCNN | × | ✓ | 93.40 |
Althagafi et al. [41] | Semantic segmentation CNN | ✓ | × | 88.00 |
Zakariah et al. [42] | EfficientNetB4 | ✓ | × | 95.00 |
Latif et al. [43] | Deep learning CNN | ✓ | × | 97.60 |
Elsayed et al. [44] | Deep learning CNN | ✓ | × | 88.87 |
Alani et al. [45] | ArSL-CNN + SMOTE | × | ✓ | 96.59 |
Alani et al. [45] | ArSL-CNN + SMOTE | ✓ | × | 97.29 |
Duwairi et al. [12] | VGGNET | ✓ | × | 97.00 |
The Proposed model | EfficientNetB3 with encoder and decoder network | × | ✓ | 98.35 |
The Proposed model | EfficientNetB3 with encoder and decoder network | ✓ | × | 99.26 |
Models | Size (MB) | Parameters (M) | Precision (Solo CNN) | Recall (Solo CNN) | F1 (Solo CNN) | Accuracy (Solo CNN) | Precision (Encoder–Decoder) | Recall (Encoder–Decoder) | F1 (Encoder–Decoder) | Accuracy (Encoder–Decoder) |
---|---|---|---|---|---|---|---|---|---|---|
MobileNetV2 | 14 | 3.5 | 97.00 | 95.80 | 96.40 | 96.01 | 99.20 | 98.50 | 98.90 | 98.60 |
DenseNet121 | 33 | 8.1 | 98.40 | 96.30 | 97.30 | 97.13 | 99.10 | 98.40 | 98.70 | 98.45 |
NASNetMobile | 23 | 5.3 | 96.00 | 91.10 | 93.10 | 93.00 | 98.20 | 97.80 | 98.00 | 98.00 |
EfficientNetB0 | 29 | 5.3 | 97.30 | 95.80 | 96.50 | 96.40 | 98.80 | 98.10 | 98.40 | 98.10 |
EfficientNetV2B0 | 29 | 7.2 | 97.40 | 94.30 | 95.60 | 95.50 | 98.50 | 97.70 | 98.10 | 97.90 |
EfficientNetV2B1 | 34 | 8.2 | 95.70 | 92.70 | 94.00 | 94.30 | 98.70 | 98.00 | 98.30 | 98.38 |
Our model | 21 | 5.3 | 98.50 | 96.80 | 97.80 | 97.20 | 99.40 | 98.90 | 99.10 | 99.26 |
Approach | Encoder | Decoder | Accuracy (%) |
---|---|---|---|
Our model | ✓ | × | 98.87 |
Our model | × | ✓ | 99.03 |
Our model | ✓ | ✓ | 99.26 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Islam, M.; Aloraini, M.; Aladhadh, S.; Habib, S.; Khan, A.; Alabdulatif, A.; Alanazi, T.M. Toward a Vision-Based Intelligent System: A Stacked Encoded Deep Learning Framework for Sign Language Recognition. Sensors 2023, 23, 9068. https://doi.org/10.3390/s23229068