Optimal Channel Selection of Multiclass Motor Imagery Classification Based on Fusion Convolutional Neural Network with Attention Blocks
Figure 1. ConvNet structure. C = number of channels; T = number of time points; KE = kernel width; F1 and F2 = numbers of filters.
Figure 2. The timing scheme of the BCI IV 2a dataset.
Figure 3. Overall framework.
Figure 4. The structure of the FCNNA model.
Figure 5. CBAM.
Figure 6. Genetic algorithm applied in our model.
Figure 7. Confusion matrix of the proposed model applied to the 4 classes of BCI IV 2a using the within-subject strategy.
Figure 8. Optimal channels selected based on GA and cross-subject classification after testing each subject individually. The highlighted electrodes indicate the positions of the selected channels for each subject.
Figure 9. The average number of channels (electrodes) selected for all subjects after applying the genetic algorithm through cross-subject classification.
Figure 10. A comparison of our work with state-of-the-art research in terms of accuracy and the number of channels used, as discussed in Hassanpour et al. (2019) [18], Tiwari et al. (2023) [19], Mahamune et al. (2023) [21], and Chen et al. (2020) [22].
Figure 11. ROC curves and AUC for each subject across different methods: within-subject with all channels ("Within-subject"), cross-subject, within-subject with fixed channel selection ("Fixed Channels"), and within-subject with variable channel selection ("Variable Channels"). The dotted black lines reflect the performance of a random predictor, serving as a reference for comparing the classification performance of the four methods.
Figure A1. Confusion matrix of the proposed model applied to the 2 classes of left and right hand in BCI IV 2a using the within-subject strategy.
Figure A2. Confusion matrix of the proposed model applied to the 2 classes of both feet and tongue in BCI IV 2a using the within-subject strategy.
Abstract
1. Introduction
- Propose a CNN structure composed of two-layer convolutional blocks, each followed by a CBAM attention module, with the block outputs concatenated to better classify two and four classes of preprocessed raw EEG data.
- Evaluate the classification performance on a publicly available dataset using two strategies: within-subject and cross-subject. In our experiments on multiclass and two-class classification, the model shows significant improvements over existing state-of-the-art approaches.
- Propose a channel selection mechanism that maintains the performance of the proposed model at a lower computational cost. In this study, a novel technique is employed to introduce a fixed set of channels for all subjects alongside a variable, subject-specific set of channels.
- Illustrate the performance enhancement that results from adding channel selection to our model, and provide a comparative analysis with state-of-the-art methods that demonstrates the improvement.
2. Related Works
2.1. EEG Signal Deep Learning Classification
2.2. Channel Selection
3. Materials and Methods
3.1. Description of BCI Competition IV 2a Dataset [40]
3.2. Proposed Model Framework
3.3. Preprocessing
3.4. Classification
3.4.1. Convolutional Blocks
3.4.2. Attention Block
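The attention block is CBAM [17] (Figure 5), which refines the convolutional feature maps with channel attention followed by spatial attention. The following is a minimal Keras sketch of a CBAM-style block, assuming TensorFlow/Keras; the reduction ratio and spatial kernel size shown are the common defaults from [17], not necessarily the values tuned for this model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam_block(x, reduction=8, spatial_kernel=7):
    """CBAM-style attention: channel attention followed by spatial attention."""
    channels = x.shape[-1]

    # Channel attention: a shared two-layer MLP applied to both the
    # average-pooled and max-pooled feature descriptors.
    shared_dense1 = layers.Dense(channels // reduction, activation="relu")
    shared_dense2 = layers.Dense(channels)
    avg_feat = shared_dense2(shared_dense1(layers.GlobalAveragePooling2D()(x)))
    max_feat = shared_dense2(shared_dense1(layers.GlobalMaxPooling2D()(x)))
    ca = layers.Activation("sigmoid")(layers.Add()([avg_feat, max_feat]))
    ca = layers.Reshape((1, 1, channels))(ca)
    x = layers.Multiply()([x, ca])

    # Spatial attention: a single conv over the channel-wise mean and max maps.
    avg_map = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_map = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    sa = layers.Conv2D(1, spatial_kernel, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_map, max_map]))
    return layers.Multiply()([x, sa])
```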
3.5. Channel Selection
- Select the Fittest Chromosomes: The fitness-proportional roulette wheel approach, given in Equation (5), is used to select three parents from the population. Chromosomes with higher fitness values are more likely to be chosen as parents, and a new generation is produced from them.
- Apply Crossover: Two parent chromosomes are divided into halves, and the gene segments are then exchanged between them, in the manner shown below; a code sketch of the full loop also follows this list.
- Apply Mutation.
- Generate a new population.
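A minimal sketch of this selection–crossover–mutation loop over binary channel masks, assuming the 22-channel BCI IV 2a montage and a user-supplied fitness function (e.g., validation accuracy of the classifier trained on the encoded channel subset); the population size, mutation rate, and helper names here are illustrative, not the paper's exact settings:

```python
import numpy as np

N_CHANNELS = 22  # BCI Competition IV 2a montage

def roulette_select(population, fitness, n_parents=3):
    """Fitness-proportional (roulette wheel) selection, cf. Equation (5):
    p_i = f_i / sum_j f_j (fitness assumed non-negative)."""
    probs = fitness / fitness.sum()
    idx = np.random.choice(len(population), size=n_parents, p=probs)
    return population[idx]

def crossover(parent_a, parent_b):
    """Divide both parents into halves and swap the gene segments."""
    cut = N_CHANNELS // 2
    child_a = np.concatenate([parent_a[:cut], parent_b[cut:]])
    child_b = np.concatenate([parent_b[:cut], parent_a[cut:]])
    return child_a, child_b

def mutate(chromosome, rate=0.05):
    """Flip each channel bit with a small probability."""
    flips = np.random.rand(N_CHANNELS) < rate
    return np.where(flips, 1 - chromosome, chromosome)

def evolve(population, fitness_fn, n_generations=20):
    """Run the GA and return the fittest channel mask found."""
    for _ in range(n_generations):
        fitness = np.array([fitness_fn(c) for c in population])
        parents = roulette_select(population, fitness)
        children = []
        while len(children) < len(population):
            a, b = parents[np.random.choice(len(parents), 2, replace=False)]
            for child in crossover(a, b):
                children.append(mutate(child))
        population = np.array(children[:len(population)])
    fitness = np.array([fitness_fn(c) for c in population])
    return population[np.argmax(fitness)]
```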
4. Results and Discussion
4.1. Classification Strategy
4.2. Performance Metrics
- TP (true positives): the number of correct positive predictions.
- TN (true negatives): the number of correct negative predictions.
- FP (false positives): the number of incorrect positive predictions.
- FN (false negatives): the number of incorrect negative predictions.

These counts underlie the accuracy, kappa, precision, recall, and F1-score values reported in the tables that follow; a computation sketch is given below.
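For reference, the reported metrics can be computed directly from predictions with scikit-learn; a minimal sketch, assuming label arrays `y_true` and `y_pred` and macro averaging across the motor imagery classes (the averaging choice is an assumption):

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_recall_fscore_support)

def report_metrics(y_true, y_pred):
    """Accuracy, Cohen's kappa, and macro-averaged precision/recall/F1.
    Note: these functions return values in [0, 1] (kappa in [-1, 1]);
    the tables in this paper report them as percentages."""
    accuracy = accuracy_score(y_true, y_pred)
    kappa = cohen_kappa_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro")
    return {"accuracy": accuracy, "kappa": kappa,
            "precision": precision, "recall": recall, "f1": f1}
```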
4.3. Classification Results
4.3.1. Within-Subject Classification
4.3.2. Cross-Subject Classification
4.4. Channel Selection Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
| | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Subject 1 | 72.56% | 63.42% | 74% | 73% | 73% |
| Subject 2 | 53.71% | 38.29% | 55% | 54% | 54% |
| Subject 3 | 86.37% | 81.83% | 87% | 86% | 86% |
| Subject 4 | 60.82% | 47.63% | 65% | 61% | 61% |
| Subject 5 | 59.11% | 45.62% | 62% | 59% | 58% |
| Subject 6 | 59.91% | 46.52% | 61% | 60% | 59% |
| Subject 7 | 72.63% | 63.52% | 74% | 73% | 73% |
| Subject 8 | 81.68% | 75.57% | 82% | 82% | 82% |
| Subject 9 | 73.05% | 64.06% | 73% | 73% | 73% |
| AVG | 68.87% | 58.50% | | | |
Table 10. Accuracy (averaged over all subjects) using the channels selected according to each testing subject.

| | Testing Subject 1 | Testing Subject 2 | Testing Subject 3 | Testing Subject 4 | Testing Subject 5 | Testing Subject 6 | Testing Subject 7 | Testing Subject 8 | Testing Subject 9 |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy (Avg) | 82.97% | 81.69% | 82.5% | 81.54% | 79.87% | 80.49% | 80.05% | 79.94% | 69.46% |
References
- Ramadan, R.A.; Refat, S.; Elshahed, M.A.; Rasha, A.A. Basics of Brain Computer Interface. In Brain-Computer Interfaces. Intelligent Systems Reference Library; Hassanien, A.E., Azar, A.T., Eds.; Springer: Cham, Switzerland, 2015; Volume 74, pp. 51–95. ISBN 9783319109770. [Google Scholar]
- Pfurtscheller, G.; Neuper, C. Motor Imagery and Direct Brain-Computer Communication. Proc. IEEE 2001, 89, 1123–1134. [Google Scholar] [CrossRef]
- Baig, M.Z.; Aslam, N.; Shum, H.P.H. Filtering Techniques for Channel Selection in Motor Imagery EEG Applications: A Survey. Artif. Intell. Rev. 2020, 53, 1207–1232. [Google Scholar] [CrossRef]
- Li, H.; Chen, H.; Jia, Z.; Zhang, R.; Yin, F. A Parallel Multi-Scale Time-Frequency Block Convolutional Neural Network Based on Channel Attention Module for Motor Imagery Classification. Biomed. Signal Process. Control 2023, 79, 104066. [Google Scholar] [CrossRef]
- Varsehi, H.; Firoozabadi, S.M.P. An EEG Channel Selection Method for Motor Imagery Based Brain–Computer Interface and Neurofeedback Using Granger Causality. Neural Netw. 2021, 133, 193–206. [Google Scholar] [CrossRef] [PubMed]
- Gao, C.; Liu, W.; Yang, X. Convolutional Neural Network and Riemannian Geometry Hybrid Approach for Motor Imagery Classification. Neurocomputing 2022, 507, 180–190. [Google Scholar] [CrossRef]
- Zancanaro, A.; Cisotto, G.; Paulo, J.R.; Pires, G.; Nunes, U.J. CNN-Based Approaches For Cross-Subject Classification in Motor Imagery: From the State-of-the-Art to DynamicNet. In Proceedings of the 2021 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Melbourne, Australia, 13–15 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
- Gaur, P.; Gupta, H.; Chowdhury, A.; McCreadie, K.; Pachori, R.B.; Wang, H. A Sliding Window Common Spatial Pattern for Enhancing Motor Imagery Classification in EEG-BCI. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
- Jin, J.; Miao, Y.; Daly, I.; Zuo, C.; Hu, D.; Cichocki, A. Correlation-Based Channel Selection and Regularized Feature Optimization for MI-Based BCI. Neural Netw. 2019, 118, 262–270. [Google Scholar] [CrossRef] [PubMed]
- Park, Y.; Chung, W. Optimal Channel Selection Using Correlation Coefficient for CSP Based EEG Classification. IEEE Access 2020, 8, 111514–111521. [Google Scholar] [CrossRef]
- Chen, J.; Yi, W.; Wang, D.; Du, J.; Fu, L.; Li, T. FB-CGANet: Filter Bank Channel Group Attention Network for Multi-Class Motor Imagery Classification. J. Neural Eng. 2022, 19, 016011. [Google Scholar] [CrossRef]
- Varone, G.; Boulila, W.; Driss, M.; Kumari, S.; Khan, M.K.; Gadekallu, T.R.; Hussain, A. Finger Pinching and Imagination Classification: A Fusion of CNN Architectures for IoMT-Enabled BCI Applications. Inf. Fusion 2024, 101, 102006. [Google Scholar] [CrossRef]
- Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep Learning Techniques for Classification of Electroencephalogram (EEG) Motor Imagery (MI) Signals: A Review; Springer: London, UK, 2021; Volume 5, ISBN 0052102106. [Google Scholar]
- Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef] [PubMed]
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Network for EEG-Based Brain-Computer Interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef] [PubMed]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Hassanpour, A.; Moradikia, M.; Adeli, H.; Khayami, S.R.; Shamsinejadbabaki, P. A Novel End-to-End Deep Learning Scheme for Classifying Multi-Class Motor Imagery Electroencephalography Signals. Expert. Syst. 2019, 36, e12494. [Google Scholar] [CrossRef]
- Tiwari, A.; Chaturvedi, A. Automatic EEG Channel Selection for Multiclass Brain-Computer Interface Classification Using Multiobjective Improved Firefly Algorithm. Multimed. Tools Appl. 2023, 82, 5405–5433. [Google Scholar] [CrossRef]
- Jindal, K.; Upadhyay, R.; Singh, H.S. A novel EEG channel selection and classification methodology for multi-class motor imagery-based BCI system design. Int. J. Imaging Syst. Technol. 2022, 32, 1318–1337. [Google Scholar] [CrossRef]
- Mahamune, R.; Laskar, S.H. An Automatic Channel Selection Method Based on the Standard Deviation of Wavelet Coefficients for Motor Imagery Based Brain–Computer Interfacing. Int. J. Imaging Syst. Technol. 2023, 33, 714–728. [Google Scholar] [CrossRef]
- Chen, S.; Sun, Y.; Wang, H.; Pang, Z. Channel Selection Based Similarity Measurement for Motor Imagery Classification. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 542–548. [Google Scholar]
- Ingolfsson, T.M.; Hersche, M.; Wang, X.; Kobayashi, N.; Cavigelli, L.; Benini, L. EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain–Machine Interfaces. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2958–2965. [Google Scholar]
- Wu, H.; Niu, Y.; Li, F.; Li, Y.; Fu, B.; Shi, G.; Dong, M. A Parallel Multiscale Filter Bank Convolutional Neural Networks for Motor Imagery EEG Classification. Front. Neurosci. 2019, 13, 1275. [Google Scholar] [CrossRef] [PubMed]
- Musallam, Y.K.; AlFassam, N.I.; Muhammad, G.; Amin, S.U.; Alsulaiman, M.; Abdul, W.; Altaheri, H.; Bencherif, M.A.; Algabri, M. Electroencephalography-Based Motor Imagery Classification Using Temporal Convolutional Network Fusion. Biomed. Signal Process. Control 2021, 69, 102826. [Google Scholar] [CrossRef]
- Liu, X.; Xiong, S.; Wang, X.; Liang, T.; Wang, H.; Liu, X. A Compact Multi-Branch 1D Convolutional Neural Network for EEG-Based Motor Imagery Classification. Biomed. Signal Process. Control 2023, 81, 104456. [Google Scholar] [CrossRef]
- Salami, A.; Andreu-Perez, J.; Gillmeister, H. EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification. IEEE Access 2022, 10, 36672–36685. [Google Scholar] [CrossRef]
- Jia, H.; Yu, S.; Yin, S.; Liu, L.; Yi, C.; Xue, K.; Li, F.; Yao, D.; Xu, P.; Zhang, T. A Model Combining Multi Branch Spectral-Temporal CNN, Efficient Channel Attention, and LightGBM for MI-BCI Classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1311–1320. [Google Scholar] [CrossRef]
- Zhang, X.; Miao, Z.; Menon, C.; Zheng, Y.; Zhao, M.; Ming, D. Priming Cross-Session Motor Imagery Classification with a Universal Deep Domain Adaptation Framework. Neurocomputing 2023, 556, 126659. [Google Scholar] [CrossRef]
- Kim, D.-H.; Shin, D.-H.; Kam, T.-E. Bridging the BCI Illiteracy Gap: A Subject-to-Subject Semantic Style Transfer for EEG-Based Motor Imagery Classification. Front. Hum. Neurosci. 2023, 17. [Google Scholar] [CrossRef]
- Echtioui, A.; Zouch, W.; Ghorbel, M.; Mhiri, C.; Hamam, H. Classification of BCI Multiclass Motor Imagery Task Based on Artificial Neural Network. Clin. EEG Neurosci. 2023, 155005942211482. [Google Scholar] [CrossRef]
- Abdullah; Faye, I.; Islam, M.R. EEG Channel Selection Techniques in Motor Imagery Applications: A Review and New Perspectives. Bioengineering 2022, 9, 726. [Google Scholar] [CrossRef]
- Khan, M.A.; Lali, M.I.U.; Sharif, M.; Javed, K.; Aurangzeb, K.; Haider, S.I.; Altamrah, A.S.; Akram, T. An Optimized Method for Segmentation and Classification of Apple Diseases Based on Strong Correlation and Genetic Algorithm Based Feature Selection. IEEE Access 2019, 7, 46261–46277. [Google Scholar] [CrossRef]
- Liu, Z.; Chang, B.; Cheng, F. An Interactive Filter-Wrapper Multi-Objective Evolutionary Algorithm for Feature Selection. Swarm Evol. Comput. 2021, 65, 100925. [Google Scholar] [CrossRef]
- Maleki, N.; Zeinali, Y.; Niaki, S.T.A. A K-NN Method for Lung Cancer Prognosis with the Use of a Genetic Algorithm for Feature Selection. Expert. Syst. Appl. 2021, 164, 113981. [Google Scholar] [CrossRef]
- Padfield, N.; Ren, J.; Murray, P.; Zhao, H. Sparse Learning of Band Power Features with Genetic Channel Selection for Effective Classification of EEG Signals. Neurocomputing 2021, 463, 566–579. [Google Scholar] [CrossRef]
- Yang, J.; Singh, H.; Hines, E.L.; Schlaghecken, F.; Iliescu, D.D.; Leeson, M.S.; Stocks, N.G. Channel Selection and Classification of Electroencephalogram Signals: An Artificial Neural Network and Genetic Algorithm-Based Approach. Artif. Intell. Med. 2012, 55, 117–126. [Google Scholar] [CrossRef]
- Albasri, A.; Abdali-Mohammadi, F.; Fathi, A. EEG Electrode Selection for Person Identification Thru a Genetic-Algorithm Method. J. Med. Syst. 2019, 43, 297. [Google Scholar] [CrossRef]
- He, L.; Hu, Y.; Li, Y.; Li, D. Channel Selection by Rayleigh Coefficient Maximization Based Genetic Algorithm for Classifying Single-Trial Motor Imagery EEG. Neurocomputing 2013, 121, 423–433. [Google Scholar] [CrossRef]
- Brunner, C.; Leeb, R.; Müller-Putz, G.R.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008--Graz Data Set A. Inst. Knowl. Discov. (Lab. Brain-Comput. Interfaces) Graz Univ. Technol. 2008, 16, 1–6. [Google Scholar]
- Tragoudaras, A.; Antoniadis, C.; Massoud, Y. Enhancing DNN Models for EEG/ECoG BCI With a Novel Data-Driven Offline Optimization Method. IEEE Access 2023, 11, 35888–35900. [Google Scholar] [CrossRef]
- Tragoudaras, A.; Fanaras, K.; Antoniadis, C.; Massoud, Y. Data-Driven Offline Optimization of Deep CNN Models for EEG and ECoG Decoding. In Proceedings of the 2023 IEEE International Symposium on Circuits and Systems (ISCAS), Monterey, CA, USA, 21–25 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
- Li, Y.; Guo, L.; Liu, Y.; Liu, J.; Meng, F. A Temporal-Spectral-Based Squeeze-and- Excitation Feature Fusion Network for Motor Imagery EEG Decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1534–1545. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015. [Google Scholar]
- Chaudhari, S.; Mithal, V.; Polatkan, G.; Ramanath, R. An Attentive Survey of Attention Models. ACM Trans. Intell. Syst. Technol. 2021, 12, 1–32. [Google Scholar] [CrossRef]
- Madhu, G.; Gajapaka, S.M.; Bharadwaj, L. A Simple Attention Block Embedded in Standard CNN for Image Classification. In Proceedings of the 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, 9–11 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 279–284. [Google Scholar]
- Karmakar, P.; Teng, S.W.; Lu, G. Thank You for Attention: A Survey on Attention-Based Artificial Neural Networks for Automatic Speech Recognition. arXiv 2021, arXiv:2102.07259. [Google Scholar]
- Lun, X.; Yu, Z.; Wang, F.; Chen, T.; Hou, Y. A Novel Approach of CNN for Human Motor Imagery Recognition Using the Virtual Electrode Pairs. J. Intell. Fuzzy Syst. 2021, 40, 5275–5288. [Google Scholar] [CrossRef]
- Roots, K.; Muhammad, Y.; Muhammad, N. Fusion Convolutional Neural Network for Cross-Subject Eeg Motor Imagery Classification. Computers 2020, 9, 72. [Google Scholar] [CrossRef]
- Magboo, V.P.C.; Magboo, M.S.A. Machine Learning Classifiers on Breast Cancer Recurrences. Procedia Comput. Sci. 2021, 192, 2742–2752. [Google Scholar] [CrossRef]
- Long, T.; Wan, M.; Jian, W.; Dai, H.; Nie, W.; Xu, J. Application of Multi-Task Transfer Learning: The Combination of EA and Optimized Subband Regularized CSP to Classification of 8-Channel EEG Signals with Small Dataset. Front. Hum. Neurosci. 2023, 17, 1143027. [Google Scholar] [CrossRef]
- Jiang, Q.; Zhang, Y.; Zheng, K. Motor Imagery Classification via Kernel-Based Domain Adaptation on an SPD Manifold. Brain Sci. 2022, 12, 659. [Google Scholar] [CrossRef]
- Dai, G.; Zhou, J.; Huang, J.; Wang, N. HS-CNN: A CNN with Hybrid Convolution Scale for EEG Motor Imagery Classification. J. Neural Eng. 2020, 17, 016025. [Google Scholar] [CrossRef]
- Saputra, M.F.; Setiawan, N.A.; Ardiyanto, I. Deep Learning Methods for EEG Signals Classification of Motor Imagery in BCI. IJITEE (Int. J. Inf. Technol. Electr. Eng.) 2019, 3, 80. [Google Scholar] [CrossRef]
- Santamaria-Vazquez, E.; Martinez-Cagigal, V.; Vaquerizo-Villar, F.; Hornero, R. EEG-Inception: A Novel Deep Convolutional Neural Network for Assistive ERP-Based Brain-Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2773–2782. [Google Scholar] [CrossRef]
| Block | Layer | Layer Type | # of Filters | Kernel Size | Output | Option |
|---|---|---|---|---|---|---|
| First Convolutional Block | | | | | | |
| 1 | Input | Input | | | (C, T) | |
| | | Reshape | | | (C, T, 1) | |
| | C1 | Conv2D | F1 | (1, KE1) | (C, T, F1) | padding = 'same' |
| | BN1 | BatchNorm | | | (C, T, F1) | |
| | C2 | DepthwiseConv2D | F1 × D | (C, 1) | (1, T, F1 × D) | depth_multiplier = D, max norm = 1 |
| | BN2 | BatchNorm | | | (1, T, F1 × D) | |
| | A1 | Activation | | | (1, T, F1 × D) | Function = ELU |
| | P1 | AveragePooling2D | | (1, 4) | (1, T//4, F1 × D) | |
| | D1 | Dropout | | | | p = 0.5 |
| 2 | C3 | SeparableConv2D | F2 | (1, KE2) | (1, T//4, F2) | |
| | BN3 | BatchNorm | | | (1, T//4, F2) | |
| | A2 | Activation | | | (1, T//4, F2) | Function = ELU |
| | P2 | AveragePooling2D | | (1, 8) | (1, T//32, F2) | |
| | D2 | Dropout | | | (1, T//32, F2) | p = 0.5 |
| | AB | Attention Block | | | (1, T//32, F2 first) | |
| Second Convolutional Block (same as the first) | | | | | | |
| | AB | Attention Block | | | (1, T//32, F2 second) | |
| | Con1 | Concatenation | | | (1, T//32, F2 first + F2 second) | |
| | FC1 | Flatten | | | (T//32 × (F2 first + F2 second)) | |
| | Dense | Dense | | | N | Softmax |
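Read as a network, the table describes an EEGNet-style branch (temporal convolution, depthwise spatial convolution across the C electrodes, then a separable convolution) instantiated twice, each branch capped by an attention block, with the two branches concatenated before the softmax layer. The following is a minimal Keras sketch of such a fusion model; it reuses the hypothetical `cbam_block` sketch from Section 3.4.2, and the hyperparameter values (F1, D, F2, kernel widths KE1/KE2, T = 1125 samples) are placeholders, not the paper's tuned settings.

```python
from tensorflow.keras import layers, models, constraints

def conv_branch(x, F1=8, D=2, F2=16, ke1=64, ke2=16):
    """One convolutional block from the table: C1 -> BN1 -> ... -> D2 -> AB."""
    x = layers.Conv2D(F1, (1, ke1), padding="same", use_bias=False)(x)     # C1
    x = layers.BatchNormalization()(x)                                     # BN1
    x = layers.DepthwiseConv2D((x.shape[1], 1), depth_multiplier=D,        # C2
            depthwise_constraint=constraints.max_norm(1.0), use_bias=False)(x)
    x = layers.BatchNormalization()(x)                                     # BN2
    x = layers.Activation("elu")(x)                                        # A1
    x = layers.AveragePooling2D((1, 4))(x)                                 # P1
    x = layers.Dropout(0.5)(x)                                             # D1
    x = layers.SeparableConv2D(F2, (1, ke2), padding="same",
                               use_bias=False)(x)                          # C3
    x = layers.BatchNormalization()(x)                                     # BN3
    x = layers.Activation("elu")(x)                                        # A2
    x = layers.AveragePooling2D((1, 8))(x)                                 # P2
    x = layers.Dropout(0.5)(x)                                             # D2
    return cbam_block(x)  # AB; assumes the sketch from Section 3.4.2 is in scope

def build_fusion_model(C=22, T=1125, n_classes=4):
    inp = layers.Input(shape=(C, T))
    x = layers.Reshape((C, T, 1))(inp)
    # Two branches with different kernel widths (an assumption), then Con1.
    b1 = conv_branch(x, ke1=64, ke2=16)
    b2 = conv_branch(x, ke1=32, ke2=8)
    x = layers.Concatenate()([b1, b2])                                     # Con1
    x = layers.Flatten()(x)                                                # FC1
    out = layers.Dense(n_classes, activation="softmax")(x)                 # Dense
    return models.Model(inp, out)
```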
| | Accuracy | Kappa | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Subject 1 | 90.04% | 86.71% | 90% | 90% | 90% |
| Subject 2 | 75.62% | 67.51% | 75% | 76% | 75% |
| Subject 3 | 95.97% | 94.63% | 96% | 96% | 96% |
| Subject 4 | 76.32% | 68.38% | 77% | 76% | 76% |
| Subject 5 | 78.26% | 70.98% | 79% | 78% | 78% |
| Subject 6 | 69.30% | 59.05% | 70% | 69% | 69% |
| Subject 7 | 91.70% | 88.93% | 92% | 92% | 92% |
| Subject 8 | 88.93% | 85.24% | 89% | 89% | 89% |
| Subject 9 | 87.88% | 83.83% | 88% | 88% | 88% |
| AVG | 83.78% | 78.36% | | | |
| | Sub 1 | Sub 2 | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Sub 8 | Sub 9 | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| Accuracy without Preprocessing | 87.54% | 72.08% | 97.07% | 80.26% | 77.54% | 68.84% | 90.97% | 88.19% | 87.50% | 83.33% |
| Accuracy with Preprocessing | 90.04% | 75.62% | 95.97% | 76.32% | 78.26% | 69.30% | 91.70% | 88.93% | 87.88% | 83.78% |
| | Sub 1 | Sub 2 | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Sub 8 | Sub 9 | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| One-Layer | 89.32% | 74.20% | 95.24% | 69.74% | 75.72% | 60.47% | 90.97% | 85.98% | 87.50% | 81.02% |
| Two-Layer | 90.04% | 75.62% | 95.97% | 76.32% | 78.26% | 69.30% | 91.70% | 88.93% | 87.88% | 83.78% |
| Three-Layer | 90.75% | 74.56% | 95.60% | 74.56% | 77.54% | 68.37% | 90.61% | 88.56% | 88.64% | 83.24% |
| | Sub 1 | Sub 2 | Sub 3 | Sub 4 | Sub 5 | Sub 6 | Sub 7 | Sub 8 | Sub 9 | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| Class 1 and Class 2 (left vs. right hand) | 88.65% | 88.03% | 98.54% | 90.52% | 97.78% | 89.81% | 91.43% | 100.00% | 93.08% | 93.09% |
| Class 3 and Class 4 (both feet vs. tongue) | 92.14% | 90.07% | 100.00% | 86.61% | 91.49% | 82.24% | 94.16% | 90.51% | 95.52% | 91.42% |
| | ShallowNet [4] | DeepConvNet [4] | EEGNet [4] | MSFBCNN [4] | EEG-TCNet [23] | TCNet Fusion [25] | Our Model |
|---|---|---|---|---|---|---|---|
| Accuracy | 74.31% | 75.64% | 73.39% | 75.12% | 77.35% | 83.73% | 83.78% |
| Kappa | 0.66 | 0.67 | 0.65 | 0.67 | 0.70 | 0.78 | 0.78 |
| | CMO-CNN [26] | SSSTN [30] | SDDA [29] | MTFB-CNN [4] | CRGNet [6] | EEG-ITNet [27] | Echtioui et al. [31] | MBSTCNN-ECA-LightGBM [28] | Our Model |
|---|---|---|---|---|---|---|---|---|---|
| Subject 1 | 86.95% | 86.46% | 90.62% | 90.52% | 83.80% | 84.38% | 70.17% | 82% | 90.04% |
| Subject 2 | 67.47% | 58.33% | 62.84% | 68.10% | 70.60% | 62.85% | 57.75% | 61% | 75.62% |
| Subject 3 | 92.69% | 92.57% | 93.40% | 93.97% | 90.80% | 89.93% | 83.62% | 89% | 95.97% |
| Subject 4 | 77.21% | 75.35% | 84.02% | 74.14% | 75.60% | 69.10% | 38.79% | 63% | 76.32% |
| Subject 5 | 82.78% | 80.90% | 68.05% | 80.17% | 80.60% | 74.31% | 47.41% | 71% | 78.26% |
| Subject 6 | 73.73% | 67.01% | 61.80% | 72.41% | 73.00% | 57.64% | 38.96% | 64% | 69.30% |
| Subject 7 | 92.52% | 93.06% | 97.20% | 96.55% | 95.80% | 88.54% | 74.82% | 72% | 91.70% |
| Subject 8 | 90.43% | 85.76% | 90.97% | 91.38% | 89.20% | 83.68% | 69.65% | 79% | 88.93% |
| Subject 9 | 91.47% | 86.46% | 89.23% | 93.10% | 79.30% | 80.21% | 51.03% | 84% | 87.88% |
| AVG | 83.92% | 80.66% | 82.01% | 84.48% | 82.10% | 76.74% | 59.13% | 74% | 83.78% |
| | HS-CNN [54] | SW-LCR [8] | CSP+DBN [55] | CSP+LSTM [55] | KMDA [53] | VFB-RCSP [52] | MBSTCNN-ECA-LightGBM [28] | Our Model |
|---|---|---|---|---|---|---|---|---|
| Subject 1 | 90.07% | 86.81% | 48.15% | 48.15% | 79.01% | 86.11% | 88% | 88.65% |
| Subject 2 | 80.28% | 64.58% | 51.85% | 51.85% | 72.52% | 70.83% | 78% | 88.03% |
| Subject 3 | 97.08% | 95.83% | 48.15% | 51.85% | 90.25% | 94.44% | 87% | 98.54% |
| Subject 4 | 89.66% | 67.36% | 50.00% | 50.00% | 70.25% | 73.61% | 76% | 90.52% |
| Subject 5 | 97.04% | 68.06% | 50.00% | 50.00% | 68.55% | 61.11% | 93% | 97.78% |
| Subject 6 | 87.04% | 67.36% | 52.38% | 47.62% | 71.02% | 70.83% | 77% | 89.81% |
| Subject 7 | 92.14% | 80.56% | 50.00% | 50.00% | 88.29% | 63.89% | 87% | 91.43% |
| Subject 8 | 98.51% | 97.22% | 50.00% | 50.00% | 90.71% | 93.06% | 94% | 100.00% |
| Subject 9 | 92.31% | 92.36% | 52.63% | 47.37% | 90.66% | 88.19% | 78% | 93.08% |
| AVG | 91.57% | 80.02% | 50.35% | 49.65% | 80.14% | 78.01% | 84% | 93.09% |
| | CMO-CNN [26] | EEG-ITNet [27] | EEG Inception [27] | EEGNet 8,2 [27] | EEG-TCNet [27] | Our Model |
|---|---|---|---|---|---|---|
| Subject 1 | 68.75% | 71.88% | 66.32% | 68.75% | 69.10% | 72.56% |
| Subject 2 | 44.44% | 62.85% | 48.26% | 50.00% | 52.08% | 53.71% |
| Subject 3 | 78.47% | 81.94% | 73.61% | 80.21% | 81.94% | 86.37% |
| Subject 4 | 55.90% | 65.62% | 56.60% | 59.38% | 61.81% | 60.82% |
| Subject 5 | 53.12% | 63.19% | 65.62% | 64.24% | 60.42% | 59.11% |
| Subject 6 | 51.56% | 56.25% | 56.25% | 48.26% | 51.39% | 59.91% |
| Subject 7 | 67.70% | 80.21% | 73.61% | 72.57% | 76.39% | 72.63% |
| Subject 8 | 76.38% | 78.12% | 70.49% | 77.43% | 74.31% | 81.68% |
| Subject 9 | 73.78% | 64.93% | 61.11% | 55.56% | 58.68% | 73.05% |
| AVG | 63.34% | 69.44% | 63.54% | 64.04% | 65.12% | 68.87% |
| | Selected Channels | # of Channels | Accuracy |
|---|---|---|---|
| Subject 1 in Testing | [2, 3, 8, 9, 12, 15, 16, 19, 21, 22] | 10 | 76.90% |
| Subject 2 in Testing | [2, 5, 7, 10, 14, 16, 17, 18, 21, 22] | 10 | 52.98% |
| Subject 3 in Testing | [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22] | 13 | 85.82% |
| Subject 4 in Testing | [2, 6, 7, 10, 12, 15, 17, 18, 19, 20, 21, 22] | 12 | 61.22% |
| Subject 5 in Testing | [1, 2, 3, 5, 8, 9, 13, 14, 15, 16, 17, 20, 21] | 13 | 57.81% |
| Subject 6 in Testing | [3, 4, 5, 7, 8, 10, 12, 15, 17, 19] | 10 | 58.06% |
| Subject 7 in Testing | [4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 17, 19] | 12 | 68.25% |
| Subject 8 in Testing | [3, 7, 9, 10, 13, 15, 17, 18, 21, 22] | 10 | 80.56% |
| Subject 9 in Testing | [5, 9, 13, 17, 18] | 5 | 70.66% |
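Applying a selected channel set is then a simple indexing step; a minimal sketch, assuming trials stored as a NumPy array of shape (n_trials, 22, T) and the 1-based channel numbers used in the table above:

```python
import numpy as np

def select_channels(eeg, channels_1based):
    """Keep only the GA-selected electrodes (the table uses 1-based numbering)."""
    idx = np.array(channels_1based) - 1
    return eeg[:, idx, :]

# e.g., Subject 9's selected set from the table above
X = np.random.randn(288, 22, 1125)           # placeholder trials
X_sel = select_channels(X, [5, 9, 13, 17, 18])
print(X_sel.shape)                           # (288, 5, 1125)
```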
The three channel sets compared are: Set 1 = optimal channels of Subject 1 in testing [2, 3, 8, 9, 12, 15, 16, 19, 21, 22]; Set 2 = optimal channels of Subject 3 in testing [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22]; All = all-channels classification [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22].

| | Set 1 Accuracy | Set 1 Kappa | Set 2 Accuracy | Set 2 Kappa | All-Channels Accuracy | All-Channels Kappa |
|---|---|---|---|---|---|---|
| Subject 1 | 87.90% | 83.86% | 90.75% | 87.67% | 90.04% | 86.71% |
| Subject 2 | 77.39% | 69.86% | 67.84% | 57.16% | 75.62% | 67.51% |
| Subject 3 | 95.60% | 94.14% | 94.51% | 92.67% | 95.97% | 94.63% |
| Subject 4 | 75.88% | 67.77% | 77.19% | 69.50% | 76.32% | 68.38% |
| Subject 5 | 73.19% | 64.30% | 77.90% | 70.51% | 78.26% | 70.98% |
| Subject 6 | 73.02% | 64.03% | 70.23% | 60.29% | 69.30% | 59.05% |
| Subject 7 | 88.81% | 85.09% | 89.53% | 86.05% | 91.70% | 88.93% |
| Subject 8 | 87.82% | 83.76% | 89.67% | 86.22% | 88.93% | 85.24% |
| Subject 9 | 87.12% | 82.81% | 84.85% | 79.77% | 87.88% | 83.83% |
| AVG | 82.97% | | 82.50% | | 83.78% | 78.36% |
| Time Duration | 2 h 36 min | | 3 h 04 min | | 4 h 38 min | |
| | Accuracy | Kappa | Variable Optimal Channels | From Which Cross-Subject Test | Time Duration |
|---|---|---|---|---|---|
| Subject 1 | 90.75% | 87.67% | [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22] | Testing Subject 3 | 3 h 04 min |
| Subject 2 | 77.39% | 69.86% | [2, 3, 8, 9, 12, 15, 16, 19, 21, 22] | Testing Subject 1 | 2 h 36 min |
| Subject 3 | 96.34% | 95.12% | [2, 5, 7, 10, 14, 16, 17, 18, 21, 22] | Testing Subject 2 | 2 h 34 min |
| Subject 4 | 77.19% | 69.50% | [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22] | Testing Subject 3 | 3 h 04 min |
| Subject 5 | 77.90% | 70.51% | [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22] | Testing Subject 3 | 3 h 04 min |
| Subject 6 | 73.02% | 64.03% | [2, 3, 8, 9, 12, 15, 16, 19, 21, 22] | Testing Subject 1 | 2 h 36 min |
| Subject 7 | 89.53% | 86.05% | [2, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 18, 22] | Testing Subject 3 | 3 h 04 min |
| Subject 8 | 89.67% | 86.22% | [2, 6, 7, 10, 12, 15, 17, 18, 19, 20, 21, 22] | Testing Subject 4 | 2 h 51 min |
| Subject 9 | 89.02% | 89.02% | [2, 6, 7, 10, 12, 15, 17, 18, 19, 20, 21, 22] | Testing Subject 4 | 2 h 51 min |
| AVG | 84.53% | 79.78% | | | 2 h 51 min |
| | Accuracy | # of Channels | Same Number of Channels for All Subjects? | Same Channels for All Subjects? | Strategy |
|---|---|---|---|---|---|
| Mahamune et al. (2023) [21] | 75.03% | 17.22 | No | No | Within-subject |
| Tiwari et al. (2023) [19] | 83.97% | 6.44 | No | No | One vs. One |
| Hassanpour et al. (2019) [18] (SSAE model) | 71.31% | 19.44 | No | No | One vs. Rest |
| Hassanpour et al. (2019) [18] (DBN model) | 68.63% | 19.44 | No | No | One vs. Rest |
| Chen et al. (2020) [22], Algorithm 1 | 75.72% | 14 | Yes | No | Within-subject in classification; One vs. Rest in feature extraction |
| Chen et al. (2020) [22], Algorithm 2 | 77.82% | 15.22 | No | No | Within-subject in classification; One vs. Rest in feature extraction |
| Proposed Methodology (Fixed Channels) | 82.97% | 10 | Yes | Yes | Cross-subject in channel selection; within-subject in classification |
| Proposed Methodology (Variable Channels) | 84.53% | 11.78 | No | No | Cross-subject in channel selection; within-subject in classification |