MSEI-ENet: A Multi-Scale EEG-Inception Integrated Encoder Network for Motor Imagery EEG Decoding
Figure 1. Motor imagery paradigm for BCIIV 2a.
Figure 2. Overall framework of the proposed model for MI-EEG classification.
Figure 3. Overall architecture of the multi-scale EEG-inception module.
Figure 4. Schematic illustration of the Main scale.
Figure 5. Structure of ECA.
Figure 6. Schematic illustration of the encoder module: (a) its layer and (b) multi-head attention mechanism.
Figure 7. Training and validation loss trends during training epochs of the proposed model on BCIIV 2a for (a) left/right-hand binary classification and (b) multiple classification.
Figure 8. Visualization with t-SNE on the test set of the BCIIV 2a dataset: (a) raw data; (b) features extracted by the MSEI module; (c) features extracted by the encoder module.
Figure 9. Performance comparison among Variant 1, Variant 2, and Variant 3 for binary classification on BCIIV 2a.
Figure 10. Performance comparison among Variant 1, Variant 2, and Variant 3 for multiple classification on BCIIV 2a.
Figure 11. Confusion matrices corresponding to the comparison results for binary classification.
Figure 12. Confusion matrices corresponding to the comparison results for multiple classification.
Figure 13. Results of the optimization process of EEG-inception on the test set.
Abstract
1. Introduction
2. Materials and Methods
2.1. Dataset and Preprocessing
2.1.1. BCI Competition IV Dataset 2a
2.1.2. Physionet Dataset
2.1.3. Preprocessing
2.2. MSEI-ENet
2.2.1. Multi-Scale EEG-Inception Module
2.2.2. Encoder Module
2.2.3. Output Module
2.3. Evaluation Metrics
3. Experiments and Results
3.1. Implementation Details
3.2. Results
3.2.1. Experimental Results of MI-EEG Decoding
3.2.2. Ablation Experiment
3.2.3. Comparison of MSEI-ENet with Other Models
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Biasiucci, A.; Leeb, R.; Iturrate, I.; Perdikis, S.; Al-Khodairy, A.; Corbet, T.; Schnider, A.; Schmidlin, T.; Zhang, H.; Bassolino, M.; et al. Brain-actuated functional electrical stimulation elicits lasting arm motor recovery after stroke. Nat. Commun. 2018, 9, 2421.
2. Gao, S.H.; Yang, J.; Shen, T.; Jiang, W. A Parallel Feature Fusion Network Combining GRU and CNN for Motor Imagery EEG Decoding. Brain Sci. 2022, 12, 1233.
3. Padfield, N.; Zabalza, J.; Zhao, H.M.; Masero, V.; Ren, J.C. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423.
4. Hassan, M.; Wendling, F. Electroencephalography Source Connectivity: Aiming for High Resolution of Brain Networks in Time and Space. IEEE Signal Process. Mag. 2018, 35, 81–96.
5. Sengupta, P.; Lakshminarayanan, K. Cortical activation and BCI performance during brief tactile imagery: A comparative study with motor imagery. Behav. Brain Res. 2024, 459, 114760.
6. Shahlaei, F.; Bagh, N.; Zambare, M.S.; Reddy, M.R. Quantification of event related brain patterns for the motor imagery tasks using inter-trial variance technique. Eng. Appl. Artif. Intell. 2023, 126, 106863.
7. Lotte, F.; Guan, C. Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms. IEEE Trans. Biomed. Eng. 2011, 58, 355–362.
8. Thomas, K.P.; Guan, C.; Tong, L.C.; Vinod, A.P. Discriminative FilterBank selection and EEG information fusion for Brain Computer Interface. In Proceedings of the 2009 IEEE International Symposium on Circuits and Systems, Taipei, Taiwan, 24–27 May 2009; pp. 1469–1472.
9. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397.
10. Novi, Q.; Guan, C.; Dat, T.H.; Xue, P. Sub-band Common Spatial Pattern (SBCSP) for Brain-Computer Interface. In Proceedings of the 2007 3rd International IEEE/EMBS Conference on Neural Engineering, Kohala Coast, HI, USA, 2–5 May 2007; pp. 204–207.
11. Blanco-Díaz, C.F.; Guerrero-Mendez, C.D.; Delisle-Rodriguez, D.; Jaramillo-Isaza, S.; Ruiz-Olaya, A.F.; Frizera-Neto, A.; Ferreira de Souza, A.; Bastos-Filho, T. Evaluation of temporal, spatial and spectral filtering in CSP-based methods for decoding pedaling-based motor tasks using EEG signals. Biomed. Phys. Eng. Express 2024, 10, 035003.
12. Blanco-Diaz, C.F.; Antelis, J.M.; Ruiz-Olaya, A.F. Comparative analysis of spectral and temporal combinations in CSP-based methods for decoding hand motor imagery tasks. J. Neurosci. Methods 2022, 371, 109495.
13. Zhang, S.R.; Zhu, Z.B.; Zhang, B.X.; Feng, B.; Yu, T.Y.; Li, Z. The CSP-Based New Features Plus Non-Convex Log Sparse Feature Selection for Motor Imagery EEG Classification. Sensors 2020, 20, 4749.
14. Jin, J.; Xiao, R.; Daly, I.; Miao, Y.; Wang, X.; Cichocki, A. Internal Feature Selection Method of CSP Based on L1-Norm and Dempster–Shafer Theory. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4814–4825.
15. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013.
16. Zhang, C.; Kim, Y.-K.; Eskandarian, A. EEG-inception: An accurate and robust end-to-end neural network for EEG-based motor imagery classification. J. Neural Eng. 2021, 18, 046014.
17. Riyad, M.; Khalil, M.; Adib, A. MI-EEGNET: A novel convolutional neural network for motor imagery classification. J. Neurosci. Methods 2021, 353, 109037.
18. Wang, H.; Yu, H.; Wang, H. EEG_GENet: A feature-level graph embedding method for motor imagery classification based on EEG signals. Biocybern. Biomed. Eng. 2022, 42, 1023–1040.
19. Ingolfsson, T.M.; Hersche, M.; Wang, X.; Kobayashi, N.; Cavigelli, L.; Benini, L. EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain–Machine Interfaces. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 2958–2965.
20. Salami, A.; Andreu-Perez, J.; Gillmeister, H. EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification. IEEE Access 2022, 10, 36672–36685.
21. Jia, Z.; Lin, Y.; Wang, J.; Yang, K.; Liu, T.; Zhang, X. MMCNN: A Multi-branch Multi-scale Convolutional Neural Network for Motor Imagery Classification. In Proceedings of the Machine Learning and Knowledge Discovery in Databases, Bilbao, Spain, 13–17 September 2021; pp. 736–751.
22. Tang, X.; Yang, C.; Sun, X.; Zou, M.; Wang, H. Motor Imagery EEG Decoding Based on Multi-Scale Hybrid Networks and Feature Enhancement. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1208–1218.
23. Song, Y.; Zheng, Q.; Liu, B.; Gao, X. EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 710–719.
24. Ding, Y.; Li, Y.; Sun, H.; Liu, R.; Tong, C.; Liu, C.; Zhou, X.; Guan, C. EEG-Deformer: A Dense Convolutional Transformer for Brain-Computer Interfaces. IEEE J. Biomed. Health Inform. 2024, 1–10.
25. Luo, J.; Wang, Y.J.; Xia, S.X.; Lu, N.; Ren, X.Y.; Shi, Z.H.; Hei, X.H. A shallow mirror transformer for subject-independent motor imagery BCI. Comput. Biol. Med. 2023, 164, 107254.
26. Saibene, A.; Ghaemi, H.; Dagdevir, E. Deep learning in motor imagery EEG signal decoding: A Systematic Review. Neurocomputing 2024, 610, 128577.
27. Fan, C.-C.; Yang, H.; Hou, Z.-G.; Ni, Z.-L.; Chen, S.; Fang, Z. Bilinear neural network with 3-D attention for brain decoding of motor imagery movements from the human EEG. Cogn. Neurodyn. 2021, 15, 181–189.
28. Zhang, J.; Liu, D.; Chen, W.H.; Pei, Z.C.; Wang, J.H. Deep Convolutional Neural Network for EEG-Based Motor Decoding. Micromachines 2022, 13, 1485.
29. Kwon, O.Y.; Lee, M.H.; Guan, C.T.; Lee, S.W. Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3839–3852.
30. Chevallier, S.; Carrara, I.; Aristimunha, B.; Guetschel, P.; Sedlar, S.; Lopes, B.; Velut, S.; Khazem, S.; Moreau, T. The largest EEG-based BCI reproducibility study for open science: The MOABB benchmark. arXiv 2024, arXiv:2404.15319.
31. Tangermann, M.; Müller, K.-R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Mueller-Putz, G.; et al. Review of the BCI Competition IV. Front. Neurosci. 2012, 6, 55.
32. Schalk, G.; McFarland, D.J.; Hinterberger, T.; Birbaumer, N.; Wolpaw, J.R. BCI2000: A general-purpose, brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043.
33. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.E.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
34. Santamaría-Vázquez, E.; Martínez-Cagigal, V.; Vaquerizo-Villar, F.; Hornero, R. EEG-Inception: A Novel Deep Convolutional Neural Network for Assistive ERP-Based Brain-Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2773–2782.
35. Baikun, W.; Yangang, L.; Dong, M.; Hongzhi, Q.; Yizhong, W.; Rui, Z. Feature recognition of multi-class imaginary movements in brain-computer interface. In Proceedings of the 2009 IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurements Systems, Hong Kong, 11–13 May 2009; pp. 250–254.
36. Soler, A.; Giraldo, E.; Molinas, M. EEG source imaging of hand movement-related areas: An evaluation of the reconstruction and classification accuracy with optimized channels. Brain Inform. 2024, 11, 11.
37. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539.
38. Ferrero, L.; Quiles, V.; Ortiz, M.; Iáñez, E.; Gil-Agudo, Á.; Azorín, J.M. Brain-computer interface enhanced by virtual reality training for controlling a lower limb exoskeleton. iScience 2023, 26, 106675.
39. Kline, A.; Ghiroaga, C.G.; Pittman, D.; Goodyear, B.; Ronsky, J. EEG differentiates left and right imagined Lower Limb movement. Gait Posture 2021, 84, 148–154.
40. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics, Minneapolis, MN, USA, 2–7 June 2019.
41. Jiang, M.; Wang, Z.Y.; Kong, J.; Zhuang, D.F. MCFusion: Infrared and visible image fusion based multiscale receptive field and cross-modal enhanced attention mechanism. J. Electron. Imaging 2024, 33, 013039.
42. Shang, H.B.; Sun, C.; Liu, J.X.; Chen, X.F.; Yan, R.Q. Defect-aware transformer network for intelligent visual surface defect detection. Adv. Eng. Inform. 2023, 55, 101882.
43. Jiang, J.L.; Xu, H.X.; Xu, X.L.; Cui, Y.; Wu, J.T. Transformer-Based Fused Attention Combined with CNNs for Image Classification. Neural Process. Lett. 2023, 55, 11905–11919.
44. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
45. Van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
| Experiments | Dataset | Split | L | R | F/H | T/F | Total |
|---|---|---|---|---|---|---|---|
| global-CV | BCIIV2a | Train and Val | 8502 | 8502 | 8502 | 8502 | 34,008 |
| | | Test | 259 | 259 | 259 | 259 | 1036 |
| | Physionet | Train and Val | 2757 | 2757 | 2757 | 2757 | 11,028 |
| | | Test | 84 | 84 | 84 | 84 | 336 |
| LOSO-CV | BCIIV2a | Train and Val | 9445 | 9445 | 9445 | 9445 | 37,780 |
| | | Test | 144 | 144 | 144 | 144 | 576 |
| Branch | Layer | Filters | Depth | Size | Stride | Padding | Dropout Rate | Input | Output |
|---|---|---|---|---|---|---|---|---|---|
| Branch1 | Conv2D block | 8 | - | (500, 1) | 1 | same | 0.25 | X | |
| | Dw2D block | - | 2 | (1, 3) | 1 | - | 0.25 | | |
| Branch2 | Conv2D block | 8 | - | (250, 1) | 1 | same | 0.25 | X | |
| | Dw2D block | - | 2 | (1, 3) | 1 | - | 0.25 | | |
| Branch3 | Conv2D block | 8 | - | (125, 1) | 1 | same | 0.25 | X | |
| | Dw2D block | - | 2 | (1, 3) | 1 | - | 0.25 | | |
| - | Concatenate | - | - | - | - | - | - | Branch 1–3 outputs | - |
| - | AvgPool2D | - | - | (4, 1) | - | - | - | - | x |
| Branch1 | Conv2D block | 8 | - | (500/4, 1) | 1 | same | 0.25 | x | |
| Branch2 | Conv2D block | 8 | - | (250/4, 1) | 1 | same | 0.25 | x | |
| Branch3 | Conv2D block | 8 | - | (125/4, 1) | 1 | same | 0.25 | x | |
| - | Concatenate | - | - | - | - | - | - | Branch 1–3 outputs | - |
| - | AvgPool2D | - | - | (2, 1) | - | - | - | - | e1 |
| - | ECA block | - | - | - | - | - | - | e1 | E1 |
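The ECA block that closes the scale (cf. the ECA-Net reference) squeezes the feature map into one descriptor per channel, passes it through a small 1-D convolution, and uses a sigmoid of the result to rescale each channel. A minimal NumPy sketch of that data flow — the kernel size `k` and the uniform convolution weights are illustrative placeholders; in the actual module the weights are learned:

```python
import numpy as np

def eca_block(x, k=3):
    """Efficient Channel Attention sketch for a (C, H, W) feature map.

    The uniform weights below stand in for the learned 1-D conv of ECA-Net;
    only the structure (pool -> 1-D conv -> sigmoid gate) is faithful.
    """
    c = x.shape[0]
    y = x.mean(axis=(1, 2))                  # global average pooling -> (C,)
    pad = k // 2
    yp = np.pad(y, pad)                      # zero-pad for a 'same' 1-D conv
    w = np.full(k, 1.0 / k)                  # placeholder conv weights
    z = np.array([yp[i:i + k] @ w for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-z))          # sigmoid channel gate
    return x * gate[:, None, None]           # rescale each channel

fmap = np.ones((4, 2, 2))                    # toy (C, H, W) input
out = eca_block(fmap)
print(out.shape)                             # (4, 2, 2): shape is preserved
```

Because the gate is computed per channel from a single pooled value, the block adds only O(C·k) operations, which is why it fits cheaply at the end of each scale.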
| Branch | Layer | Filters | Size | Input | Output |
|---|---|---|---|---|---|
| Branch1 | Conv2D block | 4 | (125, 1) | X | |
| | Dw2D block | - | (1, 3) | | |
| Branch2 | Conv2D block | 4 | (64, 1) | X | |
| | Dw2D block | - | (1, 3) | | |
| Branch3 | Conv2D block | 4 | (32, 1) | X | |
| | Dw2D block | - | (1, 3) | | |
| - | Concatenate | - | - | b1, b2, b3 | - |
| - | AvgPool2D | - | (8, 1) | - | e2 |
| - | ECA block | - | - | e2 | E2 |
| Layer | Input Size | Output Size | Parameters |
|---|---|---|---|
| Main scale | 1000 × 3 × 1 | 125 × 1 × 24 | 90,764 |
| Aux(1) scale | 1000 × 3 × 1 | 125 × 1 × 24 | 1116 |
| Aux(2) scale | 1000 × 3 × 1 | 125 × 1 × 24 | 456 |
| MSEI module | - | 125 × 1 × 72 | 92,336 |
| Encoder module | 72 × 125 | 72 × 125 | 345,750 |
| Output module | 125 × 1 × 72 | 4 | 25,780 |
| Total | | | 463,866 |
| True Label | Predicted L | Predicted R | PR (%) | RE (%) | F1 (%) |
|---|---|---|---|---|---|
| L | 254 | 5 | 98.45 | 98.07 | 98.26 |
| R | 4 | 255 | 98.08 | 98.46 | 98.27 |
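The per-class metrics in the table above follow directly from the confusion-matrix counts: precision is computed column-wise (over predictions), recall row-wise (over true labels). A short NumPy check reproduces the reported values:

```python
import numpy as np

# Binary (left/right-hand) confusion matrix from the table:
# rows = true labels (L, R), columns = predicted labels (L, R).
cm = np.array([[254, 5],
               [4, 255]])

for i, cls in enumerate(["L", "R"]):
    tp = cm[i, i]
    pr = 100 * tp / cm[:, i].sum()   # precision: true positives / column sum
    re = 100 * tp / cm[i, :].sum()   # recall: true positives / row sum
    f1 = 2 * pr * re / (pr + re)     # harmonic mean of precision and recall
    print(f"{cls}: PR={pr:.2f} RE={re:.2f} F1={f1:.2f}")
# L: PR=98.45 RE=98.07 F1=98.26
# R: PR=98.08 RE=98.46 F1=98.27
```

The same column-wise/row-wise computation extends unchanged to the four-class matrices below.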
| True Label | L | R | F | T | PR (%) | RE (%) | F1 (%) |
|---|---|---|---|---|---|---|---|
| L | 242 | 8 | 6 | 3 | 94.53 | 93.44 | 93.98 |
| R | 4 | 247 | 4 | 4 | 92.86 | 95.37 | 94.10 |
| F | 4 | 4 | 248 | 3 | 93.94 | 95.75 | 94.84 |
| T | 6 | 7 | 6 | 240 | 96.00 | 92.66 | 94.30 |
| True Label | L | R | H | F | PR (%) | RE (%) | F1 (%) |
|---|---|---|---|---|---|---|---|
| L | 75 | 7 | 2 | 0 | 84.27 | 89.29 | 86.71 |
| R | 8 | 74 | 2 | 0 | 84.09 | 88.10 | 86.05 |
| H | 6 | 6 | 72 | 0 | 94.74 | 85.71 | 89.99 |
| F | 0 | 1 | 0 | 83 | 100 | 98.81 | 99.40 |
| Variant | Model |
|---|---|
| Variant 1 | without the Aux(1) and Aux(2) scales of MSEI and the encoder module |
| Variant 2 | without the encoder module |
| Variant 3 | with the convolution kernels of the Main and Aux(1) scales exchanged |
| Variant 4 | with the convolution kernels of the Main and Aux(2) scales exchanged |
| Method | BCIIV 2a ACC (%) | BCIIV 2a Kappa | Physionet ACC (%) | Physionet Kappa |
|---|---|---|---|---|
| EEGNet [15] | 57.66 | 0.44 | 55.36 | 0.40 |
| EEG-inception [34] | 61.67 | 0.49 | 60.83 | 0.48 |
| MMCNN [21] | 80.10 | 0.73 | 77.05 | 0.69 |
| SMTransformer [25] | 85.02 | 0.81 | 80.92 | 0.74 |
| Conformer [23] | 91.96 | 0.89 | 82.85 | 0.77 |
| Deformer [24] | 93.06 | 0.91 | 87.62 | 0.83 |
| MSEI-ENet | 94.30 | 0.92 | 90.48 | 0.87 |
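A quick consistency check on the ACC/Kappa pairs: with four roughly balanced classes, chance agreement is about p_e = 0.25, so Cohen's kappa reduces approximately to (ACC − 0.25)/0.75. This is a simplifying assumption — exact kappa uses the marginals of each model's confusion matrix, so second-decimal deviations occur — but it reproduces most reported values:

```python
def kappa_balanced(acc, n_classes=4):
    """Cohen's kappa under the assumption of uniformly distributed true and
    predicted labels, i.e. chance agreement p_e = 1 / n_classes.
    Exact kappa uses the actual confusion-matrix marginals instead."""
    p_e = 1.0 / n_classes
    return (acc - p_e) / (1.0 - p_e)

print(round(kappa_balanced(0.9430), 2))  # 0.92, matching MSEI-ENet on BCIIV 2a
print(round(kappa_balanced(0.5766), 2))  # 0.44, matching EEGNet on BCIIV 2a
```

Because p_e is fixed at 0.25 here, kappa is a monotone rescaling of accuracy; the two metrics can only diverge when class predictions are imbalanced.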
| Method | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| EEG-inception [34] | 60.60 | 33.35 | 67.72 | 44.28 | 50.36 | 49.49 | 69.98 | 55.05 | 62.51 | 54.82 |
| MMCNN [21] | 68.08 | 39.78 | 69.12 | 50.89 | 51.06 | 48.81 | 54.36 | 59.05 | 68.94 | 56.68 |
| Conformer [23] | 58.01 | 45.34 | 68.08 | 53.15 | 60.27 | 57.67 | 66.17 | 60.44 | 64.61 | 59.30 |
| Deformer [24] | 61.66 | 46.38 | 71.38 | 54.72 | 57.32 | 52.63 | 66.87 | 60.62 | 72.25 | 60.43 |
| MSEI-ENet | 68.84 | 51.22 | 72.05 | 48.44 | 54.51 | 61.81 | 67.36 | 62.85 | 71.79 | 62.10 |
Wu, P.; Fei, K.; Chen, B.; Pan, L. MSEI-ENet: A Multi-Scale EEG-Inception Integrated Encoder Network for Motor Imagery EEG Decoding. Brain Sci. 2025, 15, 129. https://doi.org/10.3390/brainsci15020129