Posthoc Interpretability of Neural Responses by Grouping Subject Motor Imagery Skills Using CNN-Based Connectivity
<p>Figure 1. EEGNet main sketch. First column: input EEG. Second column: temporal convolution (filter bank). Third and fourth columns: spatial filtering convolution. Fifth and sixth columns: temporal summary. Last column: output label prediction.</p>
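The stage sequence in the sketch (temporal filter bank, depthwise spatial filtering, temporal summary, dense output) can be illustrated with a minimal numpy forward pass. All shapes and counts below (`C`, `T`, `F1`, `D`, kernel length, pooling size) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

C, T = 64, 256              # EEG channels and time samples (illustrative)
F1, D, N_CLASSES = 8, 2, 2  # filter counts are assumptions, not the paper's values

x = rng.standard_normal((C, T))  # one EEG trial

# 1) temporal convolution (filter bank): F1 kernels applied along time
k_t = rng.standard_normal((F1, 16))
temporal = np.stack([[np.convolve(x[c], k, mode="same") for c in range(C)]
                     for k in k_t])                        # (F1, C, T)

# 2) depthwise spatial filtering: D spatial filters per temporal filter,
#    collapsing the channel axis
w_s = rng.standard_normal((F1, D, C))
spatial = np.einsum("fdc,fct->fdt", w_s, temporal).reshape(F1 * D, T)

# 3) temporal summary: average pooling along the time axis
P = 8
pooled = spatial.reshape(F1 * D, T // P, P).mean(axis=-1)  # (F1*D, T/P)

# 4) flatten + dense layer -> one score per MI class
w_out = rng.standard_normal((N_CLASSES, pooled.size))
scores = w_out @ pooled.ravel()
```

In the actual EEGNet, each convolution is followed by batch normalization, ELU activations, and dropout (see the layer table below); this sketch only traces the tensor shapes through the columns of the figure.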
<p>Figure 2. Guideline of the proposed framework for enhanced post-hoc interpretability of MI neural responses, using connectivity measures extracted from EEGNet CAMs and clustering subjects according to their EEG MI performance.</p>
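The CAM-extraction step of the framework relies on Score-CAM, which weights each activation map by the class score obtained when the input is masked by that (normalized) map. A hedged numpy sketch of the idea follows; the toy linear `model` and the synthetic activation maps stand in for the trained EEGNet and its real convolutional activations (which would be upsampled to the input size first):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def score_cam(model, x, acts, target):
    """Score-CAM: weight each activation map by the target-class
    probability obtained when the input is masked by that map."""
    weights = []
    for A in acts:
        span = A.max() - A.min()
        mask = (A - A.min()) / span if span > 0 else np.zeros_like(A)
        weights.append(softmax(model(x * mask))[target])
    # ReLU over the weighted sum keeps only positively contributing regions
    return np.maximum(sum(w * A for w, A in zip(weights, acts)), 0.0)

# toy stand-in for the trained network: a linear scorer over 2 MI classes
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 32))            # "channels x time" input
W = rng.standard_normal((2, x.size))
model = lambda inp: W @ inp.ravel()
acts = [rng.standard_normal(x.shape) for _ in range(4)]
cam = score_cam(model, x, acts, target=0)   # relevance map, same shape as x
```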
<p>Figure 3. Timeline of the evaluated motor imagery paradigm in the GigaScience database.</p>
<p>Figure 4. Topographic map for EEG representation; colors highlight the main brain regions (Frontal, Central right, Posterior right, Posterior, Posterior left, Central left).</p>
<p>Figure 5. Subject-dependent MI discrimination results. (<b>Top</b>): Classification accuracy achieved for bi-class MI tasks (left hand and right hand). Note: the red line at the 70% level marks the threshold frequently used to identify poor MI coordination skills; subjects below it are considered worse-performing. (<b>Bottom</b>): <span class="html-italic">p</span>-value per subject from the Nemenyi post-hoc test (square markers indicate <span class="html-italic">p</span>-value < 0.05).</p>
<p>Figure 6. Connectivity maps estimated for the best-performing (<b>left side</b>) and poorest-performing (<b>right side</b>) subjects. Stronger correlations between nodes are represented by darker edges linking two EEG channels. The right plot colors the brain regions (i.e., Temporal, Frontal, Occipital, Parietal, and Central) differently to ease spatial interpretation.</p>
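The edge weights behind such maps come from Gaussian Functional Connectivity, i.e., Gaussian-kernel similarities between channel time series. A minimal numpy sketch under stated assumptions (the median bandwidth heuristic is our choice for the example, not necessarily the paper's):

```python
import numpy as np

def gaussian_fc(eeg, sigma=None):
    """Pairwise Gaussian-kernel similarity between channel time series.

    eeg: (channels, time) array. Returns a (channels, channels) matrix
    with values in (0, 1] and ones on the diagonal."""
    # squared Euclidean distances between every pair of channel signals
    sq = ((eeg[:, None, :] - eeg[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        # median heuristic for the kernel bandwidth (an assumption)
        sigma = np.sqrt(np.median(sq[sq > 0]) / 2.0)
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
eeg = rng.standard_normal((22, 128))  # synthetic trial: 22 channels
W = gaussian_fc(eeg)                  # symmetric connectivity matrix
```

In the proposed framework, the channel signals would first be masked by the normalized Score-CAM relevance, so the resulting matrix reflects only class-discriminative activity.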
<p>Figure 7. Connectivity matrices after the two-sample Kolmogorov-Smirnov test obtained for the best-performing (<b>left side</b>) and poorest-performing (<b>right side</b>) subjects. The pictured GFC matrices include both MI tasks and are computed at the 90th percentile of normalized relevance weights.</p>
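The pruning step compares the edge-weight distributions of the two MI tasks with a two-sample Kolmogorov-Smirnov test. Its statistic, the largest gap between the two empirical CDFs, can be computed directly:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])                 # evaluate at all sample points
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.abs(cdf_a - cdf_b).max())
```

Applied per edge to the relevance-weighted connectivity values of the left-hand and right-hand tasks, a large statistic flags edges whose distributions differ between tasks; in practice a library routine such as `scipy.stats.ks_2samp` would also supply the p-value.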
<p>Figure 8. K-means clustering visualization of the EEGNet classifier performance metrics and the Kolmogorov test values extracted from connectivity information. (<b>Left side</b>): scatter plots of individuals (data points) for the three considered groups of motor skills; point size represents the subject’s accuracy. (<b>Center</b>): Silhouette coefficient clustering metric. (<b>Right side</b>): classifier performance averaged across all subjects of each partition.</p>
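The grouping step, K-means partitions validated with the Silhouette coefficient, can be sketched in plain numpy. This is a minimal illustration on synthetic, well-separated "subject feature" blobs; a library implementation with multiple restarts (e.g., scikit-learn) would normally be used instead:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm (a sketch, single random initialization)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # keep a center in place if its cluster momentarily empties
        centers = np.stack([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient over all points."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    scores = []
    for i, li in enumerate(labels):
        own = (labels == li) & (np.arange(len(X)) != i)
        a = D[i, own].mean()                       # mean intra-cluster distance
        b = min(D[i, labels == lj].mean()          # nearest other cluster
                for lj in np.unique(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
labels = kmeans(X, k=2)
s = silhouette(X, labels)   # close to 1 for well-separated clusters
```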
<p>Figure 9. Clustering of motor skills obtained by the CNN-based approach with connectivity features for each MI task. (<b>Left side</b>): Subject partitions colored per subject group: GI (green squares), GII (yellow), and GIII (red). (<b>Right side</b>): Detailed analysis of the worst-performing GIII, splitting its participants into subgroups with <span class="html-italic">k</span> = 6.</p>
<p>Figure 10. Graphical representation of the brain Gaussian Functional Connectivity estimated across each cluster of individual motor skills. Connectograms of each MI task are computed without the Kolmogorov-Smirnov test (<b>top and middle rows</b>) and with the hypothesis test (<b>bottom row</b>). The GFC set is calculated at the 90th percentile of normalized relevance weights.</p>
Abstract
1. Introduction
2. Materials and Methods
2.1. EEGNet-Based Classification of Motor Imagery Tasks
2.2. Score-Weighted Visual Class Activation Maps from EEGNet
2.3. Pruned Gaussian Functional Connectivity from Score-CAM
2.4. Post-Hoc Grouping of Subject Motor Imagery Skills
2.5. GigaScience Database
3. Experimental Set-Up
3.1. Parameter Setting of Trained CNN Framework
3.2. Quality Assessment
4. Results and Discussion
4.1. Classification Results of CAM-Based EEGNet Masks
4.2. Clustering of Motor Imagery Neural Responses Using Individual GFC Measures
4.3. Enhanced Interpretability from GFC Patterns According to Clusterized Motor Skills
5. Concluding Remarks
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Layer | Conv2D | DepthwiseConv2D | SeparableConv2D | Flatten | Dense |
|---|---|---|---|---|---|
| # filters | | | | | |
| Size | ( ) | ( ) | ( ) | | |
| # params | | | | | |
| Output | ( ) | ( ) | ( ) | ( ) | N |
| Options | Activation = Linear; Mode = same; BatchNorm = True | Activation = Linear; Mode = same; Depth = D; max_norm = 1; BatchNorm = True; Activation = ELU; AvgPool2D = ( ); Dropout* | Activation = Linear; Mode = same; BatchNorm = True; Activation = ELU; AvgPool2D = ( ); Dropout* | | |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Collazos-Huertas, D.F.; Álvarez-Meza, A.M.; Cárdenas-Peña, D.A.; Castaño-Duque, G.A.; Castellanos-Domínguez, C.G. Posthoc Interpretability of Neural Responses by Grouping Subject Motor Imagery Skills Using CNN-Based Connectivity. Sensors 2023, 23, 2750. https://doi.org/10.3390/s23052750