Search Results (28)

Search Parameters:
Keywords = BCI Game

11 pages, 3376 KiB  
Article
Utilizing Dry Electrode Electroencephalography and AI Robotics for Cognitive Stress Monitoring in Video Gaming
by Aseel A. Alrasheedi, Alyah Z. Alrabeah, Fatemah J. Almuhareb, Noureyah M. Y. Alras, Shaymaa N. Alduaij, Abdullah S. Karar, Sherif Said, Karim Youssef and Samer Al Kork
Appl. Syst. Innov. 2024, 7(4), 68; https://doi.org/10.3390/asi7040068 - 31 Jul 2024
Cited by 1 | Viewed by 1501
Abstract
This research explores the integration of the Dry Sensor Interface-24 (DSI-24) EEG headset with a ChatGPT-enabled Furhat robot to monitor cognitive stress in video gaming environments. The DSI-24, a cutting-edge wireless EEG device, is adept at rapidly capturing brainwave activity, making it particularly suitable for dynamic settings such as gaming. Our study leverages this technology to detect cognitive stress indicators in players by analyzing EEG data. The collected data are then interfaced with a ChatGPT-powered Furhat robot, which performs dual roles: guiding players through the data collection process and prompting breaks when elevated stress levels are detected. The core of our methodology is the real-time processing of EEG signals to determine players’ focus levels, using a mental focusing feature extracted from the EEG data. The work presented here discusses how technology and data analysis methods, and their combined effects, can improve player satisfaction and enhance gaming experiences. It also explores the obstacles and future possibilities of using EEG for monitoring video gaming environments.
Figure 1. Number of Scopus-indexed conference and journal papers on ‘Video Gaming’ compared to those specifically on ‘EEG and Video Gaming’ over the academic years 2010–2023.
Figure 2. Experimental setup for EEG-based cognitive stress monitoring in video gaming environments.
Figure 3. The 10-20 electrode placement system with the DSI-24 electrodes highlighted.
Figure 4. A sensor electrode from the DSI-24 EEG device.
Figure 5. Experimental setup: (a) player interaction in the FIFA gaming environment; (b) the ChatGPT-enabled Furhat robot with expressive capabilities.
Figure 6. Comparative brain heatmaps: (a) normal brain activity; (b) brain activity under stress with an active frontal lobe. The heatmaps were calculated from the time-domain root mean square (RMS) value of the EEG signal over a 1 s window.
Figure 7. The baseline model and the Local Binary Pattern Histogram (LBPH) approach, exemplified using a sample brain topology measurement.
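The RMS computation described in the Figure 6 caption is straightforward to reproduce. The sketch below (the function name and pure-Python implementation are ours, not taken from the paper) computes the per-window RMS of one EEG channel:

```python
import math

def rms_windows(signal, fs, window_s=1.0):
    """Split a single-channel EEG signal into non-overlapping windows of
    window_s seconds and return the time-domain RMS value of each window."""
    n = int(fs * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return [math.sqrt(sum(x * x for x in w) / n) for w in windows]

# A constant-amplitude 2 s signal sampled at fs = 4 Hz has an RMS equal
# to that amplitude in every 1 s window.
print(rms_windows([3.0] * 8, fs=4))  # [3.0, 3.0]
```

In the paper's setting this would be applied per electrode, and the per-channel values interpolated over the scalp to render the heatmap.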
25 pages, 951 KiB  
Systematic Review
Social Robots and Brain–Computer Interface Video Games for Dealing with Attention Deficit Hyperactivity Disorder: A Systematic Review
by José-Antonio Cervantes, Sonia López, Salvador Cervantes, Aribei Hernández and Heiler Duarte
Brain Sci. 2023, 13(8), 1172; https://doi.org/10.3390/brainsci13081172 - 7 Aug 2023
Cited by 6 | Viewed by 4840
Abstract
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity that affects a large number of young people in the world. The current treatments for children living with ADHD combine different approaches, such as pharmacological, behavioral, cognitive, and psychological treatment. However, the computer science research community has been working on developing non-pharmacological treatments based on novel technologies for dealing with ADHD. For instance, social robots are physically embodied agents with some autonomy and social interaction capabilities. Nowadays, these social robots are used in therapy sessions as a mediator between therapists and children living with ADHD. Another novel technology for dealing with ADHD is serious video games based on a brain–computer interface (BCI). These BCI video games can offer cognitive and neurofeedback training to children living with ADHD. This paper presents a systematic review of the current state of the art of these two technologies. As a result of this review, we identified the maturation level of systems based on these technologies and how they have been evaluated. Additionally, we have highlighted ethical and technological challenges that must be faced to improve these recently introduced technologies in healthcare.
(This article belongs to the Special Issue Advances in ADHD)
Figure 1. PRISMA flow diagram of robots in ADHD care.
Figure 2. PRISMA flow diagram of BCI video games in ADHD care.
17 pages, 926 KiB  
Article
Mind the Move: Developing a Brain-Computer Interface Game with Left-Right Motor Imagery
by Georgios Prapas, Kosmas Glavas, Katerina D. Tzimourta, Alexandros T. Tzallas and Markos G. Tsipouras
Information 2023, 14(7), 354; https://doi.org/10.3390/info14070354 - 21 Jun 2023
Cited by 5 | Viewed by 3515
Abstract
Brain-computer interfaces (BCIs) are becoming an increasingly popular technology, used in a variety of fields such as medicine, gaming, and lifestyle. This paper describes a 3D non-invasive BCI game that uses a Muse 2 EEG headband to acquire electroencephalogram (EEG) data and the OpenViBE platform to process the signals and classify them into three different mental states: left and right motor imagery and eye blink. The game is developed to assess user adjustment and improvement in a BCI environment after training. The classification algorithm used is a Multi-Layer Perceptron (MLP), with 96.94% accuracy. A total of 33 subjects participated in the experiment and successfully controlled an avatar using mental commands to collect coins. The online metrics employed for this BCI system are the average game score, the average number of clusters, and the average user improvement.
Figure 1. The Muse 2 headband (right image) and a screenshot from one recording showing the four EEG channels of the Muse 2 headband and their waves (left image).
Figure 2. Flowchart illustrating the two-step process of the proposed system. Initially, an offline processing phase is employed to train a classifier using EEG data. Following this, an online processing phase takes mental commands from the user, which are subsequently translated into in-game movement through the use of the trained classifier.
Figure 3. Offline processing scenario to train the classifier. The first step of the scenario involves importing EEG recordings using a CSV file reader. Three separate boxes are used to handle the three different EEG recordings (Left, Right and Blink) of the BCI system. The signals are then filtered to remove frequencies outside the range of 8–40 Hz. The filtered signals are epoched in time windows of 3 s, and EEG waves are calculated (Alpha, Beta 1, Beta 2, Gamma 1, Gamma 2). In the next step, the energy of the signals is calculated, and a feature vector of all frequency-band energies is formulated on a logarithmic scale. Finally, all feature vectors from all the different EEG recordings are used for the training of the classifier.
Figure 4. Online scenario for the proposed BCI system. The acquisition client connects to the LSL stream from BlueMuse on a specific port, and real-time data processing starts. With the channel selector, only the four EEG channels are included (TP9, TP10, AF7, AF8), and then the same process as in the offline scenario is applied. The signals are filtered, EEG waves and energy are calculated, and these features are fed into the classifier to classify the mental commands. Finally, the LSL stream is employed to transmit the classifier’s results to the game; this was accomplished through the implementation of the LSL stream box, which facilitated the communication of data between OpenViBE and the game.
Figure 5. Screenshot from the gameplay. The avatar moves on the platform depending on the user’s mental commands.
Figure 6. Histogram presenting the different groups of users depending on their performance while testing the BCI game.
Figure 7. The average user improvement for MI commands before and after playing the game.
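The feature extraction described in the Figure 3 caption (3 s epochs, per-band energies on a logarithmic scale) can be sketched as follows. The band edges and the FFT-based energy estimate are our assumptions, since the listing does not give OpenViBE's exact settings:

```python
import numpy as np

# Illustrative band edges in Hz; the paper names the bands but the
# listing does not specify their exact boundaries.
BANDS = {"alpha": (8, 12), "beta1": (12, 16), "beta2": (16, 20),
         "gamma1": (20, 30), "gamma2": (30, 40)}

def log_band_energies(epoch, fs):
    """Feature vector of log-scaled spectral energies per band for one
    3 s epoch of a single EEG channel."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(power[mask].sum() + 1e-12))
    return np.array(feats)

fs = 256
t = np.arange(0, 3, 1.0 / fs)        # one 3 s epoch
epoch = np.sin(2 * np.pi * 10 * t)   # pure 10 Hz (alpha-range) tone
feats = log_band_energies(epoch, fs)
print(feats.argmax())  # 0: the alpha band dominates
```

In the actual pipeline one such vector per epoch, concatenated across channels, would feed the MLP classifier.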
22 pages, 1035 KiB  
Review
Machine-Learning Methods for Speech and Handwriting Detection Using Neural Signals: A Review
by Ovishake Sen, Anna M. Sheehan, Pranay R. Raman, Kabir S. Khara, Adam Khalifa and Baibhab Chatterjee
Sensors 2023, 23(12), 5575; https://doi.org/10.3390/s23125575 - 14 Jun 2023
Cited by 3 | Viewed by 4512
Abstract
Brain–Computer Interfaces (BCIs) have become increasingly popular in recent years due to their potential applications in diverse fields, ranging from the medical sector (people with motor and/or communication disabilities) to cognitive training, gaming, and Augmented Reality/Virtual Reality (AR/VR), among other areas. BCIs that can decode and recognize neural signals involved in speech and handwriting have the potential to greatly assist individuals with severe motor impairments in their communication and interaction needs. Innovative and cutting-edge advancements in this field have the potential to develop a highly accessible and interactive communication platform for these people. The purpose of this review paper is to analyze the existing research on handwriting and speech recognition from neural signals, so that new researchers interested in this field can gain a thorough knowledge of this research area. The current research on neural signal-based recognition of handwriting and speech has been categorized into two main types: invasive and non-invasive studies. We have examined the latest papers on converting speech-activity-based neural signals and handwriting-activity-based neural signals into text data. The methods of extracting data from the brain are also discussed in this review. Additionally, this review includes a brief summary of the datasets, preprocessing techniques, and methods used in these studies, which were published between 2014 and 2022. This review aims to provide a comprehensive summary of the methodologies used in the current literature on neural signal-based recognition of handwriting and speech. In essence, this article is intended to serve as a valuable resource for future researchers who wish to investigate neural signal-based machine-learning methods in their work.
(This article belongs to the Section Sensors Development)
Figure 1. Key regions of the brain that are fundamentally responsible for speech production and initiating motor movements for generating handwriting. Wernicke’s area is responsible for speech production. The parietal lobe, visual cortex, and cingulate cortex are responsible for handwriting generation. The primary motor cortex and Broca’s area are responsible for both speech production and handwriting generation.
Figure 2. Existing technologies (EEG sensors, ECoG arrays, and microelectrode arrays) used to acquire neural signals, with the characteristics of the acquired signals, including amplitude and frequency bands [47]. The amplitudes of neural signals acquired from ECoG arrays and the frequencies of neural signals acquired from microelectrode arrays are typically higher than those of the other existing technologies.
Figure 3. Existing methods of collecting neural signals from the brain. (a) Data processing flow diagram, advantages, and disadvantages of the invasive process of collecting neural signals. Though the invasive process requires surgery and high cost, the neural signals it yields provide accurate results and a higher SNR. (b) Data processing flow diagram, advantages, and disadvantages of the non-invasive process. The non-invasive process requires no surgery and has low cost, but the acquired neural signals provide less accurate results and a lower SNR.
Figure 4. Summary of the existing articles on speech and handwritten character recognition with invasive and non-invasive neural signal acquisition, including methods, datasets, electrode specifications, and publication details of the individual articles [2,35,67,68,69,70,72,73,74,75,77,78,79,80,81].
Figure 5. Diagram of data processing and machine learning methods used for decoding neural signals (each block corresponds to one step of the whole process).
Figure 6. In this review, the machine learning methods used in the existing research are divided into classical classification methods and deep learning methods to illustrate the existing research more clearly. SVM, LDA, RF, HMM, and GMM fall under classical classification methods; CNN, RNN, GRU, and LSTM fall under deep learning methods.
Figure 7. Pie charts of the deep learning and classical methods used in existing research for speech and handwriting detection from neural signals. (a) Pie chart of deep learning methods: GRU dominates over CNN, RNN, and LSTM in this research field. (b) Pie chart of classical classification methods: HMM dominates among the classical classification methods.
Figure 8. Chronological analysis of techniques used in neural data processing from 2014 to 2022. Classical classification methods dominated in the early stages of this research area, but deep learning methods dominate nowadays.
9 pages, 1037 KiB  
Article
Peer Verbal Encouragement Enhances Offensive Performance Indicators in Handball Small-Sided Games
by Faten Sahli, Hajer Sahli, Omar Trabelsi, Nidhal Jebabli, Makram Zghibi and Monoem Haddad
Children 2023, 10(4), 680; https://doi.org/10.3390/children10040680 - 3 Apr 2023
Cited by 3 | Viewed by 1764
Abstract
Objective: This study aimed at assessing the effects of two verbal encouragement modalities on the different offensive and defensive performance indicators in handball small-sided games practiced in physical education settings. Methods: A total of 14 untrained secondary school male students, aged 17 to 18, took part in a three-session practical intervention. Students were divided into two teams of seven players (four field players, a goalkeeper, and two substitutes). During each experimental session, each team played one 8 min period under teacher verbal encouragement (TeacherEN) and another under peer verbal encouragement (PeerEN). All sessions were videotaped for later analysis using a specific grid focusing on the balls played, balls won, balls lost, shots on goal, goals scored, as well as the ball conservation index (BCI), and the defensive efficiency index (DEI). Results: The findings showed no significant differences in favor of TeacherEN in all the performance indicators that were measured, whereas significant differences in favor of PeerEN were observed in balls played and shots on goal. Conclusions: When implemented in handball small-sided games, peer verbal encouragement can produce greater positive effects than teacher verbal encouragement in terms of offensive performance.
(This article belongs to the Special Issue Physical Activity and Physical Fitness among Children and Adolescent)
Figure 1. (a) The effects of two different modalities of verbal encouragement on the ball conservation index (BCI); (b) the effects of two different modalities of verbal encouragement on the defensive efficiency index (DEI).
20 pages, 5141 KiB  
Article
Real-Time Navigation in Google Street View® Using a Motor Imagery-Based BCI
by Liuyin Yang and Marc M. Van Hulle
Sensors 2023, 23(3), 1704; https://doi.org/10.3390/s23031704 - 3 Feb 2023
Cited by 7 | Viewed by 3341
Abstract
Navigation in virtual worlds is ubiquitous in games and other virtual reality (VR) applications and mainly relies on external controllers. As brain–computer interfaces (BCIs) rely on mental control, bypassing traditional neural pathways, they provide paralyzed users with an alternative way to navigate. However, the majority of BCI-based navigation studies adopt cue-based visual paradigms, and the evoked brain responses are encoded into navigation commands. Although robust and accurate, these paradigms are less intuitive and comfortable for navigation compared to imagining limb movements (motor imagery, MI). However, decoding motor imagery from EEG activity is notoriously challenging. Typically, wet electrodes are used to improve EEG signal quality, a large number of them is included to discriminate between movements of different limbs, and a cue-based paradigm is used instead of a self-paced one to maximize decoding performance. Motor BCI applications primarily focus on typing applications or on navigating a wheelchair; the latter raises safety concerns, calling for sensors that scan the environment for obstacles and potentially hazardous scenarios. With the help of new technologies such as virtual reality (VR), vivid graphics can be rendered, providing the user with a safe and immersive experience, and they could be used for navigation purposes, a topic that has yet to be fully explored in the BCI community. In this study, we propose a novel MI-BCI application based on an 8-dry-electrode EEG setup, with which users can explore and navigate in Google Street View®. We pay attention to system design to address the lower performance of the MI decoder due to the dry electrodes’ lower signal quality and the small number of electrodes. Specifically, we restricted the number of navigation commands by using a novel middle-level control scheme and avoided decoder mistakes by introducing eye blinks as a control signal in different navigation stages. Both offline and online experiments were conducted with 20 healthy subjects. The results showed acceptable performance, even given the limitations of the EEG set-up, which we attribute to the design of the BCI application. The study suggests the use of MI-BCI in future games and VR applications for consumers and patients temporarily or permanently devoid of muscle control.
(This article belongs to the Special Issue Computational Intelligence Based-Brain-Body Machine Interface)
Figure 1. System overview. Left panel: offline training session; right panel: online real-time navigation control. L-IM stands for the left-hand imagined motor task, R-IM for the right-hand imagined motor task, and F-IM for the flexion imagined motor task. See text for explanation. (The participant has given consent to use her blurred image.)
Figure 2. Electrode locations.
Figure 3. Timing of the training session.
Figure 4. EEG processing pipeline: the output shape after each step is listed in the above table.
Figure 5. Neural network architecture: the dashed dropout layer is only used in the training phase.
Figure 6. Navigation interface. Panel A: the main navigation window; Panel B: the state bar; Panel C: the four navigation actions.
Figure 7. Navigation control diagram. (a) Low- and middle-level control modes; (b) error control strategies.
Figure 8. Low-level navigation task: the Rijksmuseum on the left, the Uffizi Museum on the right. The layout maps are from https://www.visituffizi.org/museum/uffizi-floor-plans/ and https://kalden.home.xs4all.nl/mann/Mannheimer-inrijksmuseum.html (accessed on 20 November 2022). See text for explanation.
Figure 9. Spatial patterns corresponding to the 8–12 and 18–22 Hz bands (top 2 rows and bottom 2 rows), averaged for all participants (Average) and for 3 randomly selected subjects (subjects a, b, and c) (arranged columnwise), for imagined left-hand clenching (left) and imagined right-hand clenching (right) (arranged top and bottom row in each row-pair).
Figure 10. Offline decoder’s performance.
Figure 11. Online decoder’s performance.
Figure 12. Task completion overview. See text for explanation.
Figure 13. Time to complete the middle-level experiment with DC and EC strategies.
Figure 14. Time to issue a correct navigation command. Ford stands for moving forward, Rot for rotation, DC for the double confirmation strategy, EC for the error correction strategy, and NE for the no error control strategy.
Figure A1. Description of the decoder architecture used in our study. The table was automatically generated by MATLAB’s analyzeNetwork function.
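The abstract's idea of using eye blinks as a control signal to commit MI-decoded commands can be illustrated with a toy state machine. The class, method names, and command labels below are hypothetical; the actual system runs a middle-level control scheme and error-control strategies on top of this:

```python
class BlinkConfirmedNavigator:
    """Toy sketch: the MI decoder proposes a command; an eye blink
    confirms it before it is sent to the navigation layer."""

    # L-IM/R-IM/F-IM follow the Figure 1 caption; the action names are ours.
    COMMANDS = {"L-IM": "rotate_left", "R-IM": "rotate_right", "F-IM": "forward"}

    def __init__(self):
        self.pending = None   # last decoded command, awaiting confirmation
        self.issued = []      # commands actually sent to the game

    def on_decoder_output(self, label):
        """Hold the decoded MI command until the user confirms it."""
        if label in self.COMMANDS:
            self.pending = self.COMMANDS[label]

    def on_blink(self):
        """A blink commits the pending command; with none pending, ignore."""
        if self.pending is not None:
            self.issued.append(self.pending)
            self.pending = None

nav = BlinkConfirmedNavigator()
nav.on_decoder_output("L-IM")   # decoder suggests a left rotation...
nav.on_decoder_output("F-IM")   # ...then revises its suggestion
nav.on_blink()                  # the blink commits the latest suggestion
print(nav.issued)  # ['forward']
```

The point of the gate is that a decoder mistake costs nothing until the user blinks, which is how the paper keeps a low-accuracy dry-electrode decoder usable.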
27 pages, 14892 KiB  
Article
Measuring Brain Activation Patterns from Raw Single-Channel EEG during Exergaming: A Pilot Study
by Gianluca Amprimo, Irene Rechichi, Claudia Ferraris and Gabriella Olmo
Electronics 2023, 12(3), 623; https://doi.org/10.3390/electronics12030623 - 26 Jan 2023
Cited by 6 | Viewed by 2929
Abstract
Physical and cognitive rehabilitation is deemed crucial to attenuate symptoms and to improve the quality of life in people with neurodegenerative disorders, such as Parkinson’s Disease. Among rehabilitation strategies, a novel and popular approach relies on exergaming: the patient performs a motor or cognitive task within an interactive videogame in a virtual environment. These strategies may widely benefit from being tailored to the patient’s needs and engagement patterns. In this pilot study, we investigated the ability of a low-cost BCI based on single-channel EEG to measure the user’s engagement during an exergame. As a first step, healthy subjects were recruited to assess the system’s capability to distinguish between (1) rest and gaming conditions and (2) gaming at different complexity levels, through Machine Learning supervised models. Both EEG and eye-blink features were employed. The results indicate the ability of the exergame to stimulate engagement and the capability of the supervised classification models to distinguish resting stage from game-play (accuracy > 95%). Finally, different clusters of subject responses throughout the game were identified, which could help define models of engagement trends. This result is a starting point in developing an effectively subject-tailored exergaming system.
Figure 1. Experimental data acquisition protocol: an EEG baseline is computed from a 3 min resting phase; then the subject is instructed about the game, plays the game, and finally answers the NASA and TRESCA questionnaires.
Figure 2. Offline data processing and analysis flowchart.
Figure 3. Game scenario during Level 4 of the GDD exergame. The user has to select the red sphere, but the instruction is written in green. The box is moving and the time is expiring (appreciable from the red background and the 1 second left to complete the task).
Figure 4. Acquisition system setup for the experimental sessions.
Figure 5. (A) Dreem 2 EEG headset employed for the data collection, and sensor locations. (B) EEG electrodes available on the headset, 10–20 standard (green: channel selected for the study).
Figure 6. Exergame validation through the administered questionnaires. (A) NASA-TLX raw score per subject; (B) TRESCA Effort score per subject; (C) TRESCA Engagement score per subject.
Figure 7. Distribution of the first three positions in the ranking of the most distracting game elements.
Figure 8. Violin plot of the blink relative frequency across the four levels.
Figure 9. Importance scores of the selected features, computed through the ReliefF algorithm.
Figure 10. Inter-level variation clusters, along with the number of included subjects.
Figure 11. Engagement Index (EI) across the four levels, in the three most populated clusters.
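The listing does not define the Engagement Index (EI) of Figure 11. A common choice in the EEG-engagement literature, shown here purely as an illustration and not necessarily the paper's formula, is the band-power ratio beta / (alpha + theta):

```python
def engagement_index(power_theta, power_alpha, power_beta):
    """Classic beta / (alpha + theta) engagement index.

    This is a widely used definition in EEG engagement studies; it is an
    assumption here, since the paper's exact EI formula is not given in
    the listing. Inputs are band powers from a single EEG channel."""
    return power_beta / (power_alpha + power_theta)

# Higher beta power relative to slow-wave power yields a higher index.
print(round(engagement_index(2.0, 3.0, 10.0), 2))  # 2.0
```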
14 pages, 826 KiB  
Article
Evaluation of the User Adaptation in a BCI Game Environment
by Kosmas Glavas, Georgios Prapas, Katerina D. Tzimourta, Nikolaos Giannakeas and Markos G. Tsipouras
Appl. Sci. 2022, 12(24), 12722; https://doi.org/10.3390/app122412722 - 12 Dec 2022
Cited by 7 | Viewed by 2091
Abstract
Brain-computer interface (BCI) technology is a developing field of study with numerous applications. The purpose of this paper is to discuss the use of brain signals as a direct communication pathway to an external device. In this work, Zombie Jumper is developed, a game controlled by 2 brain commands: imagining moving forward and blinking. The goal of the game is to jump over static or moving “zombie” characters in order to complete the level. To record the raw EEG data, a Muse 2 headband is used, and the OpenViBE platform is employed to process and classify the brain signals. The Unity engine is used to build the game, and the lab streaming layer (LSL) protocol is the connective link between Muse 2, OpenViBE and the Unity engine for this BCI-controlled game. A total of 37 subjects tested the game and played it at least 20 times. The average classification accuracy was 98.74%, ranging from 97.06% to 99.72%. Finally, playing the game for longer periods of time resulted in greater control.
Figure 1. Major components of a BCI system.
Figure 2. Muse 2 headband with the corresponding electrodes.
Figure 3. Flowchart diagram of the proposed system. The left side presents the offline processing that trains the classifier; the right side presents the online processing that uses the trained classifier to translate the mental commands into in-game movement.
Figure 4. Offline processing scenario to train the classifier.
Figure 5. Real-time classification scenario.
Figure 6. Snapshot from the BCI-controlled game.
16 pages, 2217 KiB  
Article
Implementing Performance Accommodation Mechanisms in Online BCI for Stroke Rehabilitation: A Study on Perceived Control and Frustration
by Mads Jochumsen, Bastian Ilsø Hougaard, Mathias Sand Kristensen and Hendrik Knoche
Sensors 2022, 22(23), 9051; https://doi.org/10.3390/s22239051 - 22 Nov 2022
Cited by 5 | Viewed by 2356
Abstract
Brain–computer interfaces (BCIs) are successfully used for stroke rehabilitation, but the training is repetitive and patients can lose the motivation to train. Moreover, controlling the BCI may be difficult, which causes frustration and leads to even worse control. Patients might not adhere to the regimen due to frustration and lack of motivation/engagement. The aim of this study was to implement three performance accommodation mechanisms (PAMs) in an online motor imagery-based BCI to aid people and to evaluate their perceived control and frustration. Nineteen healthy participants controlled a fishing game with a BCI in four conditions: (1) no help, (2) augmented success (augmenting a successful BCI attempt), (3) mitigated failure (turning an unsuccessful BCI attempt into neutral output), and (4) input override (turning an unsuccessful BCI attempt into successful output). Each condition was followed up and assessed with Likert-scale questionnaires and a post-experiment interview. Perceived control and frustration were best predicted by the amount of positive feedback the participant received. PAM help increased perceived control for poor BCI users but decreased it for good BCI users. The input override PAM frustrated the users the most, and they differed in how they wanted to be helped. By using PAMs, developers have more freedom to create engaging stroke rehabilitation games.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Data flow from the BCI cap to the fishing game developed in Unity. The BCI only controls the game when the black cursor is within the input window, marked by the green area on a bar displayed in the fishing game.</p>
Figure 2
<p>In the fishing game, participants control a fisherman reeling fish. Participants use arrow keys to move the hook up and down between three lanes. A fish may appear in a random lane from either left or right side and may swim into the participant’s hook. The BCI input window then begins and the participant may then perform MI when the black cursor is within the green area.</p>
Figure 3
<p>Each condition consisted of 20 trials. In the helped conditions, help trials with predefined outcomes (blue) were shuffled with normal (no PAM) trials (gray) to provide users with 30% help. Forced rejections (red) were inserted when people were succeeding above the 70% target control rate.</p>
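The trial scheme in this caption — predefined help trials shuffled among normal trials to reach 30% help, with forced rejections once the running success rate exceeds the 70% target control rate — can be sketched as follows (function and parameter names are illustrative, not taken from the paper):

```python
import random

def build_trial_sequence(n_trials=20, help_fraction=0.3, seed=0):
    """Shuffle predefined help trials into normal (no-PAM) trials,
    as in the blue/gray scheme of Figure 3 (illustrative sketch)."""
    n_help = round(n_trials * help_fraction)          # 6 of 20 trials
    trials = ["help"] * n_help + ["normal"] * (n_trials - n_help)
    random.Random(seed).shuffle(trials)
    return trials

def needs_forced_rejection(successes, attempts, target_rate=0.7):
    """True once the running success rate exceeds the target control
    rate, signalling that a forced rejection (red) should be inserted."""
    return attempts > 0 and successes / attempts > target_rate
```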
Figure 4
<p>Each participant in the experiment (1) underwent BCI setup and BCI calibration, (2) played a fishing game in four conditions, starting with the normal condition, followed by (3) three helped conditions in a shuffled order. Participants were then debriefed about their experiences.</p>
Figure 5
<p>The relationship between perceived control and positive feedback is shown in the top row for each of the four conditions, the relationship between frustration and positive feedback in the middle row, and the relationship between frustration and perceived control in the bottom row. AS: augmented success, IO: input override, MF: mitigated failure, and NO: normal condition without PAM help. Each data point represents the rating of a single participant.</p>
25 pages, 7224 KiB  
Article
Exploration of Brain-Computer Interaction for Supporting Children’s Attention Training: A Multimodal Design Based on Attention Network and Gamification Design
by Danni Chang, Yan Xiang, Jing Zhao, Yuning Qian and Fan Li
Int. J. Environ. Res. Public Health 2022, 19(22), 15046; https://doi.org/10.3390/ijerph192215046 - 15 Nov 2022
Cited by 4 | Viewed by 2910
Abstract
Recent developments in brain–computer interface (BCI) technology have shown great potential for estimating users’ mental state and supporting children’s attention training. However, existing training tasks are relatively simple and lack a reliable task-generation process. Moreover, the training experience has not been deeply studied, and empirical validation of the training effect is still insufficient. This study therefore proposed a BCI training system for improving children’s attention. In particular, to achieve a systematic training process, the attention network model was used to generate training games for alerting, orienting and executive attention, and to improve the training experience and adherence, gamification design theory was introduced to derive attractive training tasks. A preliminary experiment was conducted to set and refine the training parameters. Subsequently, a series of contrasting user experiments was organized to examine the impact of BCI training. To test the training effect of the proposed system, a hypothesis-testing approach was adopted. The results revealed that the proposed BCI gamification attention training system can significantly improve participants’ attention behaviors and concentration ability. Moreover, an immersive, inspiring and smooth training process can be created, and a pleasant user experience can be achieved. Generally, this work provides a valuable reference for related practices, especially for how to generate BCI attention training tasks using attention networks and how to improve training adherence by integrating multimodal gamification elements. Full article
(This article belongs to the Section Children's Health)
Figure 1
<p>The overall framework of the training system (authors’ proposal).</p>
Figure 2
<p>The detailed development procedures of the training system (authors’ proposal).</p>
Figure 3
<p>Consumer-grade brain–computer device Emotiv Epoc X.</p>
Figure 4
<p>The data analysis software of EmotivBCI.</p>
Figure 5
<p>The structure of the game system (authors’ proposal).</p>
Figure 6
<p>The sequence of events of the alerting game (authors’ proposal).</p>
Figure 7
<p>Higher difficulty of the alerting game (authors’ proposal).</p>
Figure 8
<p>Attention value thresholds for different difficulty levels.</p>
Figure 9
<p>Basic ANT-based game mechanics of the executive and orienting games (authors’ proposal).</p>
Figure 10
<p>Difficulty levels of the operation interaction modes.</p>
Figure 11
<p>Visual interfaces for different game scenarios. (<b>a</b>) Visual interface of the alerting game. (<b>b</b>) Visual interface of the orientation game. (<b>c</b>) Visual interface of the executive control game. (Authors’ proposal).</p>
Figure 12
<p>Examples of the training incentives and feedback (authors’ proposal).</p>
Figure 13
<p>Examples of the feedback animation after one round of training (authors’ proposal).</p>
Figure 14
<p>EEG command training.</p>
Figure 15
<p>Recording time of Focus value measurement.</p>
Figure 16
<p>The game training process of the experimental group.</p>
Figure 17
<p>Experiment process.</p>
Figure 18
<p>The rate of change of the attention network parameters.</p>
19 pages, 3157 KiB  
Review
EOG-Based Human–Computer Interface: 2000–2020 Review
by Chama Belkhiria, Atlal Boudir, Christophe Hurter and Vsevolod Peysakhovich
Sensors 2022, 22(13), 4914; https://doi.org/10.3390/s22134914 - 29 Jun 2022
Cited by 13 | Viewed by 6449
Abstract
Electro-oculography (EOG)-based brain–computer interfaces (BCIs) are a relevant technology influencing physical medicine, daily life, gaming and even the aeronautics field. EOG-based BCI systems record activity related to users’ intention, perception and motor decisions. They convert the bio-physiological signals into commands for external hardware and execute the operation expected by the user through the output device. The EOG signal is used for identifying and classifying eye movements through active or passive interaction; both types of interaction have the potential to control the output device and thereby mediate the user’s communication with the environment. In the aeronautical field, EOG-BCI systems are being investigated as a relevant tool to replace manual commands and as a communication tool for conveying the user’s intention more quickly. This paper reviews the last two decades of EOG-based BCI studies and provides a structured design space with a large set of representative papers. Our purpose is to introduce the existing BCI systems based on EOG signals and to inspire the design of new ones. First, we highlight the basic components of EOG-based BCI studies, including EOG signal acquisition, EOG device particularities, extracted features, translation algorithms, and interaction commands. Second, we provide an overview of EOG-based BCI applications in real and virtual environments, along with aeronautical applications. We conclude with a discussion of the actual limits of EOG devices regarding existing systems. Finally, we provide suggestions to gain insight for future design inquiries. Full article
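The acquisition → features → translation → command pipeline described in the abstract can be reduced to a toy translation step: threshold the horizontal and vertical EOG channel amplitudes to emit a discrete command. The threshold value, channel sign conventions, and command names below are illustrative assumptions, not drawn from any reviewed system:

```python
def classify_eye_movement(h_eog: float, v_eog: float, threshold: float = 100.0) -> str:
    """Toy EOG translation algorithm: map horizontal/vertical channel
    amplitudes (in µV, signs assumed: right/up positive) to a command.
    Real systems use richer features and classifiers; this only shows
    the shape of the translation step."""
    if h_eog > threshold:
        return "look_right"
    if h_eog < -threshold:
        return "look_left"
    if v_eog > threshold:
        return "look_up"
    if v_eog < -threshold:
        return "look_down"
    return "rest"
```

In a full system the returned command would be forwarded to the output device (e.g., a wheelchair controller or text display), with feedback shown to the user before the next command.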
(This article belongs to the Section Intelligent Sensors)
Figure 1
<p>General scheme of an EOG-based BCI. First step, EOG acquisition: EOG is recorded using vertical and horizontal electrodes by an amplified ADC. Second step, Signal processing: after pre-processing, computer processing extracts the most relevant features for identifying the subject’s intentions. Third step, Application area: When the command is recognized and the intention is classified, the instruction is sent to an external device (e.g., web browser, wheelchair, or text display). Feedback informs the user of the results of their actions to allow them to prepare for the next command.</p>
Figure 2
<p>The PRISMA flow diagram of the review process. Please note that, given the organisation of work during the project, the databases were reviewed in parallel; therefore, duplicate records were removed after the non-relevant records had been removed from each database list separately.</p>
Figure 3
<p>The number of publications per 5-year period.</p>
Figure 4
<p>The percentage of references per 5-year period according to the (<b>A</b>) used device, (<b>B</b>) used features, (<b>C</b>) used algorithms and (<b>D</b>) used interaction.</p>
Figure 5
<p>The percentage of each type of active and passive interaction in the included EOG-based BCI publications during the last 20 years.</p>
24 pages, 2718 KiB  
Article
A Multivariate Randomized Controlled Experiment about the Effects of Mindfulness Priming on EEG Neurofeedback Self-Regulation Serious Games
by Nuno M. C. da Costa, Estela Bicho, Flora Ferreira, Estela Vilhena and Nuno S. Dias
Appl. Sci. 2021, 11(16), 7725; https://doi.org/10.3390/app11167725 - 22 Aug 2021
Cited by 4 | Viewed by 3213
Abstract
Neurofeedback training (NFT) is a technique often proposed to train brain activity self-regulation (SR), with promising results. However, some criticism has been raised due to the lack of evaluation, reliability, and validation of its learning effects. The current work evaluates the hypothesis that SR learning may be improved by priming the subject before NFT with guided mindfulness meditation (MM). The proposed framework was tested in a two-way parallel-group randomized controlled intervention with a single-session alpha NFT, in a simple serious game design. Sixty-two healthy naïve subjects, aged between 18 and 43 years, were divided into MM priming and no-priming groups. Although both the experimental group (EG) and the control group (CG) successfully attained up-regulation of alpha rhythms (F(1,59) = 20.67, p < 0.001, ηp² = 0.26), the EG showed a significantly enhanced ability (t(29) = 4.38, p < 0.001) to control brain activity compared to the CG (t(29) = 1.18, p > 0.1). Furthermore, the EG’s superior performance on NFT seems to be explained by the subjects’ lack of awareness at pre-intervention, less vigour at post-intervention, increased task engagement, and a relaxed, non-judgemental attitude towards the NFT tasks. This study is a preliminary validation of the proposed assisted priming framework, advancing implicit and explicit metrics of its efficacy on NFT performance, and a promising tool for improving naïve users’ self-regulation ability. Full article
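The core quantity in alpha NFT is band power in the 8–12 Hz range, later expressed relative to a resting baseline (the "z-transformed power" of Figure 5). A minimal sketch of both steps, assuming a plain FFT periodogram as the power estimator (the study's exact estimator is not specified here):

```python
import numpy as np

def alpha_power(eeg: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    """Mean spectral power in the alpha band via a simple FFT
    periodogram (illustrative estimator, not the paper's pipeline)."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def z_transform(value: float, baseline_values) -> float:
    """Express task power relative to the resting baseline, as in the
    z-transformed EEG power over restBin."""
    mu, sigma = np.mean(baseline_values), np.std(baseline_values)
    return float((value - mu) / sigma)
```

Up-regulation then simply means that z-transformed alpha power during the NFT task trends above zero across trials.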
(This article belongs to the Special Issue Serious Games and Mixed Reality Applications for Healthcare)
Figure 1
<p>Consort flow diagram of the randomized controlled intervention. Of the 121 participants eligible for inclusion, 38 declined to participate, and 21 did not meet the inclusion criteria. Sixty-two participants were randomized and allocated to the priming and no-priming group. There were no dropouts, and all the subjects completed the tasks. During analysis, missing data from subjects in EEG and HRV were detected, and one CG subject with outlier EEG data was removed.</p>
Figure 2
<p>Experiment block mockup. Time flows from left to right, top to bottom. In a single session, the subject first fills in the trait self-reports; then the training starts. There are 6 blocks and 14 tasks in total. Block in and Block out each begin with a resting state with eyes closed, then eyes open, followed by alpha NFT. From blocks 1 to 4, in the EG the PRIME comes first, then NFT; in the control group, PRIME is substituted by REST. PRIME stimuli are randomized between IM and BM with two PS, PS1 and PS2. Moreover, from blocks 1 to 4, eyes closed and eyes open are randomized between blocks with two ES, ES1 and ES2. In the diagram, the “or” signal is represented by “|”. It is used to separate the task for each group, the randomizations of ES (EO|EC) between blocks, and the randomizations of PS (BM|IM).</p>
Figure 3
<p>Objective diagram. The external mindfulness stimuli prime the subject to facilitate/scaffold the transition to the target brain activity alpha (α) in the Pz channel during NFT. The EEG spectrum physiological change is also represented.</p>
Figure 4
<p>EEG power spectra at Bin and Bout. Estimated marginal means are log-transformed absolute power (µV<sup>2</sup>) with 95% confidence intervals. During REST EC, both groups show reductions in alpha; the CG also shows reductions in theta, while the EG increases SMR. During REST EO, both groups show up-regulation of alpha, similarly to the NFT EO task.</p>
Figure 5
<p>Z-transformed EEG power at intervention blocks. Alpha z-transformed power over the baseline (restBin) and NFT tasks for the EO and EC conditions at intervention blocks (nft1 and nft2). Three regression slopes are presented separately for the CG and EG, with the regression equations depicted and the regression lines for each group indicated by thinner lines. The regression slopes at intervention blocks show a significant alpha increase for the EG in the EO condition. In contrast, the EC condition shows a similar down-regulation of alpha in both groups.</p>
31 pages, 3360 KiB  
Systematic Review
Noninvasive Electroencephalography Equipment for Assistive, Adaptive, and Rehabilitative Brain–Computer Interfaces: A Systematic Literature Review
by Nuraini Jamil, Abdelkader Nasreddine Belkacem, Sofia Ouhbi and Abderrahmane Lakas
Sensors 2021, 21(14), 4754; https://doi.org/10.3390/s21144754 - 12 Jul 2021
Cited by 64 | Viewed by 11685
Abstract
Humans interact with computers through various devices. Such interactions may not require any physical movement, thus aiding people with severe motor disabilities in communicating with external devices. The brain–computer interface (BCI) has turned into a field involving new elements for assistive and rehabilitative technologies. This systematic literature review (SLR) aims to help BCI investigators and investors decide which devices to select or which studies to support, based on the current market examination. This examination of noninvasive EEG devices is based on published BCI studies in different research areas. In this SLR, the research area of noninvasive BCIs using electroencephalography (EEG) was analyzed by examining the types of equipment used for assistive, adaptive, and rehabilitative BCIs. Candidate studies were selected from the IEEE digital library, PubMed, Scopus, and ScienceDirect. The inclusion criteria (IC) were limited to studies focusing on applications and devices of BCI technology. The data used herein were selected using IC and exclusion criteria to ensure quality assessment. The selected articles were divided into four main research areas: education, engineering, entertainment, and medicine. Overall, 238 papers were selected based on the IC. Moreover, 28 companies were identified that developed wired and wireless equipment as means of BCI assistive technology. The findings of this review indicate that the implications of using BCIs for assistive, adaptive, and rehabilitative technologies are encouraging for both people with severe motor disabilities and healthy people. With an increasing number of healthy people using BCIs, other research areas, such as the motivation of players when participating in games or the security of soldiers when observing certain areas, can be studied using BCI technology. However, such BCI systems must be simple (wearable), convenient (sensor fabrics and self-adjusting abilities), and inexpensive. Full article
(This article belongs to the Special Issue Brain–Computer Interfaces: Advances and Challenges)
Figure 1
<p>Brain–computer interface system.</p>
Figure 2
<p>Examples of commercial noninvasive EEG equipment based on BCI technology. EEGSmart, Nihon Kohden, and Cognixion represent future noninvasive EEG designs.</p>
Figure 3
<p>Flow process using the PRISMA method.</p>
Figure 4
<p>Proportion chart of published article research areas based on noninvasive BCI technology.</p>
Figure 5
<p>Mapping between the companies and research areas identified in the selected articles. Different colors represent the proportions of the reviewed studies, as shown by the indicators.</p>
Figure 6
<p>Proportion of companies that used noninvasive wired equipment.</p>
Figure 7
<p>Proportion chart of companies that used noninvasive wireless equipment.</p>
11 pages, 1535 KiB  
Article
Detecting Attention Levels in ADHD Children with a Video Game and the Measurement of Brain Activity with a Single-Channel BCI Headset
by Almudena Serrano-Barroso, Roma Siugzdaite, Jaime Guerrero-Cubero, Alberto J. Molina-Cantero, Isabel M. Gomez-Gonzalez, Juan Carlos Lopez and Juan Pedro Vargas
Sensors 2021, 21(9), 3221; https://doi.org/10.3390/s21093221 - 6 May 2021
Cited by 31 | Viewed by 6631
Abstract
Attentional biomarkers in attention deficit hyperactivity disorder (ADHD) are difficult to detect using only behavioural testing. We explored whether attention measured by a low-cost EEG system might help detect a possible disorder at its earliest stages. The GokEvolution application was designed to train attention and to provide a measure for identifying attentional problems in children early on. Attention changes registered with the NeuroSky MindWave, in combination with the CARAS-R psychological test, were used to characterise the attentional profiles of 52 non-ADHD and 23 ADHD children aged 7 to 12 years. The analyses revealed that GokEvolution was valuable in measuring attention through its use of EEG–BCI technology. The ADHD group showed lower levels of attention and more variability in brain attentional responses than the control group. The application was able to map the low attention profiles of the ADHD group relative to the control group and could distinguish between participants who completed the task and those who did not. Therefore, this system could potentially be used in clinical settings as a screening tool for early detection of attentional traits in order to prevent their development. Full article
(This article belongs to the Section Electronic Sensors)
Figure 1
<p>Screenshot of the GokEvolution application. The bars at the top indicate the level of attention recorded by the NeuroSky EEG sensor (top) and the achievement on the current level (bottom). The figure shows the character at level 2 (out of 4). If the attention level is higher than 50%, the character is “recharging energy” and the progress on the level increases. When the level progress reaches the maximum (the whole bar), the game increases the level, changing the appearance of the character.</p>
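The mechanic in this caption reduces to a simple update rule per tick: attention above 50% charges the level bar, and a full bar advances the character's level. A hypothetical sketch (parameter names and the charge rate are assumptions, not from the application):

```python
def step_game(attention: int, progress: int, level: int,
              gain: int = 1, max_progress: int = 100):
    """One update tick of the described mechanic: attention > 50%
    charges the bar; a full bar levels the character up and resets
    the bar. Returns the new (progress, level)."""
    if attention > 50:                # "recharging energy"
        progress += gain
    if progress >= max_progress:      # whole bar filled
        level += 1
        progress = 0
    return progress, level
```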
Figure 2
<p>Mean attention values versus completion time for each level. As expected, these two variables follow an inverse relationship in each level.</p>
Figure 3
<p>(<b>A</b>) Comparison of mean attention values between the ADHD and control groups in each game level. (<b>B</b>) Comparison of mean attention values between controls that completed the five levels of the game and controls that did not. (<b>C</b>) Comparison of mean attention values between controls that scored for impulsivity on the ICI index and non-impulsive controls.</p>
11 pages, 2609 KiB  
Article
A Pilot Study of Game Design in the Unity Environment as an Example of the Use of Neurogaming on the Basis of Brain–Computer Interface Technology to Improve Concentration
by Szczepan Paszkiel, Ryszard Rojek, Ningrong Lei and Maria António Castro
NeuroSci 2021, 2(2), 109-119; https://doi.org/10.3390/neurosci2020007 - 19 Apr 2021
Cited by 9 | Viewed by 5075
Abstract
The article describes the practical use of Unity technology in neurogaming. For this purpose, it describes Unity technology and brain–computer interface (BCI) technology based on the Emotiv EPOC+ NeuroHeadset device. The process of creating the game world and the test results for the use of the BCI-based device as a control interface for the created game are also presented. The game was created in the Unity graphics engine and the Visual Studio environment in C#. The game presented in the article is called “NeuroBall” after the player’s object, a big red ball. The game requires full focus to make the ball move, and it aims to improve concentration and train the user’s brain in a user-friendly environment. Through neurogaming, it will be possible to exercise and train a healthy brain, as well as diagnose and treat various symptoms of brain disorders. The project was created entirely in the Unity graphics engine, version 2020.1. Full article
(This article belongs to the Special Issue Brain–Computer Interfaces: Challenges and Applications)
Figure 1
<p>Emotiv EPOC+ NeuroHeadset.</p>
Figure 2
<p>Game views: (<b>a</b>) view of the board; (<b>b</b>) view of the mountains; (<b>c</b>) view of the fence; (<b>d</b>) view of the vegetation; (<b>e</b>) view of the boulders; (<b>f</b>) the first two levels; (<b>g</b>) the third level of the game; (<b>h</b>) the fourth level of the game; (<b>i</b>) player object; (<b>j</b>) light in the game.</p>
Figure 3
<p>Emotiv Epoc Control Panel.</p>
Figure 4
<p>Emotiv Epoc Control Panel with Cognitive Suite tab.</p>
Figure 5
<p>Emotiv EmoKey actions.</p>