Abstract
Active Brain Computer Interfaces (BCIs) allow people to exert voluntary control over a computer system: brain signals are captured and imagined actions (movements, concepts) are recognized after a training phase (from 10 min to 2 months). BCIs remain confined to labs, with only a few dozen people using them regularly outside (e.g. as assistance for impairments). We propose a “Co-learning BCI” (CLBCI) that reduces the amount of training and makes BCIs more suitable for recreational applications. We replicate an existing experiment in which the BCI controls a drone and compare CLBCI to the Operant Conditioning (OC) protocol of that work over three durations of practice (1 day, 1 week, 1 month). We find that OC works at 80 % after a month of practice, but the performance is between 60 and 70 % before that. Within a week of practice, CLBCI reaches a performance of around 75 %. We conclude that CLBCI is better suited for recreational use, while OC should be reserved for users for whom performance is the main concern.
1 Introduction
Active Brain Computer Interfaces (BCIs) allow people to exert direct voluntary control over a computer system: their brain signals are captured and the system recognizes specific imagined actions (movements, images, concepts). Before a BCI becomes functional, its user must undergo training, during which they learn to produce brain signals that the system can recognize more reliably; this skill acquisition can take from 10 min up to 2 months. BCIs can thus be applied to many control and interaction scenarios of our everyday lives, especially in relation to entertainment [1].
BCIs have mostly been used in laboratories and in medical applications (e.g. spellers for locked-in syndrome patients [1]). A few dozen people at most use BCIs at home, as reported in the BNCI FP7 roadmap [2]. Moreover, as an emerging interaction technology, BCIs offer a good prospect of inspiring people’s imaginations and of providing a new and enjoyable way of interacting [3].
For example, personal drones are a popular phenomenon that plays an increasing role in our daily lives. There are “selfie drones” that take off from a user’s wrist to take a “selfie” photo, and such drones can be controlled by BCIs. Recent press coverage of a BCI system that controls a quadcopter drone [4] has shown strong interest from the general public in experiencing BCI control.
In this particular drone piloting application, the training protocol for the BCI, called Operant Conditioning (OC), teaches users to harness their brain signals and adapt them so that the computer can recognize them, which requires one to two months of rigorous training. Alternatives to OC are training protocols based on machine learning that record typical brain signals for a set of mental actions or states and build classifiers to recognize those actions or states. Although training is shorter (10–20 min sessions [5]), the process requires continuous focus and concentration from users, is more error-prone than OC, and involves feedback to users [6] that is not appealing.
Neither of the two training paradigms is entirely adequate to let people test the technology in a way that would make them want to try again or even adopt it for enjoyment.
In this work we gauge the possibility of producing a BCI that is trained interactively in short sessions and provides a viable alternative to OC and standard training protocols.
We introduce a “Co-learning BCI” (CLBCI) that reduces the initial amount of training required before the BCI is functional and that allows an incremental and interactive training process. We want to know whether it is better to do one long training session with OC before using the BCI at all, or to have a short training session with CLBCI before every use of the BCI (a trade-off between quality, time, and degree of control). We are particularly interested in what happens in the long run, evaluating whether OC and CLBCI are compatible with an overall positive user experience. Moreover, we want to know when OC should be favored over CLBCI and vice versa.
To that end, we perform an experiment over three training durations: 1 day, 1 week, and 1 month. For each duration, we train users in two groups: one with CLBCI and the other with OC. We want to see whether CLBCI reaches an acceptable performance for shorter training periods while keeping the effect of signal variability to a minimum. We also asked users to fill in informal questionnaires to share their experiences.
With this experiment we want to verify the following hypotheses:
- (H.1): For training durations below one month, CLBCI leads to better task performance than OC.
- (H.2): Beyond the one-month duration, OC leads to better task performance.
- (H.3): Despite OC’s better performance after a month of training, users prefer to train for a short period whenever they want to use the BCI, rather than spend a full month training once.
For the evaluation, we apply our system to the drone piloting task used by LaFleur et al. [4] to pilot an AR.Drone (Fig. 1) and we evaluate the task performance of CLBCI compared to OC. The task and the OC implementation are a replication of LaFleur et al.’s experimental protocol; we use the same evaluation measures and training protocols.
We first present background information on BCIs and their applications, then related work pertaining to control applications and co-adaptive BCI learning. We then present the CLBCI system and continue with the experimental protocol and the analysis and discussion of the results.
2 Background Information and Related Work
In this section we present background information about BCI systems and then review the state of the art and relevant BCI research for human-computer interaction and control applications.
2.1 Brain Computer Interfaces
A BCI system \( B \) must assign a brain signal \( S_{t} \) of a fixed duration (an epoch; e.g. 1 s) at time \( t \), to a class \( Cl_{i} \) from a set of \( N \) classes \( Cl \), that correspond to a set of brain activity states \( BS_{i} \) to recognize. A machine learning classifier \( C \) is trained to that effect with a set of training examples \( T(Cl_{i} ) \) for each class \( Cl_{i} \). A training example is a signal epoch of the same duration as \( S_{t} \) that was recorded when the user was in the desired state \( BS_{i} \) for class \( Cl_{i} \) (Training or Calibration phase). The recording of such a training example and the associated protocol is called a training trial. For general matters on the processes associated with BCIs, we refer the reader to the state of the art presented by Nicolas-Alonso et al. [7]. A recent and extensive survey on BCI classification techniques is presented by Lotte et al. [8].
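To make the notation concrete, here is a schematic sketch of the calibration and online phases (illustrative only: the classifier choice and the random stand-in data are our assumptions, not the system described later in this paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in training data: N = 4 classes, 40 examples per class,
# each example being a 1 s epoch sampled at 512 Hz.
rng = np.random.default_rng(0)
n_classes, n_examples, epoch_len = 4, 40, 512
T = rng.standard_normal((n_classes * n_examples, epoch_len))  # examples T(Cl_i)
labels = np.repeat(np.arange(n_classes), n_examples)          # class of each example

C = LogisticRegression(max_iter=1000).fit(T, labels)          # training/calibration phase
S_t = rng.standard_normal((1, epoch_len))                     # new epoch at time t
print("B assigns S_t to class Cl_%d" % C.predict(S_t)[0])     # online use
```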
There is a further distinction between synchronous and asynchronous systems. In synchronous systems, stimuli are shown to the user at fixed intervals of time during training, given that activations in the brain usually occur with a consistent delay after the stimulus onset. During online use (after training), one must wait for the stimulation time before the BCI generates an output command [8]. In an asynchronous system, there is no synchronization of stimuli during online use and the BCI can be used at any moment, without having to wait for a stimulus; the additional difficulty lies in separating the target activity from all the rest of the brain activity [9]. There are many BCI paradigms; in this work we use Motor Imagery (MI). MI requires users to imagine a motor action (moving their arm left/right, tapping their feet, etc.) without actually moving. This elicits activations similar to those of actual movement, which can be detected by a BCI. MI is an appropriate modality for direct control applications, as the imagined actions match the directions of control (left/right hand for turning left/right, tapping feet to go down, etc.) [1] and thus the inherent semantics of the control task.
2.2 BCIs for Control and Recreational Applications
The first area of application of BCIs was direct control in HCI systems [1], but they have also been combined in multimodal settings [10] with more common interaction modalities (e.g. eye tracking [11]). There are many practical applications: video games (e.g. World of Warcraft) and serious games [12–14]; robotics and prosthetics [15, 16]; virtual reality applications [17, 18].
There are two possible control paradigms: event-based control, where the BCI is used to trigger discrete events in an interface, and continuous control, where the BCI directly controls the movement of an object or element of the interface (e.g. the movements of a pointer).
One continuous control application of particular interest for this article is 3D navigation for the purpose of piloting a drone. Royer et al. [19] first evaluated the feasibility of 2D control of a helicopter in a virtual environment, followed by Doud et al. for 3D control [20], with the motivation of achieving a means of telepresence. Finally, LaFleur et al. [4] applied this technique to the 3D control of a real AR.Drone, with good success, using Operant Conditioning training for 2 months on 5 users. More recently, a practical 2D control BCI system was demonstrated [21, 22]: it allows users to control an AR.Drone (turning left/right, taking off and landing) and is operational within minutes. This type of system is asynchronous and uses co-adaptation techniques between the classifier and the user, which are the main current focus of research efforts aiming to bring BCIs out of the lab. The system presented in this work also falls into this category, so we review some of the related work on BCI co-adaptation.
2.3 Co-adaptation for Asynchronous BCIs
Scherer et al. [23] propose an asynchronous BCI for the control of a virtual environment with three MI classes plus a non-control state. They found it difficult to obtain reliable classification in the asynchronous context. Several approaches have been proposed to remedy this problem. One is to add a feature selection process to the BCI by performing co-adaptation from the EEG signals, for example the work of [24, 25] that extends Scherer et al.: an initial synchronous calibration phase captures artifact-free trials from each class, whereby one feature out of six possible features is selected to train a classifier. Subsequently, during the online phase, feedback is provided to enable periodic recalibrations when new artifact-free trials become available.
In contrast to other state-of-the-art approaches, in our system, CLBCI, the co-adaptation is driven by the user through a feedback loop rather than by the system through automatic adaptation (we call this co-learning rather than co-adaptation). The BCI still relies on machine learning: the feedback is based on incrementally capturing new training examples for the classifier from the user’s signals. As such, CLBCI lies at the intersection of BCI co-adaptation and interactive machine learning [26], where the interaction drives the training of a classifier. The motivation for this approach is to provide a more engaging training period to users, as a long monotonous session of merely following instructions at fixed intervals leads to boredom and to a loss of attention and concentration. This view is inspired by the application of educational science research to BCIs [12] and by work on improving training protocols for BCIs [6]. While other state-of-the-art co-learning approaches tackle the reduction of training time and the minimization of errors, the user still remains passive: these approaches do not rely on the human’s ability to learn to use the BCI.
Since what we want to evaluate in the experiments is the human learning component, we chose not to compare against these methods in our experiment.
3 CLBCI System and Architecture
The architecture of the CLBCI system is based on minimum distance classification. The Minimum Distance Classifier (MDC) is a simple classification technique that stems from the pattern recognition literature, where it is used extensively (e.g. in image recognition) [27]. It was among the first classifiers applied to BCIs but was mostly supplanted by classifiers such as Linear Discriminant Analysis (LDA) or Support Vector Machines (SVM) [8]. The weakness of MDCs is that they are sensitive to noise and to poorly separated signal sources; however, they have been successfully applied to BCIs in combination with divergences based on Riemannian geometry and were shown to reach state-of-the-art performance [28]. In this work we use Independent Component Analysis (ICA), in its FastICA implementation, to separate signal sources in an unsupervised setting and then apply the distance measures to the identified independent components [29].
3.1 Signal Processing and Acquisition
We apply the following signal processing pipeline to the raw signals, in order (a condensed code sketch follows the list):
- A Butterworth filter selects the appropriate frequencies for Motor Imagery and discards unwanted frequencies. We selected an 8–25 Hz pass band with a filter of order 4 and a ripple of 0.5 dB.
- We form 1 s epochs from the band-passed signals with a 0.75 s overlap (0.25 s sliding step) and average them 5 by 5 so as to obtain 2 average epochs per second.
- FastICA is then computed on each average epoch in an unsupervised manner in order to separate noise from target activity and make the distance-based classifier’s job easier. The FastICA [29] algorithm projects the signal data into a space where data points are maximally independent, essentially separating task-related sources from noise sources and other interference. FastICA differs from other ICA algorithms in using a fast algorithm based on fixed-point calculations. We applied the variant of FastICA with symmetric orthogonalization and a hyperbolic tangent contrast function, computing 10 components. We used a GPL Java implementation of the original algorithm as described by Hyvärinen and Oja [29].
- The averaging of epochs smooths the signal and removes some of the variability. The system produces average epochs continuously, each used for feature extraction and classification, so the classifier yields two classifications per second. Given that ICA is rather costly to compute, any higher output rate led to sub-real-time performance on the machine we performed the processing on (2012 MacBook Air, i7 @ 2.9 GHz).
- Given the sensitivity of minimum distance classification to noise and to variability, ICA or similar processes separate noise sources from authentic signals and make it easier for the distance measure to accurately capture relevant differences in EEG patterns. Our classifier only requires minimal training data to start functioning, as our aim was to reduce training time to a single calibration trial per class. In our system, reference average epochs are captured for each BCI class.
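The following condensed sketch illustrates the pipeline with SciPy/scikit-learn equivalents. This is our approximation: the original system uses a Java FastICA implementation, and the exact windowing details may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

FS = 512  # sampling rate in Hz

def preprocess(raw: np.ndarray) -> np.ndarray:
    """raw: (channels, samples). Returns ICA-separated average epochs."""
    # 1. Order-4 Butterworth band-pass, 8-25 Hz (Motor Imagery band)
    sos = butter(4, [8, 25], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw, axis=1)
    # 2. 1 s epochs with 0.75 s overlap (0.25 s sliding step)
    step, width = FS // 4, FS
    epochs = [filtered[:, i:i + width]
              for i in range(0, filtered.shape[1] - width + 1, step)]
    # 3. Average epochs 5 by 5, every 0.5 s -> two average epochs per second
    averages = [np.mean(epochs[i:i + 5], axis=0)
                for i in range(0, len(epochs) - 4, 2)]
    # 4. Unsupervised FastICA on each average epoch: 10 components, symmetric
    #    ("parallel") orthogonalization, tanh ("logcosh") contrast function
    ica = FastICA(n_components=10, algorithm="parallel", fun="logcosh")
    return np.stack([ica.fit_transform(avg.T).T for avg in averages])
```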
3.2 Feature Extraction and Classification
For each classification, we take the current average epoch and compute the distance between it and the reference signals of each class. The classification outcome is the class whose reference is closest to the current epoch. However, our distance measures are not stable under a noisy signal. Right after calibration there is a single reference signal per class, but as feedback is given, more reference signals are added for each class. When there are several references per class, there are several distance measurements, in which case the classification is decided by a majority vote over the classification outputs resulting from the individual distance measurements.
Similarly, given that several EEG channels are used, if we use single-variable distance measures (as opposed to multivariate measures), we obtain one distance value per channel, which is handled the same way as in the multiple-reference setting. In fact, when there is more than one distance measurement, the minimum distance classifier becomes similar to a k-nearest-neighbors (kNN) classifier. We can also use several different distance measures at the same time, to much the same effect. This work extends [30]; all details about the system, its interface, and its implementation are available there.
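A minimal sketch of one plausible reading of this voting scheme, assuming a Euclidean distance (the actual system offers three switchable measures) and univariate per-channel distances:

```python
import numpy as np
from collections import Counter

def classify(epoch: np.ndarray,
             references: dict[str, list[np.ndarray]]) -> str:
    """Minimum-distance classification with majority voting.
    epoch: (channels, samples); references maps each class label to a
    list of reference epochs of the same shape, growing with feedback."""
    votes = Counter()
    for ch in range(epoch.shape[0]):            # one distance per channel
        best_class, best_dist = None, np.inf
        for cls, refs in references.items():
            for ref in refs:                    # one distance per reference
                d = np.linalg.norm(epoch[ch] - ref[ch])
                if d < best_dist:
                    best_class, best_dist = cls, d
        votes[best_class] += 1                  # channel votes for nearest class
    return votes.most_common(1)[0][0]           # majority vote, kNN-like
```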
4 Experiments
The objective of the task is for the user to fly an AR.Drone and make it pass through large rings continuously during a 5 min session (Fig. 2). We reproduce the experiments of LaFleur et al. [4] in order to compare the performance of OC (good performance, slow training) to CLBCI. We observe how user learning and performance evolve over increasingly long training durations (1 day, 1 week, 1 month) for both approaches to training. We consider that flying a quadcopter, all the more so with a BCI, is an entertaining activity that is appreciated by users, as shown by the attendance at previous demos and exhibitions [21, 22].
4.1 Experimental Setup
We used a g.tec USBAmp EEG amplifier with 16 electrodes over the motor cortex, with an acquisition rate of 512 Hz. The electrodes were placed over the channels FCz, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6, and Pz (Fig. 3). We used the TOBI TiA signal server on a Windows XP VM and connected it to our Java BCI application through our own Java implementation of the TiA protocol.
We keep the same Motor Imagery BCI paradigm used by LaFleur et al. [4], with the same controls:
- Rise up / top target (for 1D/2D cursor tasks): imagined movement of both hands;
- Go down / bottom target: imagined movement of both feet;
- Go left / left target: imagined movement of the left hand;
- Go right / right target: imagined movement of the right hand;
- Resting state: constant forward motion.
For left and right turns, the drone was programmed to make a 90-degree turn while moving forward at constant speed, to achieve a smooth turn.
The operator makes the drone take off at the start of the experiment and land at the end. Figure 4 illustrates the commands for the drone (see Note 4).
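As an illustration, the mapping from classifier outputs to drone commands could look as follows. The command names and the `drone` client API are hypothetical; actual AR.Drone client libraries expose equivalent calls.

```python
# Hypothetical mapping from MI classes to drone commands (illustrative only).
COMMANDS = {
    "both_hands": "rise",           # rise up
    "both_feet":  "descend",        # go down
    "left_hand":  "turn_left_90",   # smooth 90-degree left turn, forward speed kept
    "right_hand": "turn_right_90",  # smooth 90-degree right turn, forward speed kept
    "rest":       "forward",        # resting state: constant forward motion
}

def on_classification(label: str, drone) -> None:
    # Forward each classifier output (two per second) to the drone;
    # take-off and landing are triggered by the operator, not the BCI.
    drone.send(COMMANDS[label])
```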
4.2 Protocol
We compare user learning (progression in performance over multiple sessions) between an OC training setting inspired by LaFleur et al. [4] and our CLBCI architecture over three different durations: 1 day, 1 week, and 1 month. As such, we are not precisely replicating the experiments of LaFleur et al. [4], as we consider shorter durations.
With this experiment we want to verify the following hypotheses:
- (H.1): For training durations below one month, CLBCI leads to better task performance than OC.
- (H.2): Beyond the one-month duration, OC leads to better task performance.
- (H.3): Despite OC’s better performance after a month of training, users prefer to train for 5 min prior to every piloting session rather than spend a full month before being able to pilot the drone for the first time.
We formed groups of users following the same training protocol over three different durations for OC and CLBCI (6 groups). 24 healthy subjects, aged between 23 and 44 and all BCI novices, participated in the experiments; we distributed them into groups of 4 for each Duration × System pair:
- 1 day training (1.d): (1.d.CLBCI) CLBCI – 4 subjects; (1.d.OC) OC – 4 subjects;
- 1 week training (1.w): (1.w.CLBCI) CLBCI – 4 subjects; (1.w.OC) OC – 4 subjects;
- 1 month training (1.m): (1.m.CLBCI) CLBCI – 4 subjects; (1.m.OC) OC – 4 subjects.
In their paper, for operant conditioning, LaFleur et al. followed a precise training protocol. First, users performed 1D cursor tasks for the left/right directions and the top/down directions until they could achieve a performance of at least 80 %. Then users performed a 2D cursor task for left, right, top, and down combined, until 80 % performance was achieved. Users then performed training sessions on a drone simulator before actually flying the drone. Their training sessions were spread over 2 months and could last up to 50 min. The rationale for choosing 80 % is that it clearly lies above the 60 % threshold considered acceptable task performance. The rationale for 60 % being considered usable is that it exceeds the empirical random classification performance by at least 10 % (45 % for 4 classes, as in this paper; Müller-Putz et al. [31]).
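For illustration, an empirical chance level in the spirit of Müller-Putz et al. [31] can be derived from the upper bound of a binomial confidence interval around the nominal chance accuracy. This sketch is our reconstruction under assumed trial counts, not the exact computation of [31]; notably, with 20 trials and 4 classes it yields the 45 % figure quoted above.

```python
from scipy.stats import binom

def adjusted_chance_level(n_trials: int, n_classes: int,
                          alpha: float = 0.05) -> float:
    """Upper bound of the (1 - alpha) binomial confidence interval around
    the nominal chance accuracy 1/n_classes: accuracies above this bound
    are unlikely to be explained by random classification."""
    p_chance = 1.0 / n_classes
    # Largest count m of correct trials still compatible with chance,
    # under the binomial null hypothesis X ~ B(n_trials, p_chance).
    m = binom.ppf(1.0 - alpha, n_trials, p_chance)
    return (m + 1) / n_trials

# With 20 trials and 4 classes, the nominal chance level is 25 %, but only
# accuracies above 45 % can be called better than random.
print(adjusted_chance_level(20, 4))  # -> 0.45
```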
For our experiment, in the OC condition, we replicate the same type of training sessions, where users must complete 1D Left/Right cursor (1D L/R) and Up/Down cursor (1D U/D) sessions, 2D cursor sessions, drone simulator sessions (DS), and a drone piloting session through a ring course (RS).
With CLBCI, instead of following the progressive OC training paradigm, we ask users to perform a single training session of 4 trials for all classes at once and then follow with a test session. The test session is the phase that directly involves the user:
- We explain to the user how to cycle through the three distance measures, so that they can determine the measure that yields the greatest perceived self-control by informally evaluating the resulting classification accuracy.
- We offer the possibility of adjusting the decision margin of the classifier in the same fashion, to find an equilibrium between classification speed and accuracy (see the sketch below).
- If some of the training trials are faulty (the users report they were distracted or that they moved), the individual training trials can be removed in real time.
Once these settings are determined in the first few sessions, users remember them and start off with a customized BCI.
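One plausible form of the decision margin mentioned above is a rejection rule: only emit a command when the best class wins by a clear margin. This sketch is our illustration of that speed/accuracy trade-off; the actual CLBCI margin rule may differ.

```python
def classify_with_margin(distances: dict[str, float],
                         margin: float) -> str | None:
    """distances maps class labels to the distance between the current
    epoch and that class's reference. A larger margin means fewer but
    more reliable classifications."""
    ranked = sorted(distances.items(), key=lambda kv: kv[1])
    best, runner_up = ranked[0], ranked[1]
    if runner_up[1] - best[1] >= margin:
        return best[0]          # confident decision: send the command
    return None                 # no decision: keep the resting behavior
```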
After the training sessions, the users still perform the simulator sessions (DS) and the ring course sessions (RS). We fixed all session lengths to 15 min and the piloting session length to 5 min. Within the experimental durations (1 day, 1 week, 1 month) we distributed each session type evenly with equal numbers of sessions, following the order 1D L/R, 1D U/D, 2D, DS, RS for OC, and the interface-only session followed by DS and RS for CLBCI. The exact schedule for the experimental sessions is presented in Fig. 5.
4.3 Evaluation
We evaluate the performance of each user intrinsically, as in LaFleur et al., through task-related measures: number of rings acquired (rings the drone has successfully passed through) – RA; number of wall collisions, when the drone collides with the walls of the room or with objects other than the rings – WC; number of ring collisions – RC; flight time, the time between ring acquisitions – FT; session length, the total flight time during a session – SL. From these measurements we derive several performance indices that allow us to evaluate different aspects of task performance (a sketch computing the three percentage indices follows the list):
- Average Rings per Maximum Flight (ARMF): the average number of rings acquired during one session by users. This measures absolute task performance without directly considering errors. The higher the ARMF, the better.
- Average Ring Acquisition Time (ARAT), in ms: the average of FT across a group. The lower the ARAT, the better the performance.
- Percent Total Correct (PTC): the percentage of ring acquisitions relative to the sum of ring collisions, wall collisions, and ring acquisitions. PTC = RA/(RA + RC + WC).
- Percent Valid Correct (PVC): the percentage of ring acquisitions relative to the sum of wall collisions and ring acquisitions; a valid ring acquisition is thus one that was not directly preceded by a wall collision. PVC = RA/(RA + WC).
- Percent Partial Correct (PPC): here we consider a ring collision to be a partial ring acquisition, so PPC is the sum of ring acquisitions and ring collisions over the sum of ring collisions, wall collisions, and ring acquisitions. PPC = (RA + RC)/(RA + RC + WC).
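The three percentage indices reduce to simple ratios over the per-session counts, as this minimal sketch shows (the data structure is our own; the paper only defines the formulas):

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    ra: int  # rings acquired
    rc: int  # ring collisions
    wc: int  # wall collisions

def ptc(s: SessionStats) -> float:
    """Percent Total Correct: RA / (RA + RC + WC)."""
    return s.ra / (s.ra + s.rc + s.wc)

def pvc(s: SessionStats) -> float:
    """Percent Valid Correct: RA / (RA + WC)."""
    return s.ra / (s.ra + s.wc)

def ppc(s: SessionStats) -> float:
    """Percent Partial Correct: (RA + RC) / (RA + RC + WC)."""
    return (s.ra + s.rc) / (s.ra + s.rc + s.wc)
```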
4.4 BCI Validation
We performed a simple validation that consisted in classifying a set of unlabeled signals against a set of reference signals for each class using 10-fold cross-validation. We tested the three distance measures that can be used with our system, to see whether one of them is best. The analysis was done over the signals of two subjects captured over the course of 10 sessions. Table 1 shows the cross-validation results from the offline analysis for each distance measure; the best result for each measure and each subject is in bold. We did not find that any of the distances had an absolute advantage over the others: it varies from subject to subject. We therefore added a control that allows dynamically switching the distance measure during the online phase, so that it can easily be adapted to each user.
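An offline validation along these lines could be sketched as follows. The three distance measures of our system are approximated here by placeholder metrics, and the minimum distance classifier by its 1-nearest-neighbor analogue (cf. Sect. 3.2); the data layout is assumed.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def compare_distances(epochs: np.ndarray, labels: np.ndarray) -> None:
    """epochs: (n_epochs, n_features) averaged ICA-processed epochs;
    labels: class of each epoch. Compares candidate distance metrics
    by 10-fold cross-validation."""
    for metric in ("euclidean", "manhattan", "chebyshev"):  # placeholders
        clf = KNeighborsClassifier(n_neighbors=1, metric=metric)
        scores = cross_val_score(clf, epochs, labels, cv=10)
        print(f"{metric}: {scores.mean():.2f} +/- {scores.std():.2f}")
```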
Moreover, we computed ICA dipole activation maps for the ICA components that explained 99 % of the variance, to check that activations are not due to artifacts (Fig. 6). Components 1–2 correspond to left-hand MI, while components 4–6 correspond to right-hand MI; components 5 and 6 seem slightly contaminated by eye movement artifacts, but the cumulative percentage of variance they explain is only 5 %.
4.5 Results
Given the small number of subjects in each condition, we first ran a Shapiro-Wilk test and found a p value of p = 0.234; although this does not reject normality, with such small samples the test has little power, so we cannot safely assume that the data are normally distributed. Consequently, we use the Kruskal-Wallis test to check for significant group effects and then the Mann-Whitney U test for post hoc pairwise analysis, with an FDR p-value adjustment for multiple comparisons.
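The statistical pipeline can be sketched as follows (an illustration under an assumed data layout, not the authors' analysis script):

```python
from itertools import combinations
from scipy.stats import shapiro, kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

def analyze(groups: dict[str, list[float]]) -> None:
    """groups maps a condition label (e.g. '1.w.CLBCI') to the
    per-subject scores of one performance indicator."""
    # Normality check (low power with n = 4 per group, hence the
    # non-parametric tests below)
    for name, scores in groups.items():
        print(name, "Shapiro-Wilk p =", shapiro(scores).pvalue)
    # Omnibus test for a group effect
    _, p = kruskal(*groups.values())
    print("Kruskal-Wallis p =", p)
    # Post hoc pairwise Mann-Whitney U tests with FDR correction
    pairs = list(combinations(groups, 2))
    raw = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]
    rejected, adjusted, _, _ = multipletests(raw, method="fdr_bh")
    for (a, b), p_adj, rej in zip(pairs, adjusted, rejected):
        print(f"{a} vs {b}: adjusted p = {p_adj:.3f}, significant = {rej}")
```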
Figure 7 shows the results in the form of a bar chart for each of the five indicators; the error bars represent the sample standard deviation. The general trend for all three percentage indicators is that CLBCI is better than OC: for training durations of one day and one week, CLBCI was better by about 8 %. However, OC performs better after one month of training (~3 % difference: 87 % for OC and 84 % for CLBCI). Although the difference is small, more training (e.g. 1–2 months) could give the edge to OC; however, it is likely that performance would not increase significantly. We observe a similar trend for ARAT (lower ARAT for CLBCI at 1 day and 1 week, then a lower ARAT for OC) and for ARMF. A single day of training leads to low PVC and PTC but acceptable PPC for CLBCI, meaning the system is not yet very usable. After one week of training, the performance improves for OC and CLBCI by the same amount, so that they maintain an 8 % difference. With above 75 % task performance, CLBCI is already usable after a week.
We asked our users informally what their impression of the experience was. Most found the experiment and experience interesting and enjoyable: it made them very motivated to perform well. One user said: “It was really a lot of fun! The control was not perfect, but even the part where I bumped into things was quite entertaining and thrilling to say the least.” The main concern raised was that the BCI classification rate is slow and makes smooth movement of the quadcopter difficult to achieve; moreover, due to the automatic balancing systems of the drone, users felt that its lateral movements were sometimes sudden (“the drone acted strangely sometimes”). Users from the OC groups complained that the training tasks were somewhat boring and that they would have liked to be able to pilot the drone sooner; they found it very difficult to concentrate towards the end of training sessions. On the other hand, users from the CLBCI groups were quite surprised that they could pilot the drone so quickly and pointed out that it motivated them to go further and improve their performance.
5 General Public CLBCI Demonstration
The CLBCI system has been demonstrated publicly and tested by over 60 people during several events aimed at introducing new technologies and interaction modalities to users. The session performed by each user lasted around 10–15 min in total (explanation, installation, training, piloting). Most people achieved control and enjoyed themselves greatly, even those who performed the test several times. Even users who did not achieve control had a lot of fun trying to control the drone and understand how the system works. The demonstrations attracted media attention and showed that people are keen to have access to this sort of technology.
6 Discussion
According to all indicators, CLBCI performs better for training durations up to a month (H.1), while OC training slightly outperforms CLBCI after a month of training (H.2). This validates both hypotheses within the confines of our experimental setting. However, in the broader scope of actual usage, we do not have sufficient evidence to say whether, for longer training periods, the performance of OC would remain better than that of continuously used CLBCI, or whether CLBCI would overtake OC over much longer durations. Only a very long-term in situ study could truly ascertain the best training practices.
CLBCI reaches a usable performance after about a week of use on average; moreover, by increasing the amount of training for each session when the user starts using the system, we can achieve better performance for immediate use.
In consequence, we can further hypothesize that over time, if we are satisfied with the classification performance for a particular application, we can reduce the training time proportionally to the increase in performance, obtaining constant performance with increasing comfort of use thanks to the shortening of the training times. In this regard, integrating implicit error potential detection from other state-of-the-art co-adaptation techniques would be beneficial and complementary to our own approach. Naturally, we can never go below a single training trial, which amounts to a few seconds of training. This point is fundamental because it implies that CLBCI and OC could potentially converge towards the same signal modulation in users. The difference, of course, lies in the long training period, during which CLBCI already provides a usable system whereas with OC the user has to wait until the end of the period.
In critical application areas (re-education, prosthesis control, etc.) it would be unacceptable to have anything less than the best achievable performance, and thus a system such as CLBCI would not be robust enough before it converged to the same level of modulation in users as OC. However, for non-critical tasks where user experience, comfort, and enjoyment are the criteria, systems like CLBCI are doubtless preferable. Based on informal user feedback from both the CLBCI and OC groups, we have some evidence that our third hypothesis (H.3) is true; however, a definite confirmation would have required a within-subjects experimental design and a formal questionnaire evaluation, which are incompatible with a between-subjects design aimed at evaluating the temporal evolution of performance.
7 Limitations and Future Work
The main limitation is signal variability. Because our training is shorter than for supervised systems, CLBCI needs supplemental filtering to minimize noise. The processing is costly: we can classify twice per second, which is limiting for continuous control. The limitation on training time constrains us to around four actions for realistically usable systems. Additionally, the low electrode count is not ideal for ICA; further studies with full 10–20 electrode coverage are required to better validate the CLBCI system.
Building signal databases (databases of EEG signals built offline from a large number of subjects for a given paradigm, which can serve to produce training-free BCIs) [32] may help to obtain BCI systems that require no training.
Beyond technical limitations, this work is preliminary in the sense that the evaluation is in vitro (controlled environment) and over a relatively short duration. In situ experiments over long periods of time are required to truly determine whether the hypothesized convergence of CLBCI and OC training actually occurs. Another limitation is the absence of a quantitative study of the user experience through a formal questionnaire; follow-up in situ studies should include a detailed questionnaire to precisely gauge how users perceive the training protocols. Finally, recruiting more subjects and comparing against synchronous supervised systems would be beneficial in future experiments.
8 Conclusion
We propose a “Co-learning BCI” (CLBCI) that reduces the initial amount of training and makes BCIs more suitable for recreational applications. We replicate an existing experiment where the BCI controls a drone and compare CLBCI to the original Operant Conditioning (OC) protocol over three durations of practice (1 day, 1 week, 1 month). We find that OC works at 80 % after a month of practice, but the performance is between 60 and 70 % before that; in a week of practice, CLBCI reaches a performance of around 75 %. We conclude that CLBCI is better suited for recreational use, while OC should be reserved for users for whom performance is the main concern. The experiment was performed in a controlled environment over a relatively short period; further in situ studies over the long term (1+ years) are needed to obtain a more accurate picture. Given our observations, it is likely that CLBCI (and more generally co-adaptive asynchronous BCIs) and OC eventually converge to the same performance once users have learned to modulate their signals correctly. In summary, the challenges and approaches presented and discussed in this paper show that there are many opportunities for further research. We have identified promising directions and actionable ideas (shorter initial training, co-learning) for researchers in this field. We thereby hope to inspire work that will unlock the full potential of BCIs in everyday applications.
Notes
4. The figure is taken from LaFleur et al. [4] and is made available in the open access article under a Creative Commons Attribution 3.0 license.
References
Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M.: Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113, 767–791 (2002)
Future BNCI: A Roadmap for Future Directions in Brain/Neuronal Computer Interaction Research. Future BNCI Program under the European Union Seventh Framework Programme, FP7/2007–2013, grant 248320 (2012). http://future-bnci.org/images/stories/Future_BNCI_Roadmap.pdf
Mancini, C., Rogers, Y., Bandara, A.K., Coe, T., Jedrzejczyk, L., Joinson, A.N., Price, B.A., Thomas, K., Nuseibeh, B.: Contravision: exploring users’ reactions to futuristic technology. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 153–162. ACM, New York (2010)
LaFleur, K., Cassady, K., Doud, A., Shades, K., Rogin, E., He, B.: Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. J. Neural Eng. 10, 046003 (2013)
Blankertz, B., Losch, F., Krauledat, M., Dornhege, G., Curio, G., Müller, K.R.: The Berlin brain-computer interface: accurate performance from first-session in BCI-naïve subjects. IEEE Trans. Biomed. Eng. 55, 2452–2462 (2008)
Lotte, F., Larrue, F., Mühl, C.: Flaws in current human training protocols for spontaneous brain-computer interfaces: lessons learned from instructional design. Front. Hum. Neurosci. 7, 568 (2013)
Nicolas-Alonso, L.F., Gomez-Gil, J.: Brain computer interfaces, a review. Sensors 12, 1211–1279 (2012)
Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B.: A review of classification algorithms for EEG-based brain-computer interfaces. J. Neural Eng. 4, R1–R13 (2007)
Nooh, A., Yunus, J., Daud, S.: A review of asynchronous electroencephalogram-based brain computer interface systems. Int. Conf. Biomed. Eng. Technol. 11, 55–59 (2011)
Nijholt, A., Reuderink, B., Oude Bos, D.: Turning shortcomings into challenges: brain-computer interfaces for games. In: Nijholt, A., Reidsma, D., Hondorp, H. (eds.) INTETAIN 2009. LNICST, vol. 9, pp. 153–168. Springer, Heidelberg (2009)
Kosmyna, N., Tarpin-Bernard, F.: Evaluation and comparison of a multimodal combination of BCI paradigms and eye tracking with affordable consumer-grade hardware in a gaming context. IEEE Trans. Comput. Intell. AI Games 5, 150–154 (2013)
Kos’myna, N., Tarpin-Bernard, F., Rivet, B.: Towards a general architecture for a co-learning of brain computer interfaces. In: 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 1054–1057. IEEE (2013)
Sung, Y., Cho, K., Um, K.: A development architecture for serious games using BCI (brain computer interface) sensors. Sensors (Basel) 12, 15671–15688 (2012)
Wang, Q., Sourina, O., Nguyen, M.K.: EEG-based “serious” games design for medical applications. In: 2010 International Conference on Cyberworlds, pp. 270–276 (2010)
Bell, C., Shenoy, P., Chalodhorn, R., Rao, C.: Control of a humanoid robot by a noninvasive brain-computer interface in humans. J. Neural Eng. 5, 214–220 (2008)
Hochberg, L.R., Serruya, M.D., Friehs, G.M., Mukand, J.A., Saleh, M., Caplan, A.H., Branner, A., Chen, D., Penn, R.D., Donoghue, J.P.: Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171 (2006)
Lécuyer, A., Lotte, F., Reilly, R., Leeb, R.: Brain-computer interfaces, virtual reality, and videogames. Computer 42, 66–72 (2008)
Guger, C., Holzner, C., Groenegress, C.: Brain computer interface for virtual reality control. In: Proceedings of the 7th European Symposium on Artificial Neural Networks, pp. 443–448 (2009)
Royer, A., Doud, A., Rose, M., He, B.: EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 18, 581–589 (2010)
Doud, A., Lucas, J., Pisansky, M., He, B.: Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLoS One 6, e26322 (2011)
Kosmyna, N., Tarpin-Bernard, F., Rivet, B.: Bidirectional feedback in motor imagery BCIs: learn to control a drone within 5 min. In: CHI 2014 Extended Abstracts on Human Factors in Computing Systems, pp. 479–482. ACM, New York (2014)
Kosmyna, N., Tarpin-Bernard, F., Rivet, B.: Drone, your brain, ring course – accept the challenge and prevail! In: UbiComp 2014 Adjunct, pp. 243–246. ACM, New York (2014)
Scherer, R., Lee, F., Schlogl, A., Leeb, R., Bischof, H., Pfurtscheller, G.: Toward self-paced brain-computer communication: navigation through virtual worlds. IEEE Trans. Biomed. Eng. 55, 675–682 (2008)
Faller, J., Scherer, R., Costa, U., Opisso, E., Medina, J., Müller-Putz, G.R.: A co-adaptive brain-computer interface for end users with severe motor impairment. PLoS ONE 9, e101168 (2014)
Faller, J., Vidaurre, C., Solis-Escalante, T., Neuper, C., Scherer, R.: Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 20, 313–319 (2012)
Fails, J.A., Olsen Jr., D.R.: Interactive machine learning. In: Proceedings of the 8th International Conference on Intelligent User Interfaces, pp. 39–45. ACM, New York (2003)
Duda, R., Hart, P., Stork, D.: Pattern Classification, 2nd edn. Wiley-Interscience, Hoboken (2001)
Barachant, A., Bonnet, S., Congedo, M., Jutten, C.: Multiclass brain-computer interface classification by Riemannian geometry. IEEE Trans. Biomed. Eng. 59, 920–928 (2012)
Hyvärinen, A., Oja, E.: A fast fixed-point algorithm for independent component analysis. Neural Comput. 9, 1483–1492 (1997)
Kosmyna, N., Tarpin-Bernard, F., Rivet, B.: Adding human learning in brain computer interfaces (BCIs): towards a practical control modality. ACM Trans. Comput.-Hum. Interact. (2015, to appear)
Müller-Putz, G.R., Scherer, R.: Better than random? A closer look on BCI results. Int. J. Bioelectromagn. 10, 52–55 (2008)
Congedo, M., Goyat, M., Tarrin, N., Varnet, L., Rivet, B., Ionescu, G., Jrad, N., Phlypo, R., Acquadro, M., Jutten, C.: “Brain Invaders”: a prototype of an open-source P300-based video game working with the OpenViBE platform. In: 5th International BCI Conference, Graz, Austria, pp. 280–283 (2011)