WO2014069996A1 - Method and system for a brain-computer interface - Google Patents
Method and system for a brain-computer interface
- Publication number
- WO2014069996A1 (PCT/NL2013/050762)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input signal
- whitening
- classification model
- parameters
- brain
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/372—Analysis of electroencephalograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the present invention relates to a method and system for a brain-computer interface (BCI), more particularly to a method for providing a brain-computer interface, comprising obtaining a classification model as part of a processing pipeline for processing an input signal, the input signal comprising a neural signature to be detected by the classification model.
- a key element in a BCI system is the software or classification model that classifies the neural signature of specific brain activity that the user produces with a cognitive task used to interact with the BCI.
- Creating a BCI classifier that uses a different cognitive task is laborious: it typically requires extensive neuroscientific knowledge and, in addition, months or even years to fine-tune parameters for a delicate series of signal processing and machine learning steps.
- This complex pipeline can then be calibrated to work for a specific user. For this reason, perhaps, the number of distinct types of brain activity that can be reliably used for BCI remains limited to date.
- EEG electroencephalography
- Prior art BCI detectors can be separated into methods for detecting specific neural signatures, such as the P300 ERP [3], and methods for detecting novel or anomalous data, wherein deviations from a "normal" resting state are interpreted as intentions to control the BCI [10].
- the article 'Real Coded GA-Based SVM for Motor Imagery Classification in a Brain-Computer Interface' by A. Bamdadian et al., 2011 9th IEEE International Conference on Control and Automation (ICCA), Santiago, Chile, December 19-21, 2011 discloses a motor imagery-based Brain-Computer Interface implementation.
- the article discloses the use of a real-coded genetic algorithm to determine the free kernel parameters of the Support Vector Machine (SVM).
- the method uses a Common Spatial Pattern (CSP) algorithm which is known as such.
- the implementation as disclosed can be seen as a spatial whitening specifically and only aimed at removing spatial correlations (disregarding any temporal correlations).
- an EEG signal is frequency filtered, subsequently the CSP algorithm is applied, after which the variance is calculated (i.e. a non-linear transformation).
- the output is then fed to a SVM for classification.
- the present invention seeks to provide an improved, more robust BCI method, which is less dependent on specific parameter choices, especially user-dependent parameter selections.
- obtaining the classification model comprises training the classification model using the input signal and assigning one or more labels to the input signal at different time points, wherein each label indicates whether the input signal at the associated time point of the label should be classified as target activity by the classification model, further comprising processing steps for whitening the input signal, specifying a polynomial kernel that induces a mapping of the whitened time series to a linearly separable feature space, and classifying the feature space using the output of the polynomial kernel.
- training the classification model comprises determining weights to be used in the classifying step using a numerical optimization procedure.
- the input signal used for a BCI application is of biological origin, e.g. a (sub)set of EEG signals.
- the method further comprises executing the classification model using the determined weights and an input signal (without the offset vector) to detect the specific neural signature trained for.
- the detector may be a presence/absence detector, or implement more complex prediction or classification schemes.
- the present invention makes it possible to avoid the laborious tuning of classifiers commonly required by prior art methods.
- a unified method is provided called
- the word “detector” is used to describe a classifier that discriminates data having a particular neural signature from all other data.
- using a non-linear classifier in the present invention makes it possible to implicitly use the information in the EEG signal related to the power of the EEG signal. In combination with the further features of the present independent claim embodiment, this allows obtaining an optimal filtering that maintains spatial, temporal and functional information in the processed signals.
- the few remaining free parameters directly control aspects of the behaviour of the classifier (e.g. the rate of false positives), or only bias the classifier to specific solutions as opposed to imposing hard limits.
- the method of the present invention can learn well-performing detectors despite a badly chosen bias.
- Such a black-box method for BCI development allows non-experts to design BCIs, even for novel user tasks with currently unknown neural signatures. It is expected that the associated faster development cycles and the increased number of researchers that can design BCIs will further accelerate the progress in BCI research.
- the polynomial kernel is a non-linear polynomial kernel, i.e. the polynomial kernel is at least a second degree polynomial kernel.
- the input signal is a sampled input signal (vector) in time space
- quadratic expansion allows the detection of power in different frequency bands
- cubic expansion reveals cross-frequency coupling between power and phase
- a fourth degree polynomial would allow to find power-power cross frequency coupling.
- the numerical optimization procedure is executed for a kernel machine, e.g. using a one-class support vector machine.
- a kernel machine in various implementations is well known in the art and can advantageously be used in the present invention embodiments.
- whitening the input signal comprises an adaptive whitening step for removing correlations in the input signal. This assures that the feature(s) to be detected are uncorrelated and have unity variance.
- the traces in the input signal (which are e.g. EEG channel traces) are less correlated and represent the activity local to the associated sensor.
- the whitening comprises adaptive sensor covariance estimation having a number of predetermined parameters to select: a rate of adaptation defined by a half-life of less than a desired value (e.g. 60 seconds, 50 seconds or 30 seconds) or a forgetting factor (λ) between zero and one, i.e. λ ∈ [0, 1], which determines the rate of adaptation.
- the whitening in a further embodiment comprises filtering the input signal using a high- pass filter with a cut-off frequency of less than 5 Hz, e.g. 1 Hz. This ensures that the neural signature to be trained and detected is not filtered out from the input signal and that correlations over time (covariance over time) are reduced in the input signal.
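A minimal sketch of such a high-pass stage, using a first-order IIR filter with a 1 Hz cut-off; the filter order and the 250 Hz sampling rate are illustrative assumptions, as the text only requires a low-order high-pass filter with a cut-off below 5 Hz:

```python
import numpy as np

def highpass_1st_order(x, fs, fc=1.0):
    """First-order IIR high-pass filter with cut-off frequency fc (Hz)."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * fc)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 250.0                               # assumed EEG sampling rate
t = np.arange(0, 4.0, 1.0 / fs)
sig = 50.0 + np.sin(2 * np.pi * 10 * t)  # large DC offset plus a 10 Hz rhythm
out = highpass_1st_order(sig, fs, fc=1.0)
# the DC offset decays away while the 10 Hz oscillation passes largely intact
```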
- the processing pipeline further comprises, after the whitening, windowing the input signal using predetermined windowing parameters to bias use of primarily the newest parts of the input signal.
- the windowing in one embodiment comprises a variance biasing step, using a function that specifies the fraction of variance for a sample age.
- this relates to simple predetermined windowing parameters, which are user independent.
- the present invention relates to a brain computer interface comprising an input unit for receiving (and optionally pre-processing) an input signal, and a processing unit connected to the input unit, the processing unit being arranged for executing the method according to any one of the present invention method embodiments.
- the present invention relates to a computer program product comprising computer executable instructions, which when loaded on a processing system (arranged to receive an input signal) provide the processing system with the functionality of the method according to any one of the present invention embodiments.
- the present invention method embodiments may be advantageously used in a number of specific applications, such as an entertainment application (e.g. a computer game), for operating a further apparatus (e.g. a machine, an appliance, etc.), or for training applications (e.g. in user feedback training).
- Fig. 1 shows a schematic view of an embodiment of the method according to the present invention
- Fig. 2 shows a schematic view of a further embodiment of the method according to the present invention
- Fig. 3 a shows a graph of an exemplary input signal as used in the present invention embodiments
- Fig. 3b shows a graph of a quadratic expansion of the time trace of Fig.3a
- Fig. 3c shows a graph of the weights of a linear classifier that is trained using the present invention for detecting oscillations with an ERD in the input signal of Fig. 3a;
- Fig. 4 shows a graphical representation of a function specifying the fraction of variance for a sample age as applied in an embodiment of the present invention
- Fig. 5a shows a graph of an input signal as can be used with the present invention embodiments.
- Fig. 5b shows a graph of the input signal of Fig. 5a, after execution of the whitening according to an embodiment of the present invention.
- the present invention aims to provide a method of learning detectors for characteristic brain activity fully from data.
- creating brain- computer interfaces (BCIs) typically requires extensive knowledge of the brain and of signal processing methods. This slows scientific progress and dissemination of BCI technology.
- a layperson can record his/her own brain signals and detect at which point in time he/she recorded brain activity of specific interest. Based on only these recorded signals and a label for each time point, the method of the present invention can create software for detecting the presence of certain brain activity in real-time. Effectively, the method of the present invention enables anyone to build BCIs.
- the present invention embodiments relate to a method for providing a brain- computer interface, comprising obtaining a classification model 13 as part of a processing pipeline 10 for processing an input signal E, the input signal E comprising a neural signature to be detected by the classification model (or detector) 13.
- a processing pipeline 10 for the brain-computer interface is shown in general form in the schematic drawing of Fig. 1.
- the input signal 11 is a signal of biological origin, e.g. an electro-physiological signal such as an EEG signal E (having multiple traces from the associated multiple sensors).
- these types of signals are sampled over time, and may have undergone pre-processing (e.g. amplification).
- the input signal is processed in a whitening block 12 and an optional windowing block 17, before being fed to the classification model 13.
- the output of the classification model 13 is a detection or prediction 14 of the actual occurrence of a neural signature of interest.
- the classification model 13 comprises application of a polynomial kernel 15 and classifying with a kernel method 16.
- obtaining the classification model 13 comprises training the classification model 13 using the input signal and assigning (one or more) labels to the input signal at different time points, wherein each label indicates whether the input signal at the associated time point of the label should be classified as target activity by the classification model 13, and the method comprises processing steps for whitening the input signal, specifying a polynomial kernel, and classifying the resulting feature space.
- training the classification model 13 comprises determining weights to be used in the classifying step using a numerical optimization procedure.
- Matrices are indicated with capitals (e.g. A), where I is the identity matrix.
- Column vectors are indicated with a harpoon (e.g. x).
- Lower case variables denote scalars (e.g. s).
- the Euclidean norm is indicated with ‖x‖.
- the transpose of M is indicated with M^T.
- a strict linear relation between a and b is indicated with a ∝ b.
- the set of integers is indicated with ℤ.
- d-dimensional real-valued vectors are x ∈ ℝ^d.
- a classification model g: ℝ^d → ℝ is learned based on example data of brain activity comprising a neural signature of interest.
- the classification model g(w^T x) is automatically found based on features x_i derived from past samples of E, such that it detects the neural signature for that detector 13 but does not respond to other feature distributions.
- the classification model is learned from examples or training data by using a numerical optimization procedure that minimizes a loss (cost) function that is defined by the classification method.
- a loss function l: (ℝ^p, ℝ) → ℝ is chosen, containing a term penalizing a modelling error e and a term penalizing model complexity with a so-called regularizer r:
- the prediction model m is parametrized with w that linearly combines a numerical feature description.
- a representation of an object to be classified is mapped from an input space X to a feature space F through a map φ: X → F.
- the numerical optimization procedure is executed for a kernel machine.
- the numerical optimization procedure is executed for a one-class support vector machine (SVM) [11, 13], which works well with high-dimensional feature spaces.
- the numerical optimization procedure is executed for other types of kernel machines [12].
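For example, with scikit-learn's `OneClassSVM` (an illustrative stand-in; the patent does not prescribe a particular implementation), a one-class SVM with an inhomogeneous polynomial kernel can be set up as:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in for whitened, windowed feature vectors of the target activity
# (200 examples, 20 features each -- illustrative dimensions).
X_train = rng.normal(size=(200, 20))

# kernel='poly' with coef0 > 0 gives the inhomogeneous kernel (x.x' + c)^d;
# nu bounds the fraction of training examples treated as outliers.
clf = OneClassSVM(kernel="poly", degree=2, coef0=1.0, gamma="scale", nu=0.1)
clf.fit(X_train)

pred = clf.predict(X_train)        # +1: matches the learned signature, -1: other
acceptance = (pred == 1).mean()    # roughly 1 - nu of the training data
```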
- Equation (6) has the same functional form but now introduces a linear transformation Γ^-1. Therefore, a transformation of the feature space φ(x) with Γ^-1 of an ℓ2-regularized classifier can be used to simulate Tikhonov regularization.
- a generalized feature space is used to eliminate the choice between methods based on signal amplitude (ERP) and methods based on signal power (ERD). That is, the generalized feature space can capture even more complex signal characteristics, such as cross-frequency coupling (CFC).
- a quadratic expansion allows the detection of power in different frequency bands
- cubic expansion reveals cross-frequency coupling (CFC) between power and phase
- the third degree terms can be interpreted as a product of amplitude and a power feature
- a fourth degree polynomial expansion can be used to find power-power CFC.
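The point can be made concrete with a small sketch: in a quadratic feature space, the power of a segment (the sum of squared samples) is a linear function of the features, so a linear weight vector can separate an ERD-like power decrease from normal power regardless of phase. The signal parameters below are illustrative:

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)

def quad_features(x):
    """Inhomogeneous degree-2 expansion: linear terms plus all pairwise
    products x_i * x_j (upper triangle of the outer product)."""
    prods = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate([x, prods])

# Two oscillations with unknown random phase: one with reduced amplitude
# (an ERD-like power decrease), one without.
t = np.arange(n)
erd    = 0.5 * np.sin(2 * np.pi * 0.1 * t + rng.uniform(0, 2 * np.pi))
no_erd = 1.5 * np.sin(2 * np.pi * 0.1 * t + rng.uniform(0, 2 * np.pi))

# A linear weight vector in feature space that reads out power: weight 1 on
# every squared term x_i^2 (position of (i, i) in the flattened triangle).
w = np.zeros(n + n * (n + 1) // 2)
sq_idx = np.array([i * n - i * (i - 1) // 2 for i in range(n)])
w[n + sq_idx] = 1.0

power_erd, power_no_erd = w @ quad_features(erd), w @ quad_features(no_erd)
# power_erd equals sum(erd**2) and lies clearly below power_no_erd
```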
- Fig. 3 shows graphs relating to an exemplary ERD processed in the processing pipeline 10 of the present invention embodiments with quadratic expansion of features (i.e. the polynomial kernel 15 is a second degree polynomial kernel, i.e. a non-linear polynomial kernel).
- the polynomial kernel 15 is a second degree polynomial kernel, i.e. a non-linear polynomial kernel.
- Fig. 3a an example of a time trace of the amplitude of an oscillation is shown.
- Target traces all have a slight decrease in power (variance) in the middle of the segment. This ERD cannot be detected with a linear transform of the raw samples, since the phase of the oscillation is unknown and variable.
- Fig. 3b a quadratic expansion of the time trace is shown.
- Fig. 3c shows the weights w of a linear classifier that is trained on these quadratic features to separate oscillations with an ERD from those without. Note that feature products with features from the middle segment carry all weight, and display diagonal banding at Δt's of multiples of (half) a cycle's length.
- the feature space of the present invention may be high-dimensional ('blow up'), requiring a large memory capacity and search space for training the detector.
- this is avoided with the kernel trick [1], known in the field of machine learning (ML), which allows us to work efficiently in high-dimensional feature spaces by representing a linear classifier with implicit dot products that can be evaluated efficiently.
- the dot product between input vectors x and x' of a polynomial expansion φ with degree d can be efficiently computed with the kernel trick: k(x, x') = (x^T x' + c)^d.
- the polynomial kernel is inhomogeneous, meaning that terms with degrees up to d are included in the expansion. Note that an explicit map φ from the input space to the feature space is not needed. Instead, the kernel function k(x, x') can be specified directly, thereby inducing an implicit mapping from the input space to the feature space.
Using a polynomial kernel to classify ERD and ERP effects
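A small numerical check of this identity for d = 2 and c = 1; the explicit map below is one standard construction, shown only to verify the kernel, and would never be materialized in practice:

```python
import numpy as np

def poly_kernel(x, xp, d=2, c=1.0):
    """Inhomogeneous polynomial kernel k(x, x') = (x.x' + c)^d."""
    return (x @ xp + c) ** d

def phi(x):
    """Explicit degree-2 feature map matching the kernel for d=2, c=1:
    a constant, scaled linear terms, and all n^2 pairwise products."""
    return np.concatenate([[1.0], np.sqrt(2.0) * x, np.outer(x, x).ravel()])

rng = np.random.default_rng(2)
x, xp = rng.normal(size=8), rng.normal(size=8)

implicit = poly_kernel(x, xp)   # O(n) evaluation via the kernel trick
explicit = phi(x) @ phi(xp)     # O(n^2) features materialized explicitly
# implicit == explicit (up to floating point)
```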
- adaptive whitening 12 is applied to the input signal for removing correlations therein (measurement sensors providing EEG signals are often heavily correlated).
- the whitening may further comprise adaptive sensor covariance estimation to account for e.g. changing background activity.
- the adaptive sensor covariance estimation may have a rate of adaptation defined by a half-life of less than a desired value (e.g. less than 60 seconds, e.g. less than 50 seconds, e.g. 30 seconds) or a forgetting factor (λ) between zero and one, i.e. λ ∈ [0, 1].
- the whitening may comprise filtering 17 the input signal with a high-pass filter having a suitable band or cut-off frequency, e.g. a cut-off frequency of less than 5 Hz, e.g. 1 Hz.
- the method may in a further group of embodiments further comprise windowing the input signal using predetermined windowing parameters to bias use of primarily the newest parts of the input signal.
- the window in principle is used with a (time) length between zero and infinity, hence the present invention does not impose a temporal limit for capturing samples up to a current point in time.
- Fig. 1 discloses a very general description of the method according to the present invention. Details and further embodiments will be explained and clarified in the following paragraphs, and with reference to the schematic drawing of Fig. 2.
- the (optional) windowing step 17 of the present invention aims to remove parameters, necessary in prior art methods, that define temporal intervals of interest. Ideally, no limit is imposed and the classifier uses arbitrarily long windows capturing all samples up to the current point in time. That is, the windows may have a length between zero and infinity.
- the windowing comprises a variance biasing step, using a function that specifies the fraction of variance for a sample age.
- the classifier is biased towards the last seconds by scaling the amplitude in the window such that the most recent h seconds are as influential as all the rest of the history.
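One way to realize this bias is an exponential amplitude decay whose half-life equals h, so that (for a long window) the newest h seconds contribute about half of the expected variance. The sampling rate and the exact decay shape are assumptions; the text only specifies the fraction-of-variance function f:

```python
import numpy as np

fs = 250.0        # assumed sampling rate
half_life = 3.0   # h: the most recent 3 s should carry half the variance

n = int(60 * fs)                          # 60 s of past samples
age = np.arange(n)[::-1] / fs             # age in seconds; newest sample last
gain = 2.0 ** (-age / (2.0 * half_life))  # amplitude decay: variance ~ gain^2

x = np.random.default_rng(3).normal(size=n)  # stand-in whitened trace
windowed = gain * x

# fraction of expected variance contributed by the newest half_life seconds
var = gain ** 2
recent_fraction = var[age <= half_life].sum() / var.sum()
# recent_fraction is close to 0.5 by construction
```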
- Tikhonov regularization can be implemented for methods that only implement an ℓ2 regularizer by mapping the features through Γ^-1.
- Fig. 4 shows a graph of an exemplary embodiment of a function f that specifies the fraction of variance for a sample age.
- the number of features n in (12) that share a given sample age needs to reflect the increase in variance in feature space caused by increasing the input space with terms of age a in input space.
- the growth in feature space caused by adding features in input space is the following for a polynomial kernel with degree d:
- the whitening step 12 of the present invention aims to remove correlations in the input space. More precisely, the above presented feature space assumes that the features in input space are white, that is, uncorrelated and with unit variance. Temporally, the EEG displays a 1/f spectrum, meaning that most power is contained in the lower frequencies. In an embodiment, a low-order high-pass infinite impulse response (IIR) filter removes most power from these low frequencies and can be used to reduce temporal correlations in the EEG signal. Spatially, the signals are heavily correlated due to volume conduction. In a further embodiment, to remove the spatial correlations and to rescale the high-pass filtered signals to unit variance, a whitening transform P is used, based on the sensor covariance matrix C (e.g. P = C^(-1/2)).
- E is a continuous recording as introduced earlier in this document, wherein each row of E comprises the time series for a specific sensor.
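A sketch of this spatial whitening on simulated correlated channels, using the symmetric square-root whitener P = C^(-1/2) computed from an eigendecomposition; the choice of square-root whitener is an assumption, as any P with P C P^T = I would do:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated 4-channel recording with strong spatial correlations, standing
# in for high-pass filtered EEG (rows = sensors, columns = samples).
mixing = rng.normal(size=(4, 4))
E = mixing @ rng.normal(size=(4, 5000))

C = np.cov(E)                                  # sensor covariance matrix
evals, evecs = np.linalg.eigh(C)
P = evecs @ np.diag(evals ** -0.5) @ evecs.T   # P = C^(-1/2)

W = P @ E                                      # whitened signal
# np.cov(W) is now (numerically) the identity: the channels are
# uncorrelated and have unit variance.
```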
- the covariance can be adaptively estimated with a low-pass filter such as the exponentially weighted moving average (EWMA), see [8,7]: C_t = λ C_{t-1} + (1 - λ) x_t x_t^T.
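The EWMA update and a possible half-life-to-forgetting-factor conversion can be sketched as follows; the conversion formula is an assumed convention, not stated in the text:

```python
import numpy as np

def ewma_cov_update(C, x, lam):
    """One EWMA step: C_t = lam * C_{t-1} + (1 - lam) * x x^T."""
    return lam * C + (1.0 - lam) * np.outer(x, x)

def forgetting_factor(half_life_s, fs):
    """Per-sample lam such that a sample's weight halves after half_life_s
    seconds: lam ** (half_life_s * fs) = 1/2."""
    return 0.5 ** (1.0 / (half_life_s * fs))

fs = 250.0
lam = forgetting_factor(30.0, fs)  # 30 s half-life, as in the example

rng = np.random.default_rng(5)
C = np.eye(3)                      # initial covariance estimate
for _ in range(20000):             # stream of zero-mean 3-channel samples
    C = ewma_cov_update(C, rng.normal(size=3), lam)
# C has converged close to the true covariance (the identity here)
```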
- Fig. 5a and 5b show examples of fragments of biologically originating signals before and after whitening.
- before whitening (Fig. 5a), the different channels are heavily correlated and contain signals at different scales.
- Fig. 2 shows a more detailed embodiment of the method for learning BCI detectors according to the present invention, including the system parameters to be selected beforehand, i.e. predetermined whitening parameters 12a (λ, t_1/2), predetermined windowing parameters 17a (λ, t_1/2), predetermined polynomial kernel parameters 15a (d, c).
- one or more raw EEG signals 11 are provided, wherein linear trends are removed from the raw EEG signals.
- the data is subsequently filtered using a high-pass filter 18 with a band or cut-off frequency of less than 5 Hz, e.g. 1 Hz (parameter 18a, Band).
- the data is then whitened 12 based on adaptive covariance estimation, wherein the half-life t_1/2 for the whitener is set to e.g. 30 seconds.
- the signals are subsequently windowed with an exponential decay, wherein the half-life t_1/2 is set to 3 seconds for the function f specifying the fraction of variance for the sample age.
- a support vector machine (e.g. a one-class SVM) and the kernel method are used (block 16), wherein the classifier's weights w are learned from the EEG signals.
- the high-pass cut-off frequency (band) and the rate of adaptation for the adaptive whitener (λ, t_1/2) need to be set such that the neural signature is not filtered out. With a low cut-off and a long half-life for the whitener 12 this is generally the case.
- the half-life for the variance decay 17 biases the detector 13, where we suspect that choosing a value encompassing the cognitive task is sufficient. Since we are not aware of any successful CFC-based BCIs, a quadratic kernel is presumably sufficient, but higher order kernels are conceivable in view of the present invention.
- the SVM's ν-parameter controls the number of false negatives - a choice that is driven by the intended application (see e.g. [11] or [13]).
- the default parameters support the learning of conventional BCI neural signatures, such as the P300, motor imagery and steady-state visually evoked potential (SSVEP).
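Collected in one place, the handful of predetermined parameters might look as follows; the names and the ν value are illustrative, not taken from the patent:

```python
# Hypothetical defaults for the pipeline's few free parameters.
DEFAULT_PARAMS = {
    "highpass_cutoff_hz": 1.0,     # band: below 5 Hz, e.g. 1 Hz
    "whitener_half_life_s": 30.0,  # adaptation rate of the covariance EWMA
    "window_half_life_s": 3.0,     # variance-decay bias toward recent samples
    "kernel_degree": 2,            # quadratic: ERD/ERP; 3+ would enable CFC
    "kernel_offset": 1.0,          # c: makes the polynomial kernel inhomogeneous
    "svm_nu": 0.1,                 # illustrative value; application-dependent
}
```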
- the present invention method embodiments as described above may be implemented on a general purpose computer or special purpose processing system, e.g. in the form of a computer program product.
- the computer program product comprises computer executable instructions, which when loaded on a processing system (adapted to properly receive or store the input signal) provide the processing system with the functionality of one of the present method embodiments.
- the present invention may also be embodied as a brain computer interface comprising an input unit for receiving an input signal, and a processing unit connected to the input unit. Again, the processing unit is then arranged for executing the method according to the present invention embodiments.
- the method embodiments may be specifically adapted for a dedicated application, by properly selecting the predetermined parameters as described above.
- Applications of the present invention BCI method include, but are not limited to the use in an entertainment application (e.g. a computer game), for operating a further apparatus (e.g. a machine or an appliance), for training applications (e.g. using feedback training to a user), or for monitoring purposes (e.g. monitoring sleep deprivation in car or truck drivers).
- Informal experimentation indicates that performance keeps improving with more training data, and with more than 3 of the 118 EEG channels.
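Collected as code, the user-independent defaults discussed above might look as follows; the names and the concrete values are illustrative assumptions, not specified by the description:

```python
# Hypothetical defaults for the "unifeat" pipeline, collecting the
# user-independent parameters discussed above. Names and values are
# illustrative, not taken from the patent text.
DEFAULTS = {
    "highpass_cutoff_hz": 1.0,    # below 5 Hz, so the neural signature survives
    "whitener_halflife_s": 30.0,  # long half-life for the adaptive whitener 12
    "variance_halflife_s": 3.0,   # should encompass the cognitive task (17)
    "kernel_degree": 2,           # quadratic kernel; higher orders conceivable
    "svm_nu": 0.05,               # one-class SVM nu: controls false negatives
}

def validate(params):
    """Sanity-check the parameter ranges implied by the description."""
    assert 0 < params["highpass_cutoff_hz"] < 5.0   # low cut-off
    assert params["kernel_degree"] >= 2             # non-linear kernel required
    assert 0 < params["svm_nu"] <= 1.0              # nu must lie in (0, 1]
    return True
```

Such a single table of defaults is what makes the method usable by non-experts: none of the entries depends on the user or on the neural signature trained for.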
Abstract
Method for providing a brain-computer interface, and a brain-computer interface, having a classification model as part of a processing pipeline. The input signal comprises a neural signature to be detected by the classification model. Obtaining the classification model comprises training the classification model using the input signal and assigning one or more labels to the input signal at different time points. Each label indicates whether the input signal at the associated time point of the label should be classified as a target activity. Further processing steps are whitening the input signal using whitening parameters to reduce temporal and spatial correlations in the input signal to obtain a whitened time series, specifying a polynomial kernel using polynomial kernel parameters that induces a mapping of the whitened time series to a linearly separable feature space, and classifying the feature space using the output of the polynomial kernel, classification parameters, and weights.
Description
Method and System for a Brain-Computer Interface
Field of the invention
The present invention relates to a method and system for a brain-computer interface (BCI), more particularly to a method for providing a brain-computer interface, comprising obtaining a classification model as part of a processing pipeline for processing an input signal, the input signal comprising a neural signature to be detected by the classification model.
Prior Art
BCIs promise to bring interaction based on manipulation of brain activity.
Among the intended applications for brain-based interaction are providing
communication and orthosis control for patients [14], improving human performance [9], monitoring user state [15], and entertainment [6].
A key element in a BCI system is the software or classification model that classifies the neural signature of specific brain activity that the user produces with a cognitive task used to interact with the BCI. Creating a BCI classifier for a different cognitive task is laborious: it typically requires extensive neuroscientific knowledge and, in addition, months or even years to fine-tune the parameters of a delicate series of signal processing and machine learning steps. This complex pipeline can then be calibrated to work for a specific user. For this reason, perhaps, the number of distinct types of brain activity that can be reliably used for BCI remains limited to date.
For example, consider the knowledge required to automatically discern brain activity as measured with electroencephalography (EEG) during imagination of different movements. First, one needs to decide whether to use the characteristic EEG time course of motor activity (event-related potential (ERP), i.e. change in amplitude of the electrical potential), or the associated changes in neural oscillation (event-related desynchronization (ERD), i.e. changes in frequency-specific power) to detect motor imagery. Assume we make the conventional choice to use oscillations. Now, we need to specify a high-pass and a low-pass cut-off frequency to isolate the μ- and β-band - two additional parameters that might differ between users. Then we need to isolate the specific time period in which this ERD is the strongest,
for example between 0.5 and 3 seconds after the user is requested to imagine moving his/her hand. To attenuate unrelated brain activity, spatial filters are typically trained with the common spatial patterns (CSP) algorithm, and the band power of about 6 (another choice) filters is kept. Finally, a linear classifier is trained on these band power features - and this support vector machine (SVM) requires a regularization parameter c to be set as well. In summary, to classify one of the most popular BCI user tasks, one needs to carefully select 7 user-dependent parameters, which might differ between users and may not have a global optimum yielding a well-performing detector. Just using more data cannot compensate for incorrectly set parameters. Further, one may need to repeat this process even for small changes in the experimental setup.
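The conventional pipeline just described can be sketched as follows; this is a simplified illustration of the parameter burden, with a placeholder in place of real CSP training and with illustrative parameter values:

```python
import numpy as np

def conventional_motor_imagery_features(eeg, fs, band=(8.0, 30.0),
                                        window=(0.5, 3.0), n_csp=6):
    """Sketch of the conventional ERD pipeline described above.

    Each keyword argument is one of the user-dependent parameters the
    text criticises: two band edges, two window edges and the number of
    CSP filters; the SVM's regularization c and the ERP-vs-ERD choice
    bring the total to 7. `csp_filters` below is a stand-in: real CSP
    training needs labelled two-class data.
    """
    # 1. crude band-pass via FFT masking (a real system would use an IIR filter)
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(eeg, axis=1)
    spec[:, (freqs < band[0]) | (freqs > band[1])] = 0.0
    filtered = np.fft.irfft(spec, n=n, axis=1)
    # 2. isolate the time window where the ERD is assumed strongest
    segment = filtered[:, int(window[0] * fs):int(window[1] * fs)]
    # 3. placeholder spatial filters (identity rows) followed by log band power
    csp_filters = np.eye(eeg.shape[0])[:n_csp]
    return np.log(np.var(csp_filters @ segment, axis=1) + 1e-12)

# A 10-channel, 4-second recording at 100 Hz yields n_csp power features.
rng = np.random.default_rng(0)
feats = conventional_motor_imagery_features(rng.standard_normal((10, 400)), fs=100)
```

The resulting feature vector would then be fed to a linear SVM, adding its regularization parameter c to the list of choices.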
There is some prior art on BCI detectors, which can be separated into methods for detecting specific neural signatures, such as the P300 ERP [3], and methods for detecting novel or anomalous data, wherein deviations from a "normal" resting state are interpreted as intentions to control the BCI [10].
The patent publication US 2008/0201278A1 discloses an approach to detect anomalies in a data stream, wherein the model describing the inlier distribution adapts to new samples. An application to a BCI similar to [10] is presented in this patent publication.
The producer of consumer-grade EEG hardware Emotiv has a series of very generic patents on BCI. Most notable is patent publication US 2007/0173733, which discloses detection of and interaction based on mental states.
The article 'Real Coded GA-Based SVM for Motor Imagery Classification in a Brain-Computer Interface' by A. Bamdadian et al., 2011 9th IEEE International Conference on Control and Automation (ICCA), Santiago, Chile, December 19-21, 2011, discloses a motor imagery-based Brain Computer Interface implementation. The article discloses the use of a real-coded genetic algorithm to determine the free kernel parameters of the Support Vector Machine (SVM). The method uses a Common Spatial Pattern (CSP) algorithm which is known as such. The implementation as disclosed can be seen as a spatial whitening specifically and only aimed at removing spatial correlations (disregarding any temporal correlations). Furthermore, in the disclosed embodiment, an EEG signal is frequency filtered, subsequently the CSP algorithm is applied, after which the variance is calculated (i.e. a non-linear transformation). The output is then fed to an SVM for classification. As a result, in the method disclosed in this article, only a single specific frequency band can be used.
The article by Wei-Yen Hsu et al. 'Wavelet-based envelope features with automatic EOG artifact removal: Application to single-trial EEG data', Expert Systems With Applications, vol. 39, no. 3, 15 February 2012 (2012-02-15), pages 2743-2749, discloses a method for removing EOG artefacts from EEG data. Many elements of the data processing steps are mentioned, however in different mutual relationships than in the present application's embodiments. E.g. an SVM classifier is used, but again only trained on power features, and as a result phase information is lost. EOG artefacts are removed from the EEG signals (de-correlation using an independent component analysis (ICA)), but no whitening takes place in the classification pipeline.
The article 'An optimal kernel feature extractor and its application to EEG signal classification', Neurocomputing, Elsevier Science Publishers, Amsterdam, vol. 69, no. 13-15, 1 August 2006 (2006-08-01), pages 1743-1748, discloses an optimized kernel feature extractor (KFE) for EEG analysis. Subsequently SVM classification is performed using a second order non-homogeneous polynomial as kernel function. However, selection of a proper frequency band is necessary to obtain a good result, and again, the polynomial kernel is applied to a power signal.
Summary of the invention
The present invention seeks to provide an improved, more robust BCI method, which is less dependent on specific parameter choices, especially user-dependent parameter selections. According to the present invention, a method according to the preamble defined above is provided, wherein obtaining the classification model comprises training the classification model using the input signal and assigning one or more labels to the input signal at different time points, wherein each label indicates whether the input signal at the associated time point of the label should be classified as target activity by the classification model, further comprising processing steps for
- whitening the input signal using predetermined whitening parameters to reduce temporal and spatial correlations in the input signal to obtain a whitened time series;
- specifying a polynomial kernel using predetermined polynomial kernel
parameters, that induces a mapping of the whitened time series to a linearly separable feature space,
- classifying the feature space using the output of the polynomial kernel, predetermined classification parameters, and weights,
wherein training the classification model comprises determining weights to be used in the classifying step using a numerical optimization procedure.
The input signal used for a BCI application is of biological origin, e.g. a (sub)set of EEG signals. In a further embodiment, the method further comprises executing the classification model using the determined weights and an input signal (without the offset vector) to detect the specific neural signature trained for. The detection may be a presence/absence detector, or use more complex prediction or classification schemes.
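A minimal sketch of the three processing steps (whitening, polynomial kernel, classification) using scikit-learn's off-the-shelf one-class SVM as the kernel machine. The static whitening and all concrete values here are illustrative assumptions - the whitener of the invention is adaptive - and the random data merely stands in for labelled EEG windows:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def whiten(X):
    """ZCA-style whitening: remove correlations, unit variance per component.
    A static sketch; the patent's whitener 12 adapts over time."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return Xc @ W

# Train a detector on windows labelled as target activity only: the
# one-class SVM with an inhomogeneous quadratic kernel implicitly works
# in the polynomial feature space, which is linearly separable.
rng = np.random.default_rng(1)
target_windows = rng.standard_normal((200, 16))   # stand-in for EEG windows
det = OneClassSVM(kernel="poly", degree=2, coef0=1.0, nu=0.1)
det.fit(whiten(target_windows))
scores = det.decision_function(whiten(target_windows))  # > 0 means "target"
```

With nu = 0.1 the one-class SVM allows roughly 10% of the training windows to fall outside the learned region, which is the sense in which the ν-parameter controls false negatives.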
The present invention avoids the laborious tuning of classifiers commonly required by prior art methods. A unified method called
"unifeat" is provided, which learns a detector for brain activity fully from data, that is, without relying on the designer's knowledge of neuroscience, psychology or signal processing. In this respect, the word "detector" is used to describe a classifier that discriminates data having a particular neural signature from all other data.
Application of the non-linear classifier in the present invention (the polynomial kernel) makes it possible to implicitly use the information in the EEG signal related to its power. In combination with the further features of the present independent claim embodiment, this yields optimal filtering that maintains spatial, temporal and functional information in the processed signals.
The few remaining free parameters (whitening parameters, polynomial kernel parameters) directly control aspects of the behaviour of the classifier (e.g. the rate of false positives), or only bias the classifier to specific solutions as opposed to imposing hard limits. Given enough data, the method of the present invention can learn well-performing detectors despite a badly chosen bias. Such a black-box method for BCI development allows non-experts to design BCIs, even for novel user tasks with currently unknown neural signatures. It is expected that the associated faster development cycles and the increased number of researchers that can design BCIs will further accelerate the progress in BCI research.
One could argue that using prior knowledge in addition to learning from the data yields better detectors. While this may be true in theory, in practice this knowledge is often absent, not very reliable, and typically does not improve the performance much
[2]. Acquiring more data, especially when the detector can generalize across subjects, is a much more efficient way to improve detection performance.
An overview of possible approaches to classify brain activity for BCI purposes is given in the review paper by Lotte et al. [4]. It is of relevance to note that non-linear classifiers have been used for BCI (e.g. see [4]), although simple linear classification methods often perform as well as more complex, non-linear methods, and are preferred over non-linear methods [5]. One of the reasons that linear methods are so powerful in BCI settings is that the feature extraction is non-linear, and usually designed to yield linearly separable feature distributions. This leaves little room for improvement for non-linear classification methods.
In BCI research, classification approaches with unbounded temporal history are as yet unknown.
In a further embodiment, the polynomial kernel is a non-linear polynomial kernel, i.e. the polynomial kernel is at least a second degree polynomial kernel. When the input signal is a sampled input signal (vector) in time space, quadratic expansion allows the detection of power in different frequency bands, cubic expansion reveals cross-frequency coupling between power and phase, and a fourth degree polynomial makes it possible to find power-power cross-frequency coupling.
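As a concrete illustration of why a quadratic expansion suffices for ERD-like effects, the sketch below (illustrative, not from the description) shows that the power of a windowed oscillation is a linear function of the quadratic features x_i·x_j, and therefore separable by a linear classifier in the expanded space regardless of the oscillation's phase:

```python
import numpy as np

# A windowed 8 Hz oscillation, 64 samples at 64 Hz, with known power but
# arbitrary phase. A linear function of the raw samples cannot detect a
# power change (the phase is unknown and variable), but the power is a
# *linear* function of the quadratic features x_i * x_j: it is the mean
# of the diagonal terms x_i^2.
t = np.arange(64) / 64.0

def oscillation(power, phase):
    return np.sqrt(2.0 * power) * np.sin(2 * np.pi * 8 * t + phase)

phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)
strong = np.array([oscillation(1.0, ph) for ph in phases])    # no ERD
weak = np.array([oscillation(0.25, ph) for ph in phases])     # ERD: power drop

def linear_in_quadratic_features(X):
    """Weights 1/n on the diagonal quadratic features, zero elsewhere."""
    return np.mean(X ** 2, axis=1)
```

Averaged over exact cycles, the read-out recovers the power (1.0 vs. 0.25 here) for every phase, which is precisely the ERD contrast a purely linear classifier on raw samples cannot express.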
The expansion of the data is dealt with by the use of the 'kernel trick' [1] known as such from the field of machine learning. This results in lower memory requirements.
In a further embodiment, the numerical optimization procedure is executed for a kernel machine, e.g. using a one-class support vector machine. A kernel machine in various implementations is well known in the art and can advantageously be used in the present invention embodiments.
In a specific group of embodiments, whitening the input signal comprises an adaptive whitening step for removing correlations in the input signal. This ensures that the feature(s) to be detected are uncorrelated and have unity variance. By applying the whitening step, the traces in the input signal (which are e.g. EEG channel traces) are less correlated and represent the activity local to the associated sensor.
In a specific embodiment, the whitening comprises adaptive sensor covariance estimation having a number of predetermined parameters to select: a rate of adaptation defined by a half-life of less than a desired value (e.g. 60 seconds, 50 seconds or 30 seconds) or a forgetting factor β ∈ [0,1], which determines the rate of adaptation.
An additional filtering may be used for the input signal prior to the whitening: the whitening in a further embodiment comprises filtering the input signal using a high-pass filter with a cut-off frequency of less than 5 Hz, e.g. 1 Hz. This ensures that the neural signature to be trained and detected is not filtered out from the input signal and that correlations over time (covariance over time) are reduced in the input signal.
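A possible reading of the adaptive whitening with a forgetting factor β can be sketched as follows; the exact update rule is an assumption consistent with the half-life description, not quoted from the text:

```python
import numpy as np

def adaptive_whitener(samples, beta, eps=1e-6):
    """Adaptive spatial whitening with forgetting factor beta in [0, 1].

    The running covariance C_t = beta * C_{t-1} + (1 - beta) * e_t e_t^T
    tracks slowly changing sensor correlations; each sample is then
    decorrelated with C_t^(-1/2). The concrete update rule is an assumed
    exponentially weighted estimate matching the half-life description.
    """
    p = samples.shape[1]
    C = np.eye(p)
    out = np.empty_like(samples, dtype=float)
    for t, e in enumerate(samples):
        C = beta * C + (1.0 - beta) * np.outer(e, e)
        vals, vecs = np.linalg.eigh(C + eps * np.eye(p))
        out[t] = vecs @ np.diag(vals ** -0.5) @ vecs.T @ e
    return out

# A forgetting factor relates to a half-life h (in samples) via
# beta = 2 ** (-1 / h): after h samples, old evidence has half its weight.
rng = np.random.default_rng(2)
mix = np.array([[1.0, 0.9], [0.9, 1.0]])       # strongly correlated channels
raw = rng.standard_normal((2000, 2)) @ np.linalg.cholesky(mix).T
white = adaptive_whitener(raw, beta=2 ** (-1 / 200))
```

After the estimate has converged, the channel correlation of the output is close to zero while the raw channels remain heavily correlated, which is the stated purpose of the whitening step.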
In a further group of embodiments, the processing pipeline further comprises, after the whitening, windowing the input signal using predetermined windowing parameters to bias use of primarily the newest parts of the input signal (as in general for
BCI applications the features to be detected are in the last few seconds of a biologically originating input signal). The windowing in one embodiment comprises a variance biasing step, using a function that specifies the fraction of variance for a sample age.
Again, this relates to simple predetermined windowing parameters, which are user independent.
In a further aspect, the present invention relates to a brain computer interface comprising an input unit for receiving (and optionally pre-processing) an input signal, and a processing unit connected to the input unit, the processing unit being arranged for executing the method according to any one of the present invention method
embodiments.
In an even further aspect, the present invention relates to a computer program product comprising computer executable instructions, which when loaded on a processing system (arranged to receive an input signal) provide the processing system with the functionality of the method according to any one of the present invention embodiments.
The present invention method embodiments may be advantageously used in a number of specific applications, such as an entertainment application (e.g. a computer game), for operating a further apparatus (e.g. a machine, an appliance, etc.), or for training applications (e.g. in user feedback training).
Short description of drawings
The present invention will be explained in further detail hereinafter based on a number of exemplary embodiments with reference to the drawings, wherein:
Fig. 1 shows a schematic view of an embodiment of the method according to the present invention;
Fig. 2 shows a schematic view of a further embodiment of the method according to the present invention;
Fig. 3a shows a graph of an exemplary input signal as used in the present invention embodiments;
Fig. 3b shows a graph of a quadratic expansion of the time trace of Fig.3a;
Fig. 3c shows a graph of the weights of a linear classifier that is trained using the present invention for detecting oscillations with an ERD in the input signal of Fig. 3a;
Fig. 4 shows a graphical representation of a function specifying the fraction of variance for a sample age as applied in an embodiment of the present invention;
Fig. 5a shows a graph of an input signal as can be used with the present invention embodiments; and
Fig. 5b shows a graph of the input signal of Fig. 5a, after execution of the whitening according to an embodiment of the present invention.
Detailed description of exemplary embodiments
The present invention aims to provide a method of learning detectors for characteristic brain activity fully from data. In prior art methods, creating brain-computer interfaces (BCIs) typically requires extensive knowledge of the brain and of signal processing methods. This slows scientific progress and dissemination of BCI technology. The present invention alleviates this issue: a layperson can record his/her own brain signals and mark at which points in time brain activity of specific interest was recorded. Based on only these recorded signals and a label for each time point, the method of the present invention can create software for detecting the presence of certain brain activity in real-time. Effectively, the method of the present invention enables anyone to build BCIs. The method is based on a polynomial kernel, which can implicitly use time domain, band-power and even cross-frequency coupling features, combined with a regularizer that promotes solutions based on recently recorded samples. A one-class kernel machine is subsequently used to optimize detectors that selectively fire when the detector's target brain activity is produced. We show that the detectors learned with this generic method can match and outperform conventional methods that were developed and optimized for specific neuronal signatures. In general
terms, the present invention embodiments relate to a method for providing a brain- computer interface, comprising obtaining a classification model 13 as part of a processing pipeline 10 for processing an input signal E, the input signal E comprising a neural signature to be detected by the classification model (or detector) 13.
A processing pipeline 10 for the brain-computer interface is shown in general form in the schematic drawing of Fig. 1. The input signal 11 is a signal of biological origin, e.g. an electro-physiological signal such as an EEG signal E (having multiple traces from the associated multiple sensors). In general these types of signals are sampled over time, and may have undergone pre-processing (e.g. amplification).
The input signal is processed in a whitening block 12 and an optional windowing block 17, before being fed to the classification model 13. The output of the classification model 13 is a detection or prediction 14 of the actual occurrence of a neural signature of interest. The classification model 13 comprises application of a polynomial kernel 15 and classifying with a kernel method 16.
In general terms, obtaining the classification model 13 comprises training the classification model 13 using the input signal and assigning (one or more) labels to the input signal at different time points, wherein each label indicates whether the input signal at the associated time point of the label should be classified as target activity by the classification model 13, and the method comprises processing steps for
- whitening 12 the input signal using predetermined whitening parameters to reduce temporal and spatial correlations in the input signal;
- specifying 15 a polynomial kernel using predetermined polynomial kernel
parameters, that induces a mapping of the input signal from an input space to a linearly separable feature space,
- classifying 16 the feature space using the output of the polynomial kernel,
predetermined classification parameters, and weights,
wherein training the classification model 13 comprises determining weights to be used in the classifying step using a numerical optimization procedure.
When actually using the trained classification model, only an input signal 11 is used without assigning labels to the input signal at different time points, and the classification model 13 is executed using the determined weights and the input signal.
In the rest of this document we use the following notation. Matrices are indicated with capitals (e.g. A), where I is the identity matrix. Column vectors are
indicated with a harpoon (e.g. x⃗). Lower case variables denote scalars (e.g. s). The Euclidean norm is indicated with ||·||. The transpose of M is indicated with M^T. A strict linear relation between a and b is indicated with a ∝ b. The set of integers is indicated with ℤ. d-dimensional real-valued vectors are in ℝ^d. According to the present invention, a classification model g: ℝ^d → ℝ is learned based on example data of brain activity comprising a neural signature of interest. Assume that we regularly sample p sensors over time t, resulting in a continuous recording of EEG data that is collected in a matrix E = [e_1, e_2, ..., e_n], e_t ∈ ℝ^p. To each sample e_t, a label y_t ∈ {0,1} is assigned at time point t indicating whether the classification model g should classify the sample e_t as target activity (y_t = 1) or not (y_t = 0). For convenience, the labels y can be represented sparsely by denoting the offsets of the target events: [t | y_t = 1, t ∈ ℤ]. The classification model g(w^T x) is automatically found based on features x_i derived from past samples of E such that it detects the neural signature for that detector 13, but does not respond to other feature distributions.
The classification model is learned from examples or training data by using a numerical optimization procedure that minimizes a loss (cost) function that is defined by the classification method. Typically, a loss function ℓ: (ℝ^p, ℝ) → ℝ is chosen containing a term penalizing a modelling error e and a term penalizing model complexity with a so-called regularizer r:

ℓ = Σ_i e(m(x_i, w), y_i) + r(w). (1)
The prediction model m is parametrized with w, which linearly combines a numerical feature description. A representation of an object to be classified is mapped from an input space X to a feature space through a map φ:

m(x_i, w) = g(w^T φ(x_i)), (2)

wherein the function g maps the linear projection to an approximation of the requested output y_t. A common choice for the regularizer r is Tikhonov regularization:

r(w) = ||Γw||^2, (3)

which degenerates to the common ℓ2 regularizer with Γ = I. Different choices for the error term and regularization term lead to different classification methods.
In an embodiment, the numerical optimization procedure is executed for a kernel machine. In another embodiment, the numerical optimization procedure is executed for a one-class support vector machine (SVM) [11, 13], which works well
with high-dimensional feature spaces. In a group of embodiments, the numerical optimization procedure is executed for other types of kernel machines [12].
Most off-the-shelf regularization methods do not implement Tikhonov regularization, but one can use an appropriate transformation of the feature space having the same effect. To show this, substitute

γ = Γw (4)

to simplify the regularization term of the intended ℓ2 regularizer as defined in equation (3):

r(w) = ||Γw||^2 = ||γ||^2. (5)

Similarly, substitute γ in the prediction model m as defined in equation (2):

m(x_i, w) = g(γ^T Γ^-T φ(x_i)). (6)

Note that equation (6) has the same functional form but now introduces a linear transformation Γ^-T. Therefore, a transformation of the feature space φ(x_i) with Γ^-T of an ℓ2 regularized classifier can be used to simulate Tikhonov regularization.
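The substitution described above (γ = Γw with the features mapped through Γ^-T) can be verified numerically; the matrix and vectors below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
# An arbitrary invertible Tikhonov matrix and a candidate weight vector.
Gamma = np.triu(rng.standard_normal((d, d))) + 2 * np.eye(d)
w = rng.standard_normal(d)
phi_x = rng.standard_normal(d)            # a feature vector phi(x_i)

# The substitution gamma = Gamma w turns the Tikhonov penalty
# ||Gamma w||^2 into a plain l2 penalty ||gamma||^2 ...
gamma = Gamma @ w
tikhonov_penalty = np.linalg.norm(Gamma @ w) ** 2
l2_penalty = np.linalg.norm(gamma) ** 2

# ... while the linear projection is preserved when the features are
# mapped through Gamma^{-T}: w . phi(x) == gamma . (Gamma^{-T} phi(x)).
proj_original = w @ phi_x
proj_transformed = gamma @ (np.linalg.inv(Gamma).T @ phi_x)
```

Both quantities agree to machine precision, so an off-the-shelf ℓ2-regularized solver applied to the transformed features indeed simulates Tikhonov regularization.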
To describe the "unifeat" method of the present invention embodiments, we work backwards from our aim to eliminate the need to choose neural-signature specific and/or user specific parameters for the design of a detector, to the actual method because choices for early stages in the processing pipeline 10 are driven by
assumptions made in later stages.
According to the present invention, a generalized feature space is used to eliminate the choice between methods based on signal amplitude (ERP) and methods based on signal power (ERD). Moreover, the generalized feature space can capture even more complex signal characteristics, such as cross-frequency coupling (CFC).
Specifically, we use a polynomial expansion of the amplitude sampled over time: a quadratic expansion allows the detection of power in different frequency bands, cubic expansion reveals cross-frequency coupling (CFC) between power and phase, wherein the third degree terms can be interpreted as a product of amplitude and a power feature, and a fourth degree polynomial expansion can be used to find power-power CFC.
Fig. 3 shows graphs relating to an exemplary ERD processed in the processing pipeline 10 of the present invention embodiments with quadratic expansion of features (i.e. the polynomial kernel 15 is a second degree polynomial kernel, i.e. a non-linear polynomial kernel). In Fig. 3a, an example of a time trace of the amplitude of an oscillation is shown. Target traces all have a slight decrease in power (variance) in the middle of the segment. This ERD cannot be detected with a linear transform of the raw
samples, since the phase of the oscillation is unknown and variable. In Fig. 3b, a quadratic expansion of the time trace is shown. Dark spots indicate a negative correlation in time and light spots indicate a positive correlation in time (these two types alternate in the graph of Fig. 3b). The graph shown in Fig. 3c shows the weights w of a linear classifier that is trained on these quadratic features to separate oscillations with an ERD from those without. Note that feature products with features from the middle segment carry all weight, and display diagonal banding at Δt's of multiples of (half) a cycle's length.
The feature space of the present invention may be high-dimensional ('blow up'), requiring a large memory capacity and search space for training the detector. To circumvent this issue, we can use the so-called "kernel trick" [1] known in the field of machine learning (ML), which allows us to work in high-dimensional feature spaces by representing a linear classifier with implicit dot products that can be evaluated efficiently. Conveniently, the dot product between input vectors x and x' of a polynomial expansion φ with degree d can be computed with the kernel trick:
k(x, x') = φ(x) · φ(x') = (x · x' + c)^d. (7)

With c ≠ 0, the polynomial kernel is inhomogeneous, meaning that terms with degrees up to d are included in the expansion. Note that an explicit map φ from the input space to the feature space is not needed. Instead, the kernel function k(x, x') can be specified directly, thereby inducing an implicit mapping from the input space to the feature space. Using a polynomial kernel to classify ERD and ERP effects
simultaneously is a first advantage of the present invention embodiments.
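The kernel identity (7) can be checked numerically against an explicit quadratic feature map; the expansion below is the standard one for the inhomogeneous quadratic kernel and is written out here only for verification (in practice, avoiding it is the whole point of the kernel trick):

```python
import numpy as np
from itertools import combinations_with_replacement

def explicit_quadratic_expansion(x, c):
    """Explicit feature map phi for the kernel (x . x' + c)^2.

    The induced features are: c, sqrt(2c) * x_i, x_i^2, and
    sqrt(2) * x_i * x_j for i < j - the standard expansion, reproduced
    here only to verify the kernel identity numerically.
    """
    feats = [c]
    feats += [np.sqrt(2 * c) * xi for xi in x]
    for i, j in combinations_with_replacement(range(len(x)), 2):
        scale = 1.0 if i == j else np.sqrt(2.0)
        feats.append(scale * x[i] * x[j])
    return np.array(feats)

rng = np.random.default_rng(4)
x, xp, c = rng.standard_normal(6), rng.standard_normal(6), 1.0
kernel_value = (x @ xp + c) ** 2       # the kernel trick: one dot product
explicit_value = (explicit_quadratic_expansion(x, c)
                  @ explicit_quadratic_expansion(xp, c))
```

For a p-dimensional input the explicit expansion already has 1 + p + p(p+1)/2 features at degree 2; the kernel evaluation stays O(p) regardless of the degree d.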
Using a polynomial kernel 15, power fluctuations at arbitrary time offsets with arbitrary frequencies can be detected. This removes the need for parameters that set the frequency band of interest, since the Nyquist frequency defines an upper limit, and the lowest frequency depends on the length of the analysis window. The relative dominance of low frequencies in EEG signals biases the classifier 13 to use low-frequency signals, though. This bias is caused by the regularizer of the classifier 13, which favours classification weights w with a low magnitude to reduce the model's complexity. This regularizer penalizes the amplification of small signals; the typically decreasing power of higher frequencies in EEG thus makes high-frequency signals more costly for the classifier to use.
In an embodiment, adaptive whitening 12 is applied to the input signal for removing correlations therein (measurement sensors providing EEG signals are often heavily correlated). In a further embodiment, the whitening may further comprise adaptive sensor covariance estimation to account for e.g. changing background activity. The adaptive sensor covariance estimation may have a rate of adaptation defined by a half-life of less than a desired value (e.g. less than 60 seconds, e.g. less than 50 seconds, e.g. 30 seconds) or a forgetting factor β ∈ [0,1]. In an even further embodiment, the whitening may comprise filtering 17 the input signal with a high-pass filter having a suitable band or cut-off frequency, e.g. a cut-off frequency of less than 5 Hz, e.g. 1 Hz.
The method may in a further group of embodiments further comprise windowing the input signal using predetermined windowing parameters to bias use of primarily the newest parts of the input signal. The window in principle is used with a (time) length between zero and infinity, hence the present invention does not impose a temporal limit for capturing samples up to a current point in time.
The embodiment shown in Fig. 1 discloses a very general description of the method according the present invention. Details and further embodiments will be explained and clarified in the following paragraphs, and with reference to the schematic drawing of Fig. 2.
The (optional) windowing step 17 of the present invention aims to remove the parameters that define temporal intervals of interest in prior art methods. Ideally, no limit is imposed and the classifier uses arbitrarily long windows capturing all samples up to the current point in time. That is, the windows may have a length between zero and infinity. In a further embodiment, the windowing comprises a variance biasing step, using a function that specifies the fraction of variance for a sample age. In this embodiment, the classifier is biased towards the last seconds by scaling the amplitude in the window such that the most recent h seconds are as influential as all the rest of the history.
Assume that the informative changes in brain activity happen a few seconds before the detector 13 is supposed to fire. By biasing the classifier's projection vector w to mainly use these last few seconds, we can retain a discriminative feature space even though we do not impose a limit on the length of the temporal history used. The
bias is specified through a function f denoting the preferred fraction of variance contributed by features from the youngest a ∈ ℤ samples:

f(a) = var(u^T x) / var(w^T x) = (u^T C u) / (w^T C w), with u_i = w_i if τ(i) ≤ a, and u_i = 0 otherwise, (8)

where C is the covariance of features x, and u is a sparse variant of w with zeroed weights for features i that correspond to samples with an age τ(i) greater than a.
To simplify formula (8), we assume without loss of generality that C = I, that is, x is white (i.e. uncorrelated with unit variance). The case C ≠ I will be described in the following paragraphs. Now we derive the preferred relative scale for a weight w_i depending on its sample's age τ(i). The contribution of features with exactly age a to the variance can thus be expressed as

f(a) - f(a-1) = Σ_{i : τ(i) = a} w_i²,   (10)

so that, with n_a features sharing age a and equal preferred magnitudes,

f(a) - f(a-1) = n_a w̄_a²,  i.e.  w̄_a = sqrt((f(a) - f(a-1)) / n_a).   (11)
A bias towards this specific, relative scaling can be induced with Tikhonov regularization. With the Tikhonov matrix Γ we can transform w into a space where the regularizer's minimal ℓ2 penalty corresponds with (11):

r(w) = ||Γ w||²,  Γ = C^{-1/2} diag(γ),  γ_i = sqrt( n_{τ(i)} / [f(τ(i)) - f(τ(i)-1)] ),   (12)

where n_{τ(i)} indicates the summed variance of the features that share the sample age τ(i). Due to the whiteness assumption, the summed variance corresponds to the number of features that is summed over. Tikhonov regularization can be implemented for methods that only implement an ℓ2 regularizer by mapping the features through Γ^{-T}.
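The feature-mapping trick, implementing Tikhonov regularization with a plain ℓ2 regularizer by transforming the features, can be checked numerically. The sketch below is illustrative only: it uses ridge regression rather than the SVM of the description, assumes C = I so that the Tikhonov matrix is diagonal, and all data and penalty values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))        # toy feature matrix (illustrative)
y = rng.standard_normal(50)             # toy targets
gamma = np.array([1.0, 2.0, 4.0, 8.0])  # hypothetical per-feature penalties
G = np.diag(gamma)                      # Tikhonov matrix (C = I assumed)

# Direct Tikhonov solution: argmin ||y - X w||^2 + ||G w||^2
w_direct = np.linalg.solve(X.T @ X + G.T @ G, X.T @ y)

# Same solution via a plain L2 ridge on transformed features X' = X G^-1
Xp = X @ np.linalg.inv(G)
v = np.linalg.solve(Xp.T @ Xp + np.eye(4), Xp.T @ y)
w_mapped = np.linalg.inv(G) @ v         # map the weights back
```

The two solutions coincide, which is why a learner that only offers an ℓ2 penalty can still realize the age-dependent bias of (12).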
A wide range of functions can be used for the fraction of the output variance f, as long as f is monotonically increasing, f(0) ≥ 0 and lim_{x→∞} f(x) = 1. For simplicity, we use an exponential decay (normalized to have a unit integral) to model f, parameterized with a half-life h specifying the number of youngest samples that contribute half of the variance:
f(a) = 1 - (1 - α)^a,  with α = 1 - exp(-ln 2 / h).   (13)
Due to the self-similar nature of this f, the half-life h introduces a gentle bias towards young features while still allowing features from much older data to be used. Fig. 4 shows a graph of an exemplary embodiment of a function f that specifies the fraction of variance for a sample age. A half-life h = 3 has the result that solutions in which half of the total variance is produced by samples with an age τ of up to 3 seconds are preferred. With increasing age, the samples contribute less and less to the classifier's variance.
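A quick numerical check of the half-life parameterization, assuming the reconstructed exponential-decay form f(a) = 1 - (1 - α)^a with α = 1 - exp(-ln 2 / h):

```python
import math

def variance_fraction(a, h):
    """Preferred fraction of variance contributed by the youngest a
    samples, modeled as an exponential decay with half-life h.
    (Illustrative sketch of the decay used for f.)"""
    alpha = 1.0 - math.exp(-math.log(2.0) / h)
    return 1.0 - (1.0 - alpha) ** a

# With h = 3, the youngest 3 samples contribute exactly half the
# variance; the fraction approaches 1 for very old samples.
half = variance_fraction(3, h=3)
```

This matches the example of Fig. 4: with h = 3, half of the variance comes from samples up to 3 seconds old, while older samples still contribute a geometrically decaying share.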
When non-linear kernels are used - such as a quadratic kernel for ERD classification - the number of features n_a in (12) that share a given sample age needs to reflect the increase in variance in feature space caused by extending the input space with terms of age a. For example, the growth in feature space caused by adding n_a features in input space can be derived for a polynomial kernel with degree d.
The whitening step 12 of the present invention aims to remove correlations in the input space. More precisely, the feature space presented above assumes that the features in input space are white, that is, they are uncorrelated and have unit variance. Temporally, the EEG displays a 1/f spectrum, meaning that most power is contained in the lower frequencies. In an embodiment, a low-order high-pass infinite impulse response (IIR) filter removes most power from these low frequencies and can be used to reduce temporal correlations in the EEG signal. Spatially, the signals are heavily correlated due to volume conduction. In a further embodiment, to remove the spatial correlations and to rescale the high-pass filtered signals to unit variance, a whitening transform P is used, based on the sensor covariance matrix C:
P E E^T P^T ∝ P C P^T = I,  with P = C^{-1/2}.   (15)
Here E is a continuous recording as introduced earlier in this document, wherein each row of E comprises the time series for a specific sensor.
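A minimal numpy sketch of this batch whitening step; the eigendecomposition route to C^{-1/2} is one standard choice, and the synthetic "EEG" below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# E: continuous recording with one row per sensor (3 sensors, 1000
# samples), spatially mixed to mimic volume conduction
E = rng.standard_normal((3, 3)) @ rng.standard_normal((3, 1000))

C = np.cov(E)                                  # sensor covariance matrix
evals, evecs = np.linalg.eigh(C)               # C is symmetric
P = evecs @ np.diag(evals ** -0.5) @ evecs.T   # P = C^(-1/2), as in (15)
W = P @ E                                      # whitened signals
```

After the transform, the sensor covariance of W is the identity, i.e. the channels are decorrelated and rescaled to unit variance as required by the feature space above.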
In yet a further embodiment, to account for changing background activity, the covariance can be adaptively estimated with a low-pass filter such as the exponentially weighted moving average (EWMA), see [7,8]:
Ĉ_t = (1 - β) Ĉ_{t-1} + β e_t e_t^T,   (16)

where e_t is the vector of sensor samples at time t and β ∈ [0, 1] is a forgetting factor determining the rate of adaptation. Effectively, this results in a whitening transform that passes short-term changes while removing long-term changes in the covariance. An example of EEG signals processed with an adaptive whitener 12 as described above is presented in Figs. 5a and 5b.
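The adaptive whitener can be sketched as a streaming loop that tracks the covariance with (16) and applies C^{-1/2} per sample. The function name, the identity initialization and the eps guard are illustrative assumptions, not part of the application:

```python
import numpy as np

def ewma_whiten(E, beta=0.05, eps=1e-6):
    """Adaptively whiten a multichannel signal (rows = sensors) by
    tracking the sensor covariance with an exponentially weighted
    moving average; beta is the forgetting factor of eq. (16)."""
    n, T = E.shape
    C = np.eye(n)                   # initial covariance estimate
    out = np.empty_like(E, dtype=float)
    for t in range(T):
        e = E[:, t:t + 1]
        C = (1.0 - beta) * C + beta * (e @ e.T)   # EWMA update, eq. (16)
        evals, evecs = np.linalg.eigh(C)
        P = evecs @ np.diag((evals + eps) ** -0.5) @ evecs.T
        out[:, t] = (P @ e).ravel()
    return out

W = ewma_whiten(np.random.default_rng(2).standard_normal((3, 500)))
```

A small beta corresponds to a long half-life, so long-term covariance changes are removed while short-term signal structure is passed through, as in Figs. 5a and 5b.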
Fig. 5a and 5b show examples of fragments of biologically originating signals before and after whitening. Before whitening (Fig. 5a), the different channels are heavily correlated and contain signals at different scales. After whitening 12 (Fig. 5b), the traces are much less correlated and display activity local to the sensor.
Taking the above details into account, Fig. 2 shows a more detailed embodiment of the method for learning BCI detectors according to the present invention, including the system parameters to be selected beforehand, i.e. the predetermined whitening parameters 12a (β, t_1/2), the predetermined windowing parameters 17a (Γ, t_1/2), and the predetermined polynomial kernel parameters 15a (d, c). In an embodiment, one or more raw EEG signals 11 (multiple signals E) are provided, wherein linear trends are removed from the raw EEG signals. The data is subsequently filtered using a high-pass filter 18 with a band or cut-off frequency of less than 5 Hz, e.g. 1 Hz (parameter 18a, Band). The data is then whitened 12 based on adaptive covariance estimation, wherein the half-life t_1/2 for the whitener is set to e.g. 30 seconds. The signals are subsequently windowed with an exponential decay, wherein the half-life t_1/2 is set to 3 seconds for the function f specifying the fraction of variance for the sample age. A non-linear polynomial kernel is specified as a second degree polynomial kernel, as in (1) with d = 2 and c = 1. Then a support vector machine (e.g. a one-class SVM) and the kernel method are used (block 16), wherein the classifier's weights w are learned from the EEG signals.
There are still a number of free parameters in the embodiment shown in Fig. 2. The high-pass cut-off frequency (band) and the rate of adaptation for the adaptive whitener (t_1/2) need to be set such that the neural signature is not filtered out. With a low cut-off and a long half-life for the whitener 12 this is generally the case. The half-life for the variance decay 17 biases the detector 13, where we suspect that choosing a value encompassing the cognitive task is sufficient. Since we are not aware of any successful CFC-based BCIs, a quadratic kernel is presumably sufficient, but higher-order kernels are conceivable in view of the present invention. And the SVM's ν-parameter controls the number of false negatives - a choice that is driven by the intended application (see e.g. [11] or [13]). To summarize, although there are still a number of free parameters, the default parameters support the learning of conventional BCI neural signatures, such as the P300, motor imagery and the steady-state visually evoked potential (SSVEP).
The present invention's method embodiments as described above may be implemented on a general-purpose computer or special-purpose processing system, e.g. in the form of a computer program product. The computer program product comprises computer-executable instructions which, when loaded on a processing system (adapted to properly receive or store the input signal), provide the processing system with the functionality of one of the present method embodiments. The present invention may also be embodied as a brain-computer interface comprising an input unit for receiving an input signal, and a processing unit connected to the input unit. Again, the processing unit is then arranged for executing the method according to the present invention embodiments.
The method embodiments may be specifically adapted for a dedicated application, by properly selecting the predetermined parameters as described above.
Applications of the present BCI method include, but are not limited to, use in an entertainment application (e.g. a computer game), for operating a further apparatus (e.g. a machine or an appliance), for training applications (e.g. using feedback training to a user), or for monitoring purposes (e.g. monitoring sleep deprivation in car or truck drivers).
Examples
We demonstrated the method on one subject. Due to memory restrictions, we used only channels C3, Cz and C4 over the motor cortex (the task is motor imagery). The "unifeat" pipeline 10 was parameterized as follows: we used a sliding window with length l = 5 and half-life h = 4, spaced 1/3 of a second apart. A one-class SVM with ν = 1/2 and a second degree polynomial kernel (d, c) = (2, 1) was trained only on windows ending 4 seconds after the imagination of movement with the right hand started. The ν-parameter gives an upper bound on the number of errors on the training data, or in this case, false negatives. The first 80% of the session was used to train the detector 13; the remaining data was used for evaluation. While a conventional ERD pipeline based on whitened covariance features of an 8-30 Hz band-pass could not obtain performance better than chance, the results of the "unifeat" method are quite promising: accuracy = 91%, AUC-ROC = 75%, and an information transfer rate (ITR) of 2.7 bits per minute.
Informal experimentation indicates that performance keeps improving with more training data, and with more than 3 of the 118 EEG channels.
The present invention embodiments have been described above with reference to a number of exemplary embodiments as shown in the drawings. Modifications and alternative implementations of some parts or elements are possible, and are included in the scope of protection as defined in the appended claims.
List of References
[1] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[2] Moritz Grosse-Wentrup, Christian Liefhold, Klaus Gramann, and Martin Buss. Beamforming in non-invasive brain-computer interfaces. IEEE Transactions on Biomedical Engineering, 56(4):1209-1219, 2009.
[3] Yi-Hung Liu, Jui-Tsung Weng, Zhi-Hao Kang, Jyh-Tong Teng, and Han-Pang Huang. An improved SVM-based real-time P300 speller for brain-computer interface. In 2010 IEEE International Conference on Systems, Man and Cybernetics (SMC), pages 1748-1754, 2010.
[4] Fabien Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi. A review of classification algorithms for EEG-based brain-computer interfaces. Journal of Neural Engineering, 4:R1-R13, 2007.
[5] Klaus-Robert Müller, Charles W. Anderson, and Gary E. Birch. Linear and nonlinear methods for brain-computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):165-169, 2003.
[6] Anton Nijholt, Danny Plass-Oude Bos, and Boris Reuderink. Turning shortcomings into challenges: Brain-computer interfaces for games. Entertainment Computing, 1(2):85-94, October 2009.
[7] Boris Reuderink. Robust Brain-Computer Interfaces. PhD thesis, University of Twente, October 2011.
[8] Boris Reuderink, Jason Farquhar, Mannes Poel, and Anton Nijholt. A subject-independent brain-computer interface based on smoothed, second-order baselining. In Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2011), pages 4600-4604, 2011. doi: 10.1109/IEMBS.2011.6091139.
[9] Paul Sajda, Eric Pohlmeyer, Jun Wang, Lucas C. Parra, Christoforos Christoforou, Jacek Dmochowski, Barbara Hanna, Claus Bahlmann, Maneesh Kumar Singh, and Shih-Fu Chang. In a blink of an eye and a switch of a transistor: Cortically coupled computer vision. Proceedings of the IEEE, 98(3):462-478, 2010.
[10] Gerwin Schalk, Peter Brunner, Lester A. Gerhardt, Horst Bischof, and Jonathan R. Wolpaw. Brain-computer interfaces (BCIs): Detection instead of classification. Journal of Neuroscience Methods, 167:51-62, 2008.
[11] Bernhard Schölkopf, John C. Platt, John Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13:1443-1471, 2001.
[12] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, 2001.
[13] David M. J. Tax and Robert P. W. Duin. Support vector domain description. Pattern Recognition Letters, 20:1191-1199, 1999.
[14] Jonathan R. Wolpaw, Niels Birbaumer, Dennis J. McFarland, Gert Pfurtscheller, and Theresa M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767-791, 2002.
[15] Thorsten O. Zander and Christian Kothe. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. Journal of Neural Engineering, 8(2):025005 (5pp), 2011.
Claims
1. Method for providing a brain-computer interface, comprising:
obtaining a classification model as part of a processing pipeline for processing an input signal (E), the input signal comprising a neural signature to be detected by the classification model,
wherein obtaining the classification model comprises training the classification model using the input signal and assigning one or more labels to the input signal at different time points, wherein each label indicates whether the input signal at the associated time point of the label should be classified as a target activity by the classification model, further comprising processing steps for
- whitening the input signal using predetermined whitening parameters to reduce temporal and spatial correlations in the input signal to obtain a whitened time series;
- specifying a polynomial kernel using predetermined polynomial kernel
parameters, that induces a mapping of the whitened time series to a linearly separable feature space,
- classifying the feature space using the output of the polynomial kernel,
predetermined classification parameters, and weights,
wherein training the classification model comprises determining weights to be used in the classifying step using a numerical optimization procedure.
2. Method according to claim 1, further comprising executing the classification model using the determined weights and an input signal.
3. Method according to any one of claims 1-2, wherein the polynomial kernel is a non-linear polynomial kernel.
4. Method according to any one of claims 1-3, wherein the numerical optimization procedure is executed for a kernel machine.
5. Method according to any one of claims 1-4, wherein whitening the input signal comprises an adaptive whitening step for removing correlations in the input signal.
6. Method according to claim 5, wherein the whitening comprises adaptive sensor covariance estimation having a rate of adaptation defined by a half-life of less than 60 seconds, e.g. less than 50 seconds, e.g. 30 seconds.
7. Method according to any one of claims 1-6, wherein the whitening further comprises filtering the input signal using a high-pass filter with a cut-off frequency of less than 5 Hz, e.g. 1 Hz.
8. Method according to any one of claims 1-7, further comprising, after the whitening, windowing the input signal using predetermined windowing parameters to bias use of primarily the newest parts of the input signal.
9. Method according to any one of claims 1-8, wherein the windowing comprises a variance biasing step, using a function that specifies the fraction of variance for a sample age.
10. Brain computer interface comprising an input unit for receiving an input signal, and a processing unit connected to the input unit, the processing unit being arranged for executing the method according to any one of claims 1-9.
11. Computer program product comprising computer executable instructions, which when loaded on a processing system provide the processing system with the functionality of the method according to any one of claims 1-9.
12. Use of the method according to any one of claims 1-9 in an entertainment application.
13. Use of the method according to any one of claims 1-9 for operating a further apparatus.
14. Use of the method according to any one of claims 1-9 for training applications.
15. Use of the method according to any one of claims 1-9 for monitoring purposes.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12190426 | 2012-10-29 | ||
EP12190426.2 | 2012-10-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014069996A1 (en) | 2014-05-08 |
Family
ID=47225964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/NL2013/050762 WO2014069996A1 (en) | 2012-10-29 | 2013-10-29 | Method and system for a brain-computer interface |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2014069996A1 (en) |
Non-Patent Citations (5)
Title |
---|
ATIEH BAMDADIAN ET AL: "Real coded GA-based SVM for motor imagery classification in a Brain-Computer Interface", CONTROL AND AUTOMATION (ICCA), 2011 9TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 19 December 2011 (2011-12-19), pages 1355 - 1359, XP032101946, ISBN: 978-1-4577-1475-7, DOI: 10.1109/ICCA.2011.6138097 * |
BORIS REUDERINK ET AL: "The Impact of Loss of Control on Movement BCIs", IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATIONENGINEERING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 19, no. 6, 1 December 2011 (2011-12-01), pages 628 - 637, XP011379901, ISSN: 1534-4320, DOI: 10.1109/TNSRE.2011.2166562 * |
REUDERINK B ET AL: "A subject-independent brain-computer interface based on smoothed, second-order baselining", ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY,EMBC, 2011 ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE, IEEE, 30 August 2011 (2011-08-30), pages 4600 - 4604, XP032319708, ISBN: 978-1-4244-4121-1, DOI: 10.1109/IEMBS.2011.6091139 * |
SUN S ET AL: "An optimal kernel feature extractor and its application to EEG signal classification", NEUROCOMPUTING, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 69, no. 13-15, 1 August 2006 (2006-08-01), pages 1743 - 1748, XP027970431, ISSN: 0925-2312, [retrieved on 20060801] * |
WEI-YEN HSU ET AL: "Wavelet-based envelope features with automatic EOG artifact removal: Application to single-trial EEG data", EXPERT SYSTEMS WITH APPLICATIONS, vol. 39, no. 3, 15 February 2012 (2012-02-15), pages 2743 - 2749, XP028333745, ISSN: 0957-4174, [retrieved on 20110828], DOI: 10.1016/J.ESWA.2011.08.132 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016033686A1 (en) | 2014-09-04 | 2016-03-10 | University Health Network | Method and system for brain activity signal-based treatment and/or control of user devices |
CN107072583A (en) * | 2014-09-04 | 2017-08-18 | 大学健康网络 | For the treatment based on cerebration signal and/or the method and system of the control of user equipment |
EP3188656A4 (en) * | 2014-09-04 | 2018-05-02 | University Health Network | Method and system for brain activity signal-based treatment and/or control of user devices |
US10194858B2 (en) | 2014-09-04 | 2019-02-05 | University Health Network | Method and system for brain activity signal-based treatment and/or control of user devices |
US11955217B2 (en) | 2014-09-04 | 2024-04-09 | University Health Network | Method and system for brain activity signal-based treatment and/or control of user devices |
CN107301675A (en) * | 2017-06-16 | 2017-10-27 | 华南理工大学 | A kind of three-dimensional modeling method based on brain-computer interface |
US10582316B2 (en) | 2017-11-30 | 2020-03-03 | Starkey Laboratories, Inc. | Ear-worn electronic device incorporating motor brain-computer interface |
US10694299B2 (en) | 2017-11-30 | 2020-06-23 | Starkey Laboratories, Inc. | Ear-worn electronic device incorporating motor brain-computer interface |
US11102591B2 (en) | 2017-11-30 | 2021-08-24 | Starkey Laboratories, Inc. | Ear-worn electronic device incorporating motor brain-computer interface |
US11638104B2 (en) | 2017-11-30 | 2023-04-25 | Starkey Laboratories, Inc. | Ear-worn electronic device incorporating motor brain-computer interface |
CN110251119B (en) * | 2019-05-28 | 2022-07-15 | 深圳数联天下智能科技有限公司 | Classification model obtaining method, HRV data classification device and related products |
CN110251119A (en) * | 2019-05-28 | 2019-09-20 | 深圳和而泰家居在线网络科技有限公司 | Disaggregated model acquisition methods, HRV data classification method, device and Related product |
CN110974221A (en) * | 2019-12-20 | 2020-04-10 | 北京脑陆科技有限公司 | Mixed function correlation vector machine-based mixed brain-computer interface system |
CN113762346A (en) * | 2021-08-06 | 2021-12-07 | 广东工业大学 | Semi-supervised autism identification method and system fusing causal features of brain area |
CN113762346B (en) * | 2021-08-06 | 2023-11-03 | 广东工业大学 | Semi-supervised autism identification method and system fusing causal features of brain regions |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014069996A1 (en) | Method and system for a brain-computer interface | |
Raza et al. | Adaptive learning with covariate shift-detection for motor imagery-based brain–computer interface | |
Murugavel et al. | Hierarchical multi-class SVM with ELM kernel for epileptic EEG signal classification | |
Vidaurre et al. | BioSig: the free and open source software library for biomedical signal processing | |
Vidaurre et al. | Machine-learning-based coadaptive calibration for brain-computer interfaces | |
Shoker et al. | Artifact removal from electroencephalograms using a hybrid BSS-SVM algorithm | |
Jaiswal et al. | Epileptic seizure detection in EEG signal with GModPCA and support vector machine | |
Rejer | EEG feature selection for BCI based on motor imaginary task | |
Oppelt et al. | Combining scatter transform and deep neural networks for multilabel electrocardiogram signal classification | |
Yadav et al. | EEG/ERP signal enhancement through an optimally tuned adaptive filter based on marine predators algorithm | |
Khanam et al. | Electroencephalogram-based cognitive load level classification using wavelet decomposition and support vector machine | |
Malekmohammadi et al. | An efficient hardware implementation for a motor imagery brain computer interface system | |
Van Maanen et al. | The discovery and interpretation of evidence accumulation stages | |
Nakra et al. | Deep neural network with harmony search based optimal feature selection of EEG signals for motor imagery classification | |
López-García et al. | Multivariate pattern analysis techniques for electroencephalography data to study flanker interference effects | |
Khan et al. | A novel framework for classification of two-class motor imagery EEG signals using logistic regression classification algorithm | |
Model et al. | Learning subject-specific spatial and temporal filters for single-trial EEG classification | |
Chandel et al. | Computer Based Detection of Alcoholism using EEG Signals | |
Ma et al. | Online learning using projections onto shrinkage closed balls for adaptive brain-computer interface | |
Parto Dezfouli et al. | Single-trial decoding from local field potential using bag of word representation | |
Mattar et al. | Electroencephalography features extraction and deep patterns analysis for robotics learning and control through brain-computer interface | |
Bhattacharyya et al. | Motor imagery-based neuro-feedback system using neuronal excitation of the active synapses | |
Brynestad | EEG-based motor imagery classification using dwt-based feature extraction with svm and riemannian geometry-based classifiers | |
CN114366122B (en) | Motor imagery analysis method and system based on EEG brain-computer interface | |
Saibene et al. | Human-machine interaction: EEG electrode and feature selection exploiting evolutionary algorithms in motor imagery tasks |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13786327; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13786327; Country of ref document: EP; Kind code of ref document: A1 |