Generalized Deep Learning EEG Models for Cross-Participant and Cross-Task Detection of the Vigilance Decrement in Sustained Attention Tasks
<p>Figure 1. Examples of the different Air Traffic Controller Task stimuli [<a href="#B19-sensors-21-05617" class="html-bibr">19</a>].</p>
<p>Figure 2. Shapes for the 3-Stimulus Task. The target is a large circle (<b>A</b>), the non-target distractor a large square (<b>B</b>), and the standard distractor a small circle (<b>C</b>) [<a href="#B18-sensors-21-05617" class="html-bibr">18</a>].</p>
<p>Figure 3. Examples of the different line task stimuli, with lines of the same length on the left and of different lengths on the right.</p>
<p>Figure 4. BIS measures and the corresponding best-fit lines for (<b>A</b>) the Air Traffic Controller task (top), (<b>B</b>) the Oddball task (middle), and (<b>C</b>) the Line task (bottom). Each line on the left traces one participant’s BIS measures over the duration of the task; the lines on the right are the corresponding best-fit lines. BIS varies from bin to bin: some participants decrease steadily throughout the task, some decrease initially and then recover, and some alternate between decreasing and increasing BIS. Note that every participant’s best-fit line has a negative slope, indicating that every participant’s first bin is their most attentive bin, with their largest BIS measure.</p>
<p>Figure 5. Visual illustration of a causal TCN [<a href="#B61-sensors-21-05617" class="html-bibr">61</a>]. This TCN has a block size of 1, a dilation list (1,2,4,8) (i.e., dilation factor 8), and a kernel size of 2. This results in a receptive field of <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>·</mo> <mn>1</mn> <mo>·</mo> <mn>8</mn> <mo>=</mo> <mn>16</mn> </mrow> </semantics></math>.</p>
<p>Figure 6. Visual illustration of a causal TCN with stacked blocks [<a href="#B62-sensors-21-05617" class="html-bibr">62</a>]. This TCN has a block size of 2, a dilation list (1,2,4,8) (i.e., dilation factor 8), and a kernel size of 2. This results in a receptive field of <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>·</mo> <mn>2</mn> <mo>·</mo> <mn>8</mn> <mo>=</mo> <mn>32</mn> </mrow> </semantics></math>.</p>
<p>Figure 7. Visual representation of a standard AE architecture [<a href="#B65-sensors-21-05617" class="html-bibr">65</a>].</p>
<p>Figure 8. Visual representation of the MLPNN classifier. The MLPNN architecture consists of three fully-connected hidden layers with hidden units <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>u</mi> </mrow> </semantics></math> and the ReLU activation, each followed by a dropout layer with a dropout rate <math display="inline"><semantics> <mrow> <mi>d</mi> <mi>r</mi> </mrow> </semantics></math>.</p>
<p>Figure 9. Visual representation of the TCN-AE architecture. Each block corresponds to a layer, with hyperparameters for that layer <span class="html-italic">italicized</span>. The activation function for the TCN and Conv1D layers is in parentheses, using ReLU for the encoder and no activation function for the decoder. The dimensions of the input as it passes through the architecture are provided in the upper-right of each layer, starting at <span class="html-italic">T</span> = 250 for the sequence length and 64 for the features (corresponding to the 64 electrodes). The latent space dimensions are 50 × <span class="html-italic">L</span>, with <span class="html-italic">L</span> being a hyperparameter.</p>
<p>Figure 10. Visual representation of the TCN-AE classifier. The Encoder and Decoder comprise the AE architecture, with the latent space then used as input to the FCN classifier shown at the bottom. The FCN classifier architecture consists of two fully-connected hidden layers with hidden units <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>u</mi> </mrow> </semantics></math>, each followed by a dropout layer with a dropout rate <math display="inline"><semantics> <mrow> <mi>d</mi> <mi>r</mi> </mrow> </semantics></math>.</p>
<p>Figure 11. Visual representation of the TCN classifier. Each block corresponds to a layer, with hyperparameters for that layer <span class="html-italic">italicized</span>, and the activation function in parentheses.</p>
<p>Figure 12. Participant validation accuracies for the MLPNN model, with 9 participants having validation accuracies statistically greater than random chance. Participants 2, 3, 7, 8, and 11 did not have validation accuracies greater than random chance. This model achieved a 7-fold CV accuracy of 64% (95% CI: 0.59, 0.69).</p>
<p>Figure 13. Participant validation accuracies for the TCN model, with 3 participants (1, 7, and 12) having validation accuracies statistically greater than random chance. This model achieved a 7-fold CV accuracy of 56% (95% CI: 0.51, 0.61).</p>
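The receptive-field arithmetic quoted in the TCN captions (kernel size · block size · largest dilation factor) can be sanity-checked with a few lines of Python. This is an illustrative sketch of the formula exactly as stated in the captions, not code from the paper:

```python
def tcn_receptive_field(kernel_size: int, block_size: int, dilation_factor: int) -> int:
    """Receptive field of a causal TCN, per the figure captions:
    kernel_size * block_size * largest dilation factor."""
    return kernel_size * block_size * dilation_factor

# Single-block example: kernel 2, block size 1, dilations (1, 2, 4, 8)
print(tcn_receptive_field(kernel_size=2, block_size=1, dilation_factor=8))  # 16
# Stacked-block example: kernel 2, block size 2, same dilations
print(tcn_receptive_field(kernel_size=2, block_size=2, dilation_factor=8))  # 32
```

Doubling the block count doubles the receptive field, which is why stacking blocks is the cheapest way to cover a longer input history at a fixed kernel size.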
Abstract
1. Introduction
2. Related Work
2.1. Vigilance Decrement
Performance Measurement
2.2. EEG
3. Methods
3.1. Datasets
3.2. Vigilance Decrement Tasks
3.2.1. Hitchcock Air Traffic Controller Task
3.2.2. 3-Stimulus Oddball Task
3.2.3. Line Task
3.3. Preprocessing and Epoching of EEG Signals
3.4. Model Creation
3.5. Temporal Convolutional Networks
3.6. Autoencoders
3.7. Frequency-Domain Model
3.8. Time-Domain Models
3.8.1. TCN-AE
3.8.2. TCN
4. Results
4.1. Frequency-Domain Model
4.2. Time-Domain Model—TCN-AE
4.3. Time-Domain Model—TCN
5. Discussion
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AE | Autoencoder |
AI | Artificial Intelligence |
ATC | Air Traffic Controller |
BIS | Balanced Integration Score |
CI | Confidence Interval |
CNN | Convolutional Neural Network |
CV | Cross-Validation |
ECG | Electrocardiography |
EEG | Electroencephalography |
ELM | Extreme Learning Machine |
EOG | Electrooculography |
ERP | Event-Related Potential |
FCN | Fully Connected Network |
GRU | Gated Recurrent Unit |
HEOG | Horizontal Electrooculography |
HPW | Human Performance Wing |
IES | Inverse Efficiency Score |
L2PO-CV | Leave-Two-Participants-Out Cross-Validation |
LSTM | Long Short-Term Memory |
MLPNN | Multi-Layer Perceptron Neural Network |
MSE | Mean Squared Error |
PC | Proportion of Correct Responses |
PSD | Power Spectral Density |
PVT | Psychomotor Vigilance Test |
RCS | Rate-Correct Score |
ReLU | Rectified Linear Unit |
RT | Response Time |
TCN | Temporal Convolutional Network |
TCN-AE | Temporal Convolutional Network Autoencoder |
VEOG | Vertical Electrooculography |
Appendix A
- Modified EEGLAB to use double precision, as single precision can destroy the natural commutativity of linear operations.
- Imported data into EEGLAB and included reference channels based on the equipment used (e.g., Biosemi’s 64 scalp electrode cap uses channels 65 and 66 as reference channels, which are electrodes placed on the mastoids specifically for the purpose of referencing).
- Downsampled to 250 Hz to improve the ICA decomposition by cutting off unnecessary high-frequency information, and to reduce data size.
- High-pass filtered the data at 1 Hz to reduce baseline drift and to improve both line-noise removal and ICA [72]. The high-pass filter is applied before line-noise removal, and a 1 Hz cutoff is acceptable because we were not performing event-related potential (ERP) analysis, which can be distorted by a 1 Hz high-pass filter and would require an alternate strategy.
- Imported channel info using International 10–20 system to allow for re-referencing.
- Removed line noise using the CleanLine plugin (default 60 Hz notch filter) [73].
- Removed bad channels using the EEGLAB clean_rawdata plugin, whose underlying method, Artifact Subspace Reconstruction, is patented by Christian Kothe [74].
- Interpolated all removed channels to minimize a potential bias in the average referencing step.
- Ran Independent Component Analysis (ICA) using the runica algorithm (extended Infomax variant), with the vertical EOG (VEOG) electrode included as input to the function.
- Used the ICA results from the previous step to remove artifactual ICA components.
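The filtering portion of the steps above (downsample to 250 Hz, 1 Hz high-pass, 60 Hz line-noise removal) can be sketched on a synthetic channel with SciPy. The appendix pipeline was run in EEGLAB/MATLAB; this Python stand-in substitutes standard SciPy filters and omits the referencing, channel-rejection, and ICA stages entirely:

```python
import numpy as np
from scipy import signal

def preprocess_channel(x, fs_in=1000, fs_out=250):
    """SciPy stand-in for the EEGLAB filtering steps above (not the authors' code):
    anti-aliased downsampling, zero-phase 1 Hz high-pass, 60 Hz notch."""
    x = signal.decimate(x, fs_in // fs_out)  # downsample with anti-aliasing filter
    sos = signal.butter(4, 1.0, btype="highpass", fs=fs_out, output="sos")
    x = signal.sosfiltfilt(sos, x)           # remove baseline drift
    b, a = signal.iirnotch(60.0, Q=30.0, fs=fs_out)
    return signal.filtfilt(b, a, x)          # remove 60 Hz line noise

# Synthetic channel: DC drift + 10 Hz alpha-band tone + 60 Hz line noise.
t = np.arange(0, 10, 1 / 1000)
raw = 5.0 + np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = preprocess_channel(raw)  # DC and 60 Hz suppressed, 10 Hz preserved
```

The assumed input rate of 1000 Hz and the filter orders are illustrative choices, not values from the paper.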
References
- Lal, S.K.; Craig, A. A critical review of the psychophysiology of driver fatigue. Biol. Psychol. 2001, 55, 173–194. [Google Scholar] [CrossRef]
- Ackerman, P.L.; Kanfer, R. Test length and cognitive fatigue: An empirical examination of effects on performance and test-taker reactions. J. Exp. Psychol. Appl. 2009, 15, 163. [Google Scholar] [CrossRef] [Green Version]
- Parasuraman, R.; Warm, J.S.; Dember, W.N. Vigilance: Taxonomy and utility. In Ergonomics and Human Factors; Springer: New York, NY, USA, 1987; pp. 11–32. [Google Scholar]
- Parasuraman, R.; Mouloua, M. Automation and Human Performance: Theory and Applications; Routledge: London, UK, 2018. [Google Scholar]
- Warm, J.S.; Parasuraman, R.; Matthews, G. Vigilance requires hard mental work and is stressful. Hum. Factors 2008, 50, 433–441. [Google Scholar] [CrossRef] [PubMed]
- Mackworth, N.H. The breakdown of vigilance during prolonged visual search. Q. J. Exp. Psychol. 1948, 1, 6–21. [Google Scholar] [CrossRef]
- Sasahara, I.; Fujimura, N.; Nozawa, Y.; Furuhata, Y.; Sato, H. The effect of histidine on mental fatigue and cognitive performance in subjects with high fatigue and sleep disruption scores. Physiol. Behav. 2015, 147, 238–244. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Smolders, K.C.; de Kort, Y.A. Bright light and mental fatigue: Effects on alertness, vitality, performance and physiological arousal. J. Environ. Psychol. 2014, 39, 77–91. [Google Scholar] [CrossRef]
- Shigihara, Y.; Tanaka, M.; Ishii, A.; Tajima, S.; Kanai, E.; Funakura, M.; Watanabe, Y. Two different types of mental fatigue produce different styles of task performance. Neurol. Psychiatry Brain Res. 2013, 19, 5–11. [Google Scholar] [CrossRef]
- Hogan, L.C.; Bell, M.; Olson, R. A preliminary investigation of the reinforcement function of signal detections in simulated baggage screening: Further support for the vigilance reinforcement hypothesis. J. Organ. Behav. Manag. 2009, 29, 6–18. [Google Scholar] [CrossRef]
- Fisk, A.D.; Schneider, W. Control and automatic processing during tasks requiring sustained attention: A new approach to vigilance. Hum. Factors 1981, 23, 737–750. [Google Scholar] [CrossRef]
- Ariga, A.; Lleras, A. Brief and rare mental “breaks” keep you focused: Deactivation and reactivation of task goals preempt vigilance decrements. Cognition 2011, 118, 439–443. [Google Scholar] [CrossRef]
- Craig, A.; Tran, Y.; Wijesuriya, N.; Nguyen, H. Regional brain wave activity changes associated with fatigue. Psychophysiology 2012, 49, 574–582. [Google Scholar] [CrossRef] [PubMed]
- Zhao, C.; Zhao, M.; Liu, J.; Zheng, C. Electroencephalogram and electrocardiograph assessment of mental fatigue in a driving simulator. Accid. Anal. Prev. 2012, 45, 83–90. [Google Scholar] [CrossRef] [PubMed]
- Gao, Z.; Wang, X.; Yang, Y.; Mu, C.; Cai, Q.; Dang, W.; Zuo, S. EEG-based spatio–temporal convolutional neural network for driver fatigue evaluation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2755–2763. [Google Scholar] [CrossRef]
- Hu, X.; Lodewijks, G. Detecting fatigue in car drivers and aircraft pilots by using non-invasive measures: The value of differentiation of sleepiness and mental fatigue. J. Saf. Res. 2020, 72, 173–187. [Google Scholar] [CrossRef]
- Yin, Z.; Zhang, J. Task-generic mental fatigue recognition based on neurophysiological signals and dynamical deep extreme learning machine. Neurocomputing 2018, 283, 266–281. [Google Scholar] [CrossRef]
- Walsh, M.M.; Gunzelmann, G.; Anderson, J.R. Relationship of P3b single-trial latencies and response times in one, two, and three-stimulus oddball tasks. Biol. Psychol. 2017, 123, 47–61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Haubert, A.; Walsh, M.; Boyd, R.; Morris, M.; Wiedbusch, M.; Krusmark, M.; Gunzelmann, G. Relationship of event-related potentials to the vigilance decrement. Front. Psychol. 2018, 9, 237. [Google Scholar] [CrossRef]
- Kamrud, A.; Borghetti, B.; Schubert Kabban, C. The Effects of Individual Differences, Non-Stationarity, and the Importance of Data Partitioning Decisions for Training and Testing of EEG Cross-Participant Models. Sensors 2021, 21, 3225. [Google Scholar] [CrossRef]
- Parasuraman, R.; Davies, D. A taxonomic analysis of vigilance performance. In Vigilance; Springer: Boston, MA, USA, 1977; pp. 559–574. [Google Scholar]
- See, J.E.; Howe, S.R.; Warm, J.S.; Dember, W.N. Meta-analysis of the sensitivity decrement in vigilance. Psychol. Bull. 1995, 117, 230. [Google Scholar] [CrossRef]
- Oken, B.S.; Salinsky, M.C.; Elsas, S. Vigilance, alertness, or sustained attention: Physiological basis and measurement. Clin. Neurophysiol. 2006, 117, 1885–1901. [Google Scholar] [CrossRef] [Green Version]
- Charbonnier, S.; Roy, R.N.; Bonnet, S.; Campagne, A. EEG index for control operators’ mental fatigue monitoring using interactions between brain regions. Expert Syst. Appl. 2016, 52, 91–98. [Google Scholar] [CrossRef] [Green Version]
- Gartenberg, D.; Veksler, B.Z.; Gunzelmann, G.; Trafton, J.G. An ACT-R process model of the signal duration phenomenon of vigilance. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2014; Volume 58, pp. 909–913. [Google Scholar]
- Desmond, P.; Matthews, G.; Bush, J. Sustained visual attention during simultaneous and successive vigilance tasks. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2001; Volume 45, pp. 1386–1389. [Google Scholar]
- Baker, C. Signal duration as a factor in vigilance tasks. Science 1963, 141, 1196–1197. [Google Scholar] [CrossRef]
- Teo, G.; Szalma, J.L. The effects of task type and source complexity on vigilance performance, workload, and stress. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2011; Volume 55, pp. 1180–1184. [Google Scholar]
- Parasuraman, R.; Mouloua, M. Interaction of signal discriminability and task type in vigilance decrement. Percept. Psychophys. 1987, 41, 17–22. [Google Scholar] [CrossRef] [Green Version]
- Lanzetta, T.M.; Dember, W.N.; Warm, J.S.; Berch, D.B. Effects of task type and stimulus heterogeneity on the event rate function in sustained attention. Hum. Factors 1987, 29, 625–633. [Google Scholar] [CrossRef] [PubMed]
- Teichner, W.H. The detection of a simple visual signal as a function of time of watch. Hum. Factors 1974, 16, 339–352. [Google Scholar] [CrossRef] [PubMed]
- Pachella, R.G. The Interpretation of Reaction Time in Information Processing Research; Technical Report; Michigan University Ann Arbor Human Performance Center: Ann Arbor, MI, USA, 1973. [Google Scholar]
- Wickelgren, W.A. Speed-accuracy tradeoff and information processing dynamics. Acta Psychol. 1977, 41, 67–85. [Google Scholar] [CrossRef]
- Townsend, J.T.; Ashby, F.G. Methods of modeling capacity in simple processing systems. In Cognitive Theory; Psychology Press: London, UK, 2014; pp. 211–252. [Google Scholar]
- Woltz, D.J.; Was, C.A. Availability of related long-term memory during and after attention focus in working memory. Mem. Cogn. 2006, 34, 668–684. [Google Scholar] [CrossRef] [Green Version]
- Liesefeld, H.R.; Janczyk, M. Combining speed and accuracy to control for speed-accuracy trade-offs (?). Behav. Res. Methods 2019, 51, 40–60. [Google Scholar] [CrossRef] [Green Version]
- Mueller, S.T.; Alam, L.; Funke, G.J.; Linja, A.; Ibne Mamun, T.; Smith, S.L. Examining Methods for Combining Speed and Accuracy in a Go/No-Go Vigilance Task. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications Sage CA: Los Angeles, CA, USA, 2020; Volume 64, pp. 1202–1206. [Google Scholar]
- Gunzelmann, G.; Gross, J.B.; Gluck, K.A.; Dinges, D.F. Sleep deprivation and sustained attention performance: Integrating mathematical and cognitive modeling. Cogn. Sci. 2009, 33, 880–910. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Caldwell, J.A.; Hall, K.K.; Erickson, B.S. EEG data collected from helicopter pilots in flight are sufficiently sensitive to detect increased fatigue from sleep deprivation. Int. J. Aviat. Psychol. 2002, 12, 19–32. [Google Scholar] [CrossRef]
- Di Stasi, L.L.; Diaz-Piedra, C.; Suárez, J.; McCamy, M.B.; Martinez-Conde, S.; Roca-Dorda, J.; Catena, A. Task complexity modulates pilot electroencephalographic activity during real flights. Psychophysiology 2015, 52, 951–956. [Google Scholar] [CrossRef] [PubMed]
- Cao, L.; Li, J.; Sun, Y.; Zhu, H.; Yan, C. EEG-based vigilance analysis by using fisher score and PCA algorithm. In Proceedings of the 2010 IEEE International Conference on Progress in Informatics and Computing, Shanghai, China, 10–12 December 2010; Volume 1, pp. 175–179. [Google Scholar]
- Lin, C.T.; Wu, R.C.; Liang, S.F.; Chao, W.H.; Chen, Y.J.; Jung, T.P. EEG-based drowsiness estimation for safety driving using independent component analysis. IEEE Trans. Circuits Syst. I Regul. Pap. 2005, 52, 2726–2738. [Google Scholar]
- Ji, H.; Li, J.; Cao, L.; Wang, D. A EEG-Based brain computer interface system towards applicable vigilance monitoring. In Foundations of Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2011; pp. 743–749. [Google Scholar]
- Borghini, G.; Vecchiato, G.; Toppi, J.; Astolfi, L.; Maglione, A.; Isabella, R.; Caltagirone, C.; Kong, W.; Wei, D.; Zhou, Z.; et al. Assessment of mental fatigue during car driving by using high resolution EEG activity and neurophysiologic indices. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 6442–6445. [Google Scholar]
- Zhang, C.; Zheng, C.X.; Yu, X.L. Automatic recognition of cognitive fatigue from physiological indices by using wavelet packet transform and kernel learning algorithms. Expert Syst. Appl. 2009, 36, 4664–4671. [Google Scholar] [CrossRef]
- Rosipal, R.; Peters, B.; Kecklund, G.; Åkerstedt, T.; Gruber, G.; Woertz, M.; Anderer, P.; Dorffner, G. EEG-based drivers’ drowsiness monitoring using a hierarchical Gaussian mixture model. In Proceedings of the International Conference on Foundations of Augmented Cognition, Beijing, China, 22–27 July 2007; pp. 294–303. [Google Scholar]
- Al-Shargie, F.; Tariq, U.; Hassanin, O.; Mir, H.; Babiloni, F.; Al-Nashash, H. Brain connectivity analysis under semantic vigilance and enhanced mental states. Brain Sci. 2019, 9, 363. [Google Scholar] [CrossRef] [Green Version]
- Al-Shargie, F.M.; Hassanin, O.; Tariq, U.; Al-Nashash, H. EEG-based semantic vigilance level classification using directed connectivity patterns and graph theory analysis. IEEE Access 2020, 8, 115941–115956. [Google Scholar] [CrossRef]
- Hitchcock, E.M.; Warm, J.S.; Matthews, G.; Dember, W.N.; Shear, P.K.; Tripp, L.D.; Mayleben, D.W.; Parasuraman, R. Automation cueing modulates cerebral blood flow and vigilance in a simulated air traffic control task. Theor. Issues Ergon. Sci. 2003, 4, 89–112. [Google Scholar] [CrossRef]
- Dinges, D.F.; Powell, J.W. Microcomputer analyses of performance on a portable, simple visual RT task during sustained operations. Behav. Res. Methods Instrum. Comput. 1985, 17, 652–655. [Google Scholar] [CrossRef]
- Comerchero, M.D.; Polich, J. P3a and P3b from typical auditory and visual stimuli. Clin. Neurophysiol. 1999, 110, 24–30. [Google Scholar] [CrossRef]
- Brainard, D.H. The psychophysics toolbox. Spat. Vis. 1997, 10, 433–436. [Google Scholar] [CrossRef] [Green Version]
- Hitchcock, E.M.; Dember, W.N.; Warm, J.S.; Moroney, B.W.; See, J.E. Effects of cueing and knowledge of results on workload and boredom in sustained attention. Hum. Factors 1999, 41, 365–372. [Google Scholar] [CrossRef]
- Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Miyakoshi, M. Makoto’s Preprocessing Pipeline. 2020. Available online: https://sccn.ucsd.edu/wiki/Makoto's_preprocessing_pipeline (accessed on 1 May 2020).
- Bigdely-Shamlo, N.; Mullen, T.; Kothe, C.; Su, K.M.; Robbins, K.A. The PREP pipeline: Standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 2015, 9, 16. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; Zhao, Z.; Song, D.; Zhang, Y.; Pan, J.; Wu, L.; Huo, J.; Niu, C.; Wang, D. Latent Factor Decoding of Multi-Channel EEG for Emotion Recognition Through Autoencoder-Like Neural Networks. Front. Neurosci. 2020, 14, 87. [Google Scholar] [CrossRef] [PubMed]
- Prabhudesai, K.S.; Collins, L.M.; Mainsah, B.O. Automated feature learning using deep convolutional auto-encoder neural network for clustering electroencephalograms into sleep stages. In Proceedings of the 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, 20–23 March 2019; pp. 937–940. [Google Scholar]
- Ayata, D.; Yaslan, Y.; Kamasak, M. Multi channel brain EEG signals based emotional arousal classification with unsupervised feature learning using autoencoders. In Proceedings of the 2017 25th Signal Processing and Communications Applications Conference (SIU), Antalya, Turkey, 15–18 May 2017; pp. 1–4. [Google Scholar]
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
- Oord, A.v.d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. Wavenet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499. [Google Scholar]
- Remy, P. Temporal Convolutional Networks for Keras. 2020. Available online: https://github.com/philipperemy/keras-tcn (accessed on 1 April 2021).
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media: Sebastopol, CA, USA, 2019. [Google Scholar]
- Dertat, A. Applied Deep Learning-Part 3: Autoencoders. 2017. Available online: https://towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798 (accessed on 1 April 2021).
- Cohen, M.X. Analyzing Neural Time Series Data: Theory and Practice; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Thill, M.; Konen, W.; Bäck, T. Time Series Encodings with Temporal Convolutional Networks. In Proceedings of the International Conference on Bioinspired Methods and Their Applications, Brussels, Belgium, 19–20 November 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 161–173. [Google Scholar]
- Agresti, A.; Coull, B.A. Approximate is better than “exact” for interval estimation of binomial proportions. Am. Stat. 1998, 52, 119–126. [Google Scholar]
- Alphabet Inc. Kaggle. 2017. Available online: https://www.kaggle.com/datasets (accessed on 1 February 2021).
- Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ml (accessed on 1 February 2021).
- Winkler, I.; Debener, S.; Müller, K.R.; Tangermann, M. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 4101–4105. [Google Scholar]
- Mullen, T. CleanLine EEGLAB plugin. 2012. Available online: https://www.nitrc.org/projects/cleanline/ (accessed on 1 May 2020).
- Kothe, C.A.E.; Jung, T.P. Artifact Removal Techniques with Signal Reconstruction. U.S. Patent Application No. 14/895,440, 28 April 2016. [Google Scholar]
Participant # | MLPNN Val Acc | TCN-AE Val Acc | TCN Val Acc |
---|---|---|---|
0 | 0.69 (0.62, 0.76) | 0.51 (0.44, 0.58) | 0.49 (0.42, 0.56) |
1 | 0.78 (0.71, 0.83) | 0.49 (0.42, 0.56) | 0.57 (0.50, 0.64) |
2 | 0.48 (0.41, 0.55) | 0.49 (0.42, 0.57) | 0.51 (0.44, 0.58) |
3 | 0.48 (0.41, 0.56) | 0.52 (0.45, 0.59) | 0.50 (0.43, 0.57) |
4 | 0.83 (0.77, 0.88) | 0.49 (0.42, 0.57) | 0.56 (0.48, 0.63) |
5 | 0.68 (0.61, 0.74) | 0.55 (0.48, 0.62) | 0.53 (0.46, 0.60) |
6 | 0.71 (0.63, 0.77) | 0.47 (0.40, 0.54) | 0.54 (0.47, 0.62) |
7 | 0.52 (0.45, 0.59) | 0.56 (0.48, 0.63) | 0.65 (0.58, 0.72) |
8 | 0.53 (0.46, 0.60) | 0.56 (0.49, 0.63) | 0.49 (0.42, 0.57) |
9 | 0.67 (0.60, 0.74) | 0.54 (0.47, 0.62) | 0.48 (0.41, 0.55) |
10 | 0.63 (0.56, 0.69) | 0.49 (0.42, 0.57) | 0.56 (0.49, 0.63) |
11 | 0.46 (0.38, 0.53) | 0.46 (0.38, 0.53) | 0.48 (0.41, 0.56) |
12 | 0.84 (0.78, 0.89) | 0.48 (0.41, 0.55) | 0.64 (0.57, 0.71) |
13 | 0.63 (0.56, 0.70) | 0.53 (0.46, 0.60) | 0.52 (0.44, 0.59) |
7-fold CV | 0.64 (0.59, 0.69) | 0.52 (0.47, 0.57) | 0.56 (0.51, 0.61) |
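The parenthesized ranges in the table are approximate binomial-proportion confidence intervals of the kind recommended by Agresti and Coull (cited in the bibliography). Since the per-participant epoch counts are not restated in the table, the sample size below is hypothetical; the function itself is a standard implementation of the adjusted-Wald interval:

```python
import math

def agresti_coull_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Agresti-Coull 95% CI: add z^2/2 pseudo-successes and z^2 pseudo-trials,
    then apply the Wald formula to the adjusted proportion."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return (p_adj - half, p_adj + half)

# Hypothetical example: 128 correctly classified bins out of 200 (64% accuracy).
lo, hi = agresti_coull_ci(128, 200)
```

The adjustment pulls the interval slightly toward 0.5, which gives better coverage than the plain Wald interval for moderate n.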
Participant # | BIS Slope | BIS Difference (1st Bin–4th Bin) | MLPNN Val Acc |
---|---|---|---|
0 | −4.46 | 12.02 | 0.69 (0.62, 0.76) |
1 | −8.48 | 21.67 | 0.78 (0.71, 0.83) |
2 | −8.42 | 28.53 | 0.48 (0.41, 0.55) |
3 | −1.84 | 4.85 | 0.48 (0.41, 0.56) |
4 | −5.34 | 14.07 | 0.83 (0.77, 0.88) |
5 | −3.92 | 12.68 | 0.68 (0.61, 0.74) |
6 | −7.22 | 21.75 | 0.71 (0.63, 0.77) |
7 | −4.64 | 11.56 | 0.52 (0.45, 0.59) |
8 | −5.30 | 16.19 | 0.53 (0.46, 0.60) |
9 | −5.11 | 19.81 | 0.67 (0.60, 0.74) |
10 | −8.96 | 30.53 | 0.63 (0.56, 0.69) |
11 | −5.40 | 16.16 | 0.46 (0.38, 0.53) |
12 | −3.21 | 10.41 | 0.84 (0.78, 0.89) |
13 | −8.24 | 24.08 | 0.63 (0.56, 0.70) |
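The BIS slopes in this table are the slopes of per-participant best-fit lines over the task bins, and the BIS difference is the drop from the first to the fourth bin. With made-up bin-wise BIS values (the numbers below are hypothetical, not from the dataset), both quantities follow from ordinary least squares:

```python
import numpy as np

# Hypothetical BIS values for one participant across four task bins.
bis = np.array([102.0, 97.5, 94.0, 90.0])
bins = np.arange(1, len(bis) + 1)

slope, intercept = np.polyfit(bins, bis, deg=1)  # least-squares best-fit line
first_to_last_drop = bis[0] - bis[-1]            # cf. the table's BIS difference
```

A negative slope on every participant's fit is what the paper uses to argue that the first bin is each participant's most attentive bin.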
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kamrud, A.; Borghetti, B.; Schubert Kabban, C.; Miller, M. Generalized Deep Learning EEG Models for Cross-Participant and Cross-Task Detection of the Vigilance Decrement in Sustained Attention Tasks. Sensors 2021, 21, 5617. https://doi.org/10.3390/s21165617