sEMG-Based Hand-Gesture Classification Using a Generative Flow Model
Figure 1. Human hierarchical system for controlling hand gestures.
Figure 2. Pipeline for calculating the sEMG linear envelope. Modified from [27] with permission.
Figure 3. Overall structure of the proposed approach.
Figure 4. Acquisition setups for the two Myo armbands.
Figure 5. Workflow of the proposed approach.
Figure 6. Hand-gesture classification accuracy for different test sets.
Figure 7. Distributions of hand rest, hand open, and hand close in the feature space. Total variance described: 24.5%.
Figure 8. The feature centers of different hand gestures and their reverse transformation to the sEMG linear envelope.
Figure 9. Correlation matrix for the proposed approach.
Figure 10. The sEMG linear envelope corresponding to each dimension of the factorized feature.
Figure A1. Hand gestures of the NinaPro database. Modified from [34], licensed under a Creative Commons Attribution 4.0 International License.
Figure A2. Distribution of all 53 hand gestures (labeled by colors) in the latent-variable space.
Figure A3. A GUI for interacting with the proposed approach. Here f_i (i = 0–255) denotes the (i+1)-th dimension of the factorized feature. The transform button transforms the current feature to the sEMG linear envelope and shows the result on the canvas at the bottom. The save button saves the canvas; the reset button sets all dimensions of the feature to 0.
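The linear-envelope pipeline that Figure 2 depicts (after [27]) is, in outline, full-wave rectification of the raw sEMG followed by low-pass filtering. A minimal sketch of that outline; the Butterworth filter, 4th order, and 5 Hz cutoff used here are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def linear_envelope(semg, fs=200.0, cutoff=5.0, order=4):
    """Full-wave rectify the raw sEMG, then low-pass filter to obtain the linear envelope."""
    # Remove the DC offset before rectifying, then take the absolute value
    rectified = np.abs(semg - np.mean(semg, axis=-1, keepdims=True))
    # Zero-phase low-pass filtering (filtfilt) avoids shifting the envelope in time
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, rectified, axis=-1)

# Example: a noisy activation burst sampled at the Myo armband's 200 Hz rate
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.random.randn(t.size) * (0.2 + (t > 0.5) * (t < 1.5))
env = linear_envelope(raw, fs=fs)
```

The zero-phase `filtfilt` choice matters for classification: a causal filter would delay the envelope relative to the gesture onset.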
Abstract
1. Introduction
2. Physiology Background of Surface Electromyography
3. Methods
3.1. Surface Electromyography Signal Processing
3.2. Generative Flow Model
3.3. Classifier
4. Experiment
5. Results
6. Discussion
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
sEMG | Surface electromyography
GFM | Generative flow model
CNN | Convolutional neural network
RNN | Recurrent neural network
AE | Auto-encoder
DBN | Deep belief network
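The generative flow model (GFM) abbreviated above is built from invertible transformations (see the NICE, RealNVP, and Glow references below). A minimal NumPy sketch of one such building block, an additive coupling layer, illustrates why the feature-to-signal reverse transformation is exact; the toy `net` function here is a placeholder assumption, not the paper's network:

```python
import numpy as np

def coupling_forward(x, net):
    # Split the input in half; shift one half conditioned on the other (NICE-style).
    x1, x2 = np.split(x, 2, axis=-1)
    y2 = x2 + net(x1)
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, net):
    # Exact inverse: the untouched half y1 lets us recompute and subtract the same shift.
    y1, y2 = np.split(y, 2, axis=-1)
    x2 = y2 - net(y1)
    return np.concatenate([y1, x2], axis=-1)

# Toy "network": any function of the untouched half works; invertibility never depends on it.
net = lambda h: 0.5 * np.tanh(h)
x = np.random.randn(4, 8)
y = coupling_forward(x, net)
assert np.allclose(coupling_inverse(y, net), x)  # round trip recovers the input
```

Because each layer inverts exactly, a stack of them maps a factorized feature back to an sEMG linear envelope without any reconstruction loss, which is what the GUI's transform button exploits.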
Appendix A
Appendix A.1. Modules Used in the Generative Flow Model
Appendix A.2. Code for the Proposed Approach
Appendix A.3. Figures of the Hand Gestures and the GUI
References
- Fariman, H.J.; Ahmad, S.A.; Marhaban, M.H.; Ghasab, M.A.J.; Chappell, P.H. Simple and Computationally Efficient Movement Classification Approach for EMG-controlled Prosthetic Hand: ANFIS vs. Artificial Neural Network. Intell. Autom. Soft Comput. 2015, 21, 559–573. [Google Scholar] [CrossRef]
- Shafivulla, M. sEMG Based Human Computer Interaction for Robotic Wheel Chair Using ANN. Procedia Comput. Sci. 2016, 85, 949–953. [Google Scholar] [CrossRef]
- Cene, V.H.; Balbinot, A. Upper-limb movement classification through logistic regression sEMG signal processing. In Proceedings of the 2015 Latin America Congress on Computational Intelligence (LA-CCI), Curitiba, Brazil, 13–16 October 2015. [Google Scholar]
- Toledo-Pérez, D.C.; Martínez-Prado, M.A.; Gómez-Loenzo, R.A.; Paredes-García, W.J.; Rodríguez-Reséndiz, J. A Study of Movement Classification of the Lower Limb Based on up to 4-EMG Channels. Electronics 2019, 8, 259. [Google Scholar] [CrossRef]
- Geethanjali, P. Myoelectric control of prosthetic hands: State-of-the-art review. Med. Devices (Auckl. NZ) 2016, 9, 247. [Google Scholar] [CrossRef] [PubMed]
- Gijsberts, A.; Bohra, R.; Sierra González, D.; Werner, A.; Nowak, M.; Caputo, B.; Roa, M.; Castellini, C. Stable myoelectric control of a hand prosthesis using non-linear incremental learning. Front. Neurorobot. 2014, 8, 8. [Google Scholar] [CrossRef] [PubMed]
- Bengio, Y.; Courville, A.; Vincent, P. Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives. arXiv 2012, arXiv:1206.5538v1. [Google Scholar]
- Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771. [Google Scholar] [CrossRef] [PubMed]
- Cote-Allard, U.; Fall, C.L.; Campeau-Lecours, A.; Gosselin, C.; Laviolette, F.; Gosselin, B. Transfer learning for sEMG hand gestures recognition using convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1663–1668. [Google Scholar]
- Allard, U.C.; Nougarou, F.; Fall, C.L.; Giguere, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems, Daejeon, South Korea, 9–14 October 2016. [Google Scholar]
- Alam, R.U.; Rhivu, S.R.; Haque, M. Improved Gesture Recognition Using Deep Neural Networks on sEMG. In Proceedings of the 2018 International Conference on Engineering, Applied Sciences, and Technology (ICEAST), Phuket, Thailand, 4–7 July 2018; pp. 1–4. [Google Scholar]
- Xia, P.; Hu, J.; Peng, Y. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks. Artif. Organs 2018, 42, E67–E77. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Wang, X.; Matwin, S. Interpretable deep convolutional neural networks via meta-learning. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–9. [Google Scholar]
- Parde, C.J.; Castillo, C.; Hill, M.Q.; Colon, Y.I.; Sankaranarayanan, S.; Chen, J.C.; O’Toole, A.J. Deep convolutional neural network features and the original image. arXiv 2016, arXiv:1611.01751. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Pascual, S.; Bonafonte, A.; Serrà, J. SEGAN: Speech enhancement generative adversarial network. arXiv 2017, arXiv:1703.09452. [Google Scholar]
- Kumar, M.; Babaeizadeh, M.; Erhan, D.; Finn, C.; Levine, S.; Dinh, L.; Kingma, D. VideoFlow: A Flow-Based Generative Model for Video. arXiv 2019, arXiv:1903.01434. [Google Scholar]
- Oord, A.v.d.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. Wavenet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499. [Google Scholar]
- Pizzolato, S.; Tagliapietra, L.; Cognolato, M.; Reggiani, M.; Müller, H.; Atzori, M. Comparison of six electromyography acquisition setups on hand movement classification tasks. PLoS ONE 2017, 12, e0186132. [Google Scholar] [CrossRef]
- Safavynia, S.A.; Torres-Oviedo, G.; Ting, L.H. Muscle Synergies: Implications for Clinical Evaluation and Rehabilitation of Movement. Top. Spinal Cord Inj. Rehabil. 2011, 17, 16–24. [Google Scholar] [CrossRef]
- Kingma, D.P.; Dhariwal, P. Glow: Generative flow with invertible 1 × 1 convolutions. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 10236–10245. [Google Scholar]
- Shim, H.m.; Lee, S. Multi-channel electromyography pattern classification using deep belief networks for enhanced user experience. J. Cent. South Univ. 2015, 22, 1801–1808. [Google Scholar] [CrossRef]
- Wand, M.; Schultz, T. Pattern learning with deep neural networks in EMG-based speech recognition. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4200–4203. [Google Scholar]
- Spüler, M.; Irastorza-Landa, N.; Sarasola-Sanz, A.; Ramos-Murguialday, A. Extracting muscle synergy patterns from EMG data using autoencoders. In International Conference on Artificial Neural Networks; Springer: Cham, Switzerland, 2016; pp. 47–54. [Google Scholar]
- Hinton, G.E. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 599–619. [Google Scholar]
- Doersch, C. Tutorial on variational autoencoders. arXiv 2016, arXiv:1606.05908. [Google Scholar]
- Barzilay, O.; Wolf, A. A fast implementation for EMG signal linear envelope computation. J. Electromyogr. Kinesiol. 2011, 21, 678–682. [Google Scholar] [CrossRef]
- Prenger, R.; Valle, R.; Catanzaro, B. WaveGlow: A Flow-based Generative Network for Speech Synthesis. arXiv 2018, arXiv:1811.00002. [Google Scholar]
- Dinh, L.; Krueger, D.; Bengio, Y. NICE: Non-linear independent components estimation. arXiv 2014, arXiv:1410.8516. [Google Scholar]
- Dinh, L.; Sohl-Dickstein, J.; Bengio, S. Density estimation using Real NVP. arXiv 2016, arXiv:1605.08803. [Google Scholar]
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, Savannah, GA, USA, 2–4 November 2016. [Google Scholar]
- Wan, Y.; Han, Z.; Zhong, J.; Chen, G. Pattern recognition and bionic manipulator driving by surface electromyography signals using convolutional neural network. Int. J. Adv. Robot. Syst. 2018, 15. [Google Scholar] [CrossRef]
- Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 140053. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, W.; Liu, H.; Tang, R.; Lang, Y.; He, J.; Huang, Q. sEMG-Based Hand-Gesture Classification Using a Generative Flow Model. Sensors 2019, 19, 1952. https://doi.org/10.3390/s19081952