The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
Figure 1. Summary of the guilty behaviour detection from walking.
Figure 2. Camera positions during participant movement (blue triangles indicate an upward camera view angle; yellow triangles indicate a downward camera view angle).
Figure 3. Sample of body joints' localisation while walking the stairs (red lines relate to the right side of the body and blue lines to the left side).
Figure 4. Interpretation of the selected features from each modality. (a) Top body movement features. (b) Top step acoustics features. (c) Top accelerometer sensor features.
Abstract
1. Introduction
- Studies on deception recognition from verbal and nonverbal cues are, in general, limited. Given the importance of such analysis for customs control and surveillance, this work is an attempt to enrich the literature in this field.
- This study extracts and analyses a novel and comprehensive set of body movement and gait features, inspired by psychology and by the literature on behavioural expression in dance.
- We are the first to analyse acoustic features from gait step sounds for deception detection.
- A comprehensive set of features from an accelerometer sensor is analysed in the context of deception.
- We also investigate a multimodal fusion approach over the gait signals (audio, video, and accelerometer).
- Finally, we provide a detailed interpretation of the models and of the behavioural gait features that are strongly associated with deceptive behaviour, for future analysis and confirmation.
2. Related Work
2.1. Automatic Deception Detection from Body Movement
2.2. Automatic Affect Detection from Body Movement
3. Method
3.1. Dataset Collection Procedure
3.2. Feature Extraction
3.2.1. Body Movement Features
3.2.2. Step Acoustics Features
3.2.3. Accelerometer Sensor Features
3.3. Dimensionality Reduction
3.4. Classification
- Non-linear SVM hypothesis
- Cost function
- Gaussian RBF kernel
- Parameter grid search
- Multi-layer perceptron (MLP) network
- Search range for perceptrons in the first hidden layer
- Search range for perceptrons in the second hidden layer
- Activation function for the hidden layers (standard forms and an illustrative search sketch follow below)
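The exact equations are not reproduced in this extract. As a reference, the standard soft-margin SVM formulation that the items above correspond to is sketched below; the notation is assumed and may differ from the paper's.

```latex
% Standard forms (assumed notation), not necessarily the paper's exact presentation:
% non-linear SVM hypothesis, soft-margin cost function, and Gaussian RBF kernel.
h(\mathbf{x}) = \operatorname{sign}\!\Big(\sum_{i=1}^{m} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Big)

\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}}\; \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2} + C \sum_{i=1}^{m} \xi_i
\quad \text{s.t.}\;\; y_i\big(\mathbf{w}^{\top}\phi(\mathbf{x}_i) + b\big) \ge 1 - \xi_i,\;\; \xi_i \ge 0

K(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\big(-\gamma\, \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^{2}\big), \qquad \gamma = \tfrac{1}{2\sigma^{2}}
```

A minimal sketch of the grid search over both classifiers follows, assuming scikit-learn; the grid values, cross-validation scheme, and activation function are illustrative assumptions, since the paper's exact search ranges are not shown in this extract.

```python
# Hypothetical hyper-parameter search for the RBF-kernel SVM and the two-hidden-layer MLP.
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm_search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10, 100],           # assumed cost values
                "svc__gamma": [1e-3, 1e-2, 1e-1, 1]},   # assumed RBF widths
    cv=LeaveOneOut(), scoring="accuracy")               # CV scheme is an assumption

mlp_search = GridSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(activation="relu", max_iter=2000)),
    param_grid={"mlpclassifier__hidden_layer_sizes":
                [(h1, h2) for h1 in (8, 16, 32) for h2 in (4, 8, 16)]},  # assumed ranges
    cv=LeaveOneOut(), scoring="accuracy")

# svm_search.fit(X, y); mlp_search.fit(X, y)  # X: gait features, y: guilty/innocent labels
```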
3.5. Multimodal Fusion
3.6. Statistical Analysis
4. Results
4.1. Classification of Guilty Walks
4.2. Characteristics of Guilty Walks
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Modality | Device | Valid Files | Invalid Files |
|---|---|---|---|
| Physiological Response | E4 | 46 | 3 |
| Acoustics | Lapel Mic | 31 | 18 |
| Body Actions | WebCam 1 | 38 | 11 |
| Body Actions | WebCam 2 | 44 | 5 |
| Body Actions | WebCam 3 | 43 | 6 |
| Body Actions | WebCam 4 | 37 | 12 |
Feature Group | Descriptions and Calculation |
---|---|
Body Movement | |
Trunk lean angle | The angle between the sternum and the collarbone (clavicle) line and the origin. The angle should indicate trunk leaning left and right. |
Elbows horizontal movement | The distance between the elbow and the body side, which is calculated from the cross point of the elbow and the shoulder (left and right elbows). |
Elbows vertical movement | The distance between the elbow and the shoulder line (left and right elbows). |
Hands to hip distance | The distance between the hand and the hip points (left and right hands). |
Face touching | The distance between the hand and the head points (left and right hands). |
Neck touching | The distance between the hand and the neck points (left and right hands). |
Holding arm | The crossing of one forearm's line with the other arm (left and right arms). |
Crossed arms | Indicating if both arms are holding each other. |
Shoulders angle | The angle between the left and right shoulder line and the origin. |
Arms symmetric | Using the sternum points as the centre, the symmetric measure of the left and right elbows is calculated. |
Elbow articulation | The angle between the elbow, the shoulder, and the body side (left and right arms). |
Knee bend | The angle between the hip, knee, and ankle points (left and right knees). |
Leg movement | The distance between the sternum and the knee points (left and right legs). |
Foot to hip distance | The distance between the ankle and the hip points (left and right feet). |
Hands to shoulder distance | The distance between the hand and the shoulder points (left and right hands). |
Hands distance | The distance between the left and right hands. |
Gait size | The distance between the left and right ankles. |
Body volume | The area of a polygon from the outer body points (i.e., sternum, collarbone, and neck points are not included). |
Upper body volume | The area of a polygon from the outer upper body points (i.e., arms and head points). |
Lower body volume | The area of a polygon from the outer lower body points (i.e., legs and sternum points). |
Left body volume | The area of a polygon from the outer left body points (i.e., left leg, left hand, head, and sternum points). |
Right body volume | The area of a polygon from the outer right body points. |
Step Acoustics | |
Discrete Wavelet Transform | For audio signal transformation to analyse the temporal and spectral properties from non-speech signals. |
Teager energy operator | The non-linear transform of the time-domain signal that measures the harmonics produced from the sound wave. |
Energy (power) | The signal power using root-mean-square and log functions of the wave signal. |
MFCC | Signal Cepstral analysis using mel-frequency cepstral coefficients. |
Frequency variability | Using jitter, which measures cycle-to-cycle variability in the signal frequency. |
Amplitude variability | Using shimmer, which measures variability in the signal amplitude relative to the fundamental frequency. |
Magnitude of the signal | Measuring the sound level using intensity and loudness. |
Accelerometer Sensor | |
Accelerometer movement | The X, Y, and Z of the sensor location in the space. |
Magnitude of the accelerometer | The square root of the sum of the squared X, Y, and Z components (see the sketch after this table). |
Velocity of movement | The change in accelerometer movement and its magnitude from one frame to another. |
Acceleration of movement | The change in velocity from one frame to another. |
Magnitude frequency | The fast Fourier transform of the magnitude signal. |
Magnitude logarithmic frequency | The logarithmic function of the fast Fourier transform of the magnitude signal. |
Magnitude amplitude | Shifting the magnitude frequency component to the centre of spectrum. |
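To make the accelerometer feature definitions concrete, a minimal sketch is given below; the variable names and the mean-centring before the FFT are illustrative assumptions, not taken from the paper.

```python
# Illustrative computation of the accelerometer features listed in the table above.
import numpy as np

def accelerometer_features(acc_xyz: np.ndarray) -> dict:
    """acc_xyz: (n_frames, 3) array of raw X, Y, Z accelerometer readings."""
    magnitude = np.sqrt((acc_xyz ** 2).sum(axis=1))        # sqrt(X^2 + Y^2 + Z^2)
    velocity = np.diff(magnitude)                          # frame-to-frame change in magnitude
    acceleration = np.diff(velocity)                       # frame-to-frame change in velocity
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))   # magnitude frequency
    log_spectrum = np.log1p(spectrum)                      # logarithmic magnitude frequency
    centred = np.abs(np.fft.fftshift(np.fft.fft(magnitude - magnitude.mean())))  # centred spectrum
    return {"magnitude": magnitude, "velocity": velocity, "acceleration": acceleration,
            "spectrum": spectrum, "log_spectrum": log_spectrum, "centred_spectrum": centred}
```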
| Modality / Fusion Setting | Device | # Samples | SVM Full Features Acc (%) | SVM Full Features MCC | SVM SF (10%) Acc (%) | SVM SF (10%) MCC | MLP Full Features Acc (%) | MLP Full Features MCC | MLP SF (10%) Acc (%) | MLP SF (10%) MCC |
|---|---|---|---|---|---|---|---|---|---|---|
| **Single Modality** | | | | | | | | | | |
| Vision | Cam 1 | 24 | 62.5 | 0.38 | 58.3 | 0.30 | 54.2 | 0.21 | 58.3 | 0.30 |
| Vision | Cam 2 | 28 | 53.6 | 0.19 | 53.6 | 0.19 | 71.4 | 0.43 | 57.1 | 0.16 |
| Vision | Cam 3 | 28 | 53.6 | 0.12 | 53.6 | 0.19 | 64.3 | 0.29 | 57.1 | 0.28 |
| Vision | Cam 4 | 23 | 52.2 | 0.02 | 60.9 | 0.32 | 56.5 | 0.21 | 52.2 | 0.20 |
| Acoustics | Mic | 20 | 55.0 | 0.23 | 55.0 | 0.23 | 55.0 | 0.23 | 60.0 | 0.33 |
| Acceleration | E4 | 32 | 53.1 | 0.18 | 53.1 | 0.18 | 65.6 | 0.35 | 59.4 | 0.26 |
| **Feature Fusion** | | | | | | | | | | |
| Subjects with full modalities | | 8 | 75.0 | 0.49 | 75.0 | 0.49 | 75.0 | 0.47 | 87.5 | 0.75 |
| Subjects with at least 1 cam and all other | | 17 | 58.8 | 0.18 | 58.8 | 0.27 | 70.6 | 0.45 | 58.8 | 0.34 |
| Subjects with at least 1 modality | | 33 | 51.5 | 0.17 | 51.5 | 0.17 | 54.5 | 0.18 | 54.5 | 0.17 |
| **Decision Fusion** | | | | | | | | | | |
| All modalities (raw) | | 33 | 51.5 | 0.17 | 51.5 | 0.17 | 72.7 | 0.46 | 57.6 | 0.26 |
| All cams only | | 32 | 56.7 | 0.27 | 56.7 | 0.27 | 53.3 | 0.11 | 50.0 | 0.00 |
| All modalities (hierarchical) | | 33 | 54.5 | 0.25 | 54.5 | 0.25 | 63.6 | 0.27 | 51.5 | 0.00 |
| **Hybrid Fusion: Feature Fusion + Single Modalities** | | | | | | | | | | |
| Subjects with full modalities | | 33 | 51.5 | 0.17 | 51.5 | 0.17 | 78.8 | 0.58 | 57.6 | 0.26 |
| Subjects with at least 1 cam and all other | | 33 | 54.5 | 0.25 | 54.5 | 0.25 | 69.7 | 0.44 | 51.5 | 0.00 |
| Subjects with at least 1 modality | | 33 | 51.5 | 0.17 | 51.5 | 0.17 | 60.6 | 0.33 | 63.6 | 0.38 |
| **Hybrid Fusion: Classifiers Decision Fusion (SVM + MLP)** | | | | | | | | | | |
| All modalities (raw) | | 33 | 60.6 | 0.36 | 72.7 | 0.55 | - | - | - | - |
| All cams only | | 32 | 73.3 | 0.47 | 66.7 | 0.39 | - | - | - | - |
| All modalities (hierarchical) | | 33 | 57.6 | 0.31 | 72.7 | 0.55 | - | - | - | - |

Acc = accuracy (%); MCC = Matthews correlation coefficient.
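The decision-fusion rows above combine per-modality classifier outputs. A minimal sketch of such a combination by majority vote is shown below; the voting rule and tie-breaking are illustrative assumptions rather than the paper's exact fusion procedure.

```python
# Majority-vote decision fusion over per-modality guilty/innocent predictions (0/1).
import numpy as np

def majority_vote(decisions: np.ndarray) -> np.ndarray:
    """decisions: (n_subjects, n_classifiers) array of binary predictions."""
    return (decisions.mean(axis=1) >= 0.5).astype(int)  # ties resolved towards the guilty class

# Example: fusing SVM and MLP decisions from three modalities for four subjects.
decisions = np.array([[1, 0, 1, 1, 0, 1],
                      [0, 0, 1, 0, 0, 0],
                      [1, 1, 1, 0, 1, 1],
                      [0, 1, 0, 0, 1, 0]])
print(majority_vote(decisions))  # -> [1 0 1 0]
```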