Search Results (33)

Search Parameters:
Keywords = nonverbal behaviour

22 pages, 871 KiB  
Article
The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
by Sharifa Alghowinem, Sabrina Caldwell, Ibrahim Radwan, Michael Wagner and Tom Gedeon
Information 2025, 16(1), 6; https://doi.org/10.3390/info16010006 - 26 Dec 2024
Viewed by 300
Abstract
Detecting deceptive behaviour for surveillance and border protection is critical for a country’s security. With the advancement of technology in relation to sensors and artificial intelligence, recognising deceptive behaviour could be performed automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept for deception detection. Recognising deceptive behaviour has been attempted; however, only a few studies have analysed this behaviour from gait and body movement. This research takes a multimodal approach to deception detection from gait, where we fuse features extracted from body movement behaviours in a video signal, acoustic features from walking steps in an audio signal, and the dynamics of walking movement using an accelerometer sensor. Using the video recordings of walking from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification results reached an accuracy of up to 88% through feature fusion, with an average of 60% across both single and multimodal signals. Analysing body movement using a single modality showed that the visual signal had the highest performance, followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion, where hybrid fusion not only achieved the highest classification results but also increased the confidence of the results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. From these baseline results, we conclude that pattern recognition techniques can help characterise deceptive behaviour; future work will focus on tuning and enhancing the results and techniques.
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
Figures:
Figure 1: Summary of the guilty behaviour detection from walking.
Figure 2: Camera positions during participant movement (blue triangle indicates angle direction upward of camera view; yellow indicates angle direction downward of camera view).
Figure 3: Sample of body joints’ localisation while walking the stairs (red lines relate to the right side of the body and the blue ones to the left side).
Figure 4: Interpretation of the selected features from each modality. (a) Top body movement features. (b) Top step acoustics features. (c) Top accelerometer sensor features.
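The early and late fusion schemes named in this abstract can be illustrated with a minimal sketch. All feature arrays, their dimensions, and the SVM classifier are assumptions for illustration; the paper's actual features and models may differ.

```python
# Toy sketch of early vs. late feature fusion for two-category
# (guilty / not guilty) classification. Data is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 49                              # subjects, as in the Whodunnit dataset
video = rng.normal(size=(n, 32))    # hypothetical body-movement features
audio = rng.normal(size=(n, 16))    # hypothetical step-acoustics features
accel = rng.normal(size=(n, 8))     # hypothetical accelerometer features
y = rng.integers(0, 2, size=n)      # guilty / not guilty labels

# Early fusion: concatenate all modality features, train one classifier.
early_pred = cross_val_predict(SVC(), np.hstack([video, audio, accel]), y, cv=5)

# Late fusion: one classifier per modality, then average class probabilities.
probs = [
    cross_val_predict(SVC(probability=True), X, y, cv=5, method="predict_proba")[:, 1]
    for X in (video, audio, accel)
]
late_pred = (np.mean(probs, axis=0) > 0.5).astype(int)
```

Hybrid fusion, which the abstract reports as the strongest, would combine both levels, for example by feeding the per-modality probabilities alongside the concatenated features into a final classifier.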
18 pages, 2763 KiB  
Article
Impact of Robot Size and Number on Human–Robot Persuasion
by Abeer Alam, Michael Lwin, Aila Khan and Omar Mubin
Information 2024, 15(12), 782; https://doi.org/10.3390/info15120782 - 5 Dec 2024
Viewed by 461
Abstract
Technological progress has seamlessly integrated digital assistants into our everyday lives, sparking an interest in social robots that communicate through both verbal and non-verbal means. The potential of these robots to influence human behaviour and attitudes holds significant implications for fields such as healthcare, marketing, and promoting sustainability. This study investigates how the design and behavioural aspects of social robots affect their ability to persuade, drawing on principles from human interaction to enhance the quality of human–robot interactions. Conducted in three stages, the experiments involved 73 participants, offering a comprehensive view of human responses to robotic persuasion. Surprisingly, the findings reveal that individuals tend to be more receptive to a single robot than to groups of robots. Nao was identified as more effective and capable of persuasion than Pepper. This study shows that successful persuasion by robots depends on social influence, the robot’s appearance, and people’s past experiences with technology.
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
Figures:
Graphical abstract
Figure 1: System architecture.
Figure 2: Scenario construction.
Figure 3: Mean values for perceived persuasion and perceived competence per condition: a single robot vs. multiple robots.
Figure 4: Mean values for perceived persuasion and perceived competence per condition: Pepper vs. Nao vs. multiple robots.
Figure 5: Mean values for perceived persuasion and perceived competence per condition: male vs. female across all stages.
Figure 6: Familiarity with technology (error bars displaying standard error, multiplier 2).
Figure 7: Subjective persuasion (error bars displaying standard error, multiplier 2).
27 pages, 2043 KiB  
Article
Computerised Attention Functions Training Versus Computerised Executive Functions Training for Children with Attention Deficit/Hyperactivity Disorder: A Randomised Controlled Trial
by Inbar Lucia Trinczer and Lilach Shalev
J. Clin. Med. 2024, 13(23), 7239; https://doi.org/10.3390/jcm13237239 - 28 Nov 2024
Viewed by 668
Abstract
Background: Attention deficit/hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder characterised by deficits in attention, hyperactivity, and impulsivity. Current treatments, such as stimulant medication and behavioural therapy, ameliorate symptoms but do not address the core cognitive dysfunctions. This study aimed to investigate the effects of two computerised neurocognitive training programs, attention functions training and executive functions training, in children with ADHD. Methods: Eighty children with ADHD (ages 8–13) were randomly assigned to one of three groups: attention functions training (AFT), targeting sustained, selective-spatial, orienting, and executive attention; executive functions training (EFT), focusing on working memory, cognitive flexibility, and problem solving; or a passive control group. Training sessions were administered in small groups twice a week for nine weeks. Participants underwent comprehensive assessments of attention (Continuous Performance Test, Conjunctive Visual Search Task), executive functions (Corsi Block-Tapping Tasks), nonverbal reasoning (Raven’s Colored Progressive Matrices), parent-rated behavioural symptoms, and arithmetic performance at baseline, post-intervention, and follow-up. Results: The AFT group demonstrated significant improvements in sustained and selective-spatial attention, nonverbal reasoning, inattentive symptoms, and arithmetic performance, and most improvements persisted at follow-up. The EFT group showed gains in nonverbal reasoning and inattentive symptoms, although no improvements were documented in working memory or in parent ratings of executive functions. Conclusions: The AFT program that addressed core attentional functions in children with ADHD produced robust cognitive and behavioural benefits, whereas the EFT program yielded behavioural benefits and a limited improvement in executive functions. Future research should explore different training protocols for broader gains in executive functions. These findings support the potential of theory-driven, structured neurocognitive training targeting basic cognitive functions as an effective small-group intervention for ADHD.
Figures:
Figure 1: Recruitment and randomisation of the participants.
Figure 2: Training protocol structure and content. (a) The 75 min session structure, consisting of computerised training intervals and group activities. (b) Components of the computerised AFT protocol intervals. (c) Components of the computerised EFT protocol intervals.
Figure 3: Sustained attention: performance across groups and testing sessions. (a) Standard deviation (SD) of reaction time (RT) in the CPT, as a function of time (T1, T2) and group (AFT, EFT, PC). (b) Omission error rate in the CPT, as a function of time (T1, T2) and group (AFT, EFT, PC). (c) SD of RT in the CPT, as a function of time (T1, T2, T3) and group (AFT, EFT). (d) Omission error rate in the CPT, as a function of time (T1, T2, T3) and group (AFT, EFT). Error bars represent the standard error of the mean (SEM).
Figure 4: Selective-spatial attention: performance in the CVST by group and testing session. (a) Differences in RTs between T1 and T2 of the ‘target present’ displays as a function of group (AFT, EFT, PC). (b) Differences in RTs between T1 and T2 of the ‘target absent’ displays as a function of group (AFT, EFT, PC). (c) Differences in RTs between T1 and T3 of the ‘target present’ displays as a function of group (AFT, EFT). (d) Differences in RTs between T1 and T3 of the ‘target absent’ displays as a function of group (AFT, EFT). Error bars represent the standard error of the mean (SEM).
Figure 5: Nonverbal abstract reasoning: performance in Raven’s Colored Progressive Matrices (CPM) by group and testing session. (a) Total raw score in the CPM, as a function of time (T1, T2) and group (AFT, EFT, PC). (b) Total raw score in the CPM, as a function of time (T1, T2, T3) and group (AFT, EFT). Error bars represent the standard error of the mean (SEM).
Figure 6: CBCL attention problems syndrome sub-scale score as a function of time (T1, T2) and group (AFT, EFT, PC). Error bars represent the standard error of the mean (SEM).
20 pages, 860 KiB  
Article
Exploring the Effectiveness of Evaluation Practices for Computer-Generated Nonverbal Behaviour
by Pieter Wolfert, Gustav Eje Henter and Tony Belpaeme
Appl. Sci. 2024, 14(4), 1460; https://doi.org/10.3390/app14041460 - 10 Feb 2024
Cited by 1 | Viewed by 953
Abstract
This paper compares three methods for evaluating computer-generated motion behaviour for animated characters: two commonly used direct rating methods and a newly designed questionnaire. The questionnaire is specifically designed to measure the human-likeness, appropriateness, and intelligibility of the generated motion. Furthermore, this study investigates the suitability of these evaluation tools for assessing subtle forms of human behaviour, such as the subdued motion cues shown when listening to someone. This paper reports six user studies, namely studies that directly rate the appropriateness and human-likeness of a computer character’s motion, along with studies that instead rely on a questionnaire to measure the quality of the motion. As test data, we used the motion generated by two generative models and recorded human gestures, which served as a gold standard. Our findings indicate that when evaluating gesturing motion, the direct rating of human-likeness and appropriateness is to be preferred over a questionnaire. However, when assessing the subtle motion of a computer character, even the direct rating method yields less conclusive results. Despite demonstrating high internal consistency, our questionnaire proves to be less sensitive than directly rating the quality of the motion. The results provide insights into the evaluation of human motion behaviour and highlight the complexities involved in capturing subtle nuances in nonverbal communication. These findings have implications for the development and improvement of motion generation models and can guide researchers in selecting appropriate evaluation methodologies for specific aspects of human behaviour.
(This article belongs to the Section Computing and Artificial Intelligence)
Figures:
Figure 1: Participants were asked to rate each statement in the questionnaire on a scale from 1 to 5 using the following anchors: (1) disagree, (2) slightly disagree, (3) neither agree nor disagree, (4) slightly agree, and (5) agree.
Figure 2: A screenshot displaying the avatar in the HEMVIP interface [18]. This interface was used for studies 1 and 3. Each play button is linked to one video, and only one video was shown at a time. The user had to rate each video before being able to continue to the next page.
Figure 3: A screenshot of the pairwise interface (as introduced in [42]) used in studies 2 and 4.
Figure 4: A screenshot of the interface used for the questionnaire. Participants were instructed that they were evaluating the motion for the left video. Each video was accompanied by 15 questions (not all visible in the image).
Figure 5: Boxplots of human-likeness scores on gesturing for StyleGestures (SG), baseline (BL), and ground truth (GT) conditions.
Figure 6: Stacked bar charts showing the percentage of votes on gesturing for StyleGestures (SG), baseline (BL), and ground truth (GT) conditions in study 2.
Figure 7: Boxplots of human-likeness scores for listening behaviour.
Figure 8: Stacked bar charts showing the percentage of votes on listening for baseline (BL), StyleGestures (SG), and ground truth (GT) in study 4.
Figure 9: Mean and error bars for baseline (BL), StyleGestures (SG), and ground truth (GT) in study 5.
Figure 10: Mean and error bars for baseline (BL), StyleGestures (SG), and ground truth (GT) in study 6.
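The "high internal consistency" the abstract reports for its questionnaire is typically checked with Cronbach's alpha. A minimal sketch follows, assuming a hypothetical matrix of Likert ratings (rows: participants, columns: the 15 items mentioned for the questionnaire); the paper may have used a different reliability statistic.

```python
# Cronbach's alpha for a Likert questionnaire: the ratio of summed
# per-item variance to total-score variance, rescaled by item count.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_participants, n_items) matrix of ratings."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_var / total_var)

ratings = np.random.default_rng(1).integers(1, 6, size=(40, 15))  # synthetic 1-5 ratings
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # random data yields alpha near 0
```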
13 pages, 656 KiB  
Article
How the Spreading and Intensity of Interictal Epileptic Activity Are Associated with Visuo-Spatial Skills in Children with Self-Limited Focal Epilepsy with Centro-Temporal Spikes
by Pauline Dontaine, Coralie Rouge, Charline Urbain, Sophie Galer, Romain Raffoul, Antoine Nonclercq, Dorine Van Dyck, Simon Baijot and Alec Aeby
Brain Sci. 2023, 13(11), 1566; https://doi.org/10.3390/brainsci13111566 - 8 Nov 2023
Cited by 1 | Viewed by 1375
Abstract
This paper investigates brain–behaviour associations between interictal epileptic discharges and cognitive performance in a population of children with self-limited focal epilepsy with centro-temporal spikes (SeLECTS). Sixteen patients with SeLECTS underwent an extensive neuropsychological assessment, including verbal short-term and episodic memory, non-verbal short-term memory, attentional abilities and executive function. Two quantitative EEG indices were analysed, i.e., the Spike Wave Index (SWI) and the Spike Wave Frequency (SWF), and one qualitative EEG index, i.e., the EEG score, was used to evaluate the spreading of focal SW to other parts of the brain. We investigated associations between EEG indices and neuropsychological performance with non-parametric Spearman correlation analyses, including correction for multiple comparisons. The results showed a significant negative correlation between (i) the awake EEG score and the Block Tapping Test, a visuo-spatial short-term memory task, and (ii) the sleep SWI and the Tower of London, a visuo-spatial planning task (pcorr < 0.05). These findings suggest that, in addition to the usual quantitative EEG indices, the EEG analysis should include the qualitative EEG score evaluating the spreading of focal SW to other parts of the brain and that neuropsychological assessment should include visuo-spatial skills.
Figures:
Figure 1: Significant association between the Block Tapping Test and the awake qualitative EEG score (rs = −0.727; pcorr < 0.05).
Figure 2: Significant association between the Tower of London and the sleep Spike Wave Index (SWI; rs = −0.87; pcorr < 0.05).
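The analysis style described in this abstract, non-parametric Spearman correlations with multiple-comparison correction, can be sketched as follows. The data are synthetic and the Holm correction is an assumption; the paper does not state which correction method it used.

```python
# Spearman correlations between EEG indices and task scores,
# corrected for multiple comparisons across all index/task pairs.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n = 16                                    # patients, as in the study
eeg_indices = {"awake EEG score": rng.normal(size=n),
               "sleep SWI": rng.normal(size=n)}
tasks = {"Block Tapping Test": rng.normal(size=n),
         "Tower of London": rng.normal(size=n)}

pairs, pvals = [], []
for ei_name, ei in eeg_indices.items():
    for task_name, score in tasks.items():
        rho, p = spearmanr(ei, score)
        pairs.append((ei_name, task_name, rho))
        pvals.append(p)

reject, p_corr, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for (ei_name, task_name, rho), p, sig in zip(pairs, p_corr, reject):
    print(f"{ei_name} vs {task_name}: rho={rho:.2f}, p_corr={p:.3f}, significant={sig}")
```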
22 pages, 1359 KiB  
Article
Identifying Which Relational Cues Users Find Helpful to Allow Tailoring of e-Coach Dialogues
by Sana Salman, Deborah Richards and Mark Dras
Multimodal Technol. Interact. 2023, 7(10), 93; https://doi.org/10.3390/mti7100093 - 2 Oct 2023
Cited by 2 | Viewed by 2140
Abstract
Relational cues are extracts from actual verbal dialogues that help build the therapist–patient working alliance and a stronger bond through the depiction of empathy, respect and openness. Embodied conversational agents (ECAs) are human-like virtual agents that exhibit verbal and non-verbal behaviours. In the digital health space, ECAs act as health coaches or experts. ECA dialogues have previously been designed to include relational cues to motivate patients to change their current behaviours and encourage adherence to a treatment plan. However, there is little understanding of who finds specific relational cues delivered by an ECA helpful or not. Drawing the literature together, we have categorised relational cues into empowering, working alliance, affirmative and social dialogue. In this study, we embedded the dialogue of Alex, an ECA that encourages healthy behaviours, with all of the relational cues (empathic Alex) or with none of them (neutral Alex). A total of 206 participants were randomly assigned to interact with either empathic or neutral Alex and were also asked to rate the helpfulness of selected relational cues. We explore whether the perceived helpfulness of the relational cues is a good predictor of users’ intention to change the recommended health behaviours and/or of the development of a working alliance. Our models also investigate the impact of individual factors, including the gender, age, culture and personality traits of the users. The idea is to establish whether a group of individuals with similar individual factors found a particular cue or group of cues helpful. This will inform future versions of Alex, allowing it to tailor its dialogue to specific groups, and help in building ECAs with multiple personalities and roles.
Figures:
Figure 1: Screenshot of interaction with Alex.
Figure 2: One of Alex’s dialogues with color-coded relational cues (see Table 1). A user either receives the empathic version with relational cues or the neutral one with the text inside the brackets.
Figure A1: Decision Tree classifier.
Figure A2: Logistic Regression classifier.
Figure A3: Input variables with their weightage in determining the target variable.
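The appendix figures name decision tree and logistic regression classifiers. A hedged sketch of the general modelling setup the abstract describes, predicting intention to change from cue helpfulness plus individual factors, follows; all feature names, encodings, and the target coding are hypothetical.

```python
# Logistic regression predicting intention to change (yes/no) from
# perceived helpfulness of relational cue categories and individual
# factors. Data is synthetic placeholder material.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 206                          # participants, as in the study
X = np.column_stack([
    rng.integers(1, 6, size=n),  # helpfulness: empowering cues (1-5)
    rng.integers(1, 6, size=n),  # helpfulness: working-alliance cues
    rng.integers(1, 6, size=n),  # helpfulness: affirmative cues
    rng.integers(1, 6, size=n),  # helpfulness: social-dialogue cues
    rng.integers(0, 2, size=n),  # gender (binary-coded here for brevity)
    rng.integers(18, 70, size=n) # age
])
y = rng.integers(0, 2, size=n)   # intention to change (hypothetical coding)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```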
24 pages, 614 KiB  
Article
How Does Abusive Supervision Affect Organisational Gossip? Understanding the Mediating Role of the Dark Triad
by Fatih Uçan and Salih Börteçine Avci
Behav. Sci. 2023, 13(9), 730; https://doi.org/10.3390/bs13090730 - 31 Aug 2023
Viewed by 1668
Abstract
According to trait activation theory (TAT), personality characteristics are dormant until contextual elements stir them into action. Personality traits are expected to be activated in the context of abusive supervision. From this perspective, our paper examines whether abusive supervision affects organisational gossiping behaviour through the dark triad. To this end, this study examines the mediating effects of the dark triad on the relationship between abusive supervision and organisational gossip, based on cross-sectional data gathered from two separate samples. The results from structural equation modelling show that abusive supervision activates the dark triad, and its context influences organisational gossip in line with the TAT. In addition, our results show that abusive supervision positively affects gossip for information gathering and relationship building, with the dark triad fully mediating this relationship. This finding implies that abusive supervision is a contextual factor, and as such, behaviours such as consistent ill treatment and non-violent, verbal or non-verbal hostile acts will have long-term and lasting effects on organisational communication in many organisations. This study offers significant policy implications concerning behavioural issues within education-centred organisations.
(This article belongs to the Special Issue Important Perspectives on Workplace Relationships)
Figures:
Figure 1: The conceptual model.
24 pages, 2539 KiB  
Review
The Application of Biometric Approaches in Agri-Food Marketing: A Systematic Literature Review
by Lei Cong, Siqiao Luan, Erin Young, Miranda Mirosa, Phil Bremer and Damir D. Torrico
Foods 2023, 12(16), 2982; https://doi.org/10.3390/foods12162982 - 8 Aug 2023
Cited by 3 | Viewed by 2278
Abstract
A challenge in social marketing studies is the cognitive biases in consumers’ conscious and self-reported responses. To help address this concern, biometric techniques have been developed to obtain data from consumers’ implicit and non-verbal responses. A systematic literature review was conducted to explore biometric applications’ role in agri-food marketing to provide an integrated overview of this topic. A total of 55 original research articles and four review articles were identified, classified, and reviewed. It was found that there is a steady growth in the number of studies applying biometric approaches, with eye-tracking being the dominant method used to investigate consumers’ perceptions in the last decade. Most of the studies reviewed were conducted in Europe or the USA. Other biometric techniques used included facial expressions, heart rate, body temperature, and skin conductance. A wide range of scenarios concerning consumers’ purchase and consumption behaviour for agri-food products have been investigated using biometric-based techniques, indicating their broad applicability. Our findings suggest that biometric techniques are expanding for researchers in agri-food marketing, benefiting both academia and industry.
(This article belongs to the Section Sensory and Consumer Sciences)
Figures:
Figure 1: Flow chart of the different phases of the systematic review.
Figure 2: Number of selected articles by year of publication (n = 59).
Figure 3: Geographical distribution of selected articles (n = 59).
Figure 4: Distribution of utilisation frequency in selected research articles (n = 55).
Figure 5: Research questions that can be addressed by biometric approaches across different scenarios of the purchase and consumption behaviour of agri-food products.
17 pages, 2040 KiB  
Article
Behavioural Models of Risk-Taking in Human–Robot Tactile Interactions
by Qiaoqiao Ren, Yuanbo Hou, Dick Botteldooren and Tony Belpaeme
Sensors 2023, 23(10), 4786; https://doi.org/10.3390/s23104786 - 16 May 2023
Cited by 2 | Viewed by 1809
Abstract
Touch can have a strong effect on interactions between people, and as such, it is expected to be important to the interactions people have with robots. In an earlier work, we showed that the intensity of tactile interaction with a robot can change how much people are willing to take risks. This study further develops our understanding of the relationship between human risk-taking behaviour, the physiological responses of the user, and the intensity of the tactile interaction with a social robot. We used data collected with physiological sensors during the playing of a risk-taking game (the Balloon Analogue Risk Task, or BART). The results of a mixed-effects model were used as a baseline to predict risk-taking propensity from physiological measures, and these results were further improved through the use of two machine learning techniques—support vector regression (SVR) and multi-input convolutional multihead attention (MCMA)—to achieve low-latency risk-taking behaviour prediction during human–robot tactile interaction. The performance of the models was evaluated using the mean absolute error (MAE), root mean squared error (RMSE), and R-squared score (R2); the best result was obtained with MCMA, yielding an MAE of 3.17, an RMSE of 4.38, and an R2 of 0.93, compared with the baseline’s 10.97 MAE, 14.73 RMSE, and 0.30 R2. The results of this study offer new insights into the interplay between physiological data and the intensity of risk-taking behaviour in predicting human risk-taking behaviour during human–robot tactile interactions. This work illustrates that physiological activation and the intensity of tactile interaction play a prominent role in risk processing during human–robot tactile interaction and demonstrates that it is feasible to use human physiological and behavioural data to predict risk-taking behaviour in human–robot tactile interaction.
Figures:
Figure 1: The four interaction conditions used in the study [12].
Figure 2: Model diagnostic plots, consisting of four subplots: the top-left plot, “Residuals vs. Fitted”, evaluates the assumption of a linear relationship; the top-right plot, “Normal Q-Q”, tests the normality of the residuals; the bottom-left plot, “Scale-Location (or Spread-Location)”, examines the homogeneity of variance of the residuals; and the bottom-right plot, “Residuals vs. Leverage”, identifies influential cases that may significantly impact the regression results when included or excluded from the analysis.
Figure 3: Lasso coefficient path visualization. The red and blue vertical lines mark the minimum-lambda and 1-standard-error-lambda values, respectively, obtained from a cross-validated lasso regression model.
Figure 4: Data flow of the mixed effects model.
Figure 5: Mixed effects model: actual risk-taking behaviour vs. predicted risk-taking behaviour.
Figure 6: Data flow for the SVR model.
Figure 7: SVR model: actual risk-taking behaviour vs. predicted risk-taking behaviour.
Figure 8: The proposed multi-input convolutional multihead attention (MCMA) model.
Figure 9: The convolutional block in the proposed MCMA model.
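The three evaluation metrics quoted in this abstract (MAE, RMSE, R2) are standard regression measures; a minimal sketch of how they are computed follows. The target and prediction values are placeholders, not the paper's outputs.

```python
# MAE, RMSE and R^2 for a risk-taking regression model.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([30.0, 42.0, 25.0, 38.0, 50.0])  # e.g., BART pumps (placeholder)
y_pred = np.array([28.5, 45.0, 24.0, 40.0, 47.5])  # model output (placeholder)

mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # RMSE = sqrt of MSE
r2 = r2_score(y_true, y_pred)                       # 1 - SS_res / SS_tot
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}, R2={r2:.2f}")
```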
18 pages, 4295 KiB  
Article
Deep Learning-Based Cost-Effective and Responsive Robot for Autism Treatment
by Aditya Singh, Kislay Raj, Teerath Kumar, Swapnil Verma and Arunabha M. Roy
Drones 2023, 7(2), 81; https://doi.org/10.3390/drones7020081 - 23 Jan 2023
Cited by 79 | Viewed by 6737
Abstract
Recent studies state that, for a person with autism spectrum disorder, learning and improvement is often seen in environments where technological tools are involved. A robot is an excellent tool to be used in therapy and teaching. It can transform teaching methods, not just in classrooms but also in in-house clinical practice. With the rapid advancement of deep learning techniques, robots have become more capable of handling human behaviour. In this paper, we present a cost-efficient, socially designed robot called ‘Tinku’, developed to assist in teaching children with special needs. ‘Tinku’ is low cost but loaded with features and has the ability to produce human-like expressions. Its design is inspired by the widely accepted animated character ‘WALL-E’. Its capabilities include offline speech processing and computer vision—we used light object detection models, such as Yolo v3-tiny and single shot detector (SSD)—for obstacle avoidance, non-verbal communication, expressing emotions in an anthropomorphic way, etc. It uses an onboard deep learning technique to localize objects in the scene and uses this information for semantic perception. We have developed several lessons for training using these features; a sample lesson about brushing is discussed to show the robot’s capabilities. The robot was developed under the supervision of clinical experts, and its conditions for application were taken care of. A small survey on its appearance is also discussed. More importantly, it was tested on small children for acceptance of the technology and compatibility in terms of voice interaction. It helps autistic children using state-of-the-art deep learning models. Autism spectrum disorders are being increasingly identified in today’s world, and studies show that children interact with technology more comfortably than with a human instructor. To meet this demand, we present a cost-effective solution in the form of a robot with common lessons for training an autism-affected child.
(This article belongs to the Topic Artificial Intelligence in Sensors)
Figures:
Figure 1: Schematics of the Dynamixel servo driver.
Figure 2: Single shot detector architecture [50].
Figure 3: Yolo V3-tiny architecture [54].
Figure 4: Flow chart of the obstacle avoidance algorithm.
Figure 5: Flow chart of the lesson.
Figure 6: A survey report on the appearance of Tinku. Note: the sample size was 60, including both genders, with ages ranging from 15 to 62 years.
Figure 7: Two versions of Tinku.
Figure 8: Digital media sample used to teach the different lessons.
Figure 9: Test results of lesson 3.
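The abstract names Yolo v3-tiny as one of the robot's lightweight detectors. A hedged sketch of running such a model with OpenCV's DNN module follows; the file paths are placeholders, and the robot's actual detection pipeline may differ.

```python
# Lightweight object detection with a pretrained yolov3-tiny model
# via OpenCV's DNN module (standard Darknet config/weight files).
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("scene.jpg")  # placeholder input image
class_ids, confidences, boxes = model.detect(
    frame, confThreshold=0.5, nmsThreshold=0.4
)
for cls, conf, box in zip(class_ids, confidences, boxes):
    x, y, w, h = box                                              # box = (x, y, w, h)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw detection
```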
17 pages, 547 KiB  
Article
A Study on the Role of Affective Feedback in Robot-Assisted Learning
by Gabriela Błażejowska, Łukasz Gruba, Bipin Indurkhya and Artur Gunia
Sensors 2023, 23(3), 1181; https://doi.org/10.3390/s23031181 - 20 Jan 2023
Cited by 8 | Viewed by 2776
Abstract
In recent years, there have been many approaches to using robots to teach computer programming. In intelligent tutoring systems and computer-aided learning, there is also some research showing that affective feedback to the student increases learning efficiency. However, the few studies on incorporating an emotional personality into the robot in robot-assisted learning have found differing results. To explore this issue further, we conducted a pilot study to investigate the effect of positive verbal encouragement and non-verbal emotive behaviour of the Miro-E robot during a robot-assisted programming session. The participants were tasked with programming the robot’s behaviour. In the experimental group, the robot monitored the participants’ emotional state via their facial expressions and provided affective feedback to the participants after completing each task. In the control group, the robot responded in a neutral way. The participants filled out a questionnaire before and after the programming session. The results show a positive reaction of the participants to the robot and the exercise. Though the number of participants was small, as the experiment was conducted during the pandemic, a qualitative analysis of the data was carried out. We found that the greatest affective outcome of the session was for students who had little experience of or interest in programming beforehand. We also found that the affective expressions of the robot had a negative impact on its likeability, revealing vestiges of the uncanny valley effect.
(This article belongs to the Special Issue Recognition Robotics)
Figures:
Figure 1: Miro-E robot.
Figure 2: An example program written in MiroCode’s visual interface.
Figure 3: Architecture diagram. Please refer to the text for an explanation of the acronyms.
Figure 4: Visualisation of the valence-arousal plane for the final result calculation.
Figure 5: Heatmap showing the valence and arousal for the control group participants.
Figure 6: Heatmap showing the valence and arousal for the experimental group participants.
Figure 7: Graphs of valence and arousal over time.
19 pages, 610 KiB  
Systematic Review
Clinical, Cognitive and Neurodevelopmental Profile in Tetrasomies and Pentasomies: A Systematic Review
by Giacomina Ricciardi, Luca Cammisa, Rossella Bove, Giorgia Picchiotti, Matteo Spaziani, Andrea M. Isidori, Franca Aceti, Nicoletta Giacchetti, Maria Romani and Carla Sogos
Children 2022, 9(11), 1719; https://doi.org/10.3390/children9111719 - 9 Nov 2022
Cited by 7 | Viewed by 2833
Abstract
Background: Sex chromosome aneuploidies (SCAs) are a group of disorders characterised by an abnormal number of sex chromosomes. The collective prevalence rate of SCAs is estimated to be around 1 in 400–500 live births; sex chromosome trisomies (e.g., XXX, XXY, XYY) are most frequent, while tetra- and pentasomies (e.g., XXXX, XXXXX, XXXY, XXXXY) are rarer, and the most common is 48, XXYY syndrome. The presence of additional X and/or Y chromosomes is believed to cause neurodevelopmental differences, with increased risk for developmental delays, language-based learning disabilities, cognitive impairments, executive dysfunction, and behavioural and psychological disorders. Aim of the Study: Our review analyses the neurocognitive, linguistic and behavioural profile of patients affected by supernumerary sex chromosome aneuploidies (tetrasomy and pentasomy) to better understand the specific areas of weakness, in order to provide specific rehabilitation therapy. Methods: The literature search was performed by two authors independently. We used MEDLINE, PubMed, and PsycINFO search engines to identify sources of interest, without year or language restrictions. After an accurate selection, 16 articles fulfilled the inclusion and exclusion criteria. Results and Conclusions: The international literature has described single aspects of the neuropsychological profile of 48, XXYY and 49, XXXXY patients. In 48, XXYY patients, various degrees of psychosocial/executive functioning issues have been reported, and there is an increased frequency of behavioural problems in childhood. Developmental delay and behavioural problems are the most common presenting problems, even if anxiety, depression and oppositional defiant disorder are also reported. They also show generalized difficulties with socialization and communication. Cognitive abilities are lower in measures of verbal IQ than in measures of performance IQ. Visuospatial skills are a relative strength compared to verbal skills. In patients with 49, XXXXY, both intellectual and adaptive functioning skills fall into the disability range, with better non-verbal cognitive performance. Speech and language testing reveals more deficits in expressive language than in receptive language and comprehension. Anxiety, thought problems, internalizing and externalizing problems, and deficits in social cognition and communication are reported. Behavioural symptoms lessen from school age to adolescence, with the exception of thought problems and anxiety. Individuals affected by sex chromosome aneuploidies show testosterone deficiency, microorchidism, lack of pubertal progression and infertility. Hormone replacement therapy (HRT) is usually recommended for these patients: different studies have found that testosterone-based HRT benefits a wide range of areas affected in these disorders, improving not only the neuromotor, cognitive and behavioural profile but also structural anomalies of the brain (i.e., an increase in grey matter volume in the temporal lobe). In conclusion, further studies are needed to better understand the neuropsychological profile with a complete evaluation, including neurocognitive and psychosocial aspects, and to establish the real impact of HRT on improving the cognitive and behavioural profile of these patients.
Figures:
Figure 1: Selection process.
10 pages, 729 KiB  
Article
Neurobehavioral Associations with NREM and REM Sleep Architecture in Children with Autism Spectrum Disorder
by Jennifer Nguyen, Bo Zhang, Ellen Hanson, Dimitrios Mylonas and Kiran Maski
Children 2022, 9(9), 1322; https://doi.org/10.3390/children9091322 - 30 Aug 2022
Cited by 3 | Viewed by 2479
Abstract
Objective: Insomnia and daytime behavioral problems are common issues in pediatric autism spectrum disorder (ASD), yet their specific underlying relationships with Non-Rapid Eye Movement (NREM) and Rapid Eye Movement (REM) sleep architecture are understudied. We hypothesize that REM sleep alterations (REM%, REM EEG power) are associated with more internalizing behaviors and that NREM sleep deficits (N3%; slow wave activity (SWA) 0.5–3 Hz EEG power) are associated with increased externalizing behaviors in children with ASD vs. typically developing controls (TD). Methods: In an age- and gender-matched pediatric cohort of n = 23 ASD and n = 20 TD participants, we collected macro/micro sleep architecture with an overnight home polysomnogram and daytime behavior scores with the Child Behavior Checklist (CBCL). Results: Controlling for non-verbal IQ and medication use, ASD and TD children have similar REM and NREM sleep architecture. Only ASD children show positive relationships between REM%, REM theta power and REM beta power and internalizing scores. Only TD participants showed an inverse relationship between NREM SWA and externalizing scores. Conclusion: REM sleep measures reflect concerning internalizing behaviors in ASD and could serve as a biomarker for mood disorders in this population. While improving deep sleep may help externalizing behaviors in TD, we do not find evidence of this relationship in ASD.
(This article belongs to the Special Issue Sleep Disorders in Children with Neurodevelopmental Disorders)
Figures:
Figure 1: REM sleep percentage correlates with the Child Behavior Checklist internalizing (CBCLi) score in the ASD group but not TD. Bolded p-values represent significance at p < 0.05.
22 pages, 8810 KiB  
Article
Yōkobo: A Robot to Strengthen Links Amongst Users with Non-Verbal Behaviours
by Siméon Capy, Pablo Osorio, Shohei Hagane, Corentin Aznar, Dora Garcin, Enrique Coronado, Dominique Deuff, Ioana Ocnarescu, Isabelle Milleville and Gentiane Venture
Machines 2022, 10(8), 708; https://doi.org/10.3390/machines10080708 - 18 Aug 2022
Cited by 9 | Viewed by 2772
Abstract
Yōkobo is a robject; it was designed using the principle of slow technology and it aims to strengthen the bond between members (e.g., a couple). It greets people at the entrance and mirrors their interactions and the environment around them. It was constructed by applying the notions of a human–robot–human interaction. Created by joint work between designers and engineers, the form factor (semi-abstract) and the behaviours (nonverbal) were iteratively formed from the early stage of the design process. Integrated into the smart home, Yōkobo uses expressive motion as a communication medium. Yōkobo was tested in our office to evaluate its technical robustness and motion perception ahead of future long-term experiments with the target population. The results show that Yōkobo can sustain long-term interaction and serve as a welcoming partner.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Figures:
Figure 1: Parts and kinematic diagram of Yōkobo. Dimensions of the robot in centimetres: H = 33; L = 36; W = 24; ø = 15.
Figure 2: Hardware architecture. The red arrows symbolise the power link, the blue ones the data, and the purple ones both.
Figure 3: Finite state machines designed to command Yōkobo’s services and behaviours.
Figure 4: Yōkobo in the experimental condition, and the graffiti wall with, in this case, question 2 and the answers. The translations of the Japanese answers are: “I thought Yokobo thoughtfully stops its head as I approach to it so that I can pick stuff from the bowl easily.” and “It appears to be shaking the head hard from front point of view, however, it looks like waving the hand and waiting for people from side point of view.”
Figure 5: Participants’ cumulative interaction time for both experiments (from T-L), with D for days (1 to 12) and W1 and W2 for weeks; each bar corresponds to a user. New participants for E2 are denoted as P2 plus their respective tag numbers. The pattern for each participant represents the couple they belong to. Each bar corresponds to the cumulative interaction time that each SP and the V group had with Yōkobo per day. The V group is also considered, since its interactions are valuable for differentiating between groups and how each one decides to interact with the robot. The graphs also show that, on average, each SP left at least one message per experiment. This result, combined with the questionnaire data, allows us to discern patterns associated with the robot’s interactions, robustness, or personal perceptions, which is later beneficial for discovering the pain points of Yōkobo’s interactions and qualifying its perceptual impact on the user.
Figure 6: Interaction time throughout both experiments. Each sample represents an interaction with Yōkobo that completes the states loop, i.e., from the Wake-Up trigger to the start of the Go Back to Rest state. Interaction time is measured in seconds and covers every interaction, without differentiating between SPs or V. The minimum, maximum, and average times for both experiments are also shown: for E1, the maximum interaction time is 826 s, the minimum 16 s, and the average 76 s; for E2, the average is 113.24 s, the maximum 1307 s, and the minimum 13.39 s.
Figure A1: Picture of Yōkobo used in the questionnaire, to learn the vocabulary participants used to describe the robot. The numbered marks refer to the questions in Table A1.
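Figure 3 describes finite state machines commanding Yōkobo's behaviours, and the Figure 6 caption names two of the states (Wake-Up, Go Back to Rest). A minimal FSM sketch in that spirit follows; the state set and transition triggers are assumptions for illustration, and the robot's real controller is more elaborate.

```python
# A table-driven finite state machine: (state, event) pairs map to
# the next state; unknown events leave the state unchanged.
from enum import Enum, auto

class State(Enum):
    REST = auto()
    WAKE_UP = auto()
    INTERACT = auto()
    GO_BACK_TO_REST = auto()

TRANSITIONS = {
    (State.REST, "person_detected"): State.WAKE_UP,
    (State.WAKE_UP, "engaged"): State.INTERACT,
    (State.INTERACT, "person_left"): State.GO_BACK_TO_REST,
    (State.GO_BACK_TO_REST, "settled"): State.REST,
}

def step(state: State, event: str) -> State:
    """Advance the FSM by one event."""
    return TRANSITIONS.get((state, event), state)

state = State.REST
for event in ["person_detected", "engaged", "person_left", "settled"]:
    state = step(state, event)
    print(event, "->", state.name)
```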
26 pages, 28805 KiB  
Article
Sound Feedback for Social Distance: The Case for Public Interventions during a Pandemic
by William Primett, Hugo Plácido Da Silva and Hugo Gamboa
Electronics 2022, 11(14), 2151; https://doi.org/10.3390/electronics11142151 - 9 Jul 2022
Viewed by 2509
Abstract
Within the field of movement sensing and sound interaction research, multi-user systems have gradually gained interest as a means to facilitate an expressive non-verbal dialogue. When tied with studies grounded in psychology and choreographic theory, we consider the qualities of interaction that foster an elevated sense of social connectedness, non-contingent to occupying one’s personal space. Upon reflection of the newly adopted social distancing concept, we orchestrate a technological intervention, starting with interpersonal distance and sound at the core of interaction. Materialised as a set of sensory face-masks, a novel wearable system was developed and tested in the context of a live public performance, from which we obtain the users’ individual perspectives and correlate these with patterns identified in the recorded data. We identify and discuss traits of the users’ behaviour that were accredited to the system’s influence and construct four fundamental design considerations for physically distanced sound interaction. The study concludes with essential technical reflections, accompanied by an adaptation for a pervasive sensory intervention that is finally deployed in an open public space.
Figures:
Graphical abstract
Figure 1: Proximity sensor enclosure fitted onto the face mask with trigger (output) and echo (receiver) signals.
Figure 2: Spectrogram sample of the interactive soundscape showing an “open” to “closed” transition.
Figure 3: Visual representations for each scene. From left to right: (i) geometric boundaries and interceptions, (ii) interacting with the non-human, (iii) participation from external users.
Figure 4: Acceleration data recorded from scenes i–iii: the top row displays group median averages (1), with individual user data shown below (2). The final row aligns the peak cluster periods detected along the x-axis (3). A high-resolution version of the image is available in the Supplementary Material.
Figure 5: Annotations from the Scene iii rehearsal video. The arrows show the walking direction, with a dashed line tracing the dispersion of the mask-wearing group. A video recording is included in the Supplementary Material.
Figure 6: Spectrogram recording from two scenes, ii and iii, marked with interruptions of the granular soundscape.
Figure 7: Public installation using four proximity sensors placed inside foliage with hanging ribbon. Sensors are physically separated by sensing trajectories, identified by colour.
Figure 8: User instructions for the installation: (1) access the QR code; (2) load the app and select a colour; (3) increase the device volume; and (4) walk towards the chosen colour to initiate sound.
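Figure 1 shows a proximity sensor with trigger (output) and echo (receiver) signals, the standard ultrasonic ranging arrangement: the echo line stays high for the sound's round-trip time, so distance is half the travel time multiplied by the speed of sound. A minimal sketch of that conversion follows; the specific sensor hardware and wiring used in the paper are not stated, so this is only the underlying arithmetic.

```python
# Interpersonal distance from an ultrasonic trigger/echo sensor:
# distance = (echo high time * speed of sound) / 2 (round trip).
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 deg C

def distance_m(echo_high_seconds: float) -> float:
    """Distance implied by the round-trip echo time of one ping."""
    return echo_high_seconds * SPEED_OF_SOUND_M_S / 2.0

# e.g., a 5.8 ms echo pulse corresponds to roughly 1 m:
print(f"{distance_m(0.0058):.2f} m")
```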