
Search Results (4,064)

Search Parameters:
Keywords = emotion model

16 pages, 240 KiB  
Article
A Comparative Study of Sentiment Analysis on Customer Reviews Using Machine Learning and Deep Learning
by Logan Ashbaugh and Yan Zhang
Computers 2024, 13(12), 340; https://doi.org/10.3390/computers13120340 (registering DOI) - 15 Dec 2024
Abstract
Sentiment analysis is a key technique in natural language processing that enables computers to understand human emotions expressed in text. It is widely used in applications such as customer feedback analysis, social media monitoring, and product reviews. However, sentiment analysis of customer reviews presents unique challenges, including the need for large datasets and the difficulty in accurately capturing subtle emotional nuances in text. In this paper, we present a comparative study of sentiment analysis on customer reviews using both deep learning and traditional machine learning techniques. The deep learning models include Convolutional Neural Network (CNN) and Recursive Neural Network (RNN), while the machine learning methods consist of Logistic Regression, Random Forest, and Naive Bayes. Our dataset is composed of Amazon product reviews, where we utilize the star rating as a proxy for the sentiment expressed in each review. Through comprehensive experiments, we assess the performance of each model in terms of accuracy and effectiveness in detecting sentiment. This study provides valuable insights into the strengths and limitations of both deep learning and traditional machine learning approaches for sentiment analysis. Full article
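As a rough illustration of the classical machine-learning side of such a comparison, the sketch below trains the three classifiers named in the abstract (Logistic Regression, Random Forest, and Naive Bayes) on TF-IDF features, using the star rating as a sentiment label. The inline reviews, the 4-star positive/negative cut-off, and all hyperparameters are placeholder assumptions for demonstration, not the authors' dataset or setup, and the CNN/RNN side of the comparison is omitted.

```python
# Minimal sketch: comparing classical ML models for review sentiment.
# The reviews/ratings below are made-up placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

reviews = ["great product, works perfectly", "terrible, broke after a day",
           "exactly as described, very happy", "waste of money, do not buy",
           "decent quality for the price", "stopped working, very disappointed"] * 20
stars   = [5, 1, 5, 1, 4, 2] * 20
labels  = [1 if s >= 4 else 0 for s in stars]   # star rating as a sentiment proxy

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.25, random_state=0, stratify=labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": MultinomialNB(),
}
for name, model in models.items():
    model.fit(X_train_vec, y_train)
    acc = accuracy_score(y_test, model.predict(X_test_vec))
    print(f"{name}: accuracy = {acc:.3f}")
```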
24 pages, 9053 KiB  
Article
An Ensemble Deep Learning Approach for EEG-Based Emotion Recognition Using Multi-Class CSP
by Behzad Yousefipour, Vahid Rajabpour, Hamidreza Abdoljabbari, Sobhan Sheykhivand and Sebelan Danishvar
Biomimetics 2024, 9(12), 761; https://doi.org/10.3390/biomimetics9120761 (registering DOI) - 14 Dec 2024
Viewed by 259
Abstract
In recent years, significant advancements have been made in the field of brain–computer interfaces (BCIs), particularly in the area of emotion recognition using EEG signals. The majority of earlier research in this field has missed the spatial–temporal characteristics of EEG signals, which are critical for accurate emotion recognition. In this study, a novel approach is presented for classifying emotions into three categories, positive, negative, and neutral, using a custom-collected dataset. The dataset used in this study was specifically collected for this purpose from 16 participants, comprising EEG recordings corresponding to the three emotional states induced by musical stimuli. A multi-class Common Spatial Pattern (MCCSP) technique was employed for the processing stage of the EEG signals. These processed signals were then fed into an ensemble model comprising three autoencoders with Convolutional Neural Network (CNN) layers. A classification accuracy of 99.44 ± 0.39% for the three emotional classes was achieved by the proposed method. This performance surpasses previous studies, demonstrating the effectiveness of the approach. The high accuracy indicates that the method could be a promising candidate for future BCI applications, providing a reliable means of emotion detection. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)
Figures:
Figure 1: Set of emotions based on the valence and arousal dimensions.
Figure 2: The main framework of the proposed model.
Figure 3: Validation of SAM test.
Figure 4: The duration and order of the music tracks.
Figure 5: Part of the EEG signal for positive, negative, and neutral stages of T3 and F8 channels for Subject 4.
Figure 6: The MCCSP process flowchart.
Figure 7: Proposed ensemble model.
Figure 8: Visualization of the proposed architecture of the stacked autoencoder.
Figure 9: Brain topography after applying MCCSP for different classes and frequency bands.
Figure 10: Performance of the proposed model.
Figure 11: ROC analysis (a) and confusion matrix (b) based on the proposed model.
Figure 12: Visualization of a representation of the input data to the network and output of the third filter of the first and second layers for three classes: positive (a), negative (b), and neutral (c).
Figure 13: Comparing the accuracy (a) and network training time (b) with different functions.
Figure 14: Visual representation of examples for five different layers of the proposed network: input (a), output of autoencoders (b–d), and SoftMax output (e).
Figure 15: Visual representation of examples for five different layers of the proposed network with −4 dB SNR: input (a), output of autoencoders (b–d), and SoftMax output (e).
Figure 16: Traditional ensemble method framework.
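A very loose PyTorch sketch of the ensemble idea summarized in the abstract above: several convolutional autoencoders each encode an EEG window, and their latent codes feed a shared softmax head for the three emotion classes. The MCCSP preprocessing is omitted, and all shapes, layer sizes, and the random input are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: ensemble of CNN autoencoders for 3-class EEG emotion classification.
# Shapes and layer sizes are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, in_channels=8, latent_dim=32, length=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (length // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (length // 4)),
            nn.Unflatten(1, (32, length // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, in_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class EnsembleClassifier(nn.Module):
    """Three autoencoders feed a shared softmax head (positive/negative/neutral)."""
    def __init__(self, n_members=3, latent_dim=32, n_classes=3):
        super().__init__()
        self.members = nn.ModuleList(
            [ConvAutoencoder(latent_dim=latent_dim) for _ in range(n_members)])
        self.head = nn.Linear(n_members * latent_dim, n_classes)

    def forward(self, x):
        latents, recons = zip(*(m(x) for m in self.members))
        logits = self.head(torch.cat(latents, dim=1))
        return logits, recons

model = EnsembleClassifier()
x = torch.randn(4, 8, 128)          # batch of 4 windows, 8 channels, 128 samples
logits, recons = model(x)
print(logits.shape)                 # -> torch.Size([4, 3])
```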
20 pages, 15343 KiB  
Article
Spontaneous Emergence of Agent Individuality Through Social Interactions in Large Language Model-Based Communities
by Ryosuke Takata, Atsushi Masumori and Takashi Ikegami
Entropy 2024, 26(12), 1092; https://doi.org/10.3390/e26121092 - 13 Dec 2024
Viewed by 296
Abstract
We study the emergence of agency from scratch by using Large Language Model (LLM)-based agents. In previous studies of LLM-based agents, each agent’s characteristics, including personality and memory, have traditionally been predefined. We focused on how individuality, such as behavior, personality, and memory, can be differentiated from an undifferentiated state. The present LLM agents engage in cooperative communication within a group simulation, exchanging context-based messages in natural language. By analyzing this multi-agent simulation, we report valuable new insights into how social norms, cooperation, and personality traits can emerge spontaneously. This paper demonstrates that autonomously interacting LLM-powered agents generate hallucinations and hashtags to sustain communication, which, in turn, increases the diversity of words within their interactions. Each agent’s emotions shift through communication, and as they form communities, the personalities of the agents emerge and evolve accordingly. This computational modeling approach and its findings will provide a new method for analyzing collective artificial intelligence. Full article
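The sketch below only illustrates the general shape of such a multi-agent loop: agents with an evolving memory exchange short natural-language messages, and what neighbours said last round becomes part of the next prompt. The generate() function is a hypothetical stand-in for an LLM call, and the prompt wording, neighbourhood sampling, and memory handling are assumptions for illustration, not the authors' implementation.

```python
# Sketch of an LLM-agent community loop; generate() is a hypothetical LLM call.
import random

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a local model or a hosted API)."""
    return f"(simulated reply to: {prompt[:40]}...)"

class Agent:
    def __init__(self, name):
        self.name = name
        self.memory = []           # running record of what the agent has said

    def step(self, heard_messages):
        prompt = (f"You are {self.name}. Your recent memory: {self.memory[-3:]}. "
                  f"You just heard: {heard_messages}. Reply with one short message.")
        message = generate(prompt)
        self.memory.append(message)
        return message

agents = [Agent(f"agent_{i}") for i in range(5)]
messages = {a.name: "hello" for a in agents}

for step in range(3):                          # a few simulation rounds
    new_messages = {}
    for agent in agents:
        neighbours = random.sample([a for a in agents if a is not agent], k=2)
        heard = [messages[n.name] for n in neighbours]
        new_messages[agent.name] = agent.step(heard)
    messages = new_messages
    print(f"round {step}:", messages)
```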
21 pages, 966 KiB  
Article
Pre-Separation Mother–Child Relationship and Adjustment Behaviors of Young Children Left Behind in Rural China: Pathways Through Distant Mothering and Current Mother–Child Relationship Quality
by Ruwen Liang and Karla Van Leeuwen
Behav. Sci. 2024, 14(12), 1193; https://doi.org/10.3390/bs14121193 - 13 Dec 2024
Viewed by 316
Abstract
In China, some rural parents do not live together with their children because they migrate to urban regions for work, and therefore they sometimes use a mobile phone in parenting their left-behind children (LBC), who are living with grandparents. This study used a serial mediation model to test the mediating roles of distant mothering and post-separation mother–child relationship quality in the link between recalled pre-separation mother–child relationship quality and social–emotional adjustment of 3-to-6-year-old LBC living in a rural context in China. Cross-sectional questionnaire data were collected from 185 triads, consisting of grandparents (rating child adjustment), migrant mothers (rating mother–child relationship qualities and distant mothering), and preschool teachers (rating child adjustment). The results showed that pre- and post-separation relationship qualities were positively related to each other and to positive distant mothering. There were no serial mediating effects, but a full individual mediating role of post-separation relationship quality and positive distant mothering was identified for the link between child prosocial behavior and externalizing problems, respectively. Despite the general decline in mother–child relationship quality after separation, mothers who perceived a higher quality of the pre-separation mother–child relationship showed a more cohesive relationship with their LBC, which might increase the prosocial behavior of the children. Additionally, a higher quality of the pre-separation relationship was associated with more distant mothering of positive characteristics, which went together with fewer children externalizing problems. These findings highlight the importance of a continuous high-quality mother–child bond and favorable maternal parenting practices in digital interactions for separated families. Full article
Figures:
Figure 1: Hypothetical serial mediation model regarding the associations between recalled pre-separation mother–child cohesion, positive distant mothering, current post-separation mother–child cohesion, and child adjustment.
Figure 2: Standardized coefficients for the structural model examining the mediation effect of distant positive mothering and post-separation mother–child cohesion on the associations between pre-separation mother–child cohesion and child outcomes.
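A bare-bones sketch of a (single-mediator) mediation analysis in the spirit of the model described in the abstract above, estimated with two OLS regressions in statsmodels on synthetic data. The variable names (pre_cohesion, distant_mothering, prosocial) and effect sizes are placeholders; the paper's serial, multi-informant SEM is considerably richer.

```python
# Sketch: product-of-coefficients mediation (X -> M -> Y) on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 185
pre_cohesion = rng.normal(size=n)                            # X: pre-separation cohesion
distant_mothering = 0.5 * pre_cohesion + rng.normal(size=n)  # M: positive distant mothering
prosocial = 0.4 * distant_mothering + 0.1 * pre_cohesion + rng.normal(size=n)  # Y

data = pd.DataFrame({"pre_cohesion": pre_cohesion,
                     "distant_mothering": distant_mothering,
                     "prosocial": prosocial})

# Path a: X -> M; paths b and c' (direct effect): M, X -> Y
model_a = smf.ols("distant_mothering ~ pre_cohesion", data).fit()
model_b = smf.ols("prosocial ~ distant_mothering + pre_cohesion", data).fit()

a = model_a.params["pre_cohesion"]
b = model_b.params["distant_mothering"]
direct = model_b.params["pre_cohesion"]
print(f"indirect effect (a*b) = {a * b:.3f}, direct effect = {direct:.3f}")
```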
10 pages, 569 KiB  
Article
Gender Differences in the Relation Between Suicidal Risk and Body Dissatisfaction Among Bariatric Surgery Patients: A Cross-Lagged Analysis
by Gil Goldzweig, Sigal Levy, Shay Ohayon, Sami Hamdan, Subhi Abu-Abeid and Shulamit Geller
Healthcare 2024, 12(24), 2524; https://doi.org/10.3390/healthcare12242524 - 13 Dec 2024
Viewed by 250
Abstract
Objectives: This study aimed to develop a gender-specific model to understand the causal relationship between body image dissatisfaction, emotional eating, and suicide risk among bariatric surgery patients. A secondary objective was to evaluate gender differences in the associations between these variables. It was hypothesized that, independent of objective weight loss, body dissatisfaction and emotional eating would lead to increased suicide risk. Methods: A total of 109 participants completed self-report measures of suicidal ideation, body image dissatisfaction, and emotional eating before and after bariatric surgery. Results: Cross-lagged analysis indicated that pre-surgery suicide ideation significantly predicts body dissatisfaction primarily among men, independent of the extent of weight loss. High levels of pre-surgery suicide risk correlated with post-surgery body image dissatisfaction in men. The autoregressive effect of suicide ideation was stronger than that of body dissatisfaction for both genders; however, the latter was stronger among women, indicating that past dissatisfaction levels significantly influenced future dissatisfaction. Conclusions: The complex interplay between gender, body dissatisfaction, emotional eating, and suicide risk warrants further research. Full article
Figures:
Figure 1: Coefficients and standard error values from cross-lagged, multi-group analysis. Note: *: p < 0.05; ***: p < 0.001.
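A minimal version of a two-wave cross-lagged analysis can be written as two regressions in which each Time-2 variable is predicted from both Time-1 variables, so that the cross-lagged paths are estimated while controlling for the autoregressive (stability) paths. The synthetic data and names below are illustrative; the paper's multi-group-by-gender model is estimated within an SEM framework.

```python
# Sketch: two-wave cross-lagged panel via paired regressions (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 109
ideation_t1 = rng.normal(size=n)                 # pre-surgery suicidal ideation
dissat_t1 = rng.normal(size=n)                   # pre-surgery body dissatisfaction
ideation_t2 = 0.6 * ideation_t1 + 0.1 * dissat_t1 + rng.normal(size=n)
dissat_t2 = 0.5 * dissat_t1 + 0.3 * ideation_t1 + rng.normal(size=n)

df = pd.DataFrame(dict(ideation_t1=ideation_t1, dissat_t1=dissat_t1,
                       ideation_t2=ideation_t2, dissat_t2=dissat_t2))

# Cross-lagged paths (T1 ideation -> T2 dissatisfaction, and vice versa),
# each controlling for the autoregressive path of the outcome.
m1 = smf.ols("dissat_t2 ~ dissat_t1 + ideation_t1", df).fit()
m2 = smf.ols("ideation_t2 ~ ideation_t1 + dissat_t1", df).fit()
print(m1.params, m2.params, sep="\n")
```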
12 pages, 584 KiB  
Article
Within- and Between-Person Correlates of Affect and Sleep Health Among Health Science Students
by Yueying Wang, Jiechao Yang, Jinjin Yuan, Bilgay Izci-Balserak, Yunping Mu, Pei Chen and Bingqian Zhu
Brain Sci. 2024, 14(12), 1250; https://doi.org/10.3390/brainsci14121250 - 13 Dec 2024
Viewed by 292
Abstract
Background/Objectives: To examine the relationships between state affect and sleep health at within- and between-person levels among health science students. Methods: A correlational design was used and 54 health science students were included. The participants completed baseline and 7-day ambulatory assessments in a free-living setting. Daily sleep and affect were measured using the Consensus Sleep Diary and Positive and Negative Affect Schedule. Mixed-effect models were used to examine the effects of affect on sleep health. Results: The participants were 19.8 (SD, 0.6) years and 92.6% were females. Approximately 40% had poor sleep quality. Controlling for the potential confounders (e.g., age, sex, and bedtime procrastination), higher within-person negative affect predicted shorter sleep duration, lower sleep efficiency, longer sleep onset latency, and less feeling rested. Higher between-person negative affect predicted shorter sleep duration. Higher within-person positive affect predicted longer sleep onset latency. Higher within- and between-person positive affect predicted more feeling rested. Conclusions: Negative affect was most consistently associated with sleep health at the individual level. Affect regulation should be considered when delivering personalized interventions targeting sleep health among health science students. Full article
(This article belongs to the Special Issue Relationships Between Disordered Sleep and Mental Health)
Figures:
Figure 1: The 7-day data collection protocol. Notes: NA, negative affect; PA, positive affect.
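The within- versus between-person decomposition described in this abstract is commonly implemented by person-mean centering the daily affect scores and entering both components in a mixed-effects model. Below is a minimal statsmodels sketch on synthetic diary data; the single outcome, variable names, and effect sizes are assumptions for illustration only.

```python
# Sketch: within-/between-person effects of negative affect on sleep duration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for subject in range(54):
    person_mean_na = rng.normal(2.0, 0.5)            # between-person level of negative affect
    for day in range(7):
        na = person_mean_na + rng.normal(0, 0.4)     # daily fluctuation
        sleep = 7.5 - 0.3 * (na - person_mean_na) - 0.2 * person_mean_na + rng.normal(0, 0.5)
        rows.append(dict(subject=subject, day=day, na=na, sleep=sleep))
df = pd.DataFrame(rows)

# Person-mean centering splits affect into within- and between-person components.
df["na_between"] = df.groupby("subject")["na"].transform("mean")
df["na_within"] = df["na"] - df["na_between"]

model = smf.mixedlm("sleep ~ na_within + na_between", df, groups=df["subject"]).fit()
print(model.summary())
```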
9 pages, 457 KiB  
Article
Psychological Distress and Social Adjustment of a Working Adult Population with Single-Sided Deafness
by Enrico Apa, Riccardo Nocini, Andrea Ciorba, Luca Sacchetto, Chiara Gherpelli, Daniele Monzani and Silvia Palma
Audiol. Res. 2024, 14(6), 1105-1113; https://doi.org/10.3390/audiolres14060091 (registering DOI) - 12 Dec 2024
Viewed by 179
Abstract
Background: Hearing loss is a highly prevalent condition in the world population that determines emotional, social, and economic costs. In recent years, it has been definitely recognized that the lack of physiological binaural hearing causes alterations in the localization of sounds and reduced speech recognition in noise and reverberation. This study aims to explore the psycho-social profile of adult workers affected by single-sided deafness (SSD), without other major medical conditions and otological symptoms, through comparison to subjects with normal hearing. Methods: This is a cross-sectional, case-control study. Subjects aged between 24 and 65 years, all currently employed and affected by SSD, were enrolled. They were administered both disease-specific and psychometric tests, such as the Hearing Handicap Inventory for Adults (HHIA), the Profile Questionnaire for Rating Communicative Performance, the Psychological General Well-Being Index (PGWBI), and the Social Functioning Questionnaire (SFQ). Results: A total of 149 subjects (mean age = 49.9; SD ± 8.5) were enrolled in the period 2021–2023; 68 were males (45.6%), and 81 were females (54.4%). The normal hearing group was composed of 95 subjects, and the SSD sample was composed of 54 subjects. The results of our study show that the levels of psychological well-being and social functioning in subjects with SSD are statistically worse than in the group of subjects with normal hearing in most subscales. Conclusions: This study definitely outlined evidence for a significantly worse psychological health status and a poorer social attitude of working adults affected by SSD with respect to their normal-hearing counterparts. Understanding the impact of SSD on patients’ work environment suggests a multidisciplinary approach to such patients in order to increase their quality of life through adequate counseling, acceptance, and role modeling. Full article
Figures:
Figure 1: 4fPTA in the better and worse ear in the two groups. Each box is included between the first and third quartile; the box's height is equivalent to the inter-quartile range (IQR) and contains 50% of the measurements. Since no values deviated from the box by more than 1.5 of IQR upwards or downwards, no potential outliers were observed. The independent t-test was used for statistical analysis. * p-value < 0.05, ** p-value < 0.005.
30 pages, 11752 KiB  
Article
Optimizing Outdoor Micro-Space Design for Prolonged Activity Duration: A Study Integrating Rough Set Theory and the PSO-SVR Algorithm
by Jingwen Tian, Zimo Chen, Lingling Yuan and Hongtao Zhou
Buildings 2024, 14(12), 3950; https://doi.org/10.3390/buildings14123950 - 12 Dec 2024
Viewed by 320
Abstract
This study proposes an optimization method based on Rough Set Theory (RST) and Particle Swarm Optimization–Support Vector Regression (PSO-SVR), aimed at enhancing the emotional dimension of outdoor micro-space (OMS) design, thereby improving users’ outdoor activity duration preferences and emotional experiences. OMS, as a key element in modern urban design, significantly enhances residents’ quality of life and promotes public health. Accurately understanding and predicting users’ emotional needs is the core challenge in optimizing OMS. In this study, the Kansei Engineering (KE) framework is applied, using fuzzy clustering to reduce the dimensionality of emotional descriptors, while RST is employed for attribute reduction to select five key design features that influence users’ emotions. Subsequently, the PSO-SVR model is applied to establish the nonlinear mapping relationship between these design features and users’ emotions, predicting the optimal configuration of OMS design. The results indicate that the optimized OMS design significantly enhances users’ intention to stay in the space, as reflected by higher ratings for emotional descriptors and increased preferences for longer outdoor activity duration, all exceeding the median score of the scale. Additionally, comparative analysis shows that the PSO-SVR model outperforms traditional methods (e.g., BPNN, RF, and SVR) in terms of accuracy and generalization for predictions. These findings demonstrate that the proposed method effectively improves the emotional performance of OMS design and offers a solid optimization framework along with practical guidance for future urban public space design. The innovative contribution of this study lies in the proposed data-driven optimization method that integrates machine learning and KE. This method not only offers a new theoretical perspective for OMS design but also establishes a scientific framework to accurately incorporate users’ emotional needs into the design process. The method contributes new knowledge to the field of urban design, promotes public health and well-being, and provides a solid foundation for future applications in different urban environments. Full article
(This article belongs to the Special Issue Art and Design for Healing and Wellness in the Built Environment)
Figures:
Figure 1: Fundamental concepts of RST.
Figure 2: Schematic diagram of SVR.
Figure 3: PSO-SVR flowchart.
Figure 4: The proposed research framework.
Figure 5: The 60 OMS samples on collection.
Figure 6: Morphological deconstruction of OMS.
Figure 7: The fitness curve of "sense of coziness".
Figure 8: The fitting diagram of "sense of coziness".
Figure 9: The prediction error on the test set.
Figure 10: The fitting diagram of "sense of dynamism", "sense of covertness", and "sense of order".
Figure 11: The parameter results of the emotional descriptors.
Figure 12: Design concept modeling of OMS.
Figure 13: Comparison of scatter plot; each row represents the performance of four models on the same dataset.
Figure 14: Evaluation of the design scheme.
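The core of a PSO-SVR scheme is a particle swarm searching over SVR hyperparameters with cross-validated error as the fitness function. The compact sketch below tunes C and gamma on synthetic regression data; it only illustrates that idea and leaves out the Kansei Engineering descriptors and rough-set attribute reduction used in the paper.

```python
# Sketch: particle swarm optimization of SVR hyperparameters (C, gamma).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=60, n_features=5, noise=0.3, random_state=0)

def fitness(params):
    """Negative cross-validated MSE for a given (log10 C, log10 gamma) pair."""
    C, gamma = 10 ** params[0], 10 ** params[1]
    svr = SVR(C=C, gamma=gamma)
    return cross_val_score(svr, X, y, cv=5, scoring="neg_mean_squared_error").mean()

rng = np.random.default_rng(0)
n_particles, n_iter, dim = 12, 20, 2
pos = rng.uniform(-2, 2, size=(n_particles, dim))   # search log10(C), log10(gamma) in [-2, 2]
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best C = {10 ** gbest[0]:.3f}, best gamma = {10 ** gbest[1]:.4f}")
```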
19 pages, 761 KiB  
Article
Understanding Perceptions of Hepatitis C and Its Management Among People with Experience of Incarceration in Quebec, Canada: A Qualitative Study Guided by the Common Sense Self-Regulation Model
by Andrea Mambro, Sameh Mortazhejri, David Ortiz-Paredes, Andrea Patey, Guillaume Fontaine, Camille Dussault, Joseph Cox, Jeremy M. Grimshaw, Justin Presseau and Nadine Kronfli
Viruses 2024, 16(12), 1910; https://doi.org/10.3390/v16121910 - 12 Dec 2024
Viewed by 453
Abstract
Hepatitis C virus (HCV) disproportionately affects certain sub-populations, including people with experience of incarceration (PWEI). Little is known about how perceptions of HCV and treatment have changed despite simplifications in testing and treatment in carceral settings. Nineteen semi-structured interviews were conducted with people living with or having a history of HCV infection released from Quebec provincial prison. Interviews were guided by the Common Sense Self-Regulation Model (CS-SRM) and aimed to explore cognitive and emotional representations of HCV and coping strategies. Among the 19 participants, seven (37%) were diagnosed with HCV in prison and 14 (74%) had previously received HCV treatment. Participants’ HCV illness perceptions were influenced by fear (of HCV transmission, death, and the well-being of family) and stigma (related to HCV, injection drug use, and incarceration). While some sought education and social and professional support, others self-isolated or engaged in high-risk behaviors to cope. Despite advances in HCV treatment, PWEI continue to experience various forms of stigma and fear surrounding their HCV diagnosis, resulting in delayed HCV care. These findings provide insights into how prison-based healthcare providers can better utilize HCV illness perceptions to evaluate willingness to engage in HCV care among PWEI. Full article
(This article belongs to the Special Issue Hepatitis C Virus Infection among People Who Inject Drugs)
Figures:
Figure 1: The Common Sense Self-Regulation Model (CS-SRM), adapted from Hagger and Orbell (2022).
21 pages, 4242 KiB  
Article
A Learning Emotion Recognition Model Based on Feature Fusion of Photoplethysmography and Video Signal
by Xiaoliang Zhu, Zili He, Chuanyong Wang, Zhicheng Dai and Liang Zhao
Appl. Sci. 2024, 14(24), 11594; https://doi.org/10.3390/app142411594 - 12 Dec 2024
Viewed by 281
Abstract
The ability to recognize learning emotions facilitates the timely detection of students’ difficulties during the learning process, supports teachers in modifying instructional strategies, and allows for personalized student assistance. The detection of learning emotions through the capture of convenient, non-intrusive signals such as photoplethysmography (PPG) and video offers good practicality; however, it presents new challenges. Firstly, PPG-based emotion recognition is susceptible to external factors like movement and lighting conditions, leading to signal quality degradation and recognition accuracy issues. Secondly, video-based emotion recognition algorithms may witness a reduction in accuracy within spontaneous scenes due to variations, occlusions, and uneven lighting conditions, etc. Therefore, on the one hand, it is necessary to improve the performance of the two recognition methods mentioned above; on the other hand, using the complementary advantages of the two methods through multimodal fusion needs to be considered. To address these concerns, our work mainly includes the following: (i) the development of a temporal convolutional network model incorporating channel attention to overcome PPG-based emotion recognition challenges; (ii) the introduction of a network model that integrates multi-scale spatiotemporal features to address the challenges of emotion recognition in spontaneous environmental videos; (iii) an exploration of a dual-mode fusion approach, along with an improvement of the model-level fusion scheme within a parallel connection attention aggregation network. Experimental comparisons demonstrate the efficacy of the proposed methods, particularly the bimodal fusion, which substantially enhances the accuracy of learning emotion recognition, reaching 95.75%. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Figures:
Figure 1: PPG-based emotion recognition processing framework.
Figure 2: Network model architecture diagram of proposed PPG-based emotion recognition. Blue circles represent input information; white circles mean middle layers; yellow circles describe output information.
Figure 3: Proposed PPG signal preprocessing flowchart.
Figure 4: Confusion matrix diagram of Random Forest emotion classification.
Figure 5: PPG frequency analysis diagram of short-time Fourier transform time. Color indicates the amplitude of the corresponding frequency. Brighter colors represent larger values.
Figure 6: CWT time–frequency analysis diagram. Color indicates the amplitude of the corresponding frequency. Brighter colors represent larger values.
Figure 7: Confusion matrix and ROC curve of temporal convolutional network model based on channel attention. (a) Results for the scale dimension attention compression module following the TCN; (b) results for the scale dimension attention compression module preceding the TCN.
Figure 8: Framework of proposed multimodal fusion method.
Figure 9: The architecture of the cascade attention-based facial expression recognition network model. We note that X represents the input to the network; Conv-1, Pool-1, Conv-2, Conv-3, Conv-4, and Conv-5 are the inner layers of the ResNeXt network; Pyramid denotes the pyramid feature extractor; Fpa(x) denotes the output of the pyramid feature extractor; Fconv-4(x^T) and Fconv-5(x^T) represent the output characteristics of Conv-4 and Conv-5 of the ResNeXt network, respectively; Fpc(x^T) denotes the input characteristics of the cascaded attention module; and ⨁ denotes the superimposed fusion operation of the features. The face image in this figure is from the CK+ database "S113".
Figure 10: Multi-scale attention module diagram.
Figure 11: PCAN architecture diagram. The entry arrows represent the input sources involved in the corresponding operation, and the output arrows represent the output results of the operation.
Figure 12: Comparison of confusion matrix.
Figure 13: ROC curve of PCAN (Add).
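One common reading of "temporal convolutional network with channel attention" is a stack of dilated causal 1-D convolutions whose channel outputs are reweighted by a squeeze-and-excitation-style block. The PyTorch sketch below shows that combination for a single-channel PPG input; all layer sizes and the three-class head are arbitrary illustrative choices, not the architecture reported in the paper.

```python
# Sketch: dilated temporal convolution blocks with squeeze-and-excitation channel attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        weights = self.fc(x.mean(dim=-1))       # squeeze over time, excite per channel
        return x * weights.unsqueeze(-1)

class TCNBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        pad = (3 - 1) * dilation                # causal padding for kernel_size=3
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, dilation=dilation, padding=pad)
        self.attn = ChannelAttention(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.conv(x)[..., :x.shape[-1]]   # trim extra right padding (keep it causal)
        return self.act(self.attn(out))

class PPGEmotionNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.blocks = nn.Sequential(
            TCNBlock(1, 16, dilation=1),
            TCNBlock(16, 32, dilation=2),
            TCNBlock(32, 32, dilation=4),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, 1, time)
        features = self.blocks(x).mean(dim=-1)  # global average pooling over time
        return self.head(features)

model = PPGEmotionNet()
print(model(torch.randn(2, 1, 256)).shape)      # -> torch.Size([2, 3])
```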
17 pages, 328 KiB  
Article
Predictive Markers of Post-Stroke Cognitive Recovery and Depression in Ischemic Stroke Patients: A 6-Month Longitudinal Study
by Anna Tsiakiri, Spyridon Plakias, Pinelopi Vlotinou, Aikaterini Terzoudi, Aspasia Serdari, Dimitrios Tsiptsios, Georgia Karakitsiou, Evlampia Psatha, Sofia Kitmeridou, Efstratios Karavasilis, Nikolaos Aggelousis, Konstantinos Vadikolias and Foteini Christidi
Eur. J. Investig. Health Psychol. Educ. 2024, 14(12), 3056-3072; https://doi.org/10.3390/ejihpe14120200 - 11 Dec 2024
Viewed by 429
Abstract
The growing number of stroke survivors face physical, cognitive, and psychosocial impairments, making stroke a significant contributor to global disability. Various factors have been identified as key predictors of post-stroke outcomes. The aim of this study was to develop a standardized predictive model that integrates various demographic and clinical factors to better predict post-stroke cognitive recovery and depression in patients with ischemic stroke (IS). We included IS patients during both the acute phase and six months post-stroke and considered neuropsychological measures (screening scales, individual tests, functional cognitive scales), stroke severity and laterality, as well as functional disability measures. The study identified several key predictors of post-stroke cognitive recovery and depression in IS patients. Higher education and younger age were associated with better cognitive recovery. Lower stroke severity, indicated by lower National Institutes of Health Stroke Scale (NIHSS) scores, also contributed to better cognitive outcomes. Patients with lower modified Rankin Scale (mRS) scores showed improved performance on cognitive tests and lower post-stroke depression scores. The study concluded that age, education, stroke severity and functional status are the most critical predictors of cognitive recovery and post-stroke emotional status in IS patients. Tailoring rehabilitation strategies based on these predictive markers can significantly improve patient outcomes. Full article
13 pages, 616 KiB  
Article
Perceptions and Experiences of Primary Care Providers on Their Role in Tobacco Treatment Delivery Based on Their Smoking Status: A Qualitative Study
by Stavros Stafylidis, Sophia Papadakis, Dimitris Papamichail, Christos Lionis and Emmanouil Smyrnakis
Healthcare 2024, 12(24), 2500; https://doi.org/10.3390/healthcare12242500 - 11 Dec 2024
Viewed by 286
Abstract
Introduction: Despite the well-documented benefits of smoking cessation interventions, the implementation and success of these programs in primary care settings often encounter significant barriers. A primary care provider’s personal smoking status has been identified as a potential barrier to tobacco treatment delivery. The aim of this qualitative study is to explore the experiences and perspectives of primary care providers regarding their role in delivering smoking cessation interventions to patients based on their personal smoking status. Specifically, the study seeks to examine providers’ thoughts, emotions, and behaviors concerning their own smoking behavior, to understand their attitudes and actions when supporting patients who smoke, and to explore their perspectives on the effectiveness of training programs designed to promote tobacco treatment. Materials and Methods: Semi-structured interviews were conducted with 22 primary care providers from six public primary care units in the Central Macedonia Region, Greece. Thematic analysis was used to analyze the data. Results: Healthcare providers who are current smokers may face unique challenges in effectively counseling patients on smoking cessation. By contrast, non-smoking and especially formerly smoking healthcare providers were noted to exhibit greater confidence and efficacy in delivering cessation support, often serving as role models for patients aiming to quit smoking. Participating in structured cessation training programs often led healthcare professionals to reflect on and reevaluate their own smoking behaviors. Conclusions: The personal smoking status of primary care providers impacts the delivery of tobacco treatment, affecting their credibility and effectiveness in providing cessation support. Educational programs positively impact attitudes and behaviors, underscoring their importance in improving both PCPs’ professional effectiveness and personal health outcomes. These findings suggest that addressing PCPs’ smoking habits and enhancing training opportunities are critical for optimizing smoking cessation services. Full article
Figures:
Figure 1: Thematic map of PCPs’ attitudes on personal smoking, their role in smoking cessation and their views on smoking cessation educational programs.
14 pages, 769 KiB  
Article
Speech Emotion Recognition Using Multi-Scale Global–Local Representation Learning with Feature Pyramid Network
by Yuhua Wang, Jianxing Huang, Zhengdao Zhao, Haiyan Lan and Xinjia Zhang
Appl. Sci. 2024, 14(24), 11494; https://doi.org/10.3390/app142411494 - 10 Dec 2024
Viewed by 349
Abstract
Speech emotion recognition (SER) is important in facilitating natural human–computer interactions. In speech sequence modeling, a vital challenge is to learn context-aware sentence expression and temporal dynamics of paralinguistic features to achieve unambiguous emotional semantic understanding. In previous studies, the SER method based on the single-scale cascade feature extraction module could not effectively preserve the temporal structure of speech signals in the deep layer, downgrading the sequence modeling performance. To address these challenges, this paper proposes a novel multi-scale feature pyramid network. The enhanced multi-scale convolutional neural networks (MSCNNs) significantly improve the ability to extract multi-granular emotional features. Experimental results on the IEMOCAP corpus demonstrate the effectiveness of the proposed approach, achieving a weighted accuracy (WA) of 71.79% and an unweighted accuracy (UA) of 73.39%. Furthermore, on the RAVDESS dataset, the model achieves an unweighted accuracy (UA) of 86.5%. These results validate the system’s performance and highlight its competitive advantage. Full article
Figures:
Figure 1: Functional diagram of SER system.
Figure 2: The overview of proposed multi-scale feature pyramid network.
Figure 3: Bottom-up pathway, where kw denotes different kernel widths, and CSA denotes convolutional self-attention.
Figure 4: Backward fusion structure, where φ represents the attention score calculation function as shown in Equation (1), and F_i denotes the feature of the i-th layer.
Figure 5: Convolutional self-attention (CSA) framework. (a) vanilla CSA; (b) improved CSA.
Figure 6: The number of audio samples corresponding to each emotional label in IEMOCAP.
Figure 7: The number of audio samples corresponding to each emotional label in RAVDESS.
Figure 8: The t-SNE visualization of the proposed framework. (a) MSFPN; (b) DRN.
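The multi-scale component of such a network typically means parallel convolution branches with different kernel widths over the time axis of an acoustic feature map, concatenated before further processing. The PyTorch sketch below shows that building block; the feature dimension, kernel widths, and four-class head are assumptions for illustration, and the pyramid and backward-fusion structure of the paper is not reproduced.

```python
# Sketch: parallel multi-scale 1-D convolutions over frame-level acoustic features.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel Conv1d branches with different kernel widths, concatenated on channels."""
    def __init__(self, in_ch, branch_ch, kernel_widths=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, branch_ch, kernel_size=kw, padding=kw // 2)
            for kw in kernel_widths
        ])
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, features, frames)
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))

class SpeechEmotionNet(nn.Module):
    def __init__(self, n_features=40, n_classes=4):
        super().__init__()
        self.multiscale = MultiScaleBlock(n_features, branch_ch=32)   # 3 * 32 = 96 channels out
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(96, n_classes)

    def forward(self, x):                        # x: (batch, 40 MFCC-like features, frames)
        h = self.multiscale(x)
        return self.head(self.pool(h).squeeze(-1))

model = SpeechEmotionNet()
print(model(torch.randn(2, 40, 300)).shape)      # -> torch.Size([2, 4])
```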
19 pages, 674 KiB  
Article
Incremental Validity of ADHD Dimensions in the Predictions of Emotional Symptoms, Conduct Problems, and Peer Problems in Adolescents Based on Parent, Teacher, and Self-Ratings
by Rapson Gomez and Taylor Brown
Pediatr. Rep. 2024, 16(4), 1115-1133; https://doi.org/10.3390/pediatric16040095 - 10 Dec 2024
Viewed by 397
Abstract
Background: The present study investigated the incremental validity of the ADHD dimensions of inattention (IA), hyperactivity (HY), and impulsivity (IM) in the predictions of emotional symptoms (ESs), conduct problems (CPs), and peer problems (PPs) in adolescents based on parent, teacher, and self-ratings. Method: A total of 214 ratings were collected from adolescents, their parents, and teachers in Australia. A structural equation modeling approach was employed to evaluate incremental validity. Results: The findings revealed that, controlling for gender, IM contributed moderate, low, and low levels of variance in predicting ESs based on parent, teacher, and self-ratings, respectively. Additionally, IM contributed moderate, substantial, and moderate levels of variance to CP predictions based on parent, teacher, and self-ratings, respectively. Furthermore, after controlling for gender, IM, and HY, parent-rated IA contributed a low level of variance to the prediction of ESs, while teacher- and self-rated IA did not contribute significantly to the prediction of ESs, CPs, or PPs. Conclusions: The findings underscore the differential predictive validity of ADHD dimensions across informants and outcomes, highlighting impulsivity’s stronger association with conduct problems and emotional symptoms. These results have theoretical and practical implications for understanding ADHD-related risks in adolescence and tailoring interventions accordingly. Full article
(This article belongs to the Special Issue Mental Health and Psychiatric Disorders of Children and Adolescents)
Figures:
Figure 1: Structural model diagram showing the incremental validity for the prediction of emotional symptoms, conduct problems, and peer problems by (in sequence) gender and ADHD factors of impulsivity, hyperactivity, and inattention. Note: This illustration involves one observed covariate (gender), three latent predictors (IA, HY, and IM), and three latent outcomes (ESs, CPs, and PPs). ES = Emotional Symptom; CP = Conduct Problem; PP = Peer Problem; IA = ADHD inattention symptom group; HY = ADHD hyperactivity symptom group; IM = ADHD impulsivity symptom group; s1 to s18 are the ADHD symptoms in the order presented in DSM-5-TR.
Figure 2: Structural model diagram for the three-factor ADHD model. Note: s1 to s18 are the ADHD symptoms in the order presented in DSM-5-TR; IA = ADHD inattention factor; HY = ADHD hyperactivity factor; IM = ADHD impulsivity factor.
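Outside an SEM framework, incremental validity of this kind is often illustrated as a hierarchical regression: the gain in explained variance (delta R^2) is examined when a predictor is added after the control variables. The sketch below mirrors that logic on synthetic data with statsmodels; it is not the latent-variable model used in the paper, and all coefficients are invented.

```python
# Sketch: incremental validity as change in R^2 when adding inattention (IA)
# after gender, impulsivity (IM), and hyperactivity (HY). Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 214
df = pd.DataFrame({
    "gender": rng.integers(0, 2, size=n),
    "IM": rng.normal(size=n),
    "HY": rng.normal(size=n),
    "IA": rng.normal(size=n),
})
df["conduct_problems"] = 0.5 * df["IM"] + 0.2 * df["HY"] + 0.1 * df["IA"] + rng.normal(size=n)

base = smf.ols("conduct_problems ~ gender + IM + HY", df).fit()
full = smf.ols("conduct_problems ~ gender + IM + HY + IA", df).fit()
print(f"R^2 without IA = {base.rsquared:.3f}, with IA = {full.rsquared:.3f}, "
      f"delta R^2 = {full.rsquared - base.rsquared:.3f}")
```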
23 pages, 12344 KiB  
Article
MuIm: Analyzing Music–Image Correlations from an Artistic Perspective
by Ubaid Ullah and Hyun-Chul Choi
Appl. Sci. 2024, 14(23), 11470; https://doi.org/10.3390/app142311470 - 9 Dec 2024
Viewed by 593
Abstract
Cross-modality understanding is essential for AI to tackle complex tasks that require both deterministic and generative capabilities, such as correlating music and visual art. The existing state-of-the-art methods of audio-visual correlation often rely on single-dimension information, focusing either on semantic or emotional attributes, thus failing to capture the full depth of these inherently complex modalities. Addressing this limitation, we introduce a novel approach that perceives music–image correlation as multilayered rather than as a direct one-to-one correspondence. To this end, we present a pioneering dataset with two segments: an artistic segment that pairs music with art based on both emotional and semantic attributes, and a realistic segment that links music with images through affective–semantic layers. In modeling emotional layers for the artistic segment, we found traditional 2D affective models inadequate, prompting us to propose a more interpretable hybrid-emotional rating system that serves both experts and non-experts. For the realistic segment, we utilize a web-based dataset with tags, dividing tag information into semantic and affective components to ensure a balanced and nuanced representation of music–image correlation. We conducted an in-depth statistical analysis and user study to evaluate our dataset’s effectiveness and applicability for AI-driven understanding. This work provides a foundation for advanced explorations into the complex relationships between auditory and visual art modalities, advancing the development of more sophisticated cross-modal AI systems. Full article
Figures:
Figure 1: Visual representation of multilayered information structures in music and visual data, demonstrating the potential for understanding complex correlations between these two modalities.
Figure 2: (a) Illustrative example of semantically similar but contradicting emotional media, where visual data and music data are represented. (b) Illustrative comparison of 2D (VA) vs. 28-category dimensional emotional representation model.
Figure 3: (a) Illustration of expert-guided art image collection interface and (b) images of 28-category emotion labeling in art using a 9-point Likert scale.
Figure 4: Pipeline for web-based music–image tag processing with affective–semantic tags.
Figure 5: Illustrative figure of the dual-dimension music–image pairing strategy, approximating strong and mixed correlations across semantic and emotional dimensions. Subplots show individual projections on simplified semantic and emotional planes.
Figure 6: Detailed summary of the adapted pipeline for pairing image-music data using dual-dimension information.
Figure 7: Average emotional representation for (a) image and (b) music data in the artistic part.
Figure 8: Cross music–image correlation coefficient to show the emotional similarity between the two modalities.
Figure 9: Self-emotion correlation for the (a) music and (b) image modalities to show the significance of emotional response in the datasets.
Figure 10: Semantic information for the music and image artistic data.
Figure 11: Pairing result of the two modalities for 2D information.
Figure 12: Emotion distribution of (a) image and (b) music modalities from the web-based realistic data.
Figure 13: Word clouds for the semantic words in (a) image and (b) music data.
Figure 14: t-SNE plot for 10% random sample paired dataset for the web-based realistic data.
Figure 15: User-study overview showcasing participants’ distribution based on gender, followed by their age group and the corresponding number of participants.
Figure 16: Comparative analysis of participants’ knowledge in music and image domains.
Figure 17: Trends in semantic analysis responses. The combined average semantic rating histogram consolidates the results from the other three histograms.
Figure 18: Trends in affective analysis responses. The average emotional rating histogram consolidates results by merging web-data variations.