Search Results (1,825)

Search Parameters:
Keywords = behavior recognition

20 pages, 5548 KiB  
Article
Spatial Sense of Safety for Seniors in Living Streets Based on Street View Image Data
by Xuyang Sun, Xinlei Nie, Lu Wang, Zichun Huang and Ruiming Tian
Buildings 2024, 14(12), 3973; https://doi.org/10.3390/buildings14123973 - 14 Dec 2024
Viewed by 273
Abstract
As the global population ages, the friendliness of urban spaces towards seniors becomes increasingly crucial. This research primarily investigates the environmental factors that influence the safety perception of elderly people in living street spaces. Taking Dingzigu Street in Tianjin, China, as an example, by employing deep learning fully convolutional network (FCN-8s) technology and the semantic segmentation method based on computer vision, the objective measurement data of street environmental elements are acquired. Meanwhile, the subjective safety perception evaluation data of elderly people are obtained through SD semantic analysis combined with the Likert scale. Utilizing Pearson correlation analysis and multiple linear regression analysis, the study comprehensively examines the impact of the physical environment characteristics of living street spaces on the spatial safety perception of seniors. The results indicate that, among the objective environmental indicators, ① the street greening rate is positively correlated with the spatial sense of security of seniors; ② there is a negative correlation between sky openness and interface enclosure; and ③ the overall safety perception of seniors regarding street space is significantly influenced by the spatial sense of security, the sense of security during walking behavior, and the security perception in visual recognition. This research not only uncovers the impact mechanism of the street environment on the safety perception of seniors, but also offers valuable references for the age-friendly design of urban spaces.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
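The statistical pipeline this abstract describes (Pearson correlation followed by multiple linear regression) is straightforward to sketch. The snippet below is a minimal illustration, not the authors' code; the indicator names and toy data are invented stand-ins for the segmentation-derived street metrics and Likert-scale safety scores.

```python
# Minimal sketch: Pearson correlation + multiple linear regression linking
# street-view environment indicators to perceived safety. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_streets = 50
# Hypothetical per-street indicators (pixel fractions from segmentation).
greening_rate = rng.uniform(0.05, 0.45, n_streets)
sky_openness = rng.uniform(0.10, 0.50, n_streets)
enclosure = rng.uniform(0.20, 0.70, n_streets)
# Hypothetical mean safety score per street (Likert-style, roughly 1-5).
safety = 2 + 4 * greening_rate - 1.5 * sky_openness + rng.normal(0, 0.2, n_streets)

# Pearson correlation of each indicator with perceived safety.
for name, x in [("greening", greening_rate), ("sky", sky_openness), ("enclosure", enclosure)]:
    r, p = pearsonr(x, safety)
    print(f"{name}: r={r:.2f}, p={p:.3f}")

# Multiple linear regression on all indicators jointly.
X = np.column_stack([greening_rate, sky_openness, enclosure])
model = LinearRegression().fit(X, safety)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```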
Figure 1: Current situation of living street space (author's own photographs).
Figure 2: Research scope (author's own representation). (a) Research scope: proportion of elderly population (aged 60 and above); (b) research scope and road network situation in surrounding areas.
Figure 3: Technical road map of street view image semantic segmentation (author's own representation).
Figure 4: Schematic diagram of measurable indicators for street space environment (author's own representation).
Figure 5: Semantic segmentation of street view images and spatial feature data collection process (author's own representation).
Figure 6: Seniors' perception of street space safety based on Maslow's hierarchy of needs theory (top image from Abraham Maslow's 1943 work "A Theory of Human Motivation" [30]; bottom picture created by the author).
Figure 7: Score curve graph of street space safety perception evaluation based on the SD semantic analysis method (author's own representation).
39 pages, 925 KiB  
Review
Machine Learning Techniques for Sensor-Based Human Activity Recognition with Data Heterogeneity—A Review
by Xiaozhou Ye, Kouichi Sakurai, Nirmal-Kumar C. Nair and Kevin I-Kai Wang
Sensors 2024, 24(24), 7975; https://doi.org/10.3390/s24247975 - 13 Dec 2024
Viewed by 245
Abstract
Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous computing, analyzing behaviors through multi-dimensional observations. Despite research progress, HAR confronts challenges, particularly in data distribution assumptions. Most studies assume uniform data distributions across datasets, contrasting with the varied nature of practical sensor data in human activities. Addressing data heterogeneity issues can improve performance, reduce computational costs, and aid in developing personalized, adaptive models with fewer annotated data. This review investigates how machine learning addresses data heterogeneity in HAR by categorizing data heterogeneity types, applying corresponding suitable machine learning methods, summarizing available datasets, and discussing future challenges.
(This article belongs to the Special Issue Non-Intrusive Sensors for Human Activity Detection and Recognition)
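Most pipelines the review covers share a common preprocessing step: slicing continuous multi-channel sensor streams into fixed-length, overlapping windows before feature extraction or model training. A minimal sketch, with invented shapes and window parameters:

```python
# Minimal sketch of sliding-window segmentation for sensor-based HAR.
import numpy as np

def sliding_windows(signal: np.ndarray, win: int, step: int) -> np.ndarray:
    """signal: (n_samples, n_channels) -> (n_windows, win, n_channels)."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# e.g. 10 s of 3-axis accelerometer data at 50 Hz, 2 s windows, 50% overlap
acc = np.random.randn(500, 3)
windows = sliding_windows(acc, win=100, step=50)
print(windows.shape)  # (9, 100, 3)
```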
20 pages, 631 KiB  
Article
Analyzing Crowd Behavior in Highly Dense Crowd Videos Using 3D ConvNet and Multi-SVM
by Mahmoud Elmezain, Ahmed S. Maklad, Majed Alwateer, Mohammed Farsi and Hani M. Ibrahim
Electronics 2024, 13(24), 4925; https://doi.org/10.3390/electronics13244925 - 13 Dec 2024
Viewed by 431
Abstract
Crowd behavior presents significant challenges due to intricate interactions. This research proposes an approach that combines the power of 3D Convolutional Neural Networks (ConvNet) and Multi-Support Vector Machines (Multi-SVM) to study and analyze crowd behavior in highly dense crowd videos. The proposed approach effectively utilizes the temporal information captured by the 3D ConvNet, which accounts for the spatiotemporal characteristics of crowd movement. By incorporating the third dimension as a temporal stack of images forming a clip, the network can learn and comprehend the dynamics and patterns of crowd behavior over time. In addition, the learned features from the 3D ConvNet are classified and interpreted using Multi-SVM, enabling a comprehensive analysis of crowd behavior. This methodology facilitates the identification and categorization of various crowd dynamics, including merging, diverging, and dense flows. To evaluate the effectiveness of the approach, experiments are conducted on the Crowd-11 dataset, which comprises over 6000 video sequences with an average length of 100 frames per sequence. The dataset defines a total of 11 crowd motion patterns. The experimental results demonstrate promising recognition rates and achieve an accuracy of 89.8%. These findings provide valuable insights into the complex dynamics of crowd behavior, with potential applications in crowd management.
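The two-stage design described above (3D convolutional features over clips of stacked frames, classified by a multi-class SVM) can be sketched as follows. This is a minimal stand-in assuming PyTorch and scikit-learn: the tiny two-layer backbone replaces the paper's eight-convolution-layer network, and the 3×16×112×112 clip shape follows the figure captions.

```python
# Minimal sketch: 3D conv backbone -> pooled clip features -> multi-class SVM.
import torch
import torch.nn as nn
from sklearn.svm import SVC

backbone = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d((1, 2, 2)),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),  # -> (N, 32, 1, 1, 1)
)

clips = torch.randn(20, 3, 16, 112, 112)        # 20 toy clips of 16 frames
labels = torch.randint(0, 11, (20,)).numpy()    # 11 crowd motion classes
with torch.no_grad():
    feats = backbone(clips).flatten(1).numpy()  # (20, 32) clip descriptors

# One-vs-one multi-class SVM on the learned clip features.
svm = SVC(kernel="rbf", decision_function_shape="ovo").fit(feats, labels)
print(svm.predict(feats[:5]))
```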
Figure 1: Crowd behavior investigation and analysis: roadmap for an intelligent approach.
Figure 2: The structure of a single tensor: 3 color channels multiplied by 16 frames, further multiplied by 112 pixels in height and 112 pixels in width.
Figure 3: The 3D ConvNet architecture consists of eight convolution layers, five max-pooling layers, and two fully connected layers, followed by a softmax output layer.
Figure 4: Multiclass Support Vector Machines. (a) One-versus-one. (b) One-versus-all.
Figure 5: Two distinct classifications of crowds: dynamic crowds (top) and crowds with no perceivable streams (bottom).
Figure 6: The performance of networks with varying kernel temporal depths.
Figure 7: Confusion matrix for the Crowd-11 dataset using 3D ConvNets and multi-SVM with respect to depth-3.
24 pages, 9314 KiB  
Article
Small Target Ewe Behavior Recognition Based on ELFN-YOLO
by Jianglin Wu, Shufeng Li, Baoqin Wen, Jing Nie, Na Liu, Honglei Cen, Jingbin Li and Shuangyin Liu
Agriculture 2024, 14(12), 2272; https://doi.org/10.3390/agriculture14122272 - 11 Dec 2024
Viewed by 344
Abstract
In response to the poor performance of long-distance small target recognition tasks and real-time intelligent monitoring, this paper proposes a deep learning-based recognition method aimed at improving the ability to recognize and monitor various behaviors of captive ewes. Additionally, we have developed a system platform based on ELFN-YOLO to monitor the behaviors of ewes. ELFN-YOLO enhances the overall performance of the model by combining ELFN with the attention mechanism CBAM. ELFN strengthens multiple layers with fewer parameters, while the attention mechanism further emphasizes the channel information interaction based on ELFN. It also improves the ability of ELFN to extract spatial information in small target occlusion scenarios, leading to better recognition results. The proposed ELFN-YOLO achieved an accuracy of 92.5%, an F1 score of 92.5%, and a mAP@0.5 of 94.7% on the ewe behavior dataset built in commercial farms, which outperformed YOLOv7-Tiny by 1.5%, 0.8%, and 0.7% in terms of accuracy, F1 score, and mAP@0.5, respectively. It also outperformed other baseline models such as Faster R-CNN, YOLOv4-Tiny, and YOLOv5s. The obtained results indicate that the proposed approach outperforms existing methods in scenarios involving multi-scale detection of small objects. The proposed method is of significant importance for strengthening animal welfare and ewe management, and it provides valuable data support for subsequent tracking algorithms to monitor the activity status of ewes.
(This article belongs to the Section Digital Agriculture)
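The CBAM attention block the abstract pairs with ELFN is a published, well-documented module: channel attention followed by spatial attention. Below is a minimal PyTorch sketch of that general design, not the authors' exact implementation; the reduction ratio and spatial kernel size are conventional defaults.

```python
# Minimal sketch of a CBAM block (channel attention, then spatial attention).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: conv over stacked channel-wise avg and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True))
                           + self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

feat = torch.randn(1, 64, 40, 40)   # a toy feature map
print(CBAM(64)(feat).shape)         # torch.Size([1, 64, 40, 40])
```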
Figure 1: Ewe's four typical types of behavior. (a) Standing, (b) lying, (c) drinking, (d) eating.
Figure 2: Localized structural plan of the ewe activity area. (A) Camera at a 30-degree angle to the horizontal plane. (B) Camera at a 60-degree angle. (C) Camera at a 45-degree angle.
Figure 3: Data processing: (a) original image, (b) rotated by 30 degrees, (c) rotated by 60 degrees, (d) adaptive thresholding.
Figure 4: The overall structure of ELAN and ELFN. Adding the red arrow corresponds to Group 1 in Section 3.2; removing the red arrow corresponds to Group 2 in Section 3.2.
Figure 5: Overall CBAM structure.
Figure 6: Architecture diagram of the ELFN-YOLO network model.
Figure 7: Schematic of the overall process for ewe behavior detection. The serial numbers represent the data processing steps in this study.
Figure 8: Confusion matrix of ewe behavior characteristics based on random forest.
Figure 9: Comparison of ELFN performance at different locations.
Figure 10: (a) Confusion matrix based on YOLOv7-Tiny; (b) comparison of different behavior recognition effects between ELFN-YOLO V1 and ELAN.
Figure 11: (a) Heat map based on YOLOv7-Tiny. (b) Heat map based on ELFN-YOLO V1. The darker the color, the higher the attention of ELFN-YOLO V1 to the area; the red dashed box is darker than in the original image (a), indicating that ELFN-YOLO V1 has better recognition ability for small targets at long distances.
Figure 12: Structural diagram and parameter distribution diagram.
Figure 13: Heat maps of different attention mechanisms. (a) ELFN-YOLO. (b) ELFN-YOLO with CBAM. (c) ELFN-YOLO with CA. (d) ELFN-YOLO with ECANet.
Figure 14: The impact of different modules on the model's PR curve.
Figure 15: Training results of ELFN-YOLO and YOLOv7-Tiny with the IoU threshold set to 0.5.
Figure 16: (a) YOLOv7-Tiny test set PR curve. (b) ELFN-YOLO test set PR curve.
Figure 17: Overall performance of ELFN-YOLO.
Figure 18: Detection results of ELFN-YOLO under different working conditions.
Figure 19: Comparison of ELFN-YOLO and YOLOv7-Tiny under different occlusion degrees. (A) YOLOv7-Tiny detection results. (B) ELFN-YOLO detection results.
Figure 20: (a) Overall process of the ewe behavior detection system. (b) Web-based intelligent monitoring and analysis system based on ELFN-YOLO (video sampling time shown in the upper left corner).
17 pages, 4378 KiB  
Article
The Effect of Awe on Willingness to Pay for Construction Waste Recycled Products: A Functional Near-Infrared Spectroscopy Study
by Zhikun Ding, Tao Huang, Qifan Yang and Lian Duan
Sustainability 2024, 16(24), 10847; https://doi.org/10.3390/su162410847 - 11 Dec 2024
Viewed by 339
Abstract
The development of the construction industry has generated a large amount of construction waste, and resource utilization of construction waste is an effective means of recycling. However, such recycled construction waste products still lack market competitiveness and recognition. Consumers' psychological activities are often influenced by emotions, and the sense of awe plays an important role in green consumption. This study aims to investigate how the sense of awe affects consumers' willingness to pay for construction waste recycled products. The study used functional near-infrared spectroscopy (fNIRS) with a willingness-to-pay task paradigm to reveal how different types of awe affect willingness to pay for construction waste recycled products. The behavioral results showed that two conditions effectively induced awe and enhanced consumers' willingness to pay, but the difference between nature awe and social awe was not significant. The neural activation results showed significant activation in the inferior prefrontal gyrus and dorsolateral prefrontal cortex. In particular, dorsolateral prefrontal cortex activity was significantly enhanced in the social awe condition. The functional connectivity results showed that, compared to the control condition, the awe condition triggered stronger functional connectivity. Therefore, exploring the effect of awe on the willingness to pay for construction waste recycled products can provide a reference for companies' marketing and pricing strategies and promote the application of construction waste recycled products in the market.
(This article belongs to the Section Psychology of Sustainability and Sustainable Development)
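The functional-connectivity analysis the abstract reports typically reduces to pairwise Pearson correlations between channel time courses. A minimal sketch, with invented signal shapes standing in for the recorded fNIRS time series and an arbitrary threshold for "strong" connections:

```python
# Minimal sketch: fNIRS functional connectivity as a channel correlation matrix.
import numpy as np

n_channels, n_samples = 20, 1000
hbo = np.random.randn(n_channels, n_samples)  # hypothetical HbO time series

# Functional connectivity as the channel-by-channel correlation matrix.
fc = np.corrcoef(hbo)
print(fc.shape)  # (20, 20)

# "Strong" connections: |r| above a chosen threshold, excluding the diagonal.
strong = (np.abs(fc) > 0.3) & ~np.eye(n_channels, dtype=bool)
print("strong edges:", strong.sum() // 2)
```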
Figure 1: Some examples of the experimental materials.
Figure 2: The experimental procedure.
Figure 3: An example trial of the willingness-to-pay measurement stage.
Figure 4: Willingness to pay under different conditions.
Figure 5: Brain activation under different conditions. The color bar represents the activation level (t-map) of each channel; red indicates a higher activation level, while blue indicates a higher deactivation level.
Figure 6: Functional connectivity under different conditions. Green circles indicate channel positions; red lines indicate a strong correlation between two channels.
40 pages, 20840 KiB  
Article
Facial Biosignals Time–Series Dataset (FBioT): A Visual–Temporal Facial Expression Recognition (VT-FER) Approach
by João Marcelo Silva Souza, Caroline da Silva Morais Alves, Jés de Jesus Fiais Cerqueira, Wagner Luiz Alves de Oliveira, Orlando Mota Pires, Naiara Silva Bonfim dos Santos, Andre Brasil Vieira Wyzykowski, Oberdan Rocha Pinheiro, Daniel Gomes de Almeida Filho, Marcelo Oliveira da Silva and Josiane Dantas Viana Barbosa
Electronics 2024, 13(24), 4867; https://doi.org/10.3390/electronics13244867 - 10 Dec 2024
Viewed by 425
Abstract
Visual biosignals can be used to analyze human behavioral activities and serve as a primary resource for Facial Expression Recognition (FER). FER computational systems face significant challenges, arising from both spatial and temporal effects. Spatial challenges include deformations or occlusions of facial geometry, while temporal challenges involve discontinuities in motion observation due to high variability in poses and dynamic conditions such as rotation and translation. To enhance the analytical precision and validation reliability of FER systems, several datasets have been proposed. However, most of these datasets focus primarily on spatial characteristics, rely on static images, or consist of short videos captured in highly controlled environments. These constraints significantly reduce the applicability of such systems in real-world scenarios. This paper proposes the Facial Biosignals Time–Series Dataset (FBioT), a novel dataset providing temporal descriptors and features extracted from common videos recorded in uncontrolled environments. To automate dataset construction, we propose Visual–Temporal Facial Expression Recognition (VT-FER), a method that stabilizes temporal effects using normalized measurements based on the principles of the Facial Action Coding System (FACS) and generates signature patterns of expression movements for correlation with real-world temporal events. To demonstrate feasibility, we applied the method to create a pilot version of the FBioT dataset. This pilot resulted in approximately 10,000 s of public videos captured under real-world facial motion conditions, from which we extracted 22 direct and virtual metrics representing facial muscle deformations. During this process, we preliminarily labeled and qualified 3046 temporal events representing two emotion classes. As a proof of concept, these emotion classes were used as input for training neural networks, with results summarized in this paper and available in an open-source online repository.
(This article belongs to the Section Artificial Intelligence)
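The core measurement VT-FER extracts is a landmark-to-landmark distance tracked over time and normalized to be robust to scale changes. A minimal sketch, assuming the 68-point Dlib landmark layout mentioned in the figure captions; the landmark indices and normalization anchor are illustrative choices, not the paper's exact metric definitions.

```python
# Minimal sketch: a normalized mouth-opening time series from facial landmarks.
import numpy as np

def mouth_opening(landmarks: np.ndarray) -> float:
    """landmarks: (68, 2) array of (x, y) points for one frame."""
    # Inner-lip vertical distance (points 62 and 66 in 0-based Dlib indexing).
    opening = np.linalg.norm(landmarks[62] - landmarks[66])
    # Normalize by inter-ocular distance (outer eye corners 36 and 45),
    # so the measurement is comparable as the face moves toward the camera.
    anchor = np.linalg.norm(landmarks[36] - landmarks[45])
    return opening / anchor

frames = np.random.rand(100, 68, 2) * 112           # stand-in detector output
series = np.array([mouth_opening(f) for f in frames])
print(series.shape)  # a (100,) time series, one measurement per frame
```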
Figure 1: An illustration of the complete process, from face detection to the semantic level, where each face image is correlated with labeled events. The steps include: (1) face cropping, (2) facial landmark detection, (3) landmark normalization, (4) feature extraction, and (5) analysis and event correlation. Illustration created by the authors; image of the person generated by AI [55].
Figure 2: The proposed pipeline for generating the FBioT dataset consists of Flow [A] (Indexer, Feature Extractor L1, Video Adjuster, Measure Maker) and Flow [B] (Manual and Automatic Labelers). Each module produces its own dataset as output. Indexing can be performed using streaming or local videos.
Figure 3: The Feature Extractor L1 module extracts image features, including (1)-(2) the region of interest, (3) the main facial features, and (4) facial landmarks. These features are utilized to (5) identify and standardize the biosignals, where each point has X and Y coordinates. Illustration created by the authors; image of the person generated by AI [55].
Figure 4: An example illustrating how changes in image dimensionality occur due to camera movement along the Z axis in (2) and (3), with (1) demonstrating the effect of dimensionality normalization. Illustration created by the authors; image of the person generated by AI [55].
Figure 5: Example of the same open mouth seen from different poses, resulting in distortions in the absolute pixel-by-pixel measurements (red line). With the Video Adjuster module it is possible to estimate these distortions and calculate that the measurements belong to the same mouth opening (blue line). Three-dimensional model by [60].
Figure 6: (a) Example of a schematic representation of the theoretical landmarks of the FACS system for action unit detection; image of the person generated by AI [55]. (b) Diagram of the landmarks detected by the Dlib model: face contours (1-17); left eyebrow (18-22); right eyebrow (23-27); nose top (28-31); nose base (32-36); left eye (37-42); right eye (43-48); mouth and lips (49-68). Illustration adapted from [61].
Figure 7: Example of measurement acquisition of vertical mouth opening (d_n = distance between points 15-19, see Figure 6) over time, which results in time-series measurements.
Figure 8: The schematic flow of the manual labeling process in three steps. In step (1), representative measures are selected to label an expression. In (2), the start and end frames of the movement related to the expression are identified. In (3), the class name and the selected measurements are added to the labeling file on the rows corresponding to the selected interval frames. Facial expression images adapted from [10].
Figure 9: To identify subseries similar to a given pattern, the Euclidean distance is calculated. In step (1), the measures of interest that characterize the expression are selected, and the sequences most similar to the patterns for each measure are identified individually. In step (2), a similarity filter is applied to select intervals where the patterns occurred in both measures of interest. In step (3), the class name and selected measures are added to the labeling file for the frames corresponding to the identified intervals.
Figure 10: Schematic representation of the AU1 measurement process (raising the eyebrows). Motion detection measures the distance between the landmarks on the eyebrows and the nasal bone. Facial expression images adapted from [10].
Figure 11: Temporal evolution of AU9 movement. Facial expression images adapted from [10].
Figure 12: Rotation angles of the BIWI ground-truth annotations compared with the results estimated by the developed model (Biosignals) and the OpenFace framework, for video 22 of the BIWI dataset.
Figure 13: Normalization result of the M3 measurement from a video, showing that the face remains stable even during translation movement. Facial expression images adapted from [10].
Figure 14: Result of the dynamic effect of rotation around the Z axis throughout video ID 2 of the dataset. Values before and after normalization: mean = -0.06, STD = 5.80; mean = -2.12, STD = 1.06, respectively.
Figure 15: Mean and standard deviation of rotation variations across all videos in the dataset.
Figure 16: Percentage of face Z-axis translation normalization over time for video ID 21. Values before and after normalization: mean = 0.36, STD = 0.20; mean = 0.46, STD = 0.07, respectively.
Figure 17: Percentage of normalization (translation in the Z axis), as mean and standard deviation over all videos in the dataset.
Figure 18: A smiling signature of video ID 21 from the CK dataset with the main measurements over time. Facial expression images adapted from [10].
Figure 19: Z rotation normalization for video 24 in the CK+ dataset.
Figure 20: Z translation normalization for video 24 in the CK+ dataset.
Figure 21: Similarity matrix obtained from the cross-test between the M1 (left) and M12 (right) measurements of each CK+ happy sample.
Figure 22: Architecture of the reference network, consisting of four layers: sequential, LSTM, dropout, and dense.
Figure 23: Training results: accuracy and loss curves for the reference network.
Figure 24: ROC curve and confusion matrix.
Figure 25: Z rotation normalization for video 70 in the AFEW dataset.
Figure 26: Z translation normalization for video 70 in the AFEW dataset.
Figure 27: Training process for the arousal neural network. Left: accuracy; right: loss.
Figure 28: Training process for the valence neural network. Left: accuracy; right: loss.
Figure 29: ROC curves for the AFEW reference neural network. Left: arousal; right: valence.
Figure 30: The Manual Labeler L0 module comprises: (a) graphical analysis of time-series measures, (b) selection of the start and end frames of the expression through visualization of each frame, and (c) visualization of annotated data [10]. Sample video accessible at [65].
Figure 31: Similarity matrix generated by cross-testing measurements M1 (left) and M12 (right) of each automatically found seed from the FBioT dataset.
Figure 32: Coincident samples found based on seed search from the Automatic Labeler module.
Figure 33: Summarized results of the seed search, grouped in blocks of five units, versus the number of occurrences.
Figure 34: Neural network architecture for the proposed dataset prototype.
Figure 35: Training results: accuracy and loss curves for the biosignals neural network prototype.
Figure 36: ROC curve (left) and confusion matrix (right). Happy samples are represented by 0 and neutral by 1.
Figure 37: Prototype for visualizing measurements from local video, with respective graphs of measurements over time. Measurement legend: red (M3), blue (M13), purple (M8), yellow (E1). The facial expression image is adapted from [10]; to exemplify the application's functionality, a mirrored video of the expression acquired from the CK+ dataset was used to obtain a complete smile expression (onset-apex-offset).
Figure 38: Prototype for visualizing time-series pattern inference from local video, for measurement M12. The red sequence represents the corresponding neural network inference; blue represents other temporal measurements. Three-dimensional model by [60].
21 pages, 12963 KiB  
Article
Young Goji Fruit Volatiles Regulate the Oviposition Behavior and Chemosensory Gene Expression of Gravid Female Neoceratitis asiatica
by Hongshuang Wei, Kexin Liu, Jingyi Zhang, Kun Guo, Sai Liu, Changqing Xu, Haili Qiao and Shuqian Tan
Int. J. Mol. Sci. 2024, 25(24), 13249; https://doi.org/10.3390/ijms252413249 - 10 Dec 2024
Viewed by 295
Abstract
The goji fruit fly, Neoceratitis asiatica, is a major pest on the well-known medicinal plant Lycium barbarum. Dissecting the molecular mechanisms of the oviposition selection of N. asiatica regarding the host plant will help to identify new strategies for pest fly control. However, the molecular mechanism of chemical communication between the goji fruit fly and the host goji remains unclear. Hence, our study found that young goji fruit volatiles induced the oviposition response of gravid female N. asiatica. After N. asiatica was exposed to young goji fruit volatiles, the expression of six chemosensory genes (NasiOBP56h3 and OBP99a1 in the antennae; OBP99a2, OBP99a3 and CSP2 in the legs; and OBP56a in the ovipositor) was significantly upregulated in different organs of female N. asiatica compared with the group without odor treatment according to transcriptome data. Further results of qPCR verification show that the expression levels of the six selected upregulated genes after the flies were exposed to host plant volatiles were mostly consistent with the results of transcriptome data. We concluded that the six upregulated genes may be involved in the recognition of young goji fruit volatiles by gravid female N. asiatica. Our study preliminarily identifies young goji fruit volatiles as a key factor in the oviposition behavior of N. asiatica, which will facilitate further studies on the mechanisms of host oviposition selection in N. asiatica.
(This article belongs to the Special Issue Molecular Interactions between Plants and Pests)
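The expression comparisons behind the transcriptome and qPCR figures reduce to replicate-level significance tests. A minimal sketch of such a test with SciPy; the FPKM values below are invented for illustration and do not come from the paper.

```python
# Minimal sketch: Student's t-test on per-gene FPKM values, control vs. treated.
import numpy as np
from scipy.stats import ttest_ind

control_fpkm = np.array([12.1, 10.8, 11.5])   # hypothetical control antennae, n = 3
treated_fpkm = np.array([25.3, 23.9, 27.2])   # hypothetical after volatile exposure

t, p = ttest_ind(treated_fpkm, control_fpkm)
print(f"t={t:.2f}, p={p:.4f}")  # p < 0.05 -> gene flagged as upregulated
```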
Figure 1: The harmful symptom of a female N. asiatica laying its eggs on a young goji fruit. The red circle indicates the oviposition site on the goji fruit; the red box indicates the egg of N. asiatica.
Figure 2: Behavioral assay of the gravid female N. asiatica's response to young goji fruit volatiles. (a) Behavioral responses in a Y-tube olfactometer. Mated female flies were tested 4 days after eclosion. Bars represent the overall percentages of females choosing either odor source; the number in each bar indicates the total number of females choosing that arm. Gray bar: hexane treatment (control); green bar: fresh young goji fruit volatiles (VOCs) (*** p < 0.001, two-sided binomial test). (b) Number of eggs laid by gravid females on agarose medium. Treatment groups received a 1 cm × 1 cm filter paper sheet with 10 μL of young goji fruit VOCs dissolved in paraffin oil; control groups received same-size filter paper with paraffin oil. An asterisk (*) indicates a significant difference between control and treatment groups (mean ± SE, n = 3, Student's t-test, * p < 0.05).
Figure 3: Distribution characteristics of differentially expressed genes (DEGs) in three comparison groups. (a) Venn diagram of the number and proportion of DEGs commonly or uniquely expressed in pairwise comparisons. (b) Clustering analysis of DEG transcript abundance across all samples; the numbers (e.g., 1, 2, 3 for CK-HA) indicate the three biological replicates per sample. HA: antennae; OV: ovipositor; TM: the VOC-exposed treatment group.
Figure 4: Gene ontology (GO) enrichment analysis of DEGs across comparison pairs. The top 10 enriched terms with highly significant p-values (≤0.05) per comparison are displayed. The bar graph illustrates false discovery rate values; the left y-axis represents the percentage of DEGs relative to all Unigenes, the right y-axis shows the number of DEGs and total Unigenes, and the x-axis indicates the GO categories.
Figure 5: Phylogenetic tree of candidate NasiOBPs with known Dipteran OBPs. Clade colors indicate different OBP gene clades; the red triangle marks Neoceratitis asiatica. GenBank accession numbers of the OBPs used in this tree are listed in Table S2.
Figure 6: Conserved structural analysis of candidate OBPs in N. asiatica. (a) Classic OBPs. (b) Plus-C OBPs. (c) Minus-C OBPs. "C" indicates a cysteine residue, "P" a proline residue, and "K" a lysine residue; the numbers 1-6 to the right of "C" mark the six conserved cysteines in the sequence.
Figure 7: Expression levels (FPKM, from transcriptome data) of 28 candidate NasiOBPs in the antennae, legs and ovipositor of female N. asiatica following odor exposure. The control group consisted of female adults not exposed to odors; odor sources were volatiles of young wolfberry fruits (green fruits, 5-7 days after flower drop). (a) OBP expression in the antennae, legs and ovipositor. (b-d) OBP expression in the antennae, legs and ovipositor, respectively, before/after odor exposure. Bars labeled with different letters (a, b, c) indicate significant differences (mean ± SE, n = 3, Tukey's HSD, p < 0.05); an asterisk (*) denotes a significant difference between control and odor exposure groups (mean ± SE, n = 3, Student's t-test, * p < 0.05).
Figure 8: Conserved structural analysis of candidate CSPs in N. asiatica. "C" indicates a cysteine residue; C1-X6-C2-X18-C3-X2-C4 denotes the four conserved cysteines.
Figure 9: Phylogenetic tree of candidate NasiCSPs with known Dipteran CSPs. Clade colors indicate different CSP gene clades. GenBank accession numbers are listed in Table S2.
Figure 10: Expression levels (FPKM, from transcriptome data) of 4 candidate NasiCSPs in the antennae, legs and ovipositor of female N. asiatica following odor exposure. The control group consisted of female adults not exposed to any odors; odor sources were volatiles of young goji fruits (green fruits, 5-7 days post flower drop). An asterisk (*) indicates a significant difference between control and odor exposure groups (mean ± SE, n = 3, Student's t-test, * p < 0.05); bars marked with different letters (a-d) represent significant differences among groups (mean ± SE, n = 3, Tukey's HSD, p < 0.05).
Figure 11: Expression profiles of six upregulated olfactory genes in different tissues of gravid female N. asiatica by RT-qPCR. "Head" indicates the head of the female adult without antennae; "Body" indicates a mixture of tissues including thorax, abdomen and wings. Odor sources were volatiles of young goji fruits (green fruits, 5-7 days post flower drop). Expression levels of each gene in the body (control) were used for normalization, with relative expression shown as fold change versus transcript levels in the body (control). Data are means ± SEs; bars with different letters indicate significant differences (p < 0.05, ANOVA followed by Tukey's HSD test, n = 3); an asterisk (*) denotes a significant difference between control and odor exposure groups (mean ± SE, n = 3, Student's t-test, * p < 0.05).
28 pages, 4709 KiB  
Article
Multipopulation Whale Optimization-Based Feature Selection Algorithm and Its Application in Human Fall Detection Using Inertial Measurement Unit Sensors
by Haolin Cao, Bingshuo Yan, Lin Dong and Xianfeng Yuan
Sensors 2024, 24(24), 7879; https://doi.org/10.3390/s24247879 - 10 Dec 2024
Viewed by 314
Abstract
Feature selection (FS) is a key process in many pattern-recognition tasks, which reduces dimensionality by eliminating redundant or irrelevant features. However, for complex high-dimensional issues, traditional FS methods cannot find the ideal feature combination. To overcome this disadvantage, this paper presents a multispiral whale optimization algorithm (MSWOA) for feature selection. First, an Adaptive Multipopulation merging Strategy (AMS) is presented, which uses exponential variation and individual location information to divide the population, thus avoiding the premature aggregation of subpopulations and increasing candidate feature subsets. Second, a Double Spiral updating Strategy (DSS) is devised to break out of search stagnations by discovering new individual positions continuously. Last, to facilitate the convergence speed, a Baleen neighborhood Exploitation Strategy (BES) which mimics the behavior of whale tentacles is proposed. The presented algorithm is thoroughly compared with six state-of-the-art meta-heuristic methods and six promising WOA-based algorithms on 20 UCI datasets. Experimental results indicate that the proposed method is superior to other well-known competitors in most cases. In addition, the proposed method is utilized to perform feature selection in human fall-detection tasks, and extensive real experimental results further illustrate the superior ability of the proposed method in addressing practical problems.
(This article belongs to the Section Intelligent Sensors)
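A wrapper-style whale-optimization feature selector can be sketched in a few dozen lines. The snippet below shows only the generic WOA loop (shrinking encirclement plus a logarithmic spiral) with a kNN-accuracy fitness; the paper's AMS, DSS, and BES strategies are omitted, and the dataset, population size, and penalty weight are illustrative choices.

```python
# Minimal sketch of wrapper feature selection with a basic whale optimization
# loop; fitness rewards cross-validated accuracy and penalizes subset size.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
dim, pop, iters = X.shape[1], 10, 15

def fitness(pos):
    mask = pos > 0.5                      # threshold continuous position to a feature mask
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()       # small penalty for large subsets

whales = rng.random((pop, dim))
fits = np.array([fitness(w) for w in whales])
best, best_fit = whales[fits.argmax()].copy(), fits.max()

for t in range(iters):
    a = 2 - 2 * t / iters                 # linearly decreasing coefficient
    for i in range(pop):
        if rng.random() < 0.5:            # shrinking encirclement of the best whale
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            whales[i] = best - A * np.abs(C * best - whales[i])
        else:                             # logarithmic spiral toward the best whale
            l = rng.uniform(-1, 1, dim)
            whales[i] = np.abs(best - whales[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        whales[i] = np.clip(whales[i], 0.0, 1.0)
        f = fitness(whales[i])
        if f > best_fit:
            best, best_fit = whales[i].copy(), f

print("selected", int((best > 0.5).sum()), "of", dim, "features; fitness", round(best_fit, 3))
```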
Figure 1: Variation curve of the number of subpopulations.
Figure 2: Particle spiral trajectory curve.
Figure 3: Flowchart of MSWOA.
Figure 4: Convergence curves of the seven methods on 17 datasets.
Figure 5: Flowchart of the fall detection system.
Figure 6: Data collection details of the SFall dataset.
Figure 7: Signal changes in acceleration, angular velocity, and angle for three types of activities.
14 pages, 2120 KiB  
Article
Flexible Polymer-Based Electrodes for Detecting Depression-Related Theta Oscillations in the Medial Prefrontal Cortex
by Rui Sun, Shunuo Shang, Qunchen Yuan, Ping Wang and Liujing Zhuang
Chemosensors 2024, 12(12), 258; https://doi.org/10.3390/chemosensors12120258 - 10 Dec 2024
Viewed by 430
Abstract
This study investigates neural activity changes in the medial prefrontal cortex (mPFC) of a lipopolysaccharide (LPS)-induced acute depression mouse model using flexible polymer multichannel electrodes, local field potential (LFP) analysis, and a convolutional neural network-long short-term memory (CNN-LSTM) classification model. LPS treatment effectively induced depressive-like behaviors, including increased immobility in the tail suspension and forced swim tests, as well as reduced sucrose preference. These behavioral outcomes validate the LPS-induced depressive phenotype, providing a foundation for neurophysiological analysis. Flexible polymer-based electrodes enabled the long-term recording of high-quality LFP and spike signals from the mPFC. Time-frequency and power spectral density (PSD) analyses revealed a significant increase in theta band (3–8 Hz) amplitude under depressive conditions. Using theta waveform features extracted via empirical mode decomposition (EMD), we classified depressive states with a CNN-LSTM model, achieving high accuracy in both training and validation sets. This study presents a novel approach for depression state recognition using flexible polymer electrodes, EMD, and CNN-LSTM modeling, suggesting that heightened theta oscillations in the mPFC may serve as a neural marker for depression. Future studies may explore theta coupling across brain regions to further elucidate neural network disruptions associated with depression.
(This article belongs to the Special Issue Advancements of Chemical and Biosensors in China—2nd Edition)
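The spectral step the abstract describes (isolating the 3-8 Hz theta band and comparing power spectral density) can be sketched with SciPy. The synthetic signal below embeds a 6 Hz rhythm as a stand-in for depression-related theta activity; EMD and the CNN-LSTM classifier are not reproduced here.

```python
# Minimal sketch: theta band-pass filtering and Welch PSD on a synthetic LFP.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 1000                                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)

# Zero-phase Butterworth band-pass for the theta band (3-8 Hz).
b, a = butter(4, [3, 8], btype="bandpass", fs=fs)
theta = filtfilt(b, a, lfp)

# PSD estimate and mean theta-band power, as in a PSD comparison plot.
freqs, psd = welch(lfp, fs=fs, nperseg=2 * fs)
theta_power = psd[(freqs >= 3) & (freqs <= 8)].mean()
print(f"mean theta PSD: {theta_power:.3f}")
```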
Figure 1: Behavioral assessment of depressive-like symptoms in mice following LPS injection. (a) Experimental timeline depicting saline or LPS injection, followed by behavioral tests at 24 h post-injection. (b-d) Open field test (OFT): (c) total distance traveled and (d) time spent in the center zone. (e,f) Elevated plus maze (EPM): time spent in the open arms. (g,h) Sucrose preference test (SPT): percentage of sucrose preference. (i,j) Tail suspension test (TST): immobility time significantly increased in the LPS group. (k,l) Forced swim test (FST): immobility time significantly increased in the LPS group. All data are presented as means ± s.e.m. * p < 0.05; ** p < 0.01; n.s., no significance.
Figure 2: Manufacturing and performance evaluation of the MEA. (a) Schematic of the manufacturing process. (b) Impedance frequency sweep results of three electrodes (t1, t2, t3); inset shows the average impedance at 1 kHz. (c) Top: front view of the electrode demonstrating its flexibility; bottom: side view showing the electrode bending. (d) LFP signals recorded using polymer electrodes, with consistent signals across channels. (e) Comparison of neural spike signals recorded by polymer and silicon electrodes. (f) Spike waveforms recorded by polymer electrodes and PCA clustering results. (g) Spike waveforms from six channels, with colors representing distinct unit clusters identified through clustering. (h) SNR comparison: polymer electrodes are significantly higher than silicon electrodes (*** p < 0.001).
Figure 3: Enhanced theta oscillations in the mPFC of LPS-induced depressive mice. (a,b) Time-frequency representations of mPFC LFP signals, comparing (a) baseline and (b) LPS conditions over a 300 s period in the 0-12 Hz range. (c) Raw and band-pass filtered LFP signals under baseline (left) and LPS (right) conditions. (d) High-resolution 2 s time-frequency spectrograms (0-12 Hz) for baseline (left) and LPS (right). (e) PSD comparison plot (0-30 Hz). (f) Mean power across frequency bands, with significantly elevated delta and theta power in the LPS-treated depressive group (p < 0.01). All data are means ± s.e.m. ** p < 0.01; *** p < 0.001; n.s., no significance.
Figure 4: Depression state recognition based on EMD and machine learning. (a,b) EMD of 4 s LFP segments in baseline and LPS-24h depressive states, yielding multiple IMFs, with IMF-5 adaptively capturing theta band oscillations. (c) Averaged overlay of theta cycle waveforms (top) and distribution histogram of critical points (bottom) extracted through EMD from 300 s of data. (d) Comparison of averaged theta waveforms between the two states. (e) Phase-aligned theta waveforms. (f) Scatter plot of cycle average frequency versus average amplitude. (g) Confusion matrix of the classification model based on theta waveform features. (h,i) Classification accuracy and loss curves on the training and validation sets.
15 pages, 1663 KiB  
Article
Research on the Influencing Factors of Urban Community Residents’ Willingness to Segregate Waste Based on Structural Equation Model
by Wenjian Luo, Ziqin Yu, Panling Zhou, Yuanyuan Ren and Hua Lv
Sustainability 2024, 16(23), 10767; https://doi.org/10.3390/su162310767 - 9 Dec 2024
Viewed by 434
Abstract
Segregation of household waste is an important means of achieving resource recovery, minimization and harmlessness of waste, which is of great significance in addressing the dilemma of the "rubbish siege". However, at present, urban community residents still face many challenges in the practice of household waste classification, such as lack of classification knowledge, imperfect classification facilities and weak and persistent classification behavior, which seriously restrict the effective promotion of garbage classification work. In this paper, a model of the factors influencing community residents' willingness to separate household waste was developed based on the theory of planned behavior and tested by using structural equation modeling (SEM) with a sample of 218 surveys conducted among residents of community X in Nanchang, Jiangxi Province. It was found that urban community residents were generally willing to sort their household waste subjectively. The five factors of waste sorting recognition, intrinsic moral constraints, group behavioral incentives, time and space factors and waste sorting facilities positively influenced urban community residents' willingness to sort household waste. Government job satisfaction and legal and regulatory constraints had no significant influence on urban community residents' willingness to sort household waste and did not reach a statistically significant level. Based on this, in the future, we should strengthen public education, enhance group behavioral incentives, improve supporting infrastructure, standardize and improve laws and regulations to improve residents' willingness to separate household waste and promote the process of urban household waste segregation in China.
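Structural equation modeling proper is run in dedicated tools (e.g., AMOS, lavaan, or Python's semopy). As a rough stand-in only, the sketch below regresses willingness on averaged item scores for the five significant factors named in the abstract; the data, item groupings, and path weights are entirely invented.

```python
# Minimal sketch (NOT real SEM): regressing willingness on averaged factor scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 218                                        # sample size reported in the abstract
factors = ["recognition", "moral", "incentive", "time_space", "facilities"]

# Hypothetical 5-point Likert responses: 3 items per factor, averaged per respondent.
X = np.column_stack([rng.integers(1, 6, (n, 3)).mean(axis=1) for _ in factors])
willingness = X @ np.array([0.3, 0.25, 0.2, 0.15, 0.2]) + rng.normal(0, 0.5, n)

fit = LinearRegression().fit(X, willingness)
for name, coef in zip(factors, fit.coef_):
    print(f"{name}: path weight ~ {coef:.2f}")
```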
Figure 1: Theoretical model of planned behavior.
Figure 2: Conceptual model of the factors influencing residents' willingness to separate household waste in urban communities.
Figure 3: Path diagram I of the factors influencing household waste segregation by urban community residents.
Figure 4: Path diagram II of the factors influencing household waste segregation by urban community residents.
11 pages, 948 KiB  
Article
Geographic Variation in Signal Preferences in the Tropical Katydid Neoconocephalus triops
by Oliver M. Beckers and Johannes Schul
Biology 2024, 13(12), 1026; https://doi.org/10.3390/biology13121026 - 7 Dec 2024
Viewed by 434
Abstract
In communication systems, the signal and the preference for the signal have to match, limiting phenotypic variation. Yet communication systems evolve, and the mechanisms by which phenotypic variation can arise without disrupting the match are poorly understood. Geographic variation in communication can provide insights into the diversification of these systems. Females of the katydid Neoconocephalus triops use the pulse rate and call structure for call recognition. Using behavioral experiments, we determined preferences for pulse rate at two relevant ambient temperatures and preferences for call structure (continuous, versed) in females from Puerto Rico and Costa Rica. Puerto Rican females had a closed preference at both tested temperatures, indicating high selectivity for pulse rate. In contrast, Costa Rican females had a closed preference only at 20 °C; at 25 °C the females were unselective toward higher than natural pulse rates. Additionally, Puerto Rican females were not selective for call structure, whereas Costa Rican females preferred versed calls. It is not clear whether the differences in pulse preference were due to neural constraints or different selective pressures; however, they may facilitate further divergence and reproductive isolation. Importantly, the reduced selectivity for call structure or pulse rate allows calls to display the variation necessary for the communication system to evolve.
(This article belongs to the Section Zoology)
Figure 1: Average phonotaxis scores (± SEM) of female N. triops from (A) Puerto Rico (N = 9) and (B) Costa Rica (N = 10) in response to call stimuli structured in verses (black bars) or continuous (gray bars).
Figure 2: Phonotaxis scores of female N. triops from (A) Puerto Rico and (B) Costa Rica in response to varying double-pulse rates tested at 20 °C (black symbols and lines) and 25 °C (gray symbols and lines). The lines indicate the best-fit lines (see text) for the data distributions at each temperature.
28 pages, 11635 KiB  
Article
Multi-Target Irregular Behavior Recognition of Chemical Laboratory Personnel Based on Improved DeepSORT Method
by Yunhuai Duan, Zhenhua Li and Bin Shi
Processes 2024, 12(12), 2796; https://doi.org/10.3390/pr12122796 - 7 Dec 2024
Viewed by 491
Abstract
The lack of safety awareness and the irregular behavior of chemical laboratory personnel are major contributors to laboratory accidents, which pose significant risks to both the safety of laboratory environments and the efficiency of laboratory work. These issues can lead to accidents and equipment damage and jeopardize personnel health. To address this challenge, this study proposes a method for recognizing irregular behavior in laboratory personnel by utilizing an improved DeepSORT algorithm tailored to the specific characteristics of a chemical laboratory setting. The method first extracts skeletal keypoints from laboratory personnel using the Lightweight OpenPose algorithm to locate individuals. The enhanced DeepSORT algorithm tracks human targets and detects the positions of the relevant objects. Finally, an SKPT-LSTM network was employed to integrate tracking data for behavior recognition. This approach was designed to enhance the detection and prevention of unsafe behaviors in chemical laboratories. The experimental results on a self-constructed dataset demonstrate that the proposed method accurately identifies irregular behaviors, thereby contributing to the reduction in safety risks in laboratory environments.
(This article belongs to the Section Advanced Digital and Other Processes)
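As a rough illustration of the final stage of this pipeline (classifying each tracked person's skeleton keypoint sequence), the sketch below uses a plain LSTM over flattened keypoint coordinates. It is not the authors' SKPT-LSTM implementation; the keypoint count, hidden size, clip length, and four-class behavior set are assumptions made for the example.

```python
# Minimal sketch (not the paper's exact SKPT-LSTM code) of classifying a
# tracked person's skeleton keypoint sequence with an LSTM. Keypoint count,
# hidden size, and the four-class behavior set are illustrative assumptions.
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    def __init__(self, n_keypoints=18, hidden=128, n_classes=4):
        super().__init__()
        # Each frame is the flattened (x, y) coordinates of all keypoints.
        self.lstm = nn.LSTM(input_size=n_keypoints * 2,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, frames, n_keypoints * 2)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])      # logits over behavior classes

# Hypothetical usage: 30-frame clips, one per tracked ID, with classes
# such as sleeping, eating, phone use, and normal, as in the paper.
model = SkeletonLSTM()
clips = torch.randn(8, 30, 36)         # 8 tracked keypoint sequences
logits = model(clips)                  # shape (8, 4)
```

In the setup the abstract describes, such sequences would come from the Lightweight OpenPose keypoints associated with each track ID produced by the improved DeepSORT tracker.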
Figures:
Figure 1. Method flowchart.
Figure 2. Different types of chemical laboratory scenes: (a) microbial chemical culture laboratory; (b) chemical photocatalytic reaction laboratory; (c) chemical engineering simulation operation training laboratory.
Figure 3. Various behaviors of chemical laboratory personnel: (a) sleeping (on table); (b) sleeping (lying down); (c) eating; (d) playing with a mobile phone (sitting); (e) playing with a mobile phone (making a call); (f) playing with a mobile phone (standing); (g) normal (performing experiment); (h) normal (observing equipment); (i) normal (reading).
Figure 4. Lightweight OpenPose network structure diagram.
Figure 5. Human skeletal keypoints.
Figure 6. High-similarity behaviors: (a) playing with a mobile phone; (b) reading a book.
Figure 7. YOLOv5 structure diagram.
Figure 8. An LSTM network unit.
Figure 9. The structure of the SKPT-LSTM network.
Figure 10. Improved DeepSORT algorithm.
Figure 11. Keypoint detection results for various behaviors: (a) sleeping; (b) eating; (c) using phone; (d) normal.
Figure 12. Evaluation of the YOLOv5-s model.
Figure 13. Integrated DeepSORT algorithm for tracking human poses and mobile phone objects. (a–f) Tracking of a behavior sample, showing results every 25 frames.
Figure 14. Training loss and accuracy curves during the training process.
Figure 15. Confusion matrix of SKPT-LSTM.
Figure 16. Improved DeepSORT algorithm identifying various behaviors of laboratory personnel: (a) eating; (b) sleeping (desk rest); (c) sleeping (chair lean); (d) phone use (sitting); (e) phone use (calling); (f) phone use (standing); (g) normal (experiment); (h) normal (walking); (i) normal (reading).
Figure 17. Confusion matrices: (a) SKPT-LSTM; (b) Conv2D; (c) RNN; (d) LSTM.
Figure 18. Flowchart of the real-world application.
Figure 19. Single-target behavior recognition results: (a) playing with a mobile phone; (b) sleeping; (c) eating; (d) normal (experimenting).
Figure 20. Multi-target behavior recognition: (a) sleeping and normal (experimenting); (b) sleeping and playing with a mobile phone; (c) normal and playing with a mobile phone; (d) normal and normal.
Figure 21. Interference recognition of laboratory personnel: (a) before personnel interference; (b) during interference; (c) just after interference ends; (d) behavior recognition after interference ends.
15 pages, 14402 KiB  
Article
Pheromone-Binding Protein 1 Performs a Dual Function for Intra- and Intersexual Signaling in a Moth
by Yidi Zhan, Jiahui Zhang, Mengxian Xu, Frederic Francis and Yong Liu
Int. J. Mol. Sci. 2024, 25(23), 13125; https://doi.org/10.3390/ijms252313125 - 6 Dec 2024
Viewed by 305
Abstract
Moths use pheromones to ensure intraspecific communication. Nevertheless, few studies have focused on both intra- and intersexual communication based on pheromone recognition. Pheromone-binding proteins (PBPs) are generally believed to be pivotal for male moths in recognizing female pheromones. Our research revealed that PBP1 of Agriphila aeneociliella (AaenPBP1) serves a dual function in both intra- and intersexual pheromone recognition. Here, a total of 20 odorant-binding protein (OBP) family genes from A. aeneociliella were identified and subjected to transcriptional analysis. Among these, AaenPBP1 was highly expressed primarily in the antennae. Competitive fluorescence binding assays and molecular docking analyses demonstrated that AaenPBP1 exhibits a strong binding affinity for the female sex pheromone (Z)-9-Hexadecenyl acetate and the male pheromone 1-Nonanal. Notably, hydrogen bonds were observed between Ser56 and the ligands. The analysis of pheromone components and PBPs across the lepidopteran lineage suggests that their strong and precise interactions, shaped by coevolution, may play a crucial role in facilitating reproductive isolation in moths. Our findings provide valuable insight into the functional significance of PBPs in invertebrates and support the development of behavioral regulation tools as part of an integrated pest management strategy targeting crambid pests.
(This article belongs to the Special Issue Molecular Signalling in Multitrophic Systems Involving Arthropods)
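For context on the competitive fluorescence binding assays mentioned in the abstract: dissociation constants for competitors are conventionally computed from the measured IC50 with a Cheng–Prusoff-type correction for the 1-NPN probe. The relation below is the standard form used in OBP binding studies, not a formula quoted from this article:

```latex
K_i = \frac{\mathrm{IC}_{50}}{1 + [\text{1-NPN}]/K_{\text{1-NPN}}}
```

Here IC50 is the competitor concentration at which the probe's fluorescence is halved, [1-NPN] is the free probe concentration, and K(1-NPN) is the probe's own dissociation constant, typically taken from a Scatchard analysis (cf. Figure 5a below).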
Figures:
Figure 1. Multiple sequence alignment of Agriphila aeneociliella odorant-binding proteins (OBPs). Conserved amino acid residues are highlighted in black (highly conserved) and grayscale (moderately conserved). The asterisks indicate the count of amino acids.
Figure 2. Phylogenetic analysis of odorant-binding proteins (OBPs) from Agriphila aeneociliella and other lepidopteran species. OBPs are categorized into subfamilies: typical OBPs (blue), Minus-C OBPs (green), Plus-C OBPs (red), and PBP/GOBP (yellow). Species abbreviations: Bmor (Bombyx mori), Slit (Spodoptera littoralis), Hvir (Heliothis virescens), Harm (Helicoverpa armigera), and Msex (Manduca sexta).
Figure 3. MEME motif pattern analysis of Agriphila aeneociliella odorant-binding proteins (OBPs). The upper section illustrates the six motifs identified in lepidopteran OBPs, with each motif represented by a numbered box. The lower section displays the most commonly occurring motif patterns, with the numbers in the boxes corresponding to the motifs shown in the upper section.
Figure 4. Transcript levels of odorant-binding protein (OBP) genes in various tissues of Agriphila aeneociliella. A: antennae; L: legs; Ab: abdomens. Data are presented as mean ± SE. Asterisks indicate statistically significant differences (* p < 0.05; ** p < 0.01; *** p < 0.001).
Figure 5. Competitive binding assays of AaenPBP1 to Agriphila aeneociliella male and female pheromones. (a) Binding curves and Scatchard plots of the probe 1-NPN to AaenPBP1 at pH 7.4 and 5.0. (b) Competitive binding properties of AaenPBP1 with female and male pheromones at pH 7.4 and 5.0. (c–e) Competitive binding curves of AaenPBP1 with six host-plant volatiles at pH 7.4 and 5.0: terpenoids (c), aldehyde (d), alcohols (e). (f) Comparison of the binding ability (1/Ki) of AaenPBP1 with three pheromones and six host-plant volatiles at pH 7.4 and 5.0.
Figure 6. Sequence alignment of the AaenPBP1 and AtraPBP1 pheromone-binding proteins. Conserved residues are highlighted, with the three disulfide bridges denoted by green numbers. The alignment highlights structural similarities between AaenPBP1 and the AtraPBP1 template (PDB ID: 4INW).
Figure 7. Molecular interactions of AaenPBP1 with two female pheromone components and one male pheromone component. The 2D and 3D interaction diagrams illustrate the binding of AaenPBP1 with (Z)-9-Hexadecenyl acetate (a), (Z,Z,Z)-9,12,15-Octadecatrienal (b), and 1-Nonanal (c). Hydrogen bonds and hydrophobic interactions with specific amino acid residues are labeled. Hydrogen-bond distances are indicated in (a) and (c).
Figure 8. Distribution of pheromones and phylogenetic analysis of PBPs in moths and butterflies. (a) The presence and utilization of three pheromones ((Z,Z,Z)-9,12,15-Octadecatrienal, (Z)-9-Hexadecenyl acetate, and 1-Nonanal) across moths and butterflies; "F" denotes female sex pheromones and "M" male sex pheromones. The tree topology follows Mitter et al. [21]. (b) Phylogenetic tree depicting the relationships of PBPs from various moths and butterflies, including Agriphila aeneociliella. Detailed information about the PBPs and pheromones for each species is provided in Table S5.
17 pages, 25164 KiB  
Article
Sleeping and Eating Behavior Recognition of Horses Based on an Improved SlowFast Network
by Yanhong Liu, Fang Zhou, Wenxin Zheng, Tao Bai, Xinwen Chen and Leifeng Guo
Sensors 2024, 24(23), 7791; https://doi.org/10.3390/s24237791 - 5 Dec 2024
Viewed by 380
Abstract
The sleeping and eating behaviors of horses are important indicators of their health. With the development of the modern equine industry, timely monitoring and analysis of these behaviors can provide valuable data for assessing the physiological state of horses. To recognize horse behaviors in stalls, this study builds on the SlowFast algorithm, introducing a novel loss function to address data imbalance and integrating an SE attention module into the slow pathway to enhance recognition accuracy. Additionally, YOLOX replaces the original target detection algorithm in the SlowFast network, reducing recognition time during the video analysis phase and improving detection efficiency. The improved SlowFast algorithm achieves automatic recognition of horse behaviors in stalls. The accuracy in identifying three postures (standing, sternal recumbency, and lateral recumbency) is 92.73%, 91.87%, and 92.58%, respectively; the accuracy in recognizing two behaviors, sleeping and eating, is 93.56% and 98.77%, and the model's best overall accuracy reaches 93.90%. Experiments show that the proposed method accurately identifies horse behaviors in video sequences, including the sleeping and eating behaviors of multiple horses. This research also provides data to support livestock managers in evaluating horse health, contributing to advances in modern intelligent horse breeding practices.
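For readers unfamiliar with the SE attention module the abstract adds to the slow pathway, below is a minimal squeeze-and-excitation block for 3D (video) feature maps in PyTorch. It is a generic sketch: the reduction ratio and the exact placement inside the SlowFast slow pathway are assumptions, since the abstract does not specify them.

```python
# Generic squeeze-and-excitation (SE) block for 3D feature maps; the
# reduction ratio of 16 is a common default, not this paper's setting.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)      # squeeze: global T*H*W average
        self.fc = nn.Sequential(                 # excitation: per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (N, C, T, H, W)
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c))     # channel weights in (0, 1)
        return x * w.view(n, c, 1, 1, 1)         # recalibrate channels

feats = torch.randn(2, 64, 8, 56, 56)            # e.g. a slow-pathway feature map
out = SEBlock3D(64)(feats)                       # same shape, channels reweighted
```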
Figures:
Figure 1. Schematic diagram of the data collection scenario.
Figure 2. Dataset samples.
Figure 3. Example of data enhancement.
Figure 4. Overall technical route.
Figure 5. Architecture of the spatiotemporal convolutional network for horse posture and behavior recognition. The backbone network is ResNet50, and kernel dimensions are written as {T × S², C}, where T is the temporal size, S² the spatial size, and C the channel size.
Figure 6. Structure diagram of the SE module.
Figure 7. Structural diagram of YOLOX.
Figure 8. The accuracy of YOLOX training.
Figure 9. YOLOX vs. other versions of YOLO.
Figure 10. Example of slow-pathway feature learning: Res2, Res3, Res4, and Res5 correspond to Figure 5. The feature maps learned after each convolution operation have spatial sizes 56², 28², 14², and 7².
Figure 11. Model performance comparison under different loss functions.
Figure 12. Comparison of different algorithms in video frame detection and spatio-temporal action detection time.
Figure 13. Examples of predicted horse postures and behaviors: (a) predictions of horse postures; (b) predictions of horse behaviors.
Figure 14. Examples of misjudged and missed detections: (a–c) misjudged; (d) missed.
16 pages, 7140 KiB  
Article
The Microenvironment in an Experimental Model of Acute Pancreatitis Can Modify the Formation of the Protein Corona of sEVs, with Implications on Their Biological Function
by Olga Armengol-Badia, Jaxaira Maggi, Carme Casal, Roser Cortés, Joaquín Abián, Montserrat Carrascal and Daniel Closa
Int. J. Mol. Sci. 2024, 25(23), 12969; https://doi.org/10.3390/ijms252312969 - 2 Dec 2024
Viewed by 476
Abstract
A considerable number of the physiological functions of extracellular vesicles are conditioned by the protein corona attached to their surface. The composition of this corona is initially defined during intracellular synthesis, but it can be subsequently modified by interactions with the microenvironment. Here, we evaluated how the corona of small extracellular vesicles exposed to the inflammatory environment generated in acute pancreatitis is modified, and what functional changes occur as a result of these modifications. Small extracellular vesicles obtained from a pancreatic cell line were incubated with the ascitic fluid generated in experimental acute pancreatitis in rats. Using proteomic techniques, we detected the appearance of new proteins in the corona, together with an increase in the uptake of extracellular vesicles by certain cell types and in the response induced in inflammatory cells. The inhibition of different pattern recognition receptors reversed this activation, indicating that some of these effects could be due to the binding of damage-associated molecular patterns to the corona. All of this indicates that in pathologies such as acute pancreatitis, characterized by an inflammatory response and intense tissue damage, the microenvironment substantially influences the corona of extracellular vesicles, thus altering their behavior and enhancing their inflammatory activity.
(This article belongs to the Section Molecular Biology)
Figures:
Figure 1. (A) Nanoparticle tracking analysis of sEVs obtained from BXPC3 cells. (B) sEVs and cell lysates (Lys) analyzed via WB for TSG101, CD63 and Alix (EV biomarkers) and calnexin (CNX, cellular biomarker). (C) TEM image of BXPC3-derived sEVs. (D) Nanoparticle tracking analysis of PAAF before (top) and after (bottom) removal of sEVs.
Figure 2. Kinetics of the uptake of sEVs by THP1 macrophages: the uptake rate was greatly increased if the sEVs had been pretreated with PAAF. sEVs were stained red with PKH26.
Figure 3. An increase in the uptake of sEVs after treatment (6 h) with PAAF was detected in macrophages (THP1) and endothelial cells (HUVEC). In contrast, no changes in uptake were observed in epithelial cells (BXPC3) or keratinocytes (HaCaT). sEVs were labeled with PKH26 Red Fluorescent Cell Linker Dye, cells with PKH67 Green Fluorescent Cell Linker Dye, and cell nuclei were stained blue with Hoechst 33342.
Figure 4. Changes in the expression of IL1β, TNFα, IL6 and MRC1 in THP1 macrophages incubated for 24 h with sEVs (10 µg/mL) pretreated with PBS (sEV-PBS) or PAAF (sEV-PAAF). * p < 0.05 vs. control; + p < 0.05 vs. sEV-PBS.
Figure 5. The increase in IL1β expression in THP1 macrophages induced by PAAF-pretreated sEVs was partially prevented by treating the cells with the TLR4 inhibitor CLI-095, the RAGE inhibitor FPS-ZM1 or the inflammasome inhibitor MCC950. By contrast, no effect was observed with chloroquine (CQ, an inhibitor of TLR3/7/8/9) or CU-CPT22 (an inhibitor of TLR1/2). * p < 0.05 vs. sEV-PBS; + p < 0.05 vs. sEV-PAAF.
Figure 6. Heatmap of the 50 most abundant proteins in PAAF, presented as normalized protein abundance. The range of values used in the graph is 1 × 10⁹ to 8.8 × 10¹⁰; values outside this range are filled in dark orange. Proteins that were also found in PAAF-incubated sEVs are highlighted in purple.
Figure 7. STRING interaction analysis of the proteins bound to the corona of PAAF-treated sEV samples. The line color indicates the type of interaction evidence.
Figure 8. Frequency of proteins associated with different molecular functions (A), biological processes (B) and protein classes (C), based on Gene Ontology annotations for proteins on the surface of PAAF-treated sEV samples.