Search Results (182)

Search Parameters:
Keywords = multi-cue

24 pages, 2578 KiB  
Article
Dynamic Neural Network States During Social and Non-Social Cueing in Virtual Reality Working Memory Tasks: A Leading Eigenvector Dynamics Analysis Approach
by Pinar Ozel
Brain Sci. 2025, 15(1), 4; https://doi.org/10.3390/brainsci15010004 - 24 Dec 2024
Viewed by 317
Abstract
Background/Objectives: This research investigates brain connectivity patterns in reaction to social and non-social stimuli within a virtual reality environment, emphasizing their impact on cognitive functions, specifically working memory. Methods: Employing the LEiDA framework with EEG data from 47 participants, I examined dynamic brain network states elicited by social avatars compared to non-social stick cues during a VR memory task. Through the integration of LEiDA with deep learning and graph theory analyses, unique connectivity patterns associated with cue type were discerned, underscoring the substantial influence of social cues on cognitive processes. LEiDA, conventionally utilized with fMRI, was creatively employed in EEG to detect swift alterations in brain network states, offering insights into cognitive processing dynamics. Results: The findings indicate distinct neural states for social and non-social cues; notably, social cues correlated with a unique brain state characterized by increased connectivity within self-referential and memory-processing networks, implying greater cognitive engagement. Moreover, deep learning attained approximately 99% accuracy in differentiating cue contexts, highlighting the efficacy of prominent eigenvectors from LEiDA in EEG analysis. Graph theory analysis also uncovered structural network disparities, signifying enhanced integration in contexts involving social cues. Conclusions: This multi-method approach elucidates the dynamic influence of social cues on brain connectivity and cognition, establishing a basis for VR-based cognitive rehabilitation and immersive learning, wherein social signals may significantly enhance cognitive function.
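The LEiDA pipeline summarized above (instantaneous phase via the Hilbert transform, a dynamic phase-locking matrix per time point, its leading eigenvector, and k-means clustering into recurrent states) can be sketched in a few lines. The snippet below is a minimal illustration on synthetic data, not the author's code; the array shapes, the cosine phase-locking definition, and k = 8 are assumptions based on the abstract.

```python
# Minimal LEiDA-style sketch (not the author's code): band-passed EEG of shape
# (n_channels, n_samples) -> instantaneous phases -> dynamic phase-locking (dPL)
# matrices -> leading eigenvectors -> k-means states.
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

def leading_eigenvectors(eeg):
    """eeg: array (n_channels, n_samples), already filtered/preprocessed."""
    phase = np.angle(hilbert(eeg, axis=1))          # instantaneous phase per channel
    n_ch, n_t = phase.shape
    eigvecs = np.empty((n_t, n_ch))
    for t in range(n_t):
        dphi = phase[:, t][:, None] - phase[:, t][None, :]
        dpl = np.cos(dphi)                          # dPL(t): pairwise phase alignment
        w, v = np.linalg.eigh(dpl)                  # symmetric matrix -> eigh
        lead = v[:, np.argmax(w)]                   # leading eigenvector V(t)
        if lead[np.argmax(np.abs(lead))] < 0:       # fix sign ambiguity
            lead = -lead
        eigvecs[t] = lead
    return eigvecs

# Example: pool eigenvectors across participants and cluster into k PL states.
rng = np.random.default_rng(0)
all_vecs = np.vstack([leading_eigenvectors(rng.standard_normal((64, 500)))
                      for _ in range(3)])           # toy stand-in for 47 participants
states = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_vecs)
print(np.bincount(states.labels_) / len(states.labels_))  # occurrence probability per state
```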
(This article belongs to the Special Issue The Application of EEG in Neurorehabilitation)
Figure 1. Applied methods (LEiDA, graph theory, and deep learning classification).
Figure 2. VR working memory task: selection and design schema.
Figure 3. Depiction of the trial process (checkered pattern inspired by [45]). Following the parameters of the conventional central cueing paradigm, the cue persisted on the screen for the duration of the trial (e.g., [46,47]). Panel (A) shows the social avatar cue, and Panel (B) shows the non-social stick cue. Timings, as depicted in the figure, were synchronized across cue types. The inter-trial interval was 1000 ms, during which a fixation cross was displayed. The experiment was a free-viewing study, allowing participants to move their eyes freely. Panel (C) shows the six possible left and right locations for the four encoding targets.
Figure 4. Extraction of EEG signal PL states. (A) For a given region, the EEG signal is first preprocessed. (B) The Hilbert transform is applied to obtain an analytic signal, whose phase can be represented over time at each TR (temporal resolution), i.e., the time interval between consecutive data samples, used to monitor dynamic connectivity alterations. (C) The dPL(t) matrix quantifies the degree of phase synchronization between each pair of areas. The dominant eigenvector of the dPL(t) matrix, denoted V(t), represents the primary direction of all phases; each element in V(t) corresponds to the projection of the phase of each region onto V(t). (D) The eigenvectors V(t) from all participants are combined and fed into a k-means clustering algorithm, which separates the data points into a predetermined number of groups, k. (E) Every cluster centroid symbolizes a recurring PL state. dPL refers to dynamic phase-locking. Process summary: 1. preprocessing → 2. Hilbert transform → 3. dynamic phase-locking matrix (dPL) → 4. leading eigenvector calculation → 5. k-means clustering → 6. identification of recurrent phase-locking (PL) states.
Figure 5. Repertoire of functional network states assessed with LEiDA and their association with working memory. For a clustering solution of k = 8, PL state #7 is significantly correlated with enhanced working memory scores (p = 0.0156; the asterisk marks the significant p-value), highlighted in red in the row of probabilities. Error bars represent the standard error of the mean across all 47 participants. These results underscore the role of dynamic functional connectivity, clustered into 8 states, in understanding the neural underpinnings of working memory, because the clustered states and their connectivity represent the dynamic functional connectivity during the working memory tasks. Heat maps of the connectivity matrix display phase-locking values (PLVs) between EEG channels under social and non-social cue conditions; warmer hues signify elevated PLVs, denoting enhanced functional connectivity among brain regions. Examining the variations in connectivity patterns between the two conditions may elucidate areas of increased synchronization in reaction to social cues, corroborating the hypothesis of cue-specific brain network activation (nodes represent electrode locations).
Figure 6. PL state 7 differs significantly between the social and non-social working memory dynamic response. (Top) The PL state is represented in cortical space, where functionally connected brain regions (spheres) are colored blue. (Middle) PL states are also represented as the outer product of Vc, a 64 × 64 matrix over the electrode regions. (Bottom) Significant (p-FDR < 0.05) differences in the percentage of occurrence between the social and non-social working memory dynamic response. Dots represent individual data points; dark bars indicate the standard error of the mean. Analysis via non-parametric permutation-based t-test (N = 47 participants); the asterisk marks the significant p-value.
Figure 7. Graphical representations of brain connectivity networks under social and non-social cue conditions. Each node signifies a brain region, while edges indicate substantial coherence-based connections between regions. Essential network metrics, such as clustering coefficient and degree distribution, are presented to highlight the structural disparities in network organization across conditions. A more compact or clustered network architecture indicates improved integration within specific brain networks in reaction to social stimuli.
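The Figure 5 caption above refers to heat maps of phase-locking values (PLVs) between EEG channels. A common PLV estimator can be computed directly from the Hilbert phases; the sketch below shows that standard definition on synthetic data and is not necessarily the exact estimator used in the article.

```python
# Hedged sketch of a standard PLV connectivity matrix between EEG channels.
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """eeg: (n_channels, n_samples) band-passed signals -> (n_channels, n_channels) PLV."""
    phase = np.angle(hilbert(eeg, axis=1))
    # PLV_ij = | mean_t exp(i * (phi_i(t) - phi_j(t))) |
    z = np.exp(1j * phase)
    return np.abs(z @ z.conj().T) / eeg.shape[1]

eeg = np.random.default_rng(1).standard_normal((64, 2000))  # toy 64-channel segment
print(plv_matrix(eeg).shape)  # (64, 64), values in [0, 1]; plotted as a heat map per condition
```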
21 pages, 3420 KiB  
Article
Keypoints-Based Multi-Cue Feature Fusion Network (MF-Net) for Action Recognition of ADHD Children in TOVA Assessment
by Wanyu Tang, Chao Shi, Yuanyuan Li, Zhonglan Tang, Gang Yang, Jing Zhang and Ling He
Bioengineering 2024, 11(12), 1210; https://doi.org/10.3390/bioengineering11121210 - 29 Nov 2024
Viewed by 537
Abstract
Attention deficit hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder among children and adolescents. Behavioral detection and analysis play a crucial role in ADHD diagnosis and assessment by objectively quantifying hyperactivity and impulsivity symptoms. Existing video-based action recognition algorithms focus on object or interpersonal interactions and may overlook ADHD-specific behaviors. Current keypoints-based algorithms, although effective in attenuating environmental interference, struggle to accurately model the sudden and irregular movements characteristic of ADHD children. This work proposes a novel keypoints-based system, the Multi-cue Feature Fusion Network (MF-Net), for recognizing actions and behaviors of children with ADHD during the Test of Variables of Attention (TOVA). The system aims to assess ADHD symptoms as described in the DSM-V by extracting features from human body and facial keypoints. For human body keypoints, we introduce the Multi-scale Features and Frame-Attention Adaptive Graph Convolutional Network (MSF-AGCN) to extract irregular and impulsive motion features. For facial keypoints, we transform data into images and employ MobileVitv2 for transfer learning to capture facial and head movement features. Ultimately, a feature fusion module is designed to fuse the features from both branches, yielding the final action category prediction. The system, evaluated on 3801 video samples of ADHD children, achieves 90.6% top-1 accuracy and 97.6% top-2 accuracy across six action categories. Additional validation experiments on public datasets NW-UCLA, NTU-2D, and AFEW-VA verify the network’s performance.
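The fusion module described above combines skeleton-branch and facial-branch features before the final action prediction. The sketch below illustrates a generic late-fusion head in PyTorch on random tensors; the layer sizes, dropout, and class count are assumptions, not MF-Net's actual implementation.

```python
# Illustrative PyTorch sketch of late fusion of two feature branches (body-keypoint
# features and facial-keypoint features); sizes and names are placeholders.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, body_dim=256, face_dim=256, n_classes=6):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(body_dim + face_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, body_feat, face_feat):
        # body_feat: (B, body_dim) pooled skeleton features, e.g. from a GCN branch
        # face_feat: (B, face_dim) pooled facial-keypoint features, e.g. from a CNN branch
        return self.fuse(torch.cat([body_feat, face_feat], dim=1))

model = TwoBranchFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 256))
print(logits.shape)                     # torch.Size([4, 6]) -> six action categories
top2 = logits.topk(2, dim=1).indices    # top-2 predictions, as in the reported top-2 accuracy
```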
Figure 1. Pie chart of ADHD prevalence statistics. The total sample size is 3,277,590, which includes data from various studies conducted between 2007 and 2022. The overall prevalence rate is 8%. The prevalence rates for each subtype are as follows: ADHD-I (3%), ADHD-HI (2.95%), and ADHD-C (2.44%).
Figure 2. The overall framework proposed in this paper. The system encompasses three components: keypoints data extraction, human body and facial keypoints feature extraction, and holistic integration of multi-cue feature representations. The feature extraction process of MF-Net comprises three branches: MSF-AGCN (joints) for modeling joint movements, MSF-AGCN (bones) for capturing skeletal deformations, and MobileVitv2 (face) dedicated to extracting facial keypoints features.
Figure 3. Two different graph connections. (a) New Graph: excludes the connections of the nose, mouth, and body. (b) Our Graph: includes additional connections, where the red lines represent the connections between the nose and shoulders, and the blue lines represent the connections between the mouth and body.
Figure 4. The structure of MSF-AGCN. The network architecture is built upon the Adaptive Graph Convolutional Network (AGCN) as its baseline, incorporating two novel components: the Multi-scale Spatial-Temporal Information Extraction Module and the Motion Attribute Encoder Module.
Figure 5. The architectures of the Multi-scale Spatial-Temporal Information Extraction Module and the Motion Attribute Encoder Module proposed in the MSF-AGCN.
Figure 6. t-SNE visualization of the original data, features extracted by AGCN, and features extracted by MSF-AGCN on the validation set. Points in six colors represent the feature data of the six categories; similar feature data points lie closer together, while dissimilar points lie farther apart. (a) Original data. (b) Features extracted by AGCN. (c) Features extracted by MSF-AGCN. Numbers 1–6 denote the following actions: 1, turning the head and looking around; 2, shaking body; 3, resting head on hand; 4, displaying facial expressions of inattention; 5, lying on the desk; 6, exhibiting no significant ADHD symptoms. The clustering effect of MSF-AGCN is significantly better than that of AGCN in the “Shaking body” category.
Figure 7. Attention heatmaps of different output layers of AGCN and MSF-AGCN during an ADHD child’s “sudden move while resting on his head” action. The red circle marks the area of enhanced attention compared with the original AGCN. The designed modules effectively help the network focus on the relevant movement areas.
Figure 8. Bar charts for the Motion Attribute Encoder Module position and different structural graph designs. (a) Accuracy with the Motion Attribute Encoder Module placed at different stages of the layers. (b) Accuracy of different networks combined with different graphs.
Figure 9. Bar charts of the top-1 and top-2 counts in the six categories (as labeled in Figure 6) for (a) AGCN, (b) MSF-AGCN, and (c) MF-Net on the validation set. MF-Net exhibits substantial improvements in top-1 and top-2 counts compared with AGCN.
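The Figure 6 comparison above relies on t-SNE to visualize how well extracted features separate the six action classes. A minimal scikit-learn version of such a plot is sketched below on placeholder features, not the paper's validation data.

```python
# Minimal t-SNE sketch for visualizing class separability of clip-level features.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.standard_normal((600, 256))   # placeholder: one 256-d feature per clip
labels = rng.integers(0, 6, size=600)        # six action categories

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE of clip-level features (placeholder data)")
plt.savefig("tsne_features.png", dpi=150)
```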
20 pages, 25584 KiB  
Article
LIDeepDet: Deepfake Detection via Image Decomposition and Advanced Lighting Information Analysis
by Zhimao Lai, Jicheng Li, Chuntao Wang, Jianhua Wu and Donghua Jiang
Electronics 2024, 13(22), 4466; https://doi.org/10.3390/electronics13224466 - 14 Nov 2024
Viewed by 745
Abstract
The proliferation of AI-generated content (AIGC) has empowered non-experts to create highly realistic Deepfake images and videos using user-friendly software, posing significant challenges to the legal system, particularly in criminal investigations, court proceedings, and accident analyses. The absence of reliable Deepfake verification methods threatens the integrity of legal processes. In response, researchers have explored deep forgery detection, proposing various forensic techniques. However, the swift evolution of deep forgery creation and the limited generalizability of current detection methods impede practical application. We introduce a new deep forgery detection method that utilizes image decomposition and lighting inconsistency. By exploiting inherent discrepancies in imaging environments between genuine and fabricated images, this method extracts robust lighting cues and mitigates disturbances from environmental factors, revealing deeper-level alterations. A crucial element is the lighting information feature extractor, designed according to color constancy principles, to identify inconsistencies in lighting conditions. To address lighting variations, we employ a face material feature extractor using Pattern of Local Gravitational Force (PLGF), which selectively processes image patterns with defined convolutional masks to isolate and focus on reflectance coefficients, rich in textural details essential for forgery detection. Utilizing the Lambertian lighting model, we generate lighting direction vectors across frames to provide temporal context for detection. This framework processes RGB images, face reflectance maps, lighting features, and lighting direction vectors as multi-channel inputs, applying a cross-attention mechanism at the feature level to enhance detection accuracy and adaptability. Experimental results show that our proposed method performs exceptionally well and is widely applicable across multiple datasets, underscoring its importance in advancing deep forgery detection.
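The abstract above derives lighting direction vectors from a Lambertian model. Under that model, observed intensity is proportional to the dot product of the surface normal and the light direction, so the direction can be recovered by least squares; the sketch below demonstrates this on synthetic normals and is not the paper's face-specific pipeline.

```python
# Hedged sketch of estimating a lighting direction under a Lambertian model:
# I ≈ albedo * (N @ L), so L can be recovered from normals and intensities by least squares.
import numpy as np

def estimate_light_direction(normals, intensities):
    """normals: (n, 3) unit surface normals; intensities: (n,) observed brightness."""
    # Solve min_L || normals @ L - intensities ||^2 (albedo folded into the scale of L).
    L, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return L / (np.linalg.norm(L) + 1e-12)          # unit lighting direction

rng = np.random.default_rng(0)
normals = rng.standard_normal((500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_L = np.array([0.3, 0.5, 0.81])
intensities = np.clip(normals @ true_L, 0, None)    # Lambertian shading, clamped at 0
est_L = estimate_light_direction(normals, intensities)
angle = np.degrees(np.arccos(np.clip(est_L @ true_L / np.linalg.norm(true_L), -1, 1)))
print(f"angular error: {angle:.1f} deg")  # comparing such angles across frames/faces flags inconsistency
```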
(This article belongs to the Special Issue Deep Learning Approach for Secure and Trustworthy Biometric System)
Figure 1. Imaging process of a digital image.
Figure 2. Process of image generation using generative adversarial networks.
Figure 3. Architecture of the proposed method.
Figure 4. Illustration of artifacts in deep learning-generated faces. The right-most image shows over-rendering around the nose area.
Figure 5. Illustration of inconsistent iris colors in generated faces.
Figure 6. Visualization of illumination maps for real images and four forgery methods from the FF++ database.
Figure 7. Face material map after illumination normalization. Abnormal traces in the eye and mouth regions are more noticeable.
Figure 8. Visualization of face material maps for the facial regions in real images and four forgery methods from the FF++ database for the same frame.
Figure 9. Three-dimensional lighting direction vector.
Figure 10. Two-dimensional lighting direction vector.
Figure 11. Calculation process of the lighting direction.
Figure 12. Calculation of the angle of the lighting direction.
Figure 13. Comparison of lighting direction angles between real videos and their corresponding Deepfake videos.
18 pages, 7263 KiB  
Article
MPCTrans: Multi-Perspective Cue-Aware Joint Relationship Representation for 3D Hand Pose Estimation via Swin Transformer
by Xiangan Wan, Jianping Ju, Jianying Tang, Mingyu Lin, Ning Rao, Deng Chen, Tingting Liu, Jing Li, Fan Bian and Nicholas Xiong
Sensors 2024, 24(21), 7029; https://doi.org/10.3390/s24217029 - 31 Oct 2024
Viewed by 765
Abstract
The objective of 3D hand pose estimation (HPE) based on depth images is to accurately locate and predict keypoints of the hand. However, this task remains challenging because of the variations in hand appearance from different viewpoints and severe occlusions. To effectively address these challenges, this study introduces a novel approach, called the multi-perspective cue-aware joint relationship representation for 3D HPE via the Swin Transformer (MPCTrans, for short). This approach is designed to learn multi-perspective cues and essential information from hand depth images. To achieve this goal, three novel modules are proposed to utilize features from multiple virtual views of the hand, namely, the adaptive virtual multi-viewpoint (AVM), hierarchy feature estimation (HFE), and virtual viewpoint evaluation (VVE) modules. The AVM module adaptively adjusts the angles of the virtual viewpoint and learns the ideal virtual viewpoint to generate informative multiple virtual views. The HFE module estimates hand keypoints through hierarchical feature extraction. The VVE module evaluates virtual viewpoints by using chained high-level functions from the HFE module. The Transformer is used as a backbone to extract the long-range semantic joint relationships in hand depth images. Extensive experiments demonstrate that the MPCTrans model achieves state-of-the-art performance on four challenging benchmark datasets.
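The AVM module described above renders a hand depth image from multiple virtual viewpoints. The NumPy sketch below illustrates the underlying geometry only: back-project a depth map to a point cloud, rotate it, and re-project it. The intrinsics, the fixed 20° rotation, and the toy depth patch are placeholders rather than the learned viewpoints.

```python
# Illustrative sketch of generating one virtual view of a depth image.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # keep valid (non-zero-depth) points

def points_to_depth(pts, R, fx, fy, cx, cy, shape):
    pts = pts @ R.T                                 # rotate into the virtual camera frame
    depth = np.zeros(shape)
    u = np.round(pts[:, 0] * fx / pts[:, 2] + cx).astype(int)
    v = np.round(pts[:, 1] * fy / pts[:, 2] + cy).astype(int)
    ok = (pts[:, 2] > 0) & (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    depth[v[ok], u[ok]] = pts[ok, 2]                # simple z-write (no proper z-buffer)
    return depth

theta = np.deg2rad(20.0)                            # one example virtual viewpoint angle
R_y = np.array([[np.cos(theta), 0, np.sin(theta)],
                [0, 1, 0],
                [-np.sin(theta), 0, np.cos(theta)]])
depth = np.full((128, 128), 0.6)                    # toy flat depth patch at 0.6 m
pts = depth_to_points(depth, fx=475.0, fy=475.0, cx=64.0, cy=64.0)
virtual = points_to_depth(pts, R_y, 475.0, 475.0, 64.0, 64.0, depth.shape)
print(virtual.shape, float(virtual.max()))
```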
(This article belongs to the Section Intelligent Sensors)
Figure 1. Existing challenges in HPE: (a–c) serious occlusions and (d–f) variations in hand appearance from different viewpoints. Some or even most parts of the hand are missing in these scenarios, resulting in difficulties in HPE. The top row shows RGB images, while the bottom row presents the corresponding depth images.
Figure 2. Two key characteristics revealed from the observation of hand images. Characteristic I: (a,b) represent the inherent relationships between hand joints. Characteristic II: (c,d) indicate the multi-perspective cues of the hand. These relationships and cues can be computed using a self-attention mechanism.
Figure 3. Overview of the MPCTrans model. First, hand depth images are converted into 3D point clouds via the AVM module, which employs adaptive learning at virtual viewpoints to generate optimal virtual multi-view depth images. Second, these adaptive virtual multi-view depth images are partitioned into windows, with attention mechanisms applied only within these partitions; linear projection and position embedding techniques are utilized to transform patches into 1D vectors. Third, three HFE modules leverage information from lower layers and the output features from the final stage to estimate hand poses for each view. Fourth, the feature maps from the three HFE modules are concatenated, and the VVE module assesses this concatenated feature map to assign a score to each view. Finally, the pose estimates and results from each view are fused to produce the final hand pose prediction.
Figure 4. Illustration of the AVM module. Original depth images may not always provide the best perspective for pose estimation. Our method adaptively learns M optimal virtual views from M initial virtual views, where M is set to 25.
Figure 5. Attention visualization of different perspectives in the VVE module. Each perspective contributes differently to HPE, resulting in different attention visualizations in the multi-head attention of the VVE module. Consequently, each perspective has a different rating.
Figure 6. Comparison of the MPCTrans model with state-of-the-art methods (DeepPrior++ [5], HandPointNet [16], DenseReg [6], Point-to-Point [10], A2J [7], V2V [11], VVS [20]) on the NYU and ICVL datasets. (a,b) Mean joint error per hand joint and percentage of successful frames over different error thresholds on the NYU dataset. (c,d) Mean joint error per hand joint and percentage of successful frames across various error thresholds on the ICVL dataset.
Figure 7. Viewpoint initialization scheme. (a–d) show the initial virtual viewpoint positions for 4, 9, 16, and 25 adaptive virtual multi-views, respectively.
Figure 8. Visualization of different numbers of adapted virtual multi-views: (a–d) show the 4, 9, 16, and 25 adapted virtual multi-views, respectively, learned from the initial virtual viewpoint positions shown in Figure 7.
Figure 9. Comparison of the visualization results of the MPCTrans model with different numbers of adaptive virtual multi-views on the ICVL dataset. “ours-4views”, “ours-9views”, “ours-16views”, and “ours-25views” denote the results of the model with 4, 9, 16, and 25 adaptive virtual multi-views, respectively.
Figure 10. Comparison of the visualization results of the MPCTrans model with different numbers of adaptive virtual multi-views on the NYU dataset.
18 pages, 13017 KiB  
Article
DeployFusion: A Deployable Monocular 3D Object Detection with Multi-Sensor Information Fusion in BEV for Edge Devices
by Fei Huang, Shengshu Liu, Guangqian Zhang, Bingsen Hao, Yangkai Xiang and Kun Yuan
Sensors 2024, 24(21), 7007; https://doi.org/10.3390/s24217007 - 31 Oct 2024
Viewed by 733
Abstract
To address the challenges of suboptimal remote detection and significant computational burden in existing multi-sensor information fusion 3D object detection methods, a novel approach based on Bird’s-Eye View (BEV) is proposed. This method utilizes an enhanced lightweight EdgeNeXt feature extraction network, incorporating residual branches to address network degradation caused by the excessive depth of STDA encoding blocks. Meanwhile, deformable convolution is used to expand the receptive field and reduce computational complexity. The feature fusion module constructs a two-stage fusion network to optimize the fusion and alignment of multi-sensor features. This network aligns image features to supplement environmental information with point cloud features, thereby obtaining the final BEV features. Additionally, a Transformer decoder that emphasizes global spatial cues is employed to process the BEV feature sequence, enabling precise detection of distant small objects. Experimental results demonstrate that this method surpasses the baseline network, with improvements of 4.5% in the NuScenes detection score and 5.5% in average precision for detected objects. Finally, the model is converted and accelerated using TensorRT tools for deployment on mobile devices, achieving an inference time of 138 ms per frame on the Jetson Orin NX embedded platform, thus enabling real-time 3D object detection.
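The deployment step above converts the model with TensorRT and reports per-frame latency on a Jetson Orin NX. The sketch below shows a generic version of that path for a placeholder model: ONNX export, an offline engine build (for example with NVIDIA's trtexec tool), and a simple latency measurement; it is not the authors' deployment script.

```python
# Hedged sketch of a typical edge-deployment path: PyTorch -> ONNX -> TensorRT engine,
# plus a baseline per-frame latency measurement. The tiny model is a placeholder.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 10, 1)).eval()
dummy = torch.randn(1, 3, 256, 256)

# 1) Export to ONNX (the common intermediate format for TensorRT conversion).
torch.onnx.export(model, dummy, "model.onnx", input_names=["image"],
                  output_names=["logits"], opset_version=17)

# 2) Engine conversion would then happen offline, for example:
#    trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
#    (shown for illustration; consult the TensorRT docs for the target Jetson setup)

# 3) Measure average per-frame latency of the (here: PyTorch) model as a baseline.
with torch.no_grad():
    for _ in range(5):                      # warm-up iterations
        model(dummy)
    t0 = time.perf_counter()
    n = 50
    for _ in range(n):
        model(dummy)
    print(f"avg latency: {(time.perf_counter() - t0) / n * 1000:.1f} ms/frame")
```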
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)
Figure 1. Overall framework of the network. DeployFusion introduces an improved EdgeNeXt feature extraction network, using residual branches to address degradation and deformable convolutions to increase the receptive field and reduce complexity. The feature fusion module aligns image and point cloud features to generate optimized BEV features. A Transformer decoder is used to process the sequence of BEV features, enabling accurate identification of small distant objects.
Figure 2. Comparison of convolutional encoding blocks. (a) DW Encode. (b) DDW Encode.
Figure 3. Feature channel separation attention.
Figure 4. Feature channel separation attention.
Figure 5. Transposed attention.
Figure 6. Comparison of standard and deformable convolution kernels in receptive field regions. (a) Receptive field area of the standard convolutional kernel. (b) Receptive field area of the deformable convolutional kernel.
Figure 7. Experimental results of dynamic loss and NDS. (a) Dynamic loss graph. (b) Dynamic NDS score graph.
Figure 8. Comparison of inference results between EdgeNeXt_DCN and other fusion networks.
Figure 9. Comparison of detection accuracy of different feature fusion networks. (a) Primitive feature extraction network. (b) EdgeNeXt_DCN feature extraction network.
Figure 10. Results of object detection for each category.
Figure 11. Comparison of detection results from the multi-sensor fusion detection method in BEV.
Figure 12. Performance of object detection in BEV with this method. (a) Scene 1. (b) Scene 2.
Figure 13. Jetson Orin NX mobile device.
Figure 14. Workflow of TensorRT.
Figure 15. Comparison of computation time before and after operator fusion.
Figure 16. Comparison of detection methods at various quantization and accuracy levels.
Figure 17. Comparison of inference time before and after model quantization in detection.
Figure 18. Detection results of the method on mobile devices.
25 pages, 3181 KiB  
Review
Smart Nanocomposite Hydrogels as Next-Generation Therapeutic and Diagnostic Solutions
by Anna Valentino, Sorur Yazdanpanah, Raffaele Conte, Anna Calarco and Gianfranco Peluso
Gels 2024, 10(11), 689; https://doi.org/10.3390/gels10110689 - 24 Oct 2024
Viewed by 827
Abstract
Stimuli-responsive nanocomposite gels combine the unique properties of hydrogels with those of nanoparticles, thus avoiding the suboptimal results of single components and creating versatile, multi-functional platforms for therapeutic and diagnostic applications. These hybrid materials are engineered to respond to various internal and external stimuli, such as temperature, pH, light, magnetic fields, and enzymatic activity, allowing precise control over drug release, tissue regeneration, and biosensing. Their responsiveness to environmental cues permits personalized medicine approaches, providing dynamic control over therapeutic interventions and real-time diagnostic capabilities. This review explores recent advances in stimuli-responsive hybrid gels’ synthesis and application, including drug delivery, tissue engineering, and diagnostics. Overall, these platforms have significant clinical potential, and future research is expected to lead to unique solutions to address unmet medical needs.
(This article belongs to the Special Issue Designing Hydrogels for Sustained Delivery of Therapeutic Agents)
Graphical abstract.
Figure 1. Releasing mechanism for stimuli-responsive hydrogels.
Figure 2. Schematic illustration of the biomedical applications of internal stimuli-responsive hydrogels.
Figure 3. Schematic illustration of the biomedical applications of external stimuli-responsive hydrogels.
Figure 4. Example of a hydrogel modified with light-activatable cell-adhesive motifs. (a,b) Fluorescence images of live–dead staining of L929 fibroblasts encapsulated in PEG hydrogels modified with cyclo[RGDfC], with (a) and without (b) UCNP-PMAOs (5 mg/mL). Cells were labelled 24 h after irradiation with a 974 nm laser (10 W/cm²) for 12 min; green indicates living cells and red indicates dead cells. Scale bar: 50 μm. (c) Quantification of viability of L929 cells in (a,b). (d,e) Z-stack fluorescence images showing the morphology of L929 cultured in cyclo[RGD(PMNB)fC]-modified PEG hydrogel containing UCNP-PMAOs (5 mg/mL), with (d) or without (e) NIR laser exposure; green indicates living cells. (f) Quantification of the aspect ratio (the ratio of the longest to shortest dimension) of L929 fibroblasts from (d,e); mean ± s.d., n = 10 cells, * p < 0.05. (g,h) Z-stack fluorescence images of Human Umbilical Vein Endothelial Cells (HUVECs) within cyclo[RGD(DMNPB)fC]-modified PEG hydrogels containing UCNP-PMAOs (5 mg/mL) with (g) and without (h) NIR exposure. Nuclei were stained with DAPI (blue), actin fibers with phalloidin (green), and the cell body with PECAM-1 (red). (i) Quantification of vascular area coverage percentage for (g,h); mean ± s.d., n ≥ 9 ROI with totals of 200–500 cells analyzed, * p < 0.05. Reproduced from Ref. [82] with permission from The Royal Society of Chemistry.
Figure 5. Example of a magnetic-to-heat stimulus. (a) Snapshots and IR thermal images of light-responsive shape recovery processes; (b) snapshots of magnetic- and light-responsive controlled reconfiguration; (c) evolution of bending behavior induced by magnetic response; (d) evolution of bending behavior induced by light response. Reproduced from Ref. [112] with permission from The Royal Society of Chemistry.
16 pages, 10997 KiB  
Article
Non-Intrusive Water Surface Velocity Measurement Based on Deep Learning
by Guocheng An, Tiantian Du, Jin He and Yanwei Zhang
Water 2024, 16(19), 2784; https://doi.org/10.3390/w16192784 - 30 Sep 2024
Viewed by 857
Abstract
Accurate assessment of water surface velocity (WSV) is essential for flood prevention, disaster mitigation, and erosion control within hydrological monitoring. Existing image-based velocimetry techniques largely depend on correlation principles, requiring users to input and adjust parameters to achieve reliable results, which poses challenges for users lacking relevant expertise. This study presents RivVideoFlow, a user-friendly, rapid, and precise method for WSV. RivVideoFlow combines two-dimensional and three-dimensional orthorectification based on Ground Control Points (GCPs) with a deep learning-based multi-frame optical flow estimation algorithm named VideoFlow, which integrates temporal cues. The orthorectification process employs a homography matrix to convert images from various angles into a top-down view, aligning the image coordinates with actual geographical coordinates. VideoFlow achieves superior accuracy and strong dataset generalization compared to two-frame RAFT models due to its more effective capture of flow velocity continuity over time, leading to enhanced stability in velocity measurements. The algorithm has been validated on a flood simulation experimental platform, in outdoor settings, and with synthetic river videos. Results demonstrate that RivVideoFlow can robustly estimate surface velocity under various camera perspectives, enabling continuous real-time dynamic measurement of the entire flow field. Moreover, RivVideoFlow has demonstrated superior performance in low, medium, and high flow velocity scenarios, especially in high-velocity conditions where it achieves high measurement precision. This method provides a more effective solution for hydrological monitoring.
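The orthorectification described above maps oblique frames to a top-down view using a homography estimated from Ground Control Points, after which pixel displacements convert to metric velocities via the ground sampling distance and frame rate. The OpenCV sketch below uses made-up GCP coordinates and scale purely for illustration.

```python
# Minimal sketch of GCP-based orthorectification and velocity scaling (placeholder values).
import cv2
import numpy as np

# Pixel coordinates of four GCPs in the oblique camera frame (made-up values).
img_pts = np.float32([[412, 310], [1180, 298], [1320, 690], [260, 705]])
# The same GCPs in ground coordinates (metres), scaled to output pixels.
px_per_m = 50.0                                     # assumed ground sampling: 2 cm/pixel
world_m = np.float32([[0, 0], [12, 0], [12, 6], [0, 6]])
dst_pts = world_m * px_per_m

H, _ = cv2.findHomography(img_pts, dst_pts)
frame = np.zeros((720, 1280, 3), np.uint8)          # stands in for a video frame
ortho = cv2.warpPerspective(frame, H, (int(12 * px_per_m), int(6 * px_per_m)))

# Optical-flow displacements (pixels/frame) on the orthorectified frames convert to
# surface velocity with the known scale and frame rate.
fps = 25.0
flow_px = 3.2                                       # example displacement magnitude
print(f"{flow_px / px_per_m * fps:.2f} m/s")
```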
Figure 1. Coordinate transformation using camera intrinsics and extrinsics.
Figure 2. Types of radial distortion. (a) No distortion; (b) barrel distortion; (c) pincushion distortion.
Figure 3. Orthorectification impact on Braque River imagery. (a) A sample frame of the Braque River captured by a camera mounted on a bridge; (b) the orthorectified version of the image in (a). The plus symbols depict the three-dimensional coordinates of the four Ground Control Points (GCPs) in the world coordinate system.
Figure 4. Tri-frame optical flow framework of VideoFlow.
Figure 5. Experimental setup. (a) Three sets of imaging devices with different resolutions, each consisting of one vertically oriented and one obliquely oriented camera; (b) on-site installation of the experimental apparatus.
Figure 6. Distribution of GCPs.
Figure 7. (a) Aerial view of the water surface in the flume captured by the overhead camera; (b) surface velocity distribution obtained using the Fudda-LSPIV 1.7.3 software.
Figure 8. (a) A sample frame of aerial video of the Freiberger Mulde River showing the ADCP-surveyed transect CS1; (b) surface velocity measurements along CS1 using ADCP and different optical flow methods.
Figure 9. Averaged surface velocity distribution vector maps. (a) Estimated using VideoFlow; (b) estimated using RivVideoFlow.
Figure 10. Comparative analysis of cross-sectional velocity distributions from image-based flow velocity measurement techniques against ADCP data at Castor River. (a) Sample frame from the stationary shore-based camera at Castor River. (b) Image-based flow velocity measurements include PIV [3], Hydro-STIV [36], the traditional Farneback optical flow method [35], and deep learning-based optical flow methods such as FlowNet2.0 [12] and RivVideoFlow.
Figure 11. RivVideoFlow performance under high-flow conditions in the Hurunui River. (a) A sample frame from the aerial video of the Hurunui River. (b) Averaged surface velocity distribution vector map of the Hurunui River. (c) Surface velocity measurements of the Hurunui River obtained by the RivVideoFlow algorithm.
Figure 12. Visualization of synthetic river flow videos. (a) Synthetic river videos; (b) RAFT; (c) RivVideoFlow. The red arrow indicates the direction of river flow. The upper right corner of panel (c) for Scene 1 is color-coded for the flow field, with the color indicating the direction of the flow field and the shade indicating flow magnitude.
21 pages, 9523 KiB  
Article
A Hybrid Framework for Referring Image Segmentation: Dual-Decoder Model with SAM Complementation
by Haoyuan Chen, Sihang Zhou, Kuan Li, Jianping Yin and Jian Huang
Mathematics 2024, 12(19), 3061; https://doi.org/10.3390/math12193061 - 30 Sep 2024
Viewed by 856
Abstract
In the realm of human–robot interaction, the integration of visual and verbal cues has become increasingly significant. This paper focuses on the challenges and advancements in referring image segmentation (RIS), a task that involves segmenting images based on textual descriptions. Traditional approaches to RIS have primarily focused on pixel-level classification. These methods, although effective, often overlook the interconnectedness of pixels, which can be crucial for interpreting complex visual scenes. Furthermore, while the PolyFormer model has shown impressive performance in RIS, its large number of parameters and high training data requirements pose significant challenges. These factors restrict its adaptability and optimization on standard consumer hardware, hindering further enhancements in subsequent research. Addressing these issues, our study introduces a novel two-branch decoder framework with SAM (segment anything model) for RIS. This framework incorporates an MLP decoder and a KAN decoder with a multi-scale feature fusion module, enhancing the model’s capacity to discern fine details within images. The framework’s robustness is further bolstered by an ensemble learning strategy that consolidates the insights from both the MLP and KAN decoder branches. More importantly, we collect the segmentation target edge coordinates and bounding box coordinates as input cues for the SAM model. This strategy leverages SAM’s zero-sample learning capabilities to refine and optimize the segmentation outcomes. Our experimental findings, based on the widely recognized RefCOCO, RefCOCO+, and RefCOCOg datasets, confirm the effectiveness of this method. The results not only achieve state-of-the-art (SOTA) performance in segmentation but are also supported by ablation studies that highlight the contributions of each component to the overall improvement in performance.
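The abstract above feeds segmentation-target edge and bounding-box coordinates to SAM as prompts. The NumPy sketch below illustrates one way such prompts could be derived from the two decoder branches' masks; the agreement-based ensembling rule and the synthetic masks are assumptions, not the paper's exact strategy.

```python
# Illustrative sketch of building box and point prompts from two predicted masks.
import numpy as np

rng = np.random.default_rng(0)
mlp_mask = np.zeros((200, 200), bool)
kan_mask = np.zeros((200, 200), bool)
mlp_mask[60:140, 50:120] = True                     # toy predictions from the two branches
kan_mask[65:150, 55:125] = True

fused = mlp_mask & kan_mask                         # simple agreement-based ensemble
ys, xs = np.nonzero(fused)
box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])   # (x0, y0, x1, y1) prompt

# Sample a few positive point prompts from inside the fused mask.
idx = rng.choice(len(xs), size=3, replace=False)
points = np.stack([xs[idx], ys[idx]], axis=1)       # (x, y) pairs, label = foreground

print("box prompt:", box)
print("point prompts:\n", points)
# These prompts would then be passed to SAM (e.g. a predictor call that accepts a box
# and point coordinates/labels) to obtain the refined segmentation mask.
```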
Figure 1. Illustration of the referring image segmentation task, which in our framework outputs the bounding box and segmentation masks.
Figure 2. Examples of segmentation errors from PolyFormer. These images exhibit the following characteristics: (1) the predicted position of the segmentation target is basically correct, but the edge point coordinates of the segmentation target cannot be successfully predicted; (2) the Euclidean metric between the edge point coordinates of the predicted segmentation target is very small.
Figure 3. The overall structure of our hybrid framework for referring image segmentation: a dual-decoder model with SAM complementation.
Figure 4. Schematic of KAN decoder branch training. During training, we freeze all parameters except for the KAN decoder branch and only use the KAN decoder predictions as output.
Figure 5. The framework of the KAN decoder. It uses a different prediction network and combines more feature information in the decoder input section.
Figure 6. The framework of the SAM-based segmentation completion module. It selects high-quality information from the dual-decoder prediction results as the prompt input for SAM and obtains a new prediction mask based on this prompt.
Figure 7. Segmentation results with noise predicted by SAM. The inputs to SAM are the top-left and bottom-right coordinates of the segmentation target bounding box, or the bounding box coordinates together with the coordinates of randomly generated positive points. The green box is the bounding box predicted by our framework’s decoder, the green points are the randomly generated positive points, and the red arrow points to the segmentation noise predicted by SAM.
Figure 8. Dis_N is the average Euclidean metric of the coordinate points of the edge of the predicted segmentation target in the image, and mIoU is the mIoU of the corresponding image prediction result against Ground Truth. When Dis_N is less than 4, most of the predicted segmentation results are wrong.
Figure 9. Result comparison of PolyFormer and our framework on the RefCOCO val set. The green box represents the segmentation target bounding box, the green points represent the positive points input to SAM, the red points represent the segmentation target edge coordinates predicted by the decoder, and the red mask represents the final predicted segmentation mask of our framework. Our framework can perform accurate segmentation complementation on the decoder predictions.
Figure 10. Result comparison of PolyFormer and our framework on the RefCOCO+ val set; annotations as in Figure 9.
Figure 11. Result comparison of PolyFormer and our framework on the RefCOCOg val set; annotations as in Figure 9.
21 pages, 5587 KiB  
Article
Dual-Stream Feature Collaboration Perception Network for Salient Object Detection in Remote Sensing Images
by Hongli Li, Xuhui Chen, Liye Mei and Wei Yang
Electronics 2024, 13(18), 3755; https://doi.org/10.3390/electronics13183755 - 21 Sep 2024
Viewed by 1012
Abstract
As the core technology of artificial intelligence, salient object detection (SOD) is an important approach to improve the analysis efficiency of remote sensing images by intelligently identifying key areas in images. However, existing methods that rely on a single strategy, convolution or Transformer, exhibit certain limitations in complex remote sensing scenarios. Therefore, we developed a Dual-Stream Feature Collaboration Perception Network (DCPNet) to enable the collaborative work and feature complementation of Transformer and CNN. First, we adopted a dual-branch feature extractor with strong local bias and long-range dependence characteristics to perform multi-scale feature extraction from remote sensing images. Then, we presented a Multi-path Complementary-aware Interaction Module (MCIM) to refine and fuse the feature representations of salient targets from the global and local branches, achieving fine-grained fusion and interactive alignment of dual-branch features. Finally, we proposed a Feature Weighting Balance Module (FWBM) to balance global and local features, preventing the model from overemphasizing global information at the expense of local details or from inadequately mining global cues due to excessive focus on local information. Extensive experiments on the EORSSD and ORSSD datasets demonstrated that DCPNet outperformed the current 19 state-of-the-art methods.
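The Feature Weighting Balance Module described above balances global (Transformer) and local (CNN) features. The PyTorch sketch below shows a generic gated weighting of two feature maps; the channel sizes and gating design are assumptions rather than DCPNet's actual FWBM.

```python
# Hedged sketch of learned weighting between a global and a local feature map.
import torch
import torch.nn as nn

class WeightedBalanceFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 1),              # one logit per branch
        )

    def forward(self, global_feat, local_feat):
        # global_feat, local_feat: (B, C, H, W) from the Transformer and CNN branches
        w = torch.softmax(self.gate(torch.cat([global_feat, local_feat], dim=1)), dim=1)
        return w[:, 0:1] * global_feat + w[:, 1:2] * local_feat

fuse = WeightedBalanceFusion()
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)    # torch.Size([2, 64, 32, 32])
```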
Figure 1. The network framework of DCPNet. DCPNet adopts a classic encoder–decoder architecture comprising a dual-stream feature extractor, a Multi-path Complementary-aware Interaction Module (MCIM), a Feature Weighting Balance Module (FWBM), and a decoder.
Figure 2. Architecture of the MCIM. MCIM uses spatial interaction on low-level feature maps and channel interaction on high-level feature maps.
Figure 3. Structure diagram of the FWBM. The blue feature map represents features from the global branch, and the red one represents features from the local branch.
Figure 4. Visual comparison between our method and other methods in different scenarios. Red indicates false positives, and blue indicates false negatives.
Figure 5. Comparison of PR curves (a,c) and F-measure curves (b,d) on the two datasets.
Figure 6. Box-chart comparison of eight metrics obtained from the 600 test samples of EORSSD. (a–h) show the box-plot distributions for the eight metrics, respectively.
Figure 7. Feature visualization at different stages.
Figure 8. Comparison of EORSSD (a) and ORSSD (b) with and without data augmentation.
15 pages, 1305 KiB  
Article
The Tumor Suppressor Par-4 Regulates Adipogenesis by Transcriptional Repression of PPARγ
by James Sledziona, Ravshan Burikhanov, Nathalia Araujo, Jieyun Jiang, Nikhil Hebbar and Vivek M. Rangnekar
Cells 2024, 13(17), 1495; https://doi.org/10.3390/cells13171495 - 5 Sep 2024
Viewed by 1192
Abstract
Prostate apoptosis response-4 (Par-4, also known as PAWR) is a ubiquitously expressed tumor suppressor protein that induces apoptosis selectively in cancer cells, while leaving normal cells unaffected. Our previous studies indicated that genetic loss of Par-4 promoted hepatic steatosis, adiposity, and insulin-resistance in chow-fed mice. Moreover, low plasma levels of Par-4 are associated with obesity in human subjects. The mechanisms underlying obesity in rodents and humans are multi-faceted, and those associated with adipogenesis can be functionally resolved in cell cultures. We therefore used pluripotent mouse embryonic fibroblasts (MEFs) or preadipocyte cell lines responsive to adipocyte differentiation cues to determine the potential role of Par-4 in adipocytes. We report that pluripotent MEFs from Par-4−/− mice underwent rapid differentiation to mature adipocytes with an increase in lipid droplet accumulation relative to MEFs from Par-4+/+ mice. Knockdown of Par-4 in 3T3-L1 pre-adipocyte cultures by RNA-interference induced rapid differentiation to mature adipocytes. Interestingly, basal expression of PPARγ, a master regulator of de novo lipid synthesis and adipogenesis, was induced during adipogenesis in the cell lines, and PPARγ induction and adipogenesis caused by Par-4 loss was reversed by replenishment of Par-4. Mechanistically, Par-4 downregulates PPARγ expression by directly binding to its upstream promoter, as judged by chromatin immunoprecipitation and luciferase-reporter studies. Thus, Par-4 transcriptionally suppresses the PPARγ promoter to regulate adipogenesis.
(This article belongs to the Special Issue The Role of PPARs in Disease - Volume III)
Show Figures

Figure 1

Figure 1
Adipogenesis and PPARγ expression are inversely associated with Par-4 status. (A) Loss of Par-4 in MEFs enhances adipogenesis. Par-4+/+ and Par-4−/− MEFs were grown in adipocyte differentiation media and subjected to Oil Red O (ORO) staining (left panel). Percentage of ORO-positive cells is shown (right panel). (B) Adipogenesis of 3T3-L1 cells was confirmed by growing them in adipocyte differentiation (AD) medium or control (Con) medium and performing ORO staining. Percentage of ORO-positive cells is shown. (C) Adipogenesis in 3T3-L1 cells is accelerated by Par-4 knockdown and prevented by PPARγ knockdown. Preadipocyte 3T3-L1 cells were transfected with siRNAs for Par-4 or PPARγ, or co-transfected with both siRNAs, and treated with adipogenesis differentiation medium. As a control, cells were treated with scrambled siRNA and maintained in adipogenesis differentiation medium (left panels). After staining the cells with ORO, the percentage of cells with oil droplets was calculated (middle panel). Knockdown of Par-4 and PPARγ was confirmed by Western blot analysis (right panel). (D) Adipogenesis in 3T3-L1 cells accelerated by Par-4 knockdown is reversed by Par-4 re-expression. 3T3-L1 cells were transfected with siRNA duplexes for mouse Par-4 or control siRNA and then infected with rat Par-4-expressing adenovirus (P) or control GFP adenovirus (G). The cells were grown in differentiation medium, and adipogenesis was examined via Oil Red O staining (top left panels) and quantified (top right panel). Western blot analysis confirmed Par-4 siRNA knockdown and Par-4 adenoviral expression (bottom panel). (E) Par-4 protein expression is downregulated during adipogenesis. Whole-cell extracts were prepared from Par-4+/+ and Par-4−/− MEFs (left panel) or 3T3-L1 cells (right panel) grown in normal growth medium (control, C) or in adipocyte differentiation medium (AD) for up to 10 days and subjected to Western blot analysis. (A,C,D) Scale bar, 200 μm. (A–D) Mean ± SEM of three independent experiments shown. Asterisks: (*) p < 0.05, (***) p < 0.005, (****) p < 0.001; n.s., not significant (Student's t test). Molecular weights: β-actin, 42 kDa; Par-4, 40 kDa; GAPDH, 36 kDa; PPARγ, 53/57 kDa.
Figure 2
PPARγ expression is inversely associated with Par-4 expression. MEFs or adult fibroblasts from Par-4+/+ and Par-4−/− mice (A), human adipose-derived stem cells (ADSCs) differentiated into adipocytes by growing them in adipocyte differentiation (AD) medium or undifferentiated control cells (Con) (B), or MCF7 cells with CRISPR/Cas9-induced Par-4 knockout (Par-4 KO) or control cells (C) were lysed in RIPA buffer, and the whole-cell lysates were subjected to Western blotting for Par-4, actin, and PPARγ.
Figure 3
PPARγ gene transcription is inversely associated with Par-4 expression. (A) Par-4−/− MEFs display increased transcription of PPARγ. RNA was extracted from Par-4+/+ and Par-4−/− MEFs and subjected to qPCR for Par-4, PPARγ, and GAPDH. Data normalized to corresponding GAPDH levels are shown. (B) PPARγ expression is inhibited by Par-4 overexpression. 3T3-L1 cells were infected with GFP- or GFP-Par-4-producing adenovirus, and whole-cell lysates were subjected to Western blot analysis. (C) Generation of luciferase constructs containing PPARγ2 promoter deletion fragments 1, 2, and 3. The deletion fragments 1, 2, and 3 of the mouse PPARγ (isoform 2) promoter were cloned into pGL4 luciferase expression constructs (left panel). MEFs were transfected with either the luc constructs containing PPARγ promoter fragments or an empty pGL4, in the presence of a β-galactosidase (β-gal) expression construct. Whole-cell extracts were then subjected to luciferase activity assays. The luciferase activity normalized to β-gal activity is shown for Fragments (Frag) 1, 2, and 3 (right panel). (D) Deletion fragment 6 is necessary for Par-4-mediated regulation of the PPARγ2 promoter. PPARγ promoter Fragment 3 was subdivided into five smaller fragments (left panel), and the luc assay was repeated as above in MEFs. Luciferase activity normalized to β-gal is shown (right panel). (E) Nuclear entry is necessary for Par-4-mediated regulation of the PPARγ2 promoter. Par-4−/− MEFs were co-transfected with the luc construct containing Fragment 6 along with a β-gal expression vector combined with (i) an empty pCB6 control plasmid, (ii) full-length Par-4 expression plasmid, (iii) Par-4 plasmid with the NLS1 sequence deleted (ΔNLS1), or (iv) Par-4 plasmid with both NLS1 and NLS2 deleted (ΔNLS2). The whole-cell lysates were subjected to luciferase assays; luciferase activity normalized to β-gal is shown. (A,C–E) Means of 3 experiments ± SEM are shown. Asterisks: (***) p < 0.005, (****) p < 0.001 (Student's t test).
Figure 4
Par-4 binds to the PPARγ promoter. (A) Endogenous Par-4 protein binds the PPARγ2 promoter sequence in Fragment 6. NIH 3T3 cells were transfected with either an empty control vector, a Fragment 6-containing plasmid, or a Fragment 7-containing plasmid. The transfected cells were then subjected to ChIP, with pull-down accomplished with either anti-Par-4 antibody (Ab) or IgG control Ab. Immunoprecipitated DNA fragments were analyzed using primers for Fragment 6, Fragment 7, or negative control primers. (B) Endogenous Par-4 protein binds the endogenous PPARγ2 promoter region. Non-transfected NIH 3T3 cells were subjected to ChIP analysis with the anti-Par-4 antibody (Ab), IgG control Ab, or C/EBPα Ab. Immunoprecipitated DNA fragments were analyzed using primers for Fragment 6, C/EBP positive-control primers, or negative control primers. (A,B) Means of 3 experiments ± SEM are shown. Asterisk (****) indicates p < 0.001 (Student's t test).
16 pages, 659 KiB  
Article
Combating the Co-Circulation of SARS-CoV-2 and Seasonal Influenza: Identifying Multi-Dimensional Factors Associated with the Uptake of Seasonal Influenza Vaccine among a Chinese National Sample
by Xiaoying Zhang, Pinpin Zheng, Xuewei Chen, Ang Li and Lixin Na
Vaccines 2024, 12(9), 1005; https://doi.org/10.3390/vaccines12091005 - 1 Sep 2024
Viewed by 1107
Abstract
Introduction: The co-circulation of COVID-19 and seasonal influenza highlighted the importance of promoting influenza vaccination. However, the influenza vaccination rate among the Chinese population is low and requires further promotion. This study examined multi-dimensional factors, such as knowledge of seasonal influenza, health perceptions, cues to action, patient–provider relationships, and COVID-19 pandemic-related factors, in relation to the uptake of the seasonal influenza vaccine (SIV) among the Chinese population. Methods: A cross-sectional, self-administered online survey using a quota sampling method was conducted among Chinese adults 18 years and older between June and August 2022. Multivariate logistic regression was performed to explore factors associated with the 2021 SIV behavior. Results: A total of 3161 individuals from different regions of China were included in this study. The multivariate logistic regression demonstrated that perceived severity of influenza, perceived barriers to taking SIV, cues to action, a stable relationship with providers, worry about contracting COVID-19 in immunization settings, non-pharmaceutical interventions (NPIs), and awareness of the influenza vaccine in protecting against COVID-19 were significantly associated with the SIV uptake. Conclusions: This study examined multi-dimensional factors that may influence SIV uptake. Health promotion programs should incorporate multi-dimensional factors, including personal and environmental factors, related to SIV promotion during the co-circulation period. Full article
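The abstract's central analysis is a multivariate logistic regression of SIV uptake on multi-dimensional predictors. The sketch below shows how such a model might be fitted in Python with statsmodels and summarized as odds ratios; the variable names and the synthetic data are illustrative assumptions, not the study's questionnaire items or dataset.

```python
# Illustrative multivariate logistic regression for a binary vaccination outcome.
# Variable names and data are synthetic placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "siv_uptake": rng.integers(0, 2, n),            # 1 = received the 2021 seasonal influenza vaccine
    "perceived_severity": rng.normal(3.0, 1.0, n),
    "perceived_barriers": rng.normal(2.5, 1.0, n),
    "cues_to_action": rng.normal(3.2, 0.8, n),
    "stable_provider": rng.integers(0, 2, n),
    "covid_worry": rng.normal(2.8, 1.1, n),
})

model = smf.logit(
    "siv_uptake ~ perceived_severity + perceived_barriers + cues_to_action"
    " + stable_provider + covid_worry",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, the usual way such results are reported
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```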
(This article belongs to the Special Issue Understanding and Addressing Vaccine Hesitancy)
Show Figures

Figure 1
Multi-dimensional factors associated with seasonal influenza vaccination uptake.
Full article ">
15 pages, 3459 KiB  
Article
Real-Time 3D Reconstruction for the Conservation of the Great Wall’s Cultural Heritage Using Depth Cameras
by Lingyu Xu, Yang Xu, Ziyan Rao and Wenbin Gao
Sustainability 2024, 16(16), 7024; https://doi.org/10.3390/su16167024 - 16 Aug 2024
Viewed by 1504
Abstract
The Great Wall, a pivotal part of Chinese cultural heritage listed on the World Heritage List since 1987, confronts challenges stemming from both natural deterioration and anthropogenic damage. Traditional conservation strategies are impeded by the Wall’s vast geographical spread, substantial costs, and the inefficiencies of conventional surveying techniques such as manual surveying, laser scanning, and low-altitude aerial photography. These methods often struggle to capture the Wall’s intricate details, which limits their field operations and practical applications. In this paper, we propose a novel framework utilizing depth cameras for efficient, real-time 3D reconstruction of the Great Wall. To manage the high complexity of the reconstruction, we generate multi-level geometric features from raw depth images to guide the computation hierarchically. On one hand, the local set of sparse features serves as basic cues for multi-view-based reconstruction; on the other hand, the global set of dense features is employed to guide optimization during reconstruction. The proposed framework enables real-time, precise 3D reconstruction of the Great Wall in the wild, thereby significantly extending the capabilities of traditional surveying methods. This framework offers a novel and efficient digital approach for the conservation and restoration of the Great Wall’s cultural heritage. Full article
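The abstract describes extracting geometric features from raw depth images. A standard first step in any depth-camera pipeline, and presumably relevant here as well, is back-projecting depth pixels into 3D points with the pinhole camera model; the sketch below illustrates only that generic step, with placeholder intrinsics (fx, fy, cx, cy) rather than the authors' calibration.

```python
# Back-project a depth image into a 3D point cloud using the pinhole camera model.
# The intrinsics below are illustrative placeholders, not the calibration used in the paper.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """depth: (H, W) array of raw depth values; depth_scale converts to metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column (u) and row (v) indices
    z = depth.astype(np.float64) * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading

# Example with a synthetic 480x640 depth frame (1.5 m everywhere)
depth = np.full((480, 640), 1500, dtype=np.uint16)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)   # (307200, 3)
```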
(This article belongs to the Special Issue Heritage Preservation and Tourism Development)
Show Figures

Figure 1
The overall framework of 3D reconstruction based on a depth camera.
Figure 2
Collection routes for the architectural remains of the Great Wall.
Figure 3
Depth image, IR grayscale image, and RGB color image (computer screenshot).
Figure 4
The wearable, depth-camera-based integrated acquisition device for 3D reconstruction and its principle schematic.
Figure 5
The in-office point cloud processing workflow and results for the north facade of the No. 3 watchtower of the Juyongguan Great Wall.
Figure 6
The in-office point cloud processing workflow and results for the west facade of the No. 3 watchtower of the Juyongguan Great Wall.
Figure 7
The processing results for the Juyongguan Great Wall’s interior wall point cloud.
Figure 8
The vectorized result of the 3D model of the current state of the enemy tower at the Juyongguan Great Wall.
Figure 9
An example of a photogrammetry image and reconstructed point cloud of the Great Wall in the Xuliukou area.
Figure 10
An example of a 3D model of the Great Wall in the Juyongguan area generated using the photogrammetry-based method.
Figure 11
Rendered images from different perspectives of the Juyongguan 3D model obtained using the photogrammetry-based method.
15 pages, 5537 KiB  
Article
Artificial Trabecular Meshwork Structure Combining Melt Electrowriting and Solution Electrospinning
by Maria Bikuna-Izagirre, Javier Aldazabal, Javier Moreno-Montañes, Elena De-Juan-Pardo, Elena Carnero and Jacobo Paredes
Polymers 2024, 16(15), 2162; https://doi.org/10.3390/polym16152162 - 30 Jul 2024
Viewed by 1051
Abstract
The human trabecular meshwork (HTM) regulates intraocular pressure (IOP) by means of its gradient porosity. Changes in its physical properties, such as increased stiffness or alterations in the extracellular matrix (ECM), are associated with increases in the IOP, which is the primary cause of glaucoma. The complexity of its structure has limited engineered models to simple, one-layered approaches, which do not accurately replicate the biological and physiological cues related to glaucoma. Here, a combination of melt electrowriting (MEW) and solution electrospinning (SE) is explored as a biofabrication technique to produce a gradient porous scaffold that mimics the multi-layered structure of the native HTM. Polycaprolactone (PCL) constructs with heights of 20–710 µm and fiber diameters of 0.7–37.5 µm were fabricated. After mechanical characterization, primary human trabecular meshwork cells (HTMCs) were seeded onto the scaffolds and cultured for the subsequent 14–21 days. To validate the system’s responsiveness, cells were treated with dexamethasone (Dex) and the Rho kinase inhibitor Netarsudil (Net). Scanning electron microscopy and immunochemistry staining were performed to evaluate the expected morphological changes caused by the drugs. Cells in the engineered membranes exhibited an HTMC-like morphology and a correct drug response. Although this work demonstrates the utility of combining MEW and SE to reconstruct complex morphological features like those of the HTM, new geometries and dimensions should be tested, and future work should be directed towards perfusion studies. Full article
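The mechanical characterization mentioned here yields a linear elastic modulus per scaffold layer (displacement–stress curves appear in Figure 2 below). As a minimal, generic illustration of how such a modulus is commonly estimated, the following sketch fits the initial linear region of a stress–strain curve by least squares; the 2% strain cutoff and the synthetic data are assumptions, not the paper's protocol.

```python
# Estimate a linear elastic modulus as the slope of the initial, linear part
# of a stress-strain curve. The data and the 2% strain cutoff are illustrative only.
import numpy as np

def elastic_modulus(strain, stress, linear_limit=0.02):
    """Least-squares slope of stress vs. strain for strain <= linear_limit."""
    mask = strain <= linear_limit
    slope, _intercept = np.polyfit(strain[mask], stress[mask], deg=1)
    return slope   # same units as stress per unit strain (e.g., MPa)

# Synthetic curve: linear up to 2% strain, then a softer slope
strain = np.linspace(0.0, 0.05, 100)
stress = np.where(strain <= 0.02, 5.0 * strain, 0.1 + 2.0 * (strain - 0.02))  # MPa
print(f"E ≈ {elastic_modulus(strain, stress):.2f} MPa")
```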
(This article belongs to the Special Issue Polymer Scaffold for Tissue Engineering Applications)
Show Figures

Graphical abstract
Figure 1
Location of the HTM. The anatomical structure of the tissue, which is formed by three layers: the uveal, corneoscleral, and juxtacanalicular meshwork. Scanning electron microscopy image of the native trabecular meshwork tissue. Scale bar: 2 μm. Reprinted from M. Bikuna-Izagirre, J. Aldazabal et al. (2022) [6].
Figure 2
(A) PCL scaffolds representing (i) TM1 and (ii) TM2. Upper row scale bar: 200 µm; lower row scale bar: 100 µm. (iii) SE scaffolds representing TM3. Upper row scale bar: 10 µm; lower row scale bar: 2 µm. (B) SEM image of TMFull. Scale bar: 200 µm. (C) Transversal cut of the TMFull scaffold. Scale bar: 200 µm. (D) Displacement–stress curves for each scaffold layer and linear elastic modulus (average over n = 4 samples for each scaffold design). Blue: TM1; red: TM2; green: TM3; purple: TMFull.
Figure 3
Biological evaluation of the cells growing on different scaffolds. (A) Bar plot showing the viability of the cells based on live/dead assay staining. (B) Fluorescence images of the live/dead assay. Red indicates dead cells and green the live ones. Scale bar: 100 µm.
Figure 4
SEM images of scaffolds with HTM cells on day 1, day 8, and day 14 for morphological evaluation of the cells. Two magnifications were taken: top scale bars 200 µm and bottom scale bars 20 µm. For TM3 samples, the scale bars were 100 µm and 10 µm, respectively.
Figure 5
(A) Confocal images of HTM cells cultured for 21 days in the different scaffolds (TM1, TM2, TM3, and TMFull) treated with Dex and Net. In green, F-actin fibers (AlexaFluor 488); orange, vinculin; and in blue, DAPI for nuclei staining. Scale bar: 50 µm. (B) Quantification of confocal images and mean fluorescence intensity (MFI), and (C) nucleus aspect ratio (* p-value < 0.05, ** p-value < 0.01, *** p-value < 0.001, and **** p-value < 0.0001).
Figure 6
SEM images of HTM cells for morphological evaluation. Cells were cultured with cell media for 14 days and for the subsequent 7 days with drugs. First row: control samples using cell media. Second row: 15 nM Dex. Third row: 1 µM Net. Scale bars: 100 µm and 20 µm for the TM3 case.
18 pages, 453 KiB  
Article
Bilingual–Visual Consistency for Multimodal Neural Machine Translation
by Yongwen Liu, Dongqing Liu and Shaolin Zhu
Mathematics 2024, 12(15), 2361; https://doi.org/10.3390/math12152361 - 29 Jul 2024
Viewed by 984
Abstract
Current multimodal neural machine translation (MNMT) approaches primarily focus on ensuring consistency between visual annotations and the source language, often overlooking the broader aspect of multimodal coherence, including target–visual and bilingual–visual alignment. In this paper, we propose a novel approach that effectively leverages target–visual consistency (TVC) and bilingual–visual consistency (BiVC) to improve MNMT performance. Our method leverages visual annotations depicting concepts across bilingual parallel sentences to enhance multimodal coherence in translation. We exploit target–visual harmony by extracting contextual cues from visual annotations during auto-regressive decoding, incorporating vital future context to improve target sentence representation. Additionally, we introduce a consistency loss promoting semantic congruence between bilingual sentence pairs and their visual annotations, fostering a tighter integration of textual and visual modalities. Extensive experiments on diverse multimodal translation datasets empirically demonstrate our approach’s effectiveness. This visually aware, data-driven framework opens exciting opportunities for intelligent learning, adaptive control, and robust distributed optimization of multi-agent systems in uncertain, complex environments. By seamlessly fusing multimodal data and machine learning, our method paves the way for novel control paradigms capable of effectively handling the dynamics and constraints of real-world multi-agent applications. Full article
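The consistency loss described in the abstract encourages bilingual sentence pairs and their visual annotation to agree semantically. The paper's exact formulation is not given in this listing, so the sketch below is only a generic stand-in: a cosine-similarity agreement term between mean-pooled source, target, and image embeddings in PyTorch, with all encoders and dimensions assumed.

```python
# Generic sketch of a bilingual-visual consistency loss: pull pooled source,
# target, and image embeddings toward each other via cosine similarity.
# This is an assumed formulation for illustration, not the paper's exact loss.
import torch
import torch.nn.functional as F

def bivc_loss(src_repr, tgt_repr, img_repr):
    """
    src_repr, tgt_repr: (batch, seq_len, dim) encoder/decoder hidden states
    img_repr:           (batch, dim) visual annotation embedding
    """
    src = F.normalize(src_repr.mean(dim=1), dim=-1)   # mean-pool over tokens
    tgt = F.normalize(tgt_repr.mean(dim=1), dim=-1)
    img = F.normalize(img_repr, dim=-1)

    # 1 - cosine similarity for each of the three pairs, averaged over the batch
    loss = (
        (1 - (src * img).sum(-1)).mean()
        + (1 - (tgt * img).sum(-1)).mean()
        + (1 - (src * tgt).sum(-1)).mean()
    )
    return loss / 3

# Example with random tensors (batch of 8, hidden size 512)
loss = bivc_loss(torch.randn(8, 20, 512), torch.randn(8, 22, 512), torch.randn(8, 512))
print(loss.item())
```

In practice, a term like this would typically be added to the standard translation cross-entropy loss with a small weighting coefficient.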
Show Figures

Figure 1
An overview of our method.
Figure 2
Learning curves and BLEU scores comparison for baseline EMMT and EMMT+TVC+BiVC models. (a) Learning curves comparison; (b) BLEU scores comparison. It shows the learning curves of loss scores for the baseline EMMT model and the EMMT+TVC+BiVC model on the En-De development set. It also presents the BLEU scores of both models on the En-De Test2016 and Test2017 test sets. The results are averaged over 5 training runs.
9 pages, 293 KiB  
Article
Psychometric Properties of the Dimensional Yale Food Addiction Scale for Children 2.0 among Portuguese Adolescents
by Ana Matos, Sílvia Félix, Carol Coelho, Eva Conceição, Bárbara César Machado and Sónia Gonçalves
Nutrients 2024, 16(14), 2334; https://doi.org/10.3390/nu16142334 - 19 Jul 2024
Viewed by 1205
Abstract
The dimensional Yale Food Addiction Scale for Children 2.0 (dYFAS-C 2.0) was developed to provide a reliable psychometric measure for assessing food addiction in adolescents, in accordance with the updated addiction criteria proposed in the fifth edition of the Diagnostic and Statistical Manual (DSM-5). The present study aimed to evaluate the psychometric properties of the dYFAS-C 2.0 among Portuguese adolescents and pre-adolescents and to explore the relationship between food addiction and other eating behaviors such as grazing and intuitive eating. The participants were 131 Portuguese adolescents and pre-adolescents (53.4% female and 46.6% male) aged between 10 and 15 years (mean age = 11.8 years) and with a BMI between 11.3 and 35.3 (mean BMI z-score = 0.42). Confirmatory Factor Analysis demonstrated an adequate fit for the original one-factor model (χ²(104) = 182; p < 0.001; CFI = 0.97; TLI = 0.97; NFI = 0.94; SRMR = 0.101; RMSEA = 0.074; 95% CI [0.056, 0.091]). Food addiction was positively correlated with grazing (r = 0.69, p < 0.001) and negatively correlated with reliance on hunger/satiety cues (r = −0.22, p = 0.015). No significant association was found between food addiction and BMI z-score, or between food addiction and age. The results support the use of the dYFAS-C 2.0 as a valid and reliable measure for assessing food addiction in Portuguese adolescents and pre-adolescents. Furthermore, the findings highlight that food addiction may be part of a spectrum of disordered eating behaviors associated with impaired control. Future research with a larger sample size could further elucidate the associations between food addiction and other variables, such as psychological distress and multi-impulsive spectrum behaviors. Full article
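The associations reported here (r = 0.69 with grazing, r = −0.22 with reliance on hunger/satiety cues) are Pearson correlations. The sketch below shows that computation with SciPy on synthetic placeholder scores of the same sample size; it does not use the study's data.

```python
# Pearson correlation between questionnaire scores, the statistic reported in the abstract.
# The arrays below are synthetic placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 131                                    # sample size reported in the abstract
food_addiction = rng.normal(2.0, 0.8, n)   # hypothetical dYFAS-C 2.0 scores
grazing = 0.7 * food_addiction + rng.normal(0.0, 0.4, n)

r, p = stats.pearsonr(food_addiction, grazing)
print(f"r = {r:.2f}, p = {p:.3f}")
```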
(This article belongs to the Special Issue Disordered Eating and Lifestyle Studies—2nd Edition)