Search Results (21,469)

Search Parameters:
Keywords = task performance

34 pages, 359 KiB  
Article
Model Checking Using Large Language Models—Evaluation and Future Directions
by Sotiris Batsakis, Ilias Tachmazidis, Matthew Mantle, Nikolaos Papadakis and Grigoris Antoniou
Electronics 2025, 14(2), 401; https://doi.org/10.3390/electronics14020401 - 20 Jan 2025
Abstract
Large language models (LLMs) such as ChatGPT have risen in prominence recently, leading to the need to analyze their strengths and limitations for various tasks. The objective of this work was to evaluate the performance of large language models for model checking, which is used extensively in critical tasks such as software and hardware verification. A set of problems was proposed as a benchmark, and three LLMs (GPT-4, Claude, and Gemini) were evaluated on their ability to solve these problems. The evaluation was conducted by comparing the responses of the three LLMs with the gold standard provided by model checking tools. The results illustrate the limitations of LLMs in these tasks and identify directions for future research. Specifically, the best overall performance (ratio of problems solved correctly) was 60%, indicating a high probability of reasoning errors, especially in more complex scenarios requiring many reasoning steps; the LLMs typically performed better when generating scripts for solving the problems than when solving them directly.
(This article belongs to the Special Issue Advances in Information, Intelligence, Systems and Applications)
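To make the evaluation protocol concrete, a minimal scoring harness of the kind implied by the abstract might look as follows; the problem IDs and verdict labels are hypothetical placeholders, not the paper's benchmark.

```python
# Hypothetical scoring harness: LLM answers to model-checking problems are
# compared against the gold standard produced by a model checker, and the
# reported accuracy is the ratio of problems solved correctly.
def accuracy(llm_answers: dict, gold: dict) -> float:
    solved = sum(llm_answers.get(pid) == ans for pid, ans in gold.items())
    return solved / len(gold)

gold = {"p1": "satisfied", "p2": "violated", "p3": "satisfied"}
llm  = {"p1": "satisfied", "p2": "satisfied", "p3": "satisfied"}
print(f"{accuracy(llm, gold):.0%}")   # 67%; cf. the paper's best of 60%
```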
22 pages, 1109 KiB  
Article
Vigorous Exercise Enhances Verbal Fluency Performance in Healthy Young Adults
by Maya M. Khanna, Corey L. Guenther, Joan Eckerson, Dion Talamante, Mary Elizabeth Yeh, Megan Forby, Krystal Hopkins, Emmali Munger, Grace Rauh, Shringala Chelluri, Courtney Schmidt, Isabel Walocha and Matthew Sacco
Brain Sci. 2025, 15(1), 96; https://doi.org/10.3390/brainsci15010096 - 20 Jan 2025
Abstract
Background/Objectives: We examined the effects of cardiovascular exercise on verbal fluency using a between-groups design. Methods: Within our experimental (i.e., exercise) group, participants performed phonemic and semantic verbal fluency tasks (VFTs) before, during, and after a vigorous 30 min bout of cycling. Participants within our control group completed the same VFTs before, during, and after a non-physical activity. We compared the VFT performance of the experimental (exercise) and control (no-exercise) groups in terms of the characteristics of the words produced, and we examined these aspects of VFT performance for each group across time within the experiment session. Conclusions: These comparisons show that exercise influenced VFT performance. Most notably, participants engaged in exercise changed their VFT performance over time, while control group participants did not. Exercising participants produced more words over the course of their exercise session, and the words produced during and after exercise contained fewer letters and were lower in frequency than those produced before exercise. Additionally, topic switches in the VFTs increased after exercise compared to before exercise. Participants in the control group did not change their VFT performance over time on any of these measures. These findings indicate that exercise impacted participants' lexical access and that these VFT performance changes were not due to practice effects.
Figure 1: Procedure for experimental (exercise) and control (no exercise) groups; heart rate is abbreviated HR and verbal fluency task is abbreviated VFT.
Figure 2: (a) Number of words produced in the phonemic VFT at Time 1 (before), Time 2 (during), and Time 3 (after) for experimental (exercise) and control (non-exercise) participants. The experimental group produced more words during and after exercise compared to before exercise and compared to the control group. (b) Number of words produced in the semantic VFT at the same time points. Experimental group participants produced more words during and after exercise than before exercise and compared to the control participants. Standard error bars for each column appear in both panels.
Figure 3: (a) Number of topic switches at Time 1 (before), Time 2 (during), and Time 3 (after) in the phonemic VFT. (b) Number of topic switches at the same time points in the semantic VFT. In both tasks, the number of topic switches increased over the course of exercise for the experimental group but not for the non-exercising control group.
29 pages, 802 KiB  
Article
Determining Priority Areas for the Technological Development of Oil Companies in Mexico
by Tatyana Semenova and Juan Yair Martínez Santoyo
Resources 2025, 14(1), 18; https://doi.org/10.3390/resources14010018 - 20 Jan 2025
Abstract
The technological development of oil companies in Mexico is essential for ensuring their economic sustainability. A mechanism for the effective management of the technological development of oil companies, and of the industry as a whole, is to determine its priority areas. This article provides a calculation supporting the choice of planning directions for the development of the oil sector in Mexico, together with related studies. Currently, the most promising technologies are offshore drilling and production. To achieve the study goals, we analyzed the patent activity of the oil sector. The results showed an unfavorable trend: the number of private and public patents in Mexico is decreasing. For example, from 2017 to 2023, the number of patents for offshore technologies decreased by more than a factor of ten. This dynamic significantly hinders the development of the oil industry. Despite the general measures taken within the framework of energy policy, the volume of oil production is constantly declining. Thus, in order to ensure the continued reproduction potential of the oil sector, it is necessary to take into account the importance of research and development. The innovation rating of the Mexican Petroleum Institute, a state-funded research center for the hydrocarbon sector, has been declining, falling by more than 50%, from 102 international patents in 2014 to 40 in 2024. Today, the institute is in the 48th percentile for research performance among research institutes. The present authors' approach holds that the intensification of technological development, which is costly, should not be an end in itself but rather an important means of increasing the efficiency of the integrated activities of oil companies. To integrate the patent-technological component into the strategic planning of oil companies, the concept of sub-potentials is proposed. From a systems-approach perspective, the potential for the functioning and development of an oil enterprise is decomposed into the sub-potentials of reproduction, defense, management, and reserve, which, under adverse conditions, can transition to the sub-potentials of threat and containment. An important task is to determine these transition points. The patent-technological component is accounted for in the sub-potential of reproduction; the remaining components of company development are accounted for within the other sub-potentials, which are not discussed in detail in this article. At the same time, thanks to the unified conceptual approach, the goals and objectives of technological development can be integrated into a single economic and socio-ecological strategy for oil enterprises, which is the most effective way to ensure their sustainable development. The dynamics of patent generation are an important factor in assessing the technological component and, more generally, the effectiveness of projects in the energy sector.
(This article belongs to the Special Issue Assessment and Optimization of Energy Efficiency)
29 pages, 2710 KiB  
Article
Improvement of Propeller Hydrodynamic Prediction Model Based on Multitask ANN and Its Application in Optimization Design
by Liang Li, Yihong Chen, Lu Huang, Qing Hai, Denghai Tang and Chao Wang
J. Mar. Sci. Eng. 2025, 13(1), 183; https://doi.org/10.3390/jmse13010183 - 20 Jan 2025
Abstract
A multitask learning (MTL) model based on artificial neural networks (ANNs) is proposed in this study to improve the prediction accuracy and physical reliability of marine propeller hydrodynamic performance. The propeller's comprehensive geometric features are used as inputs, and the coefficients of quadratic polynomials for the thrust coefficient (KT) and torque coefficient (10KQ) curves are predicted as outputs. The loss function is customized with a positive-gradient penalty on the curves to accelerate training. Compared with single-task models, the multitask model reduced the prediction errors: from 2.61% to 2.07% for KT, from 3.58% to 2.31% for 10KQ, and from 3.04% to 2.00% for efficiency (η). Non-physical fluctuations in the performance curves were effectively mitigated by the multitask model, yielding predicted curves that closely matched the experimental data. Strong generalization was demonstrated when the model was tested on unseen propellers, with deviations of 2.2% for KT, 4.6% for 10KQ, and 3.8% for η. Finally, the model was applied to optimize the propeller design for a 325,000-ton very large ore carrier, where a Pareto front with 58 non-dominated solutions for maximum speed and fluctuating pressure was successfully generated and verified against the model's test results. The model enhanced the prediction of propeller performance and contributed to optimization of the propeller design.
(This article belongs to the Section Ocean Engineering)
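A minimal sketch of the abstract's two key ingredients, a shared trunk with one head per coefficient curve and a loss penalizing non-physical positive slopes, might look as follows in PyTorch. The layer sizes, the exact form of the positive-gradient penalty, and the 0.1 penalty weight are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class PropellerMTL(nn.Module):
    """Multitask net: shared trunk, one head per coefficient curve.

    Each head outputs 3 quadratic coefficients (a, b, c) so that
    K(J) = a*J**2 + b*J + c approximates the K_T or 10K_Q open-water curve."""
    def __init__(self, n_geom_features: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_geom_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head_kt = nn.Linear(hidden, 3)   # coefficients for K_T(J)
        self.head_kq = nn.Linear(hidden, 3)   # coefficients for 10K_Q(J)

    def forward(self, x):
        h = self.trunk(x)
        return self.head_kt(h), self.head_kq(h)

def curve(coeffs, J):
    """Evaluate the quadratic K(J) for a batch of coefficient triplets."""
    a, b, c = coeffs[:, 0:1], coeffs[:, 1:2], coeffs[:, 2:3]
    return a * J**2 + b * J + c

def mtl_loss(pred_kt, pred_kq, true_kt, true_kq, J):
    """MSE on both curves plus a penalty on non-physical positive slopes.

    K_T and K_Q decrease with advance ratio J, so any positive gradient
    2aJ + b is penalized (one plausible reading of the paper's
    'positive gradient penalty')."""
    mse = nn.functional.mse_loss
    fit = mse(curve(pred_kt, J), true_kt) + mse(curve(pred_kq, J), true_kq)
    slope_kt = 2 * pred_kt[:, 0:1] * J + pred_kt[:, 1:2]
    slope_kq = 2 * pred_kq[:, 0:1] * J + pred_kq[:, 1:2]
    penalty = torch.relu(slope_kt).mean() + torch.relu(slope_kq).mean()
    return fit + 0.1 * penalty  # weight 0.1 is an arbitrary placeholder
```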
17 pages, 5156 KiB  
Article
Plant Detection in RGB Images from Unmanned Aerial Vehicles Using Segmentation by Deep Learning and an Impact of Model Accuracy on Downstream Analysis
by Mikhail V. Kozhekin, Mikhail A. Genaev, Evgenii G. Komyshev, Zakhar A. Zavyalov and Dmitry A. Afonnikov
J. Imaging 2025, 11(1), 28; https://doi.org/10.3390/jimaging11010028 - 20 Jan 2025
Abstract
Crop field monitoring using unmanned aerial vehicles (UAVs) is one of the most important technologies for plant growth control in modern precision agriculture. One important and widely used task in field monitoring is plant stand counting. The accurate identification of plants in field images provides estimates of plant number per unit area, detects missing seedlings, and helps predict crop yield. Current methods detect plants in UAV images by means of computer vision algorithms and deep learning neural networks. These approaches depend on image spatial resolution and the quality of plant markup, and the performance of automatic plant detection may affect the efficiency of downstream analysis of a field cropping pattern. In the present work, a method is presented for detecting plants of five species in images acquired via a UAV on the basis of image segmentation by deep learning algorithms (convolutional neural networks). Twelve orthomosaics were collected and marked at several sites in Russia to train and test the neural network algorithms. Additionally, 17 existing datasets of various spatial resolutions and markup quality levels from the Roboflow service were used to extend the training image sets. Finally, we compared several texture features between manually evaluated and neural-network-estimated plant masks. It was demonstrated that adding images to the training sample (even those of lower resolution and markup quality) improves plant stand counting significantly. The work indicates how the accuracy of plant detection in field images may affect cropping pattern evaluation by means of texture characteristics. For some characteristics (GLCM mean, GLRM long run, GLRM run ratio), the estimates from manually and automatically marked images are close; for others, the differences are large and may lead to erroneous conclusions about the properties of field cropping patterns. Nonetheless, overall, plant detection algorithms with higher accuracy show better agreement with the texture parameter estimates obtained from manually marked images.
(This article belongs to the Special Issue Imaging Applications in Agriculture)
Figure 1: A UAV launching from the launcher.
Figure 2: Examples of images for the analysis. (a) A fragment of an orthomosaic before markup; (b) the same fragment with vector markup applied in QGIS; (c) a generated raster mask showing the location of plant centers.
Figure 3: The architecture of the U-Net network used in this work for plant identification.
Figure 4: The learning curves of the models corresponding to the experiments: (a) RN18-LQ; (b) RN18-HQ-LQ; (c) RN34-HQ-LQ; and (d) RN50-HQ-LQ. The X-axis shows the epoch number during training; the Y-axis shows error measures on the training and validation samples. Blue curve: loss on the training sample; green curve: loss on the validation sample; yellow curve: IoU metric on the training sample; red curve: IoU metric on the validation sample.
Figure 5: Examples of RN50-HQ-LQ model performance on the test sample for different crops and high-resolution orthomosaics. (a) Sugar beet, Beet_marat_1; (b) sugar beet, UBONN_Sb3_2015; (c) potato, Stavropol_2_7; (d) potato, Stavropol_4_0; (e) potato, Stavropol_4_9. Images in rows from left to right: original (Field); manual plant marking (Mask); automatic marking by the RN50-HQ-LQ network.
Figure 6: A comparison of crop texture characteristic estimates between markup obtained manually (X axis) and markup obtained by the neural network algorithm (Y axis). The names of the characteristics are shown at the top of the figure. (a) Stavropol_2_7, prediction by the RN50-HQ-LQ method; (b) Stavropol_2_7, prediction by the RN18-HQ method; (c) Beet_marat_1, prediction by the RN50-HQ-LQ method; (d) Beet_marat_1, prediction by the RN18-HQ method.
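For the texture-comparison step, GLCM features such as the GLCM mean mentioned in the abstract can be computed with scikit-image. A minimal sketch, with quantization, distance, and angle choices that are illustrative rather than the paper's:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray: np.ndarray, levels: int = 32):
    """Texture features from a gray-level co-occurrence matrix (GLCM).

    `gray` is a 2-D uint8 image (e.g., a rasterized plant mask or a
    grayscale orthomosaic patch), quantized to `levels` gray levels."""
    q = (gray.astype(np.float64) * (levels - 1) / 255).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                 # normalized co-occurrence matrix
    i = np.arange(levels)[:, None]
    glcm_mean = float((i * p).sum())     # 'GLCM mean' as the row-marginal mean
    return {
        "glcm_mean": glcm_mean,
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
    }

# Comparing manual vs. automatic masks then reduces to computing these
# features on both masks and checking how closely the estimates agree.
```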
26 pages, 6111 KiB  
Article
An Explainable CNN and Vision Transformer-Based Approach for Real-Time Food Recognition
by Kintoh Allen Nfor, Tagne Poupi Theodore Armand, Kenesbaeva Periyzat Ismaylovna, Moon-Il Joo and Hee-Cheol Kim
Nutrients 2025, 17(2), 362; https://doi.org/10.3390/nu17020362 - 20 Jan 2025
Abstract
Background: Food image recognition, a crucial step in computational gastronomy, has diverse applications across nutritional platforms. Convolutional neural networks (CNNs) are widely used for this task due to their ability to capture hierarchical features. However, they struggle with long-range dependencies and global feature extraction, which are vital for distinguishing visually similar foods or images where the context of the whole dish is crucial, thus motivating a transformer architecture. Objectives: This research explores the capabilities of CNNs and transformers to build a robust classification model that can handle both short- and long-range dependencies and global features, to accurately classify food images and enhance food image recognition for better nutritional analysis. Methods: Our approach, which combines CNNs and Vision Transformers (ViTs), begins with a ResNet50 backbone responsible for local feature extraction from the input image. The resulting feature map is passed to the ViT encoder block, which handles global feature extraction and classification using multi-head attention and fully connected layers with pre-trained weights. Results: Our experiments on five diverse datasets confirmed superior performance compared to current state-of-the-art methods, and our combined dataset, leveraging complementary features, showed enhanced generalizability and robust performance in addressing global food diversity. We used explainability techniques such as Grad-CAM and LIME to understand how the models made their decisions, thereby enhancing user trust in the proposed system. The model has been integrated into a mobile application for food recognition and nutrition analysis, offering features such as an intelligent diet-tracking system. Conclusions: This research paves the way for practical applications in personalized nutrition and healthcare, showcasing the extensive potential of AI in nutritional sciences across various dietary platforms.
(This article belongs to the Special Issue Digital Transformations in Nutrition)
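A minimal sketch of the hybrid design the abstract describes, a ResNet50 feature map whose spatial positions become tokens for a transformer encoder, might look as follows in PyTorch; the embedding dimension, depth, and head count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class CNNViTClassifier(nn.Module):
    """CNN+ViT hybrid sketch: ResNet50 extracts a local feature map, whose
    spatial positions become tokens for a transformer encoder that models
    global, long-range dependencies."""
    def __init__(self, num_classes: int, dim: int = 256, depth: int = 4):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, 7, 7)
        self.proj = nn.Conv2d(2048, dim, kernel_size=1)            # token embedding
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 50, dim))     # 49 patches + CLS
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                       # x: (B, 3, 224, 224)
        f = self.proj(self.cnn(x))              # (B, dim, 7, 7)
        tokens = f.flatten(2).transpose(1, 2)   # (B, 49, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = torch.cat([cls, tokens], dim=1) + self.pos_embed
        z = self.encoder(z)
        return self.head(z[:, 0])               # classify from the CLS token
```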
11 pages, 232 KiB  
Article
Performance Art in the Age of Extinction
by Gregorio Tenti
Philosophies 2025, 10(1), 13; https://doi.org/10.3390/philosophies10010013 - 20 Jan 2025
Abstract
This paper aims to map out the transformations in contemporary performance art during the current 'age of extinction'. The first section extends Claire Bishop's notion of "delegated performance" in order to categorize a turn towards the inclusion of other-than-human entities in the performance field. This operation leads to the concept of 'performative animism', referring to strategies of re-animating reality through artistic performance. The second section works out the idea of the 'planetarization' of the performance field, which designates its opening to spatial and temporal fluxes coming from a dimension that exceeds the scale of human experience, that is, the planetary dimension. The third and final section interprets the meaning of these two transformations by introducing the concepts of 'exbodiment' and 'excarnation', which tie closely to a new political task for performance art.
(This article belongs to the Special Issue The Aesthetics of the Performing Arts in the Contemporary Landscape)
21 pages, 2867 KiB  
Article
A Resource-Efficient Multi-Entropy Fusion Method and Its Application for EEG-Based Emotion Recognition
by Jiawen Li, Guanyuan Feng, Chen Ling, Ximing Ren, Xin Liu, Shuang Zhang, Leijun Wang, Yanmei Chen, Xianxian Zeng and Rongjun Chen
Entropy 2025, 27(1), 96; https://doi.org/10.3390/e27010096 - 20 Jan 2025
Abstract
Emotion recognition is an advanced technology for understanding human behavior and psychological states, with extensive applications in mental health monitoring, human–computer interaction, and affective computing. Based on electroencephalography (EEG), the biomedical signals naturally generated by the brain, this work proposes a resource-efficient multi-entropy fusion method for classifying emotional states. First, the Discrete Wavelet Transform (DWT) is applied to extract five brain rhythms, i.e., delta, theta, alpha, beta, and gamma, from EEG signals, followed by the acquisition of multi-entropy features, including Spectral Entropy (PSDE), Singular Spectrum Entropy (SSE), Sample Entropy (SE), Fuzzy Entropy (FE), Approximate Entropy (AE), and Permutation Entropy (PE). These entropies are then fused into a matrix that represents the complex and dynamic characteristics of EEG, denoted the Brain Rhythm Entropy Matrix (BREM). Next, Dynamic Time Warping (DTW), Mutual Information (MI), the Spearman Correlation Coefficient (SCC), and the Jaccard Similarity Coefficient (JSC) are applied to measure the similarity between unknown testing BREM data and positive/negative emotional samples for classification. Experiments were conducted on the DEAP dataset to find a suitable scheme of similarity measure, time window, and number of input channels. The results reveal that DTW with a 5 s window yields the best performance among the similarity measures, and that the single-channel input mode outperforms the single-region mode. The proposed method achieves 84.62% and 82.48% accuracy in the arousal and valence classification tasks, respectively, indicating its effectiveness in reducing data dimensionality and computational complexity while maintaining an accuracy of over 80%. Such performance is remarkable when limited data resources are a concern, opening possibilities for an entropy fusion method that can help in designing portable EEG-based emotion-aware devices for daily usage.
Figure 1: The overall framework of the proposed resource-efficient multi-entropy fusion method for EEG-based emotion recognition.
Figure 2: Channel and region locations in DEAP: (a) 32 EEG channels; (b) five brain regions.
Figure 3: The 4-level DWT extracts five brain rhythms from emotional EEG signals.
Figure 4: Two examples of ANOVA test box plots for the best entropy features providing the highest accuracy for subject S1 in the DEAP dataset. (a) The alpha sample entropy (α_SE) in the P7 channel for arousal classification; (b) the beta singular-spectrum entropy (β_SSE) in the O2 channel for valence classification.
Figure 5: Statistical frequency of the optimal time segment (s) for the 32 subjects in the DEAP dataset: (a) arousal classification; (b) valence classification.
Figure 6: Word clouds of the representative channels for the 32 subjects in the DEAP dataset: (a) arousal classification; (b) valence classification.
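A minimal sketch of the DTW-based similarity classification the abstract describes might look as follows; the row-wise aggregation over the BREM and the nearest-class rule are plausible readings, not necessarily the authors' exact scheme.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    1-D feature sequences (here: rows of a brain-rhythm entropy matrix)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(test_brem, pos_brems, neg_brems):
    """Nearest-class rule: average DTW distance from the test entropy matrix
    to positive vs. negative emotional samples, row by row (one row per
    brain rhythm)."""
    def mean_dist(refs):
        return np.mean([
            sum(dtw_distance(test_brem[r], ref[r]) for r in range(test_brem.shape[0]))
            for ref in refs
        ])
    return "positive" if mean_dist(pos_brems) < mean_dist(neg_brems) else "negative"
```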
33 pages, 19016 KiB  
Article
Multitask Learning-Based Pipeline-Parallel Computation Offloading Architecture for Deep Face Analysis
by Faris S. Alghareb and Balqees Talal Hasan
Computers 2025, 14(1), 29; https://doi.org/10.3390/computers14010029 - 20 Jan 2025
Abstract
Deep Neural Networks (DNNs) have been widely adopted in advanced artificial intelligence applications due to accuracy competitive with the human brain. Nevertheless, this superior accuracy comes at the expense of intensive computation and storage complexity, requiring custom expandable hardware, i.e., graphics processing units (GPUs). Interestingly, leveraging the synergy of parallelism and edge computing can significantly improve CPU-based hardware platforms. This manuscript therefore explores levels-of-parallelism techniques along with edge computation offloading to develop an innovative hardware platform that improves the efficacy of deep learning computing architectures. Furthermore, the multitask learning (MTL) approach is employed to construct a parallel multi-task classification network. The tasks include face detection and recognition, age estimation, gender recognition, smile detection, and hair color and style classification. Both pipeline and parallel processing techniques are utilized to expedite complicated computations, boosting the overall performance of the presented deep face analysis architecture. A computation offloading approach, in turn, distributes computation-intensive tasks to the server edge, whereas lightweight computations are offloaded to edge devices, i.e., Raspberry Pi 4. To train the proposed deep face analysis network architecture, two custom datasets (HDDB and FRAED) were created for head detection and face-age recognition. Extensive experimental results demonstrate the efficacy of the proposed pipeline-parallel architecture in terms of execution time: it requires 8.2 s to provide detailed face detection and analysis for an individual and 23.59 s for an inference containing 10 individuals. Moreover, a speedup of 62.48% is achieved compared to the sequential edge computing architecture, and a 25.96% speedup is realized when implementing the proposed pipeline-parallel architecture only on the server edge compared to the server-side sequential implementation. Considering classification efficiency, the proposed classification modules achieve an accuracy of 88.55% for hair color and style classification and a remarkable prediction outcome of 100% for face recognition and age estimation. In summary, the proposed approach can reduce the required execution time and memory capacity by processing all facial tasks simultaneously on a single deep neural network rather than building a CNN model for each task. The presented pipeline-parallel architecture can therefore be a cost-effective framework for real-time computer vision applications implemented on resource-limited devices.
Figure 1: Head detection dataset versus face detection dataset using nano-based YOLOv8.
Figure 2: Sample images from the hair dataset used to train the hair color-style module.
Figure 3: Selected image samples of the created face recognition and age estimation dataset.
Figure 4: The general framework of the proposed deep face analysis architecture.
Figure 5: Stages of the pipeline-multithreading architecture, showing four images being processed in parallel.
Figure 6: Proposed pipeline-parallel architectures with thread distributions; (a) multithreading three MTL-based classifiers on a single edge device, (b) multithreading three MTL-based classifiers on a cluster containing three edge computing devices.
Figure 7: Modified VGG-Face network to support the multitask classification approach.
Figure 8: Offloading feature maps of detected heads to edge devices using multithreading.
Figure 9: Multithreading of parallel modules on edge server and edge node processors.
Figure 10: The framework of system deployment for the proposed deep face analysis.
Figure 11: Training and validation performance of the YOLOv8 model for head detection. The x-axis represents the number of epochs.
Figure 12: YOLOv8 testing performance; (a) confusion matrix, (b) precision, (c) recall, (d) precision-recall, and (e) F1 score confidence curve.
Figure 13: Head detection result samples of YOLOv8, where a red box denotes a detected head with its corresponding confidence level.
Figure 14: Confusion matrices for classification modules using STL and MTL; (a) hair color STL, (b) hair color MTL, (c) hairstyle STL, (d) hairstyle MTL, (e) gender STL, (f) gender MTL, (g) smile STL, (h) smile MTL, (i) face STL, (j) face MTL, (k) age STL, and (l) age MTL module.
Figure 15: Speed performance evaluation of the proposed pipeline-parallel architecture; (a) execution time for pipeline-parallel configurations versus sequential implementation, (b) speedup comparisons of implemented configurations.
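A minimal sketch of the pipeline-parallel idea, one thread detecting heads in the next frame while a pool of threads runs the classifiers on the current one, might look as follows; detect() and classify_all() are hypothetical stand-ins for the paper's YOLOv8 detector and MTL classifier modules.

```python
from concurrent.futures import ThreadPoolExecutor
import queue
import threading

def detect(frame):
    # Stand-in for the head detector (YOLOv8 in the paper).
    return f"heads({frame})"

def classify_all(heads):
    # In the paper, several MTL classifiers run in parallel threads or on
    # separate edge devices; a thread pool approximates that fan-out.
    with ThreadPoolExecutor(max_workers=3) as pool:
        tasks = [pool.submit(lambda t=t: f"{t}:{heads}", t)
                 for t in ("age+gender", "smile+face", "hair")]
        return [t.result() for t in tasks]

def pipeline(frames):
    q = queue.Queue(maxsize=2)           # hand-off buffer between stages
    results = []

    def stage1():
        for f in frames:
            q.put(detect(f))             # stage 1: detection
        q.put(None)                      # poison pill ends stage 2

    def stage2():
        while (heads := q.get()) is not None:
            results.append(classify_all(heads))   # stage 2: parallel classifiers

    t1, t2 = threading.Thread(target=stage1), threading.Thread(target=stage2)
    t1.start(); t2.start(); t1.join(); t2.join()
    return results

print(pipeline(["frame0", "frame1", "frame2"]))
```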
21 pages, 630 KiB  
Article
Polynomial Exact Schedulability and Infeasibility Test for Fixed-Priority Scheduling on Multiprocessor Platforms
by Natalia Garanina, Igor Anureev and Dmitry Kondratyev
Appl. Syst. Innov. 2025, 8(1), 15; https://doi.org/10.3390/asi8010015 - 20 Jan 2025
Abstract
In this paper, we develop an exact schedulability test and a sufficient infeasibility test for fixed-priority scheduling on multiprocessor platforms. We base our tests on representing real-time systems as a Kripke model of dynamic real-time systems with sporadic non-preemptible tasks running on a multiprocessor platform and an online scheduler using global fixed priorities. This model, consisting of states and transitions between them, allows us to formally justify a polynomial-time algorithm for an exact schedulability test based on the idea of backward reachability. Using this algorithm, we perform the exact schedulability test for the above real-time systems in which there is one more task than there are processors. The main advantage of this algorithm is its polynomial complexity, whereas, in general, exact schedulability testing of real-time systems on multiprocessor platforms is NP-hard. The infeasibility test uses the same algorithm for an arbitrary task-to-processor ratio, providing a sufficient infeasibility condition: if the real-time system under test is not schedulable, the algorithm detects this in some cases. We conduct an experimental study of our algorithms on datasets generated with different utilization values and compare them to several state-of-the-art schedulability tests. The experiments show that our algorithm outperforms its analogues in speed while achieving similar accuracy.
(This article belongs to the Section Control and Systems Engineering)
Figure 1: Backward reachability-based case analysis.
Figure 2: Performance of schedulability tests Alg1, LeeShin2014, and BaekLee2020: (a) algorithm operation time, and (b) acceptance ratio.
Figure 3: Performance of our infeasibility test 2: (a) algorithm operation time, and (b) acceptance ratio.
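The backward-reachability idea at the core of the test can be illustrated with the textbook algorithm below; this generic sketch operates on an explicit state graph, whereas the paper's polynomial-time test exploits the structure of its Kripke-style task model.

```python
from collections import deque

def backward_reachable(unsafe, predecessors):
    """Generic backward reachability over an explicit state graph: compute
    the set of states from which some 'unsafe' state (e.g., a deadline miss)
    can be reached. `predecessors` maps each state to the states that have
    an edge into it."""
    reached = set(unsafe)
    frontier = deque(unsafe)
    while frontier:
        s = frontier.popleft()
        for p in predecessors.get(s, ()):
            if p not in reached:
                reached.add(p)
                frontier.append(p)
    return reached

# Toy usage: a chain 0 -> 1 -> 2 -> 3 encoded as predecessor sets.
preds = {3: {2}, 2: {1}, 1: {0}}
print(backward_reachable({3}, preds))   # {0, 1, 2, 3}: every state can reach 3
```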
14 pages, 2716 KiB  
Article
Limitations of the Boston Carpal Tunnel Questionnaire in Assessing Severity in a Homogeneous Occupational Cohort
by Venera Cristina Dinescu, Marius Bica, Ramona Constantina Vasile, Andrei Gresita, Bogdan Catalin, Alexandra Daniela Rotaru-Zavaleanu, Florentin Ananu Vreju, Lorena Sas and Marius Bunescu
Life 2025, 15(1), 132; https://doi.org/10.3390/life15010132 - 20 Jan 2025
Abstract
Background: Carpal tunnel syndrome (CTS) is a common peripheral neuropathy, often assessed using the Boston Carpal Tunnel Questionnaire (BCTQ). The BCTQ evaluates symptom severity (SSS) and functional status (FSS) but has limitations in stratifying CTS severity, particularly in severe cases. Objective: This study aimed to evaluate the utility of the BCTQ in a homogeneous cohort of female workers engaged in repetitive manual tasks, exploring its correlation with objective clinical measures and its performance in detecting CTS severity. Methods: A cross-sectional study was conducted on 24 right-hand-dominant female workers with repetitive occupational tasks. CTS diagnosis was confirmed via clinical and electrodiagnostic criteria. Subjects completed the BCTQ, and correlations between BCTQ scores and objective measures such as median nerve cross-sectional area and nerve conduction studies were analyzed. Statistical analyses included comparisons across CTS severity groups and subgroup evaluations based on age and tenure. Results: The BCTQ demonstrated moderate correlations with objective measures, with a strong correlation between SSS and FSS scores (r = 0.86, p < 0.001). However, the sensitivity of the SSS and FSS was limited, particularly for severe CTS cases; paradoxically lower scores in severe cases may reflect questionnaire limitations or adaptive responses. Targeted questions addressing pain and sensory symptoms showed better sensitivity (>80%) and may guide clinicians in identifying slight CTS cases. Conclusions: While the BCTQ remains a valuable tool for assessing CTS, its limitations necessitate the complementary use of objective diagnostic tools, particularly for severe cases. Future refinements, such as tailored scoring systems and integration with clinical measures, could enhance its diagnostic utility and ensure comprehensive assessment of CTS severity.
(This article belongs to the Special Issue Feature Paper in Physiology and Pathology: 2nd Edition)
Figure 1: The mean score for the questions in the BCTQ SSS (* p < 0.05, statistically significant question compared to Normal).
Figure 2: Mean Symptom Severity Scale scores based on CTS severity. The bars represent the mean SSS scores for each category of carpal tunnel syndrome severity: Normal, Slight, Mild, and Severe (* p < 0.05 compared to Normal).
Figure 3: Scores of questions from the BCTQ SSS questionnaire based on CTS severity. The lines represent the mean SSS scores for each question (S1–S11) across severity categories: Normal, Slight, Mild, and Severe (◆ denotes scores significantly different from the Normal group, p < 0.05).
Figure 4: Mean Functional Status Scale (FSS) scores based on CTS severity. The bars represent the mean FSS scores for each severity category: Normal, Slight, Mild, and Severe.
Figure 5: Scores of questions from the BCTQ FSS questionnaire based on CTS severity. The lines represent the mean FSS scores for each question (F1–F8) across severity categories: Normal, Slight, Mild, and Severe (◆ denotes scores significantly different from the Normal group, p < 0.05).
Figure 6: Relationship between SSS and FSS scores. (A) Mean SSS and FSS scores across CTS severity levels (Normal, Slight, Mild, and Severe); the left y-axis shows SSS scores and the right y-axis shows FSS scores. (B) Correlation between SSS (x-axis) and FSS (y-axis) scores; the blue circle indicates the 95% coverage probability, and the red line represents the linear correlation.
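The statistics the abstract reports, a Pearson correlation between SSS and FSS and per-question sensitivity, reduce to short computations like the sketch below; the arrays are made-up placeholders, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder SSS/FSS score pairs (not study data).
sss = np.array([1.2, 2.1, 2.8, 3.5, 1.9, 3.1])
fss = np.array([1.0, 2.3, 2.5, 3.8, 1.7, 3.3])
r, p = pearsonr(sss, fss)
print(f"r = {r:.2f}, p = {p:.3f}")

# Sensitivity of a screening question = true positives / all CTS-positive.
has_cts = np.array([1, 1, 1, 1, 0, 0], dtype=bool)   # electrodiagnostic truth
flagged = np.array([1, 1, 0, 1, 0, 1], dtype=bool)   # question above cutoff
sensitivity = (flagged & has_cts).sum() / has_cts.sum()
print(f"sensitivity = {sensitivity:.0%}")
```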
28 pages, 13922 KiB  
Article
Multi-Class Guided GAN for Remote-Sensing Image Synthesis Based on Semantic Labels
by Zhenye Niu, Yuxia Li, Yushu Gong, Bowei Zhang, Yuan He, Jinglin Zhang, Mengyu Tian and Lei He
Remote Sens. 2025, 17(2), 344; https://doi.org/10.3390/rs17020344 - 20 Jan 2025
Abstract
In the scenario of limited labeled remote-sensing datasets, model performance is constrained by the insufficient availability of data. Generative model-based data augmentation has emerged as a promising solution to this limitation. While existing generative models perform well in natural scene domains (e.g., faces and street scenes), their performance in remote sensing is hindered by severe data imbalance and the semantic similarity among land-cover classes. To tackle these challenges, we propose the Multi-Class Guided GAN (MCGGAN), a novel network for generating remote-sensing images from semantic labels. Our model features a dual-branch architecture with a global generator that captures the overall image structure and a multi-class generator that improves the quality and differentiation of land-cover types. To integrate these generators, we design a shared-parameter encoder for consistent feature encoding across the two branches, and a spatial decoder that synthesizes outputs from the class generators, preventing overlap and confusion. Additionally, we employ a perceptual loss (L_VGG) to assess perceptual similarity between generated and real images, and a texture matching loss (L_T) to capture fine texture details. To evaluate the quality of image generation, we tested multiple models on two custom datasets (one from Chongzhou, Sichuan Province, and another from Wuzhen, Zhejiang Province, China) and the public dataset LoveDA. The results show that MCGGAN achieves improvements of 52.86 in FID, 0.0821 in SSIM, and 0.0297 in LPIPS compared to the Pix2Pix baseline. We also conducted comparative experiments to assess the semantic segmentation accuracy of a U-Net before and after incorporating the generated images. The results show that data augmentation with the generated images leads to an improvement of 4.47% in FWIoU and 3.23% in OA across the Chongzhou and Wuzhen datasets. These experiments show that MCGGAN can be effectively used as a data augmentation approach to improve the performance of downstream remote-sensing image segmentation tasks.
Figure 1: The network structure of MCGGAN.
Figure 2: The structure of the shared-parameter encoder.
Figure 3: The module structure of the multi-class generator.
Figure 4: The three datasets used for MCGGAN.
Figure 5: The schematic diagram of the ablation experiment plan.
Figure 6: The ablation experiment on the Chongzhou dataset.
Figure 7: The ablation experiment on the Wuzhen dataset.
Figure 8: The loss function for the three dual-branch models in the ablation experiments.
Figure 9: Partial DBGAN-generated images: left, semantic label; middle, generated image; right, real image. (a) Chongzhou and (b) Wuzhen.
Figure 10: The generated results for the Chongzhou dataset.
Figure 11: The generated results for the Wuzhen dataset.
Figure 12: The generated results for the LoveDA dataset.
Figure 13: CAM visualization results of different U-Net layers on real remote-sensing images. I, II, and III represent, respectively, the visual results of the second downsampling module, the bottleneck layer between the encoder and decoder, and the output features of the last upsampling module.
Figure 14: CAM visualization results of different U-Net layers on generated images. I, II, and III represent, respectively, the visual results of the second downsampling module, the bottleneck layer between the encoder and decoder, and the output features of the last upsampling module.
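A minimal sketch of a VGG-based perceptual loss of the kind the abstract calls L_VGG might look as follows; the VGG16 backbone, the chosen feature layer, and the usage weight are assumptions, since the paper's exact configuration is not given here.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    """VGG-based perceptual loss sketch: compare generated and real images
    in the feature space of a frozen VGG. Inputs are assumed to be
    ImageNet-normalized RGB tensors; layer index 16 (around relu3_3) is an
    illustrative choice."""
    def __init__(self, layer_idx: int = 16):
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad_(False)      # VGG stays fixed; only the GAN trains
        self.vgg = vgg.eval()
        self.crit = nn.L1Loss()

    def forward(self, fake, real):
        return self.crit(self.vgg(fake), self.vgg(real))

# Hypothetical usage inside a generator step (the 10.0 weight is a placeholder):
# loss = adv_loss + 10.0 * PerceptualLoss()(fake_img, real_img)
```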
22 pages, 3956 KiB  
Article
Progressive Self-Prompting Segment Anything Model for Salient Object Detection in Optical Remote Sensing Images
by Xiaoning Zhang, Yi Yu, Daqun Li and Yuqing Wang
Remote Sens. 2025, 17(2), 342; https://doi.org/10.3390/rs17020342 - 20 Jan 2025
Abstract
With the continuous advancement of deep neural networks, salient object detection (SOD) in natural images has made significant progress. However, SOD in optical remote sensing images (ORSI-SOD) remains a challenging task due to the diversity of objects and the complexity of backgrounds. The primary challenge lies in generating robust features that effectively integrate both global semantic information for salient object localization and local spatial details for boundary reconstruction. Most existing ORSI-SOD methods rely on pre-trained CNN- or Transformer-based backbones to extract features from ORSIs, followed by multi-level feature aggregation. Given the significant differences between ORSIs and the natural images used in pre-training, the generalization capability of these backbone networks is often limited, resulting in suboptimal performance. Recently, prompt engineering has been employed to enhance the generalization ability of networks in the Segment Anything Model (SAM), an emerging vision foundation model that has achieved remarkable success across various tasks. Despite this success, directly applying the SAM to ORSI-SOD without prompts from manual interaction remains unsatisfactory. In this paper, we propose a novel progressive self-prompting model based on the SAM, termed PSP-SAM, which generates both internal and external prompts to enhance the network and overcome the limitations of SAM in ORSI-SOD. Specifically, domain-specific prompting modules, consisting of both block-shared and block-specific adapters, are integrated into the network to learn domain-specific visual prompts within the backbone, facilitating its adaptation to ORSI-SOD. Furthermore, we introduce a progressive self-prompting decoder module that performs prompt-guided multi-level feature integration and progressively generates stage-wise mask prompts, enabling the prompt-based mask decoders outside the backbone to predict saliency maps in a coarse-to-fine manner. The entire network is trained end-to-end with parameter-efficient fine-tuning. Extensive experiments on three benchmark ORSI-SOD datasets demonstrate that the proposed network achieves state-of-the-art performance.
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1: Prediction results of the original SAM (i.e., zero-shot segmentation), the SAM with manual point prompts, and our method. "★" denotes a point generated by manual interaction.
Figure 2: Main framework of the proposed Progressive Self-prompting Segment Anything Model (PSP-SAM). Both internal and external prompts are generated in PSP-SAM. The domain-specific prompting module (DSPM), which adapts SAM to the ORSI-SOD field, is embedded after every transformer block to learn visual prompts within the backbone. The pre-processing module (PPM), consisting of multiple CBR_1×1 layers, is utilized to reduce the feature channel dimensions. Furthermore, a progressive self-prompting decoder module (PSP-DM) is proposed to conduct prompt-guided multi-level feature fusion and generate stage-wise mask prompts outside the backbone. Through the PSP-DM, the final prediction is produced in a coarse-to-fine manner.
Figure 3: Details of the transformer block and the proposed DSPM. The parameters of the transformer block are frozen to preserve the general knowledge of the backbone, while the parameters of the DSPM are learned through training to adapt to the ORSI-SOD domain.
Figure 4: Details of the prompt generator. CLG_2×2 denotes a 2×2 convolution followed by layer-norm and GELU operations, s is the stride size, and "inchan → outchan" means that the channel number is changed from inchan to outchan.
Figure 5: Detailed architecture of the prompt-based mask decoder. Given the image embedding and the dense embedding, it outputs both the saliency map and the prompt-enhanced feature.
Figure 6: Comparison of PR curves and F-measure curves of different methods on the ORSSD dataset. (a) PR curve. (b) F-measure curve.
Figure 7: Comparison of PR curves and F-measure curves of different methods on the EORSSD dataset. (a) PR curve. (b) F-measure curve.
Figure 8: Comparison of PR curves and F-measure curves of different methods on the ORSI dataset. (a) PR curve. (b) F-measure curve.
Figure 9: Visual comparisons with 10 representative state-of-the-art ORSI-SOD methods, including scenes with multiple tiny objects.
Figure 10: Visual comparisons for the PSP-DM. The generated multi-level mask prompts and the final saliency map are displayed.
Figure 11: Failure cases of PSP-SAM and other state-of-the-art models.
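A minimal sketch of the adapter idea behind the domain-specific prompting modules, a small trainable bottleneck added alongside each frozen transformer block, might look as follows; the bottleneck size and the block-shared versus block-specific split are not reproduced from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter for parameter-efficient fine-tuning: down-project,
    GELU, up-project, plus a residual connection."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)    # start as identity: the adapter
        nn.init.zeros_(self.up.bias)      # adds nothing until trained

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def freeze_and_adapt(blocks: nn.ModuleList, dim: int) -> nn.ModuleList:
    """Freeze the pre-trained transformer blocks (preserving the backbone's
    general knowledge) and create one trainable adapter per block."""
    for block in blocks:
        for p in block.parameters():
            p.requires_grad_(False)
    return nn.ModuleList(Adapter(dim) for _ in blocks)
```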
22 pages, 2909 KiB  
Article
Research and Application of a Multi-Agent-Based Intelligent Mine Gas State Decision-Making System
by Yi Sun and Xinke Liu
Appl. Sci. 2025, 15(2), 968; https://doi.org/10.3390/app15020968 - 20 Jan 2025
Abstract
To address the low efficiency of manual processing and the lack of accuracy in judgment within traditional mine gas safety inspections, this paper designs and implements the Intelligent Mine Gas State Decision-Making System based on large language models (LLMs) and a multi-agent system. The system aims to enhance the accuracy of gas over-limit alarms and improve the efficiency of generating judgment reports. It integrates the reasoning capabilities of LLMs and optimizes agent task allocation and execution efficiency through a hybrid multi-agent orchestration algorithm. Furthermore, the system establishes a comprehensive gas risk assessment knowledge base encompassing historical alarm data, real-time monitoring data, alarm judgment criteria, treatment methods, and relevant policies and regulations. The system also incorporates several techniques, including retrieval-augmented generation with human feedback, tool management, prompt engineering, and asynchronous processing, which further enhance the performance of the LLM in the gas status judgment system. Experimental results indicate that the system effectively improves the efficiency of gas alarm processing and the quality of judgment reports in coal mines, providing solid technical support for accident prevention and management in mining operations.
Figure 1: Comparison of centralized, decentralized, and hybrid systems.
Figure 2: The three-layered structure of the intelligent mine gas state decision-making system, comprising the data layer, core layer, and application layer.
Figure 3: Schematic diagram of the LLM workflow and knowledge base construction.
Figure 4: Hierarchical structure diagram of the functional modules.
Figure 5: Comparison of intelligent Q&A results for GPT-3.5-turbo, GPT-4, and GPT-4o.
Figure 6: Intelligent judgment output results.
Figure 7: Radar chart of expert review scores.
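The retrieval core of a retrieval-augmented generation step like the one the abstract describes can be sketched in a few lines; the embedding function and the prompt format are hypothetical assumptions.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    """Minimal RAG retrieval core: rank knowledge-base entries (alarm
    records, judgment criteria, regulations) by cosine similarity to the
    query embedding and return the top-k passages to prepend to the LLM
    prompt. Embeddings are assumed to come from an external encoder."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    top = np.argsort(d @ q)[::-1][:k]
    return [docs[i] for i in top]

# Hypothetical usage: retrieved context is fused into the judgment prompt.
# context = "\n".join(retrieve(embed(alarm_text), kb_vecs, kb_texts))
# prompt = f"Gas alarm: {alarm_text}\nRelevant records:\n{context}\nJudge:"
```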
15 pages, 1413 KiB  
Article
Effects of Caffeine Intake Combined with Self-Selected Music During Warm-Up on Anaerobic Performance: A Randomized, Double-Blind, Crossover Study
by Bopeng Qiu, Ziyu Wang, Yinkai Zhang, Yusong Cui, Penglin Diao, Kaiji Liu, Juan Del Coso and Chang Liu
Nutrients 2025, 17(2), 351; https://doi.org/10.3390/nu17020351 - 19 Jan 2025
Abstract
Background: Both listening to music during warm-up and consuming caffeine before exercise have independently been shown to enhance athletic performance. However, the potential synergistic effects of combining these strategies remain largely unexplored. To date, only two studies have reported additional benefits of combining music during warm-up with a caffeine dose of 3 mg/kg on taekwondo-specific performance tasks, and these studies did not evaluate whether the combination produces additive or synergistic effects on other types of sports performance. The present study aimed to assess the effects of listening to music alone or combined with caffeine intake on performance in the Wingate anaerobic test (WAnT) in physically active subjects. Methods: Twenty-four physically active male participants took part in this randomized, double-blind, crossover experiment. Participants underwent WAnT performance evaluations under three conditions: (i) no intervention (control; CON); (ii) music plus placebo (Mus + PLA), involving intake of a placebo (maltodextrin) 60 min prior and self-selected high-tempo music during warm-up; and (iii) music plus caffeine (Mus + CAF), involving intake of 3 mg/kg of caffeine 60 min prior and self-selected high-tempo music during warm-up. Under all conditions, participants wore the same Bluetooth headphones (with or without music), performed a 10 min standardized warm-up, and completed the 30 s WAnT with a load of 7.5% of their body weight on a calibrated ergometer. Power output was recorded at a frequency of 1 Hz throughout the exercise. The Feeling Scale was assessed both before and after the exercise test, while heart rate (HR) and the rating of perceived exertion (RPE) were measured immediately following the exercise. Results: Mus + PLA and Mus + CAF significantly improved peak power, mean power, and total work compared with CON (p < 0.05). Furthermore, peak power was higher in Mus + CAF than in Mus + PLA (p = 0.01). Post-exercise HR and RPE showed no significant differences across conditions (p > 0.05). On the Feeling Scale before exercise, the Mus + PLA and Mus + CAF conditions showed significantly higher scores than CON (p < 0.05), while no differences were found after exercise. The perceived fitness metrics displayed no significant differences among conditions (p > 0.05), except for self-perceived power, which was higher in Mus + CAF than in CON (p = 0.03). Conclusions: Self-selected music during warm-up, either alone or combined with caffeine, significantly enhanced several WAnT performance metrics, including peak power, mean power, and total work. Remarkably, combining music with caffeine further improved peak power and increased self-perceived power compared with music alone. While listening to self-selected music during warm-up provided measurable benefits for anaerobic exercise performance, the combination of music and caffeine demonstrated additive effects, making it the optimal strategy for maximizing anaerobic performance.
(This article belongs to the Special Issue Caffeine Intake for Human Health and Exercise Performance)
Figure 1: Experimental design. min = minute.
Figure 2: Effects of self-selected music combined with placebo (Mus + PLA) or caffeine supplementation (Mus + CAF) on Wingate anaerobic test (WAnT) performance, heart rate, and ratings of perceived exertion (RPEs) in physically active subjects compared with a control condition without music or caffeine (CON). (A) peak power; (B) mean power; (C) total work; (D) fatigue index; (E) heart rate; (F) RPE. ns = no statistical difference, * = p ≤ 0.05, and *** = p < 0.001 compared with other conditions.
Figure 3: Effects of self-selected music combined with placebo (Mus + PLA) or caffeine supplementation (Mus + CAF) on the Feeling Scale measured before and after the WAnT in physically active subjects compared with the control condition (CON). (A) Feeling Scale ratings before the WAnT; (B) Feeling Scale ratings after the WAnT. ns = no statistical difference, * = p ≤ 0.05, and *** = p < 0.001 compared with other conditions.
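The WAnT metrics the abstract reports (peak power, mean power, total work) follow directly from the 1 Hz power trace; a minimal sketch, with a fatigue-index definition assumed since the abstract does not state one:

```python
import numpy as np

def wingate_metrics(power_w: np.ndarray):
    """Standard Wingate anaerobic test metrics from power sampled at 1 Hz
    over the 30 s test: peak power, mean power, total work, and a common
    fatigue-index definition, (peak - min) / peak (assumed here)."""
    peak = power_w.max()
    mean = power_w.mean()
    total_work_j = power_w.sum()          # 1 Hz sampling: sum(P * 1 s)
    fatigue_index = (peak - power_w.min()) / peak
    return peak, mean, total_work_j, fatigue_index

# Toy trace: power ramps up, peaks early, then declines over 30 s.
p = np.concatenate([np.linspace(400, 800, 5), np.linspace(800, 450, 25)])
print(wingate_metrics(p))
```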