Search Results (3,080)

Search Parameters:
Keywords = confusion

8 pages, 21447 KiB  
Case Report
Air Travel-Triggered Tension Pneumocephalus Caused by a Frontal Sinus Osteoma: Case Report
by Aleksandar Djurdjevic, Milan Lepic, Jovana Djurdjevic, Svetozar Stankovic and Goran Pavlicevic
Reports 2025, 8(1), 10; https://doi.org/10.3390/reports8010010 - 18 Jan 2025
Abstract
Background and Clinical Significance: Pneumocephalus, an accumulation of air within the cranial cavity, typically arises from trauma or iatrogenic causes. Spontaneous occurrences, however, are rare and linked to various pathologies affecting the paranasal sinuses, the ear, or the skull base. The impact of air travel on individuals with pneumocephalus remains uncertain despite ongoing research. We report a unique case of spontaneous tension pneumocephalus attributed to a frontal sinus osteoma during air travel. Case Presentation: A 55-year-old man presented with headache and dizziness that began during a nine-hour international flight two weeks prior. The symptoms abated after landing but recurred on his return flight, accompanied by confusion the following day. A neurological examination revealed no deficits. CT and MRI scans showed an intraparenchymal air collection in the right frontal lobe, attributed to a frontal sinus osteoma causing a dural tear. Surgical intervention included duroplasty and osteoma removal, with an uncomplicated postoperative recovery. Conclusions: Frontal sinus osteoma-induced tension pneumocephalus is exceedingly rare, with only limited cases reported in the literature. This case shows that air travel may exacerbate intracranial gas dynamics, leading to the development of tension pneumocephalus with a potentially fatal outcome. Full article
21 pages, 2352 KiB  
Article
Decoding Subjective Understanding: Using Biometric Signals to Classify Phases of Understanding
by Milan Lazic, Earl Woodruff and Jenny Jun
AI 2025, 6(1), 18; https://doi.org/10.3390/ai6010018 - 17 Jan 2025
Abstract
The relationship between the cognitive and affective dimensions of understanding has remained unexplored due to the lack of reliable methods for measuring emotions and feelings during learning. Focusing on five phases of understanding—nascent understanding, misunderstanding, confusion, emergent understanding, and deep understanding—this study introduces an AI-driven solution to measure subjective understanding by analyzing physiological activity manifested in facial expressions. To investigate these phases, 103 participants remotely worked on 15 riddles while their facial expressions were video recorded. Action units (AUs) for each phase instance were measured using AFFDEX software. AU patterns associated with each phase were then identified through the application of six supervised machine learning algorithms. Distinct AU patterns were found for all five phases, with gradient boosting machine and random forest models achieving the highest predictive accuracy. These findings suggest that physiological activity can be leveraged to reliably measure understanding. Further, they advance a novel approach for measuring and fostering understanding in educational settings, as well as developing adaptive learning technologies and personalized educational interventions. Future studies should explore how physiological signatures of understanding phases both reflect and influence their associated cognitive processes, as well as the generalizability of this study’s findings across diverse populations and learning contexts (A suite of AI tools was employed in the development of this paper: (1) ChatGPT4o (for writing clarity and reference checking), (2) Grammarly (for grammar and editorial corrections), and (3) ResearchRabbit (reference management)). Full article
13 pages, 532 KiB  
Article
Inflammatory Markers and Severity in COVID-19 Patients with Clostridioides Difficile Co-Infection: A Retrospective Analysis Including Subgroups with Diabetes, Cancer, and Elderly
by Teodor Cerbulescu, Flavia Ignuta, Uma Shailendri Rayudu, Maliha Afra, Ovidiu Rosca, Adrian Vlad and Stana Loredana
Biomedicines 2025, 13(1), 227; https://doi.org/10.3390/biomedicines13010227 - 17 Jan 2025
Abstract
Background and Objectives: The interplay of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection and Clostridioides difficile infection (CDI) poses a critical clinical challenge. The resultant inflammatory milieu and its impact on outcomes remain incompletely understood, especially among vulnerable subgroups such as elderly patients, those with diabetes, and individuals with cancer. This study aimed to characterize inflammatory markers and composite inflammatory severity scores—such as Acute Physiology and Chronic Health Evaluation II (APACHE II), Confusion, Urea, Respiratory rate, Blood pressure, and age ≥ 65 years (CURB-65), National Early Warning Score (NEWS), and the Systemic Immune-Inflammation Index (SII)—in hospitalized Coronavirus Disease 2019 (COVID-19) patients with and without CDI, and to evaluate their prognostic implications across key clinical subgroups. Methods: We conducted a retrospective, single-center study of 240 hospitalized adults with Reverse Transcription Polymerase Chain Reaction (RT-PCR)-confirmed COVID-19 between February 2021 and March 2023. Of these, 98 had concurrent CDI. We collected baseline demographics, comorbidities, and laboratory parameters including C-reactive protein (CRP), Interleukin-6 (IL-6), ferritin, neutrophil and lymphocyte counts, albumin, platelet counts, and calculated indices (C-reactive protein to Albumin Ratio (CAR), Neutrophil-to-Lymphocyte Ratio (NLR), Prognostic Nutritional Index (PNI), SII). Patients were stratified by CDI status and analyzed for inflammatory marker distributions, severity scores (APACHE II, CURB-65, NEWS), and outcomes (Intensive Care Unit (ICU) admission, mechanical ventilation, mortality). Subgroup analyses included diabetes, elderly (≥65 years), and cancer patients. Statistical comparisons employed t-tests, chi-square tests, and logistic regression models. 
Results: Patients with CDI demonstrated significantly higher CRP, IL-6, SII, and CAR, coupled with lower albumin and PNI (p < 0.05). They also had elevated APACHE II, CURB-65, and NEWS scores. CDI-positive patients experienced increased ICU admission (38.8% vs. 20.5%), mechanical ventilation (24.5% vs. 12.9%), and mortality (22.4% vs. 10.6%, all p < 0.05). Subgroup analyses revealed more pronounced inflammatory derangements and worse outcomes in elderly, diabetic, and cancer patients with CDI. Conclusions: Concurrent CDI intensifies systemic inflammation and adverse clinical trajectories in hospitalized COVID-19 patients. Elevations in inflammatory markers and severity scores predict worse outcomes, especially in high-risk subgroups. Early recognition and targeted interventions, including infection control and supportive measures, may attenuate disease severity and improve patient survival. Full article
(This article belongs to the Section Microbiology in Human Health and Disease)
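The CURB-65 score named in the abstract above is a simple point count: one point each for Confusion, Urea > 7 mmol/L, Respiratory rate ≥ 30/min, low Blood pressure (systolic < 90 or diastolic ≤ 60 mmHg), and age ≥ 65 years. A minimal sketch, using the standard published thresholds rather than anything taken from this paper:

```python
def curb65_score(confusion: bool, urea_mmol_l: float, resp_rate: int,
                 sbp: int, dbp: int, age: int) -> int:
    """Compute the CURB-65 pneumonia severity score (0-5).

    One point each for: Confusion, Urea > 7 mmol/L, Respiratory
    rate >= 30/min, low Blood pressure (systolic < 90 or
    diastolic <= 60 mmHg), and age >= 65 years.
    """
    score = 0
    score += int(confusion)
    score += int(urea_mmol_l > 7.0)
    score += int(resp_rate >= 30)
    score += int(sbp < 90 or dbp <= 60)
    score += int(age >= 65)
    return score

# A 70-year-old with confusion but normal urea, respiration, and BP scores 2.
print(curb65_score(True, 5.0, 18, 120, 80, 70))  # → 2
```

Higher scores indicate greater severity; scores of 3-5 are conventionally associated with consideration of inpatient or ICU-level care.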
28 pages, 14219 KiB  
Article
Classification and Analysis of Agaricus bisporus Diseases with Pre-Trained Deep Learning Models
by Umit Albayrak, Adem Golcuk, Sinan Aktas, Ugur Coruh, Sakir Tasdemir and Omer Kaan Baykan
Agronomy 2025, 15(1), 226; https://doi.org/10.3390/agronomy15010226 - 17 Jan 2025
Abstract
This research evaluates 20 advanced convolutional neural network (CNN) architectures for classifying mushroom diseases in Agaricus bisporus, utilizing a custom dataset of 3195 images (2464 infected and 731 healthy mushrooms) captured under uniform white-light conditions. The consistent illumination in the dataset enhances the robustness and practical usability of the assessed models. Using a weighted scoring system that incorporates precision, recall, F1-score, area under the ROC curve (AUC), and average precision (AP), ResNet-50 achieved the highest overall score of 99.70%, demonstrating outstanding performance across all disease categories. DenseNet-201 and DarkNet-53 followed closely, confirming their reliability in classification tasks with high recall and precision values. Confusion matrices and ROC curves further validated the classification capabilities of the models. These findings underscore the potential of CNN-based approaches for accurate and efficient early detection of mushroom diseases, contributing to more sustainable and data-driven agricultural practices. Full article
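The weighted scoring system described above, which ranks models by combining precision, recall, F1-score, AUC, and AP into a single figure, can be sketched as a weighted sum. The weights and metric values below are illustrative assumptions; the abstract does not state the paper's actual weights:

```python
def weighted_model_score(metrics: dict[str, float],
                         weights: dict[str, float]) -> float:
    """Combine per-model evaluation metrics into a single ranking score.

    `metrics` maps metric names to values in [0, 1]; `weights` uses the
    same keys and must sum to 1 so the result also stays in [0, 1].
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * metrics[k] for k in weights)

# Equal weighting of five metrics for hypothetical per-model values.
resnet50 = {"precision": 0.997, "recall": 0.996, "f1": 0.997,
            "auc": 0.999, "ap": 0.996}
equal = {k: 0.2 for k in resnet50}
print(round(weighted_model_score(resnet50, equal), 4))  # → 0.997
```

Unequal weights would let a deployment prioritize, say, recall over precision when missed infections are costlier than false alarms.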
Figure 1: General view and lighting system details of the portable imaging apparatus. (a) General view, highlighting the modular design, power supply compartment, and lighting platform setup. (b) Detailed view of the lighting system, highlighting the 45-degree angled lighting channels equipped with diffusers to minimize glare and ensure uniform illumination of the mushroom specimens.
Figure 2: Top and front view of the custom portable imaging apparatus, illustrating the internal lighting platform, smartphone-based imaging system, and mushroom placement.
Figure 3: Development process of the portable imaging apparatus, illustrating key stages from development to final design.
Figure 4: Example images of Agaricus bisporus classes (Healthy, Bacterial Blotch, Dry Bubble, Cobweb, Wet Bubble), captured under controlled conditions for dataset creation.
Figure 5: Image acquisition process showing mushrooms photographed from random angles (α°) and in an upright position (90°); the dotted lines indicate the camera's field of view.
Figure 6: Workflow diagram for Agaricus bisporus disease classification, from image acquisition to model evaluation using CNN architectures.
Figure 7: Confusion matrices for the evaluated CNN models across the five categories: w0hl (Healthy), w_bb (Bacterial Blotch), w_db (Dry Bubble), w_cw (Cobweb), and w_wb (Wet Bubble). Each matrix highlights the model's ability to distinguish true from predicted labels, with minimal misclassifications across all categories.
Figure 8: ROC curves for the evaluated CNN models across the same five categories, plotting true positive rate (sensitivity) against false positive rate per class, with AUC values indicating overall performance.
Figure 9: AUC heatmap for classifiers and classes.
Figure 10: F1-score heatmap for classifiers and classes.
Figure 11: Precision heatmap for classifiers and classes.
Figure 12: Recall heatmap for classifiers and classes.
Figure 13: Specificity heatmap for classifiers and classes.
Figure 14: Average precision (AP) heatmap for classifiers and classes.
Figure 15: Overall average precision (AP) for classifiers.
Figure 16: Overall area under the curve (AUC) for classifiers.
Figure 17: Overall F1-score for classifiers.
Figure 18: Overall precision for classifiers.
Figure 19: Overall recall for classifiers.
Figure 20: Overall specificity for classifiers.
25 pages, 6330 KiB  
Article
FSDN-DETR: Enhancing Fuzzy Systems Adapter with DeNoising Anchor Boxes for Transfer Learning in Small Object Detection
by Zhijie Li, Jiahui Zhang, Yingjie Zhang, Dawei Yan, Xing Zhang, Marcin Woźniak and Wei Dong
Mathematics 2025, 13(2), 287; https://doi.org/10.3390/math13020287 - 17 Jan 2025
Abstract
The advancement of Transformer models in computer vision has rapidly spurred numerous Transformer-based object detection approaches, such as DEtection TRansformer. Although DETR’s self-attention mechanism effectively captures the global context, it struggles with fine-grained detail detection, limiting its efficacy in small object detection where noise can easily obscure or confuse small targets. To address these issues, we propose Fuzzy System DNN-DETR involving two key modules: Fuzzy Adapter Transformer Encoder and Fuzzy Denoising Transformer Decoder. The fuzzy Adapter Transformer Encoder utilizes adaptive fuzzy membership functions and rule-based smoothing to preserve critical details, such as edges and textures, while mitigating the loss of fine details in global feature processing. Meanwhile, the Fuzzy Denoising Transformer Decoder effectively reduces noise interference and enhances fine-grained feature capture, eliminating redundant computations in irrelevant regions. This approach achieves a balance between computational efficiency for medium-resolution images and the accuracy required for small object detection. Our architecture also employs adapter modules to reduce re-training costs, and a two-stage fine-tuning strategy adapts fuzzy modules to specific domains before harmonizing the model with task-specific adjustments. Experiments on the COCO and AI-TOD-V2 datasets show that FSDN-DETR achieves an approximately 20% improvement in average precision for very small objects, surpassing state-of-the-art models and demonstrating robustness and reliability for small object detection in complex environments. Full article
(This article belongs to the Special Issue Image Processing and Machine Learning with Applications)
Figure 1: Small object detection performance and transfer learning capabilities of different models, pre-trained on COCO and fine-tuned with varying percentages of AI-TOD-V2 data. The graph shows Average Precision (AP) across training data percentages, comparing DINO-DETR, DQ-DETR, and FSDN-DETR on very tiny, tiny, and medium object categories.
Figure 2: Architecture of the FSDN-DETR framework. The FATE enhances feature representation and adaptability, while the FDTD mitigates noise and refines detection results, improving robustness under complex, noisy conditions.
Figure 3: FATE module architecture. FATE is introduced after the patch-embedding stage in the Vision Transformer and routes features to both the Transformer encoder and the FACM; the FST and AMS are integrated to handle uncertainties, facilitating robust cross-domain learning.
Figure 4: FDTD module architecture. The FDTD integrates FSA in the Transformer decoder, using adaptive attention scores based on noise levels to prioritize less noisy information and improve detection robustness in noisy environments.
Figure 5: Training strategy of the FSDN-DETR model: pretraining to initialize the backbone with general features, followed by staged fine-tuning that selectively tunes the fuzzy-logic and Transformer components for specific datasets.
Figure 6: Examples of erroneous and correct detections in a low-light scenario; red boxes indicate misidentified regions, green boxes correctly detected areas.
17 pages, 3111 KiB  
Article
Quality Improvement Project to Change Prescribing Habits of Surgeons from Combination Opioids Such as Hydrocodone/Acetaminophen to Single-Agent Opioids Such as Oxycodone in Pediatric Postop Pain Management
by Muhammad Aishat, Alicia Segovia, Throy Campbell, Lorrainea Williams, Kristy Reyes, Tyler Hamby, David Farbo, Meredith Rockeymoore Brooks and Artee Gandhi
Anesth. Res. 2025, 2(1), 3; https://doi.org/10.3390/anesthres2010003 - 17 Jan 2025
Abstract
Background: While multimodal analgesia is the standard of care for postoperative pain relief, opioid medications continue to be a part of the treatment regimen, especially for more invasive surgeries such as spinal fusion, craniofacial reconstruction, laparotomy, and others. In pediatric patients, safe usage, storage, and dosing are especially important, along with clear instructions to caregivers on how to manage their child’s pain. Combination opioids such as hydrocodone with acetaminophen and acetaminophen with codeine are the most commonly prescribed opioid medications for postoperative pain control. However, these combination products can lead to acetaminophen toxicity, limit the ability to prescribe acetaminophen or ibuprofen, and add to caregiver confusion. Administering acetaminophen and ibuprofen individually rather than in combination products allows the maximal dosing of these nonopioid medications. The primary aim of this quality improvement (QI) project was to increase the utilization of single-agent opioids for postoperative pain control, primarily oxycodone, by the various surgical groups here at Cook Children’s Medical Center (CCMC). Methods: The project setting was a tertiary-level children’s hospital with a level 2 trauma center, performing over 20,000 surgeries annually. The opioid stewardship committee (OSC) mapped the steps and overlapping activities in the intervention that led to changes in providers’ prescription practices. A Plan–Do–Study–Act continuous improvement cycle allowed for an assessment and modification of implementation strategies. Statistical control process charts were used to detect the average percentage change in surgical specialties using single-agent opioid therapy. Data were monitored for three periods: one-year pre-intervention, one-year post-intervention, and one-year sustainment periods. 
Results: There were 4885 (41%) pre-intervention procedures, 3973 (33%) post-intervention procedures, and 3180 (26%) sustainment period procedures that received opioids. During the pre-intervention period, the average proportion of single-agent opioids prescribed was 8%. This average shifted to 89% for the first five months of the post-intervention period, then to 91% for the remainder of the study. Conclusions: The methodical application of process improvement strategies can result in a sustained change from outpatient post-surgical combination opioid prescriptions to single-agent opioid prescriptions in multiple surgical departments. Full article
Figure 1: Key driver diagram.
Figure 2: Educational content on postoperative pain given to all families after a surgical procedure, covering how to assess a child's pain, ways to manage pain without medication, the primary pain medications acetaminophen and ibuprofen, and opioid information for breakthrough pain, including safe dosing, storage, and disposal.
Figure 3: Educational content on opioid safety given to all families after being prescribed opioid medication, covering the use of opioids for acute pain, dosing, storage, and safe disposal of unused medications.
Figure 4: Process control chart of the monthly mean percentage of single-agent opioids prescribed across all surgical departments, represented by the centerline (CL), during the pre-intervention, post-intervention, and sustainment periods. Three-SD upper (UCL) and lower (LCL) control limits are displayed over three years; the blue line shows the increase and sustained use of single-agent opioid medications.
Figure 5: (a,b) Table graph depicting the overall decrease in the number of opioid doses per postoperative opioid discharge prescription from 2018 to 2019 and 2021 to 2023 (the 2020 data are similar to 2021); times of surgeon meetings are noted in (a).
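The ±3 SD control limits used in this study's process control charts can be sketched for a simple individuals chart. Note that formal SPC charts for proportion data estimate sigma from the binomial model (a p-chart), so using the plain sample standard deviation here is a simplifying assumption, and the monthly percentages below are hypothetical:

```python
from statistics import mean, stdev

def control_limits(samples: list[float]) -> tuple[float, float, float]:
    """Return (LCL, CL, UCL) for an individuals control chart.

    CL is the sample mean; the limits sit three sample standard
    deviations either side of it. Points outside the limits signal
    special-cause variation, e.g. an intervention taking effect.
    """
    cl = mean(samples)
    s = stdev(samples)
    return cl - 3 * s, cl, cl + 3 * s

# Hypothetical monthly % of single-agent opioid prescriptions.
monthly_pct = [88, 90, 91, 89, 92, 90, 91, 93, 89, 90, 92, 91]
lcl, cl, ucl = control_limits(monthly_pct)
print(round(cl, 2))  # → 90.5
```

A shift like the one reported (8% pre-intervention to 89% post) would fall far outside pre-intervention control limits, which is exactly how the chart detects a sustained process change.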
13 pages, 527 KiB  
Article
Classifying Aviation Safety Reports: Using Supervised Natural Language Processing (NLP) in an Applied Context
by Michael D. New and Ryan J. Wallace
Safety 2025, 11(1), 7; https://doi.org/10.3390/safety11010007 - 16 Jan 2025
Abstract
This paper presents a practical approach to classifying aviation safety reports in an operational context. The goals of the research are as follows: (a) successfully demonstrate a replicable, practical methodology leveraging Natural Language Processing (NLP) to classify aviation safety report narratives; (b) determine the number of reports (per class) required to train the NLP model to achieve an F1 performance score greater than 0.90 consistently; and, (c) demonstrate the model could be implemented locally, within the confines of a typical corporate infrastructure (i.e., behind the firewall) to allay information security concerns. The authors purposefully sampled 425 safety reports from 2019 to 2021 from a university flight training program. The authors varied the number of reports used to train an NLP model to classify narrative safety reports into three separate event categories. The NLP model’s performance was evaluated both with and without distractor data, running 30 iterations at each training level. NLP model success was measured using a confusion matrix and calculating Macro Average F1-Scores. Parametric testing was conducted on macro average F1 score performance using an ANOVA and post hoc Levene statistic. We determined that 60 training samples were required to consistently achieve a macro average F1-Score above the established 0.90 performance threshold. In future studies, we intend to expand this line of research to include multi-tiered analysis to support classification within a safety taxonomy, enabling improved root cause analysis. Full article
(This article belongs to the Special Issue Aviation Safety—Accident Investigation, Analysis and Prevention)
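The macro average F1-score used as the success threshold above is the unweighted mean of per-class F1 values computed from the confusion matrix, so every class counts equally regardless of how many reports it contains. A minimal sketch (the 3×3 matrix is a hypothetical example, not data from the study):

```python
def macro_f1(cm: list[list[int]]) -> float:
    """Macro-averaged F1 from a square confusion matrix.

    Rows are true classes, columns are predicted classes. Macro
    averaging takes the unweighted mean of per-class F1, treating
    each class equally regardless of its support.
    """
    n = len(cm)
    f1s = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp  # predicted c, true other
        fn = sum(cm[c][p] for p in range(n)) - tp  # true c, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / n

# Three event categories; one report of class 0 misfiled as class 1.
cm = [[9, 1, 0],
      [0, 10, 0],
      [0, 0, 10]]
print(round(macro_f1(cm), 3))  # → 0.967
```

Because macro averaging ignores class frequencies, a rare event category with poor F1 drags the score down as much as a common one, which is appropriate when every safety category matters.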
11 pages, 1211 KiB  
Article
Simultaneous Classification of Objects with Unknown Rejection (SCOUR) Using Infra-Red Sensor Imagery
by Adam Cuellar, Daniel Brignac, Abhijit Mahalanobis and Wasfy Mikhael
Sensors 2025, 25(2), 492; https://doi.org/10.3390/s25020492 - 16 Jan 2025
Abstract
Recognizing targets in infra-red images is an important problem for defense and security applications. A deployed network must not only recognize the known classes, but it must also reject any new or unknown objects without confusing them with one of the known classes. Our goal is to enhance the ability of existing (or pretrained) classifiers to detect and reject unknown classes. Specifically, we do not alter the training strategy of the main classifier, so its performance on known classes remains unchanged. Instead, we introduce a second network (trained using regression) that uses the decision of the primary classifier to produce a class-conditional score indicating whether an input object is indeed a known object. This is performed in a Bayesian framework where the classification confidence of the primary network is combined with the class-conditional score of the secondary network to accurately separate unknown objects from the known target classes. Most importantly, our method does not require any examples of out-of-distribution (OOD) imagery for training the second network. For illustrative purposes, we demonstrate the effectiveness of the proposed method using the CIFAR-10 dataset. Ultimately, our goal is to classify known targets in infra-red images while improving the ability to reject unknown classes. Towards this end, we train and test our method on a public domain medium-wave infra-red (MWIR) dataset provided by the US Army for the development of automatic target recognition (ATR) algorithms. The results of this experiment show that the proposed method outperforms other state-of-the-art methods in rejecting unknown target types while accurately classifying the known ones. Full article
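The Bayesian fusion described in this abstract, combining the primary classifier's confidence with the secondary network's class-conditional score, might look like the following toy sketch. The independence assumption and the exact fusion form are illustrative choices, not taken from the paper:

```python
def known_posterior(clf_confidence: float, cond_score: float,
                    prior_known: float = 0.5) -> float:
    """Fuse two scores into a posterior probability that the input
    belongs to a known class.

    Treats the primary classifier's confidence and the secondary
    network's class-conditional score as independent evidence for the
    'known' hypothesis — a simplifying assumption for illustration.
    """
    evidence_known = clf_confidence * cond_score * prior_known
    evidence_unknown = ((1 - clf_confidence) * (1 - cond_score)
                        * (1 - prior_known))
    return evidence_known / (evidence_known + evidence_unknown)

# High confidence and a high conditional score → near-certain 'known'.
print(round(known_posterior(0.95, 0.9), 3))  # → 0.994
```

Thresholding this posterior gives an accept/reject decision: inputs that the primary network finds confident but the secondary network scores as atypical for the predicted class fall below the threshold and are rejected as unknown.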
Figure 1: Overview of the proposed SCOUR framework for simultaneous object classification and unknown rejection.
Figure 2: Example target chips from the DSIAC dataset, representing different vehicle classes, including both military and civilian vehicles.
Figure 3: ROC curves for open-set recognition on the DSIAC dataset with tracked vehicles known and wheeled vehicles unknown, comparing our approach with ARPL + CS, CAC, and Good Classifier.
Figure 4: ROC curves for open-set recognition on the DSIAC dataset with wheeled vehicles known and tracked vehicles unknown, comparing the same methods.
Figure 5: ROC curves for open-set recognition on the DSIAC dataset, comparing our approach with ARPL + CS, CAC, and Good Classifier.
12 pages, 543 KiB  
Protocol
The Mental Health of Older People Living in Nursing Homes in Northern Portugal: A Cross-Sectional Study Protocol
by Cláudia Rodrigues, Sandra Carreira, Rui Novais, Fátima Braga, Silvana Martins and Odete Araújo
Nurs. Rep. 2025, 15(1), 24; https://doi.org/10.3390/nursrep15010024 - 16 Jan 2025
Abstract
Background/Objectives: In Portugal, evidence regarding the mental health of institutionalized older people is limited, leaving this area poorly described and the mental health needs of this population largely unknown. This research aims to describe the mental health of older persons residing in nursing homes in Northern Portugal. Methods: A cross-sectional study will be conducted. We estimate that 567 participants will be recruited through convenience sampling. Potential participants must live in nursing homes in Northern Portugal, be aged 65 years or older, and exhibit cognitive impairment at an initial or intermediate stage. Ten web survey questionnaires will be administered to the participants, including one sociodemographic and health questionnaire and nine mental health assessment instruments evaluating fear of falling; sleep quality; frailty; anxiety, depression, and stress; loneliness and social isolation; risk of acute confusion; cognition; emotional literacy; and perceived hope. Data will be analyzed by employing descriptive, cluster, inferential, and bivariate analyses, with multiple regression models included. The study and the research protocol were submitted to and approved by the Ethics Committee of a major public university in Northern Portugal (CEICVS 007/2025). Expected Results: This is a pioneering study in Portugal, representing the first attempt to assess the mental health of older nursing home residents. Our study will enhance the understanding of the mental and multifactorial health needs of this population through a comprehensive description of their mental health, and sociodemographic and health characteristics. Full article
Figure 1. Procedure before data collection.
14 pages, 3208 KiB  
Article
Advancing Hydrogel-Based 3D Cell Culture Systems: Histological Image Analysis and AI-Driven Filament Characterization
by Lucio Assis Araujo Neto, Alessandra Maia Freire and Luciano Paulino Silva
Biomedicines 2025, 13(1), 208; https://doi.org/10.3390/biomedicines13010208 - 15 Jan 2025
Abstract
Background: Machine learning is used to analyze images by training algorithms on data to recognize patterns and identify objects, with applications in various fields, such as medicine, security, and automation. Meanwhile, histological cross-sections, whether longitudinal or transverse, expose layers of tissues or tissue mimetics, which provide crucial information for microscopic analysis. Objectives: This study aimed to employ the Google platform “Teachable Machine” to apply artificial intelligence (AI) in the interpretation of histological cross-section images of hydrogel filaments. Methods: The production of 3D hydrogel filaments involved different combinations of sodium alginate and gelatin polymers, as well as a cross-linking agent, and subsequent stretching until rupture using an extensometer. Cross-sections of stretched and unstretched filaments were created and stained with hematoxylin and eosin. Using the Teachable Machine platform, images were grouped and trained for subsequent prediction. Results: Over six hundred histological cross-section images were obtained and stored in a virtual database. Each hydrogel combination exhibited variations in coloration, and some morphological structures remained consistent. The AI efficiently identified and differentiated images of stretched and unstretched filaments. However, some confusion arose when distinguishing among variations in hydrogel combinations. Conclusions: Therefore, the image prediction tool for biopolymeric hydrogel histological cross-sections using Teachable Machine proved to be an efficient strategy for distinguishing stretched from unstretched filaments. Full article
(This article belongs to the Special Issue 3D Cell Culture Systems for Biomedical Research)
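The stretched/unstretched classification result reported above can be summarised from the stated counts (26 and 15 correct, 5 and 1 misclassified, out of 47 images). A small sketch reconstructing the 2×2 confusion matrix and checking the derived proportions; the row/column convention (rows = true class) is an assumption of the example.

```python
import numpy as np

# 2x2 confusion matrix from the counts reported for the F/FE prediction
# (rows = true class, columns = predicted class; F = unstretched, FE = stretched).
cm = np.array([[26, 1],    # true F: 26 correctly identified, 1 predicted as FE
               [5, 15]])   # true FE: 5 predicted as F, 15 correctly identified

total = cm.sum()                       # 47 images in all
accuracy = np.trace(cm) / total        # fraction on the diagonal
proportions = cm / total * 100         # percentages as quoted in the caption
```

The computed percentages (55.32%, 31.91%, 10.64%, 2.13%) match the values quoted for the figure, and the overall accuracy works out to 41/47 ≈ 87%.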
Figure 1. Schematic representation of the Teachable Machine tool used as a machine learning-based artificial intelligence for predicting image classifications of filaments. (A) Initial model structure showing the positions of each structure and its specific function. (B) Encoding used for the training and prediction of images of stretched (FE) and unstretched (F) filament sections. (C) Encoding used for the training and prediction of images of sections for the seven distinct groups of hydrogel filaments produced.
Figure 2. Histological sections of unstretched hydrogel filaments: (A,C,E,G,I,K,M) representative images of longitudinal sections; (B,D,F,H,J,L,N) representative images of transverse sections. The black mark in the bottom right corner of each image represents 100 μm.
Figure 3. Histological sections of stretched hydrogel filaments: (A,C,E,G,I,K,M) representative images of longitudinal sections; (B,D,F,H,J,L,N) representative images of transverse sections. The black mark in the bottom right corner of each image represents 100 μm.
Figure 4. Confusion matrix of the prediction for stretched and unstretched filaments. This matrix presents the proportions corresponding to the identifications of the 47 images, based on the class assignments of F and FE. In total, 26 images belonging to class F (55.32%) were correctly identified as such, and 15 images from class FE (31.91%) were correctly identified as belonging to that class. Five images from class FE (10.64%) were misclassified as F, and only one image from class F (2.13%) was misclassified as FE.
Figure 5. Confusion matrix of the prediction for the seven distinct types of hydrogels. This matrix presents the proportions corresponding to the identifications of the forty-seven images, based on the inputs from the classes DMEM-6, DMEM-8, DMEM-10, DMEM-11, DMEM-12, DMEM-14, and DMEM-15. It reveals a significant disparity in the AI’s prediction classifications.
14 pages, 5093 KiB  
Article
In Situ Classification of Original Rocks by Portable Multi-Directional Laser-Induced Breakdown Spectroscopy Device
by Mengyang Zhang, Hongbo Fu, Huadong Wang, Feifan Shi, Saifullah Jamali, Zongling Ding, Bian Wu and Zhirong Zhang
Chemosensors 2025, 13(1), 18; https://doi.org/10.3390/chemosensors13010018 - 15 Jan 2025
Abstract
In situ rapid classification of rock lithology is crucial in various fields, including geological exploration and petroleum logging. Laser-induced breakdown spectroscopy (LIBS) is particularly well-suited for in situ online analysis due to its rapid response time and minimal sample preparation requirements. To facilitate in situ raw rock discrimination analysis, a portable LIBS device was developed specifically for outdoor use. This device built upon a previous multi-directional optimization scheme and integrated machine learning to classify seven types of original rock samples: mudstone, basalt, dolomite, sandstone, conglomerate, gypsolyte, and shale from oil logging sites. Initially, spectral data were collected from random areas of each rock sample, and a series of pre-processing steps and data dimensionality reduction were performed to enhance the accuracy and efficiency of the LIBS device. Subsequently, four classification algorithms—linear discriminant analysis (LDA), K-nearest neighbor (KNN), support vector machine (SVM), and extreme gradient boosting (XGBoost)—were employed for classification discrimination. The results were evaluated using a confusion matrix. The final average classification accuracies achieved were 95.71%, 93.57%, 92.14%, and 98.57%, respectively. This work not only demonstrates the effectiveness of the portable LIBS device in classifying various original rock types, but it also highlights the potential of the XGBoost algorithm in improving LIBS analytical performance in field scenarios and geological applications, such as oil logging sites. Full article
(This article belongs to the Special Issue Application of Laser-Induced Breakdown Spectroscopy, 2nd Edition)
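The dimensionality-reduction-plus-classifier workflow described above (PCA features fed into LDA, KNN, and SVM) can be sketched with scikit-learn on synthetic stand-in spectra. This is not the paper's pipeline: the real LIBS data, the pre-processing and outlier-rejection steps, and the XGBoost model are not reproduced here, and the class structure, spectrum length, and PCA component count are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for LIBS spectra: 7 rock classes, 50 spectra each,
# 500 spectral channels, each class offset by a distinct mean level.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 500)) for c in range(7)])
y = np.repeat(np.arange(7), 50)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
}
scores = {}
for name, clf in models.items():
    # Standardise, reduce to the first three principal components, classify.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=3), clf)
    scores[name] = pipe.fit(Xtr, ytr).score(Xte, yte)
```

On well-separated synthetic classes all three pipelines reach near-perfect test accuracy; on real spectra the ranking between models is what the confusion matrices in the paper evaluate.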
Figure 1. Schematic diagram of the portable LIBS system.
Figure 2. Portable LIBS device digital model.
Figure 3. Seven types of rock samples.
Figure 4. Rock sample spectra and ablation crater images.
Figure 5. Spectra data processing flow.
Figure 6. Confusion matrix.
Figure 7. Spectral integral intensities for 50 laser pulses.
Figure 8. (a) The RSD of spectral integral intensity for 350 collection points before outlier rejection; (b) the RSD of spectral integral intensity for 350 collection points after outlier rejection.
Figure 9. Distribution of the first three PCs features for seven rock samples.
Figure 10. The loading plots of the first three PCs.
Figure 11. Confusion matrix for LDA model training set and test set.
Figure 12. Confusion matrix for KNN model training set and test set.
Figure 13. Confusion matrix for SVC model training set and test set.
Figure 14. Confusion matrix for XGBoost model training set and test set.
Figure 15. Changes in prediction accuracy with increasing number of PCs for four models.
10 pages, 3037 KiB  
Proceeding Paper
Comparative Study of Asparagus Production and Quality in Two Coastal Regions of Peru Based on Meteorological Conditions for Crop Productivity Optimization
by Santiago Castillo, Patrick Villamizar, Diego Piñan, Gabriela Huaynate and Antonio Angulo
Eng. Proc. 2025, 83(1), 14; https://doi.org/10.3390/engproc2025083014 - 15 Jan 2025
Abstract
This study focuses on remote sensing and monitoring of asparagus crops in the provinces of Ica and Trujillo, highlighting their importance in global food security. Using satellite images and temperature data, productivity was compared using the NDWI, NDVI, and EVI indices. The Grad-CAM technique was used to interpret the AlexNet Convolutional Neural Network (CNN) model, with the aim of improving productivity. Although AlexNet validated the satellite images, it showed some confusion in regions of medium and low productivity. The model, supported by Grad-CAM, will contribute to the monitoring of optimal climatic conditions. Full article
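The NDWI, NDVI, and EVI indices used above for the productivity comparison have standard definitions in terms of band reflectances. A small sketch follows; the band values in the usage note are illustrative, and the EVI coefficients shown are the commonly used MODIS defaults, not necessarily those used in this study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the usual MODIS coefficients
    (G, C1, C2, L are assumptions; adjust for the sensor actually used)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

For example, a healthy-vegetation pixel with reflectances NIR = 0.5, Red = 0.1, Green = 0.2, Blue = 0.05 gives a high NDVI and a negative NDWI, as expected for dense canopy rather than open water.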
Figure 1. Salaverry district, Trujillo, La Libertad (a), and Salas district, Ica, Ica (b).
Figure 2. Images ready to enter the Salaverry–Trujillo (a) and Salas–Ica networks (b).
Figure 3. Neural network training: La Libertad.
Figure 4. Confusion matrices: La Libertad.
Figure 5. Grad-CAM of La Libertad.
Figure 6. Neural network training: Ica.
Figure 7. Confusion matrices: Ica.
Figure 8. Grad-CAM of Ica.
22 pages, 12031 KiB  
Article
Quantum-Cognitive Neural Networks: Assessing Confidence and Uncertainty with Human Decision-Making Simulations
by Milan Maksimovic and Ivan S. Maksymov
Big Data Cogn. Comput. 2025, 9(1), 12; https://doi.org/10.3390/bdcc9010012 - 14 Jan 2025
Abstract
Contemporary machine learning (ML) systems excel in recognising and classifying images with remarkable accuracy. However, like many computer software systems, they can fail by generating confusing or erroneous outputs or by deferring to human operators to interpret the results and make final decisions. In this paper, we employ the recently proposed quantum tunnelling neural networks (QT-NNs) inspired by human brain processes alongside quantum cognition theory to classify image datasets while emulating human perception and judgment. Our findings suggest that the QT-NN model provides compelling evidence of its potential to replicate human-like decision-making. We also reveal that the QT-NN model can be trained up to 50 times faster than its classical counterpart. Full article
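The QT-NN described above uses quantum tunnelling as an activation function. As a purely illustrative sketch of the underlying physics — not the authors' QT-NN implementation — the textbook transmission probability of a particle through a rectangular barrier maps an input "energy" smoothly into (0, 1), the range expected of a probabilistic activation. The dimensionless units, barrier parameters, and vectorisation here are all assumptions of the example.

```python
import numpy as np

def tunnelling_activation(E, V0=1.0, a=1.0):
    """Transmission probability through a rectangular barrier of height V0
    and width a, in dimensionless units (2m / hbar^2 = 1).

    For E < V0 the particle tunnels (sinh branch); for E >= V0 it passes
    over the barrier (sin branch). Either way the output lies in (0, 1).
    """
    E = np.asarray(E, dtype=float)
    T = np.empty_like(E)
    below = E < V0
    # Sub-barrier (tunnelling) branch.
    kappa = np.sqrt(np.clip(V0 - E[below], 0.0, None))
    T[below] = 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2)
                      / (4.0 * E[below] * (V0 - E[below])))
    # Above-barrier branch (clip avoids a zero divisor exactly at E = V0).
    k = np.sqrt(np.clip(E[~below] - V0, 1e-12, None))
    T[~below] = 1.0 / (1.0 + (V0**2 * np.sin(k * a)**2)
                       / (4.0 * E[~below] * (E[~below] - V0) + 1e-12))
    return T
```

The output rises monotonically with energy below the barrier, which is the kind of smooth probabilistic squashing a tunnelling-based activation could provide in place of a sigmoid.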
Figure 1. Uncertainty in detecting fresh produce items at a supermarket self-checkout equipped with a machine vision system. (Left): The system has analysed a transparent plastic bag containing truss tomatoes and identified two possible categories, namely, truss tomato and gourmet tomato, leaving the final selection to the customer. (Right): In another test with a bag of Amorette mandarins, the system has suggested three potential options: Delite mandarin, Amorette mandarin, and Navel orange. Similar results are observed with other visually ambiguous items.
Figure 2. Schematic representation of the QT-NN architecture. The inset illustrates the effect of quantum tunnelling employed as an activation function of the network.
Figure 3. Outputs generated by the QT-NN (red) and the classical neural network model (blue). The insets show the representative testing images for each classification category.
Figure 4. JSD and SE figures-of-merit for the QT-NN and the classical model for each item category. Note that the classical SE is zero (to machine accuracy) for the ‘Ankle Boot’ category.
Figure 5. (a,b) Distributions of weights between the input layer and the hidden layer (denoted as W_1 in the main text), plotted as a function of training iterations for the QT-NN model and the classical model (labelled as ‘Class.’), respectively. (c,d) Results of the JSD cross-comparison of the initial (labelled as ‘Init.’) and trained weight distributions W_1 and W_2. The shaded areas in the JSD plots quantify the divergence, with the numerical value presented above each panel. The roles of subpanels (i)–(iii) are detailed in the main text.
Figure 6. Schematic illustration of the training process of (a) the classical model and (b) the QT-NN model, inspired by the discussion in [80]. The coloured lines illustrate the possible pathways of neural connection formation. Note that the additional hidden layers of neurons are included purely for the sake of illustrating more advanced neural connections; also, note that the neural connections of the QT-NN model, depicted by the solid and dashed lines in panel (b), are equally valid from the perspective of the algorithm and possess a probabilistic quantum nature.
Figure 7. (a–d) Instantaneous snapshots of an energy wave packet modelling the tunnelling of an electron through a potential barrier, depicted by a white rectangle. The false-colour scale of the images encodes the computed probability density values. Within the framework of the QT-NN model used in this paper, these values correspond to the connection weights of the neural network. The arrows indicate the direction of propagation of the wave packet. Note that in panel (d), the wave packet splits into two parts, a physical phenomenon that would eventually lead to the formation of the connections illustrated by the solid and dashed lines in Figure 6b.
13 pages, 1066 KiB  
Article
Comparison of Long-Term Antibody Titers in Calves Treated with Different Conjunctival and Subcutaneous Brucella abortus S19 Vaccines
by Ali Uslu, Zafer Sayın, Aslı Balevi, Yasin Gulcu, Fırat Ergen, Islam Akıner, Oguzhan Denizli and Osman Erganis
Animals 2025, 15(2), 212; https://doi.org/10.3390/ani15020212 - 14 Jan 2025
Abstract
Brucellosis is still the most common zoonosis worldwide despite advanced technology and animal husbandry. Since there is still no effective Brucella vaccine for humans, it is crucial to control the disease in ruminants through eradication and vaccination. Although some countries around the world have achieved this circumstance, every country aims to become free of Brucellosis through vaccination, controls on animal movements, and various eradication measures. For this purpose, the Brucella abortus S19 strain has been used safely for about 100 years. However, due to the O-polysaccharide (OPS) antigen in its structure, the antibody response created by the vaccine causes confusion in serological tests. For this purpose, researchers have provided both mucosal immunity and a short-term antibody response by using the B. abortus S19 vaccine in conjunctival form instead of subcutaneous form. This study aimed to determine how long post-vaccination titer levels persisted in animals vaccinated with vaccines from three different companies administered via different routes. In this study, a total of 115 calves aged 3 to 4.5 months were divided into five groups of 23 animals each: group 1 (vaccine brand A), group 2 (vaccine brand B), and group 3 (vaccine brand C) received the two-dose conjunctival vaccine, group 4 received the single-dose subcutaneous vaccine (vaccine brand C), and group 5 received the subcutaneous vaccine (vaccine brand C) plus the booster dose conjunctival vaccine (vaccine brand B). Brucellosis antibody titers were monitored every 21 days until the cattle were 26–28 months old. The collected sera were analyzed using the Rose Bengal Plate Test (RBPT), Serum Agglutination Test (SAT), and Complement Fixation Test (CFT), which are the preferred serological methods for Brucellosis eradication plans worldwide.
In the conjunctival vaccination groups, only 3 (13%) of the animals in group 1 developed antibody titers one month after vaccination, and no antibody response against Brucellosis was detected in groups 2 and 3. In conjunctivally stimulated animals, the threshold value of 30 International CFT Units (ICFTUs) (for distinguishing between infective titers and vaccination titers) was observed in one animal each in groups 1 and 2 and in no animals in group 3. Antibody titers became negative for Brucellosis in all conjunctival vaccine groups by 7 months after vaccination. In groups 4 and 5, the first-month serological screening detected over 30 ICFTUs in 17 (89.47%) animals and 16 (69.5%) animals, respectively. In group 4, CFT titers fell below 30 by the 17th month and below 9.3 by the 22nd month. By the 14th month, the CFT titers of group 5 were below 30, and all animals in this group turned negative after the 19th month. The single-dose B. abortus S19 subcutaneous vaccination in calves caused persistent antibodies in 5% of the population. It is believed that the persistent and high antibody titers created by subcutaneous vaccines will cause false positivity and create confusion in Brucellosis eradication programs. Therefore, although there is no clear distinction between vaccinated and infected animals, it has been observed that conjunctival Brucellosis vaccines produce more stable antibody titers that decline more rapidly than those induced by subcutaneous vaccines. Based on the results of this study and the advantages of conjunctival vaccines, more effective eradication programs and antibody monitoring can be carried out in vaccinated herds where Brucellosis outbreaks are observed. Full article
(This article belongs to the Special Issue The Detection, Prevention and Treatment of Calf Diseases)
Figure 1. Study hypothesis and working plan.
Figure 2. Distribution of mean titer values of animals with ICFTU > 30 at different periods after vaccination.
Figure 3. Distribution of mean titer values of animals with SAT > 1/20 in different periods after vaccination.
25 pages, 5457 KiB  
Article
Research on Small-Target Detection of Flax Pests and Diseases in Natural Environment by Integrating Similarity-Aware Activation Module and Bidirectional Feature Pyramid Network Module Features
by Manxi Zhong, Yue Li and Yuhong Gao
Agronomy 2025, 15(1), 187; https://doi.org/10.3390/agronomy15010187 - 14 Jan 2025
Abstract
In the detection of flax pests and diseases, early wilt disease is difficult to spot, yellow leaf disease symptoms are easily confused with those of other conditions, and pest detection is hampered by the diversity of species and by technological bottlenecks, posing significant challenges to detection efforts. To address these issues, this paper proposes a flax pest and disease detection method based on an improved YOLOv8n model. To enhance the detection accuracy and generalization capability of the model, this paper first employs the Albumentations library for data augmentation, which strengthens the model’s adaptability to complex environments by enriching the diversity of training samples. Secondly, in terms of model architecture, a Bidirectional Feature Pyramid Network (BiFPN) module is introduced to replace the original feature extraction network. Through bidirectional multi-scale feature fusion, the model’s ability to distinguish pests and diseases with similar features and large scale differences is effectively improved. Meanwhile, the integration of the SimAM attention mechanism enables the model to learn information from three-dimensional channels, enhancing its perception of pest and disease features. Additionally, this paper adopts the EIOU loss function to further optimize the model’s bounding box regression, reducing the distortion of bounding boxes caused by high sample variability. The experimental results demonstrate that the improved model achieves significant detection performance on the flax pest and disease dataset, with notable improvements in detection accuracy and mean average precision compared to the original YOLOv8n model. Finally, this paper proposes a YOLOv8n model with a four-headed detection design, which significantly enhances the detection capability for small targets such as pests and diseases with a size of 4 × 4 pixels or larger by introducing new detection heads and optimizing feature extraction. This method not only improves the detection accuracy for flax pests and diseases but also maintains a high computational efficiency, providing effective technical support for the rapid and precise detection of flax pests and diseases and possessing important practical application value. Full article
(This article belongs to the Section Pest and Disease Management)
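The EIOU loss adopted above augments the usual IoU term with penalties on centre distance and on width/height differences, each normalised by the smallest enclosing box. A plain-Python sketch for a single pair of axis-aligned boxes; the (x1, y1, x2, y2) coordinate convention and the omission of any focal weighting are assumptions of the example, not details taken from the paper.

```python
def eiou_loss(box_p, box_g):
    """EIoU loss for one predicted and one ground-truth box, each (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # Intersection and union for the IoU term.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter)
    # Smallest enclosing box: its diagonal normalises the centre-distance
    # penalty, its width/height normalise the size penalties.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw**2 + ch**2
    rho2 = (((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2
            + ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2)
    dw2 = ((px2 - px1) - (gx2 - gx1)) ** 2
    dh2 = ((py2 - py1) - (gy2 - gy1)) ** 2
    return 1 - iou + rho2 / c2 + dw2 / cw**2 + dh2 / ch**2
```

Identical boxes give a loss of zero; shifting or resizing a box raises the loss through the separate centre and size terms, which is what reduces the box distortion mentioned above compared to a bare IoU loss.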
Figure 1. Process of image preprocessing.
Figure 2. Original images of flax diseases and pests and the effect images enhanced by Albumentations.
Figure 3. Network structure diagram of the improved YOLOv8 model.
Figure 4. Bidirectional Feature Pyramid Network (BiFPN) structure.
Figure 5. Enhanced Intersection over Union Loss (EIOU) diagram.
Figure 6. Comparison curves of loss values and the mAP@0.5/% of the YOLOv8n model before and after improvement.
Figure 7. Comparison of the heat maps of the YOLOv8n model before and after the improvement.
Figure 8. Comparison charts of the detection effects of flax diseases and pests by the YOLOv8n model before and after improvement.
Figure 9. Interactive interface of the flax disease and pest detection system.