
Search Results (332)

Search Parameters:
Keywords = cognitive automation

15 pages, 2932 KiB  
Article
Anatomically Guided Deep Learning System for Right Internal Jugular Line (RIJL) Segmentation and Tip Localization in Chest X-Ray
by Siyuan Wei, Liza Shrestha, Gabriel Melendez-Corres and Matthew S. Brown
Life 2025, 15(2), 201; https://doi.org/10.3390/life15020201 - 29 Jan 2025
Abstract
The right internal jugular line (RIJL) is a type of central venous catheter (CVC) inserted into the right internal jugular vein to deliver medications and monitor vital functions in ICU patients. The placement of RIJL is routinely checked by a clinician in a chest X-ray (CXR) image to ensure its proper function and patient safety. To reduce the workload of clinicians, deep learning-based automated detection algorithms have been developed to detect CVCs in CXRs. Although RIJL is the most widely used type of CVC, there is a paucity of investigations focused on its accurate segmentation and tip localization. In this study, we propose a deep learning system that integrates an anatomical landmark segmentation, an RIJL segmentation network, and a postprocessing function to segment the RIJL course and detect the tip with accuracy and precision. We utilized the nnU-Net framework to configure the segmentation network. The entire system was implemented on the SimpleMind Cognitive AI platform, enabling the integration of anatomical knowledge and spatial reasoning to model relationships between objects within the image. Specifically, the trachea was used as an anatomical landmark to extract a subregion in a CXR image that is most relevant to the RIJL. The subregions were used to generate cropped images, which were used to train the segmentation network. The segmentation results were recovered to the original dimensions, and the most inferior point’s coordinates in each image were defined as the tip. With guidance from the anatomical landmark and customized postprocessing, the proposed method achieved improved segmentation and tip localization compared to the baseline segmentation network: the mean average symmetric surface distance (ASSD) was decreased from 2.72 to 1.41 mm, and the mean tip distance was reduced from 11.27 to 8.29 mm.
(This article belongs to the Special Issue Current Progress in Medical Image Segmentation)
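The tip-localization rule described above reduces to a few lines of array code: the tip is taken as the most inferior point of the predicted RIJL mask. A minimal sketch in Python, assuming a binary NumPy mask whose row index increases toward the inferior direction; the function name is illustrative, not from the paper:

```python
import numpy as np

def localize_tip(mask: np.ndarray) -> tuple[int, int]:
    """Return (row, col) of the most inferior foreground pixel of a binary mask."""
    rows, cols = np.nonzero(mask)        # coordinates of all foreground pixels
    if rows.size == 0:
        raise ValueError("Empty segmentation mask: no tip to localize.")
    i = int(np.argmax(rows))             # largest row index = lowest point in image
    return int(rows[i]), int(cols[i])
```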
Figure 1: Sample CXR images to demonstrate the variation of image quality. The first row presents the original images. The second row presents the processed ones with CLAHE overlaid with the ground truth label, marked with red lines.
Figure 2: The overall framework for segmenting RIJL. The original CXR images are cropped based on trachea segmentation and then input to nnU-Net for training and inference, respectively. The prediction from nnU-Net is then resized to the original scale, followed by the Bidirectional Connect postprocessing step to enhance segmentation. The white lines indicate the segmentation masks. Finally, the RIJL segmentation mask is used to extract the tip location, as highlighted by the red dot.
Figure 3: The workflow of subregion extraction guided by trachea segmentation. In the Trachea ROI, the red region indicates trachea segmentation, the mask of which is marked with white in the two subsequent images. The red line indicates the reference annotations in the Cropped dataset.
Figure 4: A graphical illustration of the Bidirectional Connect algorithm. For simplicity, a test sample is presented where only the inferior point of the largest connected component is used as the starting point. The dashed semicircles in the searching pattern are enlarged for clarity.
Figure 5: Examples from the test results, with each row corresponding to a single sample. In the Ground Truth images, the RIJL is marked in green. In each image, the red dot indicates the tip of the RIJL. The Bidirectional (BD) Connect postprocessing function applies a dilation step after connecting relevant ROIs; this step compensates for skeletonization, resulting in segmentation ROIs that appear thicker and brighter. The corresponding ASSD and tip distance in mm are given below each inference result.
Figure 6: Representative test samples where the RIJL segmentation failed, with each row representing a single sample. The images were processed with CLAHE and overlaid with Ground Truth, DL on original, and cropped segmentation outputs, respectively. The ground truths and segmentation outputs are marked with green and red, respectively.
13 pages, 503 KiB  
Article
Correlates of Inaccuracy in Reporting of Energy Intake Among Persons with Multiple Sclerosis
by Stephanie L. Silveira, Brenda Jeng, Barbara A. Gower, Gary R. Cutter and Robert W. Motl
Nutrients 2025, 17(3), 438; https://doi.org/10.3390/nu17030438 - 25 Jan 2025
Viewed by 236
Abstract
Background/Objectives: Persons with multiple sclerosis (MS) are interested in diet as a second-line approach for disease management. This study examined potential variables that correlate with inaccuracy of self-reported energy intake (EI) in adults with MS. Methods: Twenty-eight participants completed two assessment appointments within a 14-day period that included a standard doubly labeled water (DLW) protocol for estimating total energy expenditure (TEE). The participants reported their EI using the Automated Self-Administered 24 h (ASA24) Dietary Assessment Tool. The primary variables of interest for explaining the discrepancy between TEE and ASA24 EI (i.e., inaccuracy) included cognition (processing speed, visuospatial memory, and verbal memory), hydration status (total body water), and device-measured physical activity. Pearson’s correlations assessed the association between absolute and percent inaccuracy in reporting of EI with outcomes of interest, followed by linear regression analyses for identifying independent correlates. Results: California Verbal Learning Test—Second Edition (CVLT-II) z-scores and light physical activity (LPA) were significantly associated with mean absolute difference in EI (r = –0.53 and r = 0.46, respectively). CVLT-II z-scores and LPA were the only variables significantly associated with mean percent difference in EI (r = –0.48 and r = 0.42, respectively). The regression analyses indicated that both CVLT-II and LPA significantly explained variance in mean absolute difference in EI, and only CVLT-II explained variance for percent difference in EI. Conclusions: The results from this study indicate that verbal learning and memory and LPA are associated with inaccuracy of self-reported EI in adults with MS. This may guide timely research identifying appropriate protocols for assessment of diet in MS.
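The two inaccuracy measures analyzed here are simple to compute once TEE from doubly labeled water is paired with self-reported EI per participant. A hedged sketch with made-up numbers (not study data), using SciPy's pearsonr:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values (kcal/day); illustrative only.
tee = np.array([2100.0, 2450.0, 1980.0, 2300.0, 2600.0])  # DLW total energy expenditure
ei  = np.array([1650.0, 2500.0, 1500.0, 2100.0, 2050.0])  # ASA24 self-reported intake
cvlt_z = np.array([-0.5, 0.8, -1.2, 0.3, -0.1])           # CVLT-II z-scores (made up)

abs_diff = np.abs(ei - tee)              # absolute reporting inaccuracy
pct_diff = 100.0 * (ei - tee) / tee      # percent reporting inaccuracy

r, p = pearsonr(cvlt_z, abs_diff)
print(f"CVLT-II vs absolute inaccuracy: r = {r:.2f}, p = {p:.3f}")
```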
Figure 1: Flow diagram of recruitment and enrollment of participants for the study examining the validity of energy intake reporting among persons with multiple sclerosis.
30 pages, 1187 KiB  
Review
Artificial Intelligence-Empowered Radiology—Current Status and Critical Review
by Rafał Obuchowicz, Julia Lasek, Marek Wodziński, Adam Piórkowski, Michał Strzelecki and Karolina Nurzynska
Diagnostics 2025, 15(3), 282; https://doi.org/10.3390/diagnostics15030282 - 24 Jan 2025
Viewed by 287
Abstract
Humanity stands at a pivotal moment of technological revolution, with artificial intelligence (AI) reshaping fields traditionally reliant on human cognitive abilities. This transition, driven by advancements in artificial neural networks, has transformed data processing and evaluation, creating opportunities for addressing complex and time-consuming tasks with AI solutions. Convolutional networks (CNNs) and the adoption of GPU technology have already revolutionized image recognition by enhancing computational efficiency and accuracy. In radiology, AI applications are particularly valuable for tasks involving pattern detection and classification; for example, AI tools have enhanced diagnostic accuracy and efficiency in detecting abnormalities across imaging modalities through automated feature extraction. Our analysis reveals that neuroimaging and chest imaging, as well as CT and MRI modalities, are the primary focus areas for AI products, reflecting their high clinical demand and complexity. AI tools are also used to target high-prevalence diseases, such as lung cancer, stroke, and breast cancer, underscoring AI’s alignment with impactful diagnostic needs. The regulatory landscape is a critical factor in AI product development, with the majority of products certified under the Medical Device Directive (MDD) and Medical Device Regulation (MDR) in Class IIa or Class I categories, indicating compliance with moderate-risk standards. A rapid increase in AI product development from 2017 to 2020, peaking in 2020 and followed by recent stabilization and saturation, was identified. In this work, the authors review the advancements in AI-based imaging applications, underscoring AI’s transformative potential for enhanced diagnostic support and focusing on the critical role of CNNs, regulatory challenges, and potential threats to human labor in the field of diagnostic imaging.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
16 pages, 1477 KiB  
Article
A Speech-Based Mobile Screening Tool for Mild Cognitive Impairment: Technical Performance and User Engagement Evaluation
by Rukiye Ruzi, Yue Pan, Menwa Lawrence Ng, Rongfeng Su, Lan Wang, Jianwu Dang, Liwei Liu and Nan Yan
Bioengineering 2025, 12(2), 108; https://doi.org/10.3390/bioengineering12020108 - 24 Jan 2025
Viewed by 292
Abstract
Traditional screening methods for Mild Cognitive Impairment (MCI) face limitations in accessibility and scalability. To address this, we developed and validated a speech-based automatic screening app implementing three speech–language tasks with user-centered design and server–client architecture. The app integrates automated speech processing and SVM classifiers for MCI detection. Functionality validation included comparison with manual assessment and testing in real-world settings (n = 12), with user engagement evaluated separately (n = 22). The app showed comparable performance with manual assessment (F1 = 0.93 vs. 0.95) and maintained reliability in real-world settings (F1 = 0.86). Task engagement significantly influenced speech patterns: users rating tasks as “most interesting” produced more speech content (p < 0.05), though behavioral observations showed consistent cognitive processing across perception groups. User engagement analysis revealed high technology acceptance (86%) across educational backgrounds, with daily cognitive exercise habits significantly predicting task benefit perception (H = 9.385, p < 0.01). Notably, perceived task difficulty showed no significant correlation with cognitive performance (p = 0.119), suggesting the system’s accessibility to users of varying abilities. While preliminary, the mobile app demonstrated both robust assessment capabilities and sustained user engagement, suggesting the potential viability of widespread cognitive screening in the geriatric population.
(This article belongs to the Special Issue Intelligent Computer-Aided Designs for Biomedical Applications)
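As a rough illustration of the classification stage described above (speech-derived features fed to an SVM and evaluated by F1), here is a minimal scikit-learn sketch; the feature matrix and labels are synthetic stand-ins, not the app's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))        # placeholder acoustic/linguistic features
y = rng.integers(0, 2, size=120)      # 1 = MCI, 0 = healthy control (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```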
Figure 1: System architecture of our app. Color styles (boxes and arrows): blue: server-side components, server-to-user data/control flow; green: client-side applications, user-to-server data/control flow; yellow box: processing component; dotted boxes/lines: features under development.
Figure 2: Screenshots of the main user interface (UI) of the app. The screen will automatically switch to landscape orientation during the Picture Description task for better viewing. Detailed descriptions of the tasks are provided in Supplementary Material SA.
Figure 3: User perception and engagement across different tasks (* p < 0.05). Red axes indicate keywords in (a), extra repetition in (b), and total characters in (c). Purple axes indicate the duration (time spent) on each task.
Figure 4: Frequency of observed thinking and distraction behaviors across task perception groups.
27 pages, 12488 KiB  
Article
Smart Transparency: A User-Centered Approach to Improving Human–Machine Interaction in High-Risk Supervisory Control Tasks
by Keran Wang, Wenjun Hou, Leyi Hong and Jinyu Guo
Electronics 2025, 14(3), 420; https://doi.org/10.3390/electronics14030420 - 21 Jan 2025
Viewed by 515
Abstract
In supervisory control tasks, particularly in high-risk fields, operators need to collaborate with automated intelligent agents to manage dynamic, time-sensitive, and uncertain information. Effective human–agent collaboration relies on transparent interface communication to align with the operator’s cognition and enhance trust. This paper proposes a human-centered adaptive transparency information design framework (ATDF), which dynamically adjusts the display of transparency information based on the operator’s needs and the task type. This ensures that information is accurately conveyed at critical moments, thereby enhancing trust, task performance, and interface usability. Additionally, the paper introduces a novel user research method, Heu–Kano, to explore the prioritization of transparency needs and presents a model based on eye-tracking and machine learning to identify different types of human–agent interactions. This research provides new insights into human-centered explainability in supervisory control tasks.
(This article belongs to the Special Issue Emerging Trends in Multimodal Human-Computer Interaction)
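The eye-tracking identification model can be sketched as fixed-length windows of gaze data summarized into features and fed to a supervised classifier scored by F1, mirroring the window-length comparison in Figure 9 below. The window size, features, and model choice here are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(gaze_xy: np.ndarray, win: int) -> np.ndarray:
    """Summarize a (T, 2) gaze trace into per-window features:
    mean position, position std, and mean step (saccade-like) amplitude."""
    feats = []
    for s in range(0, len(gaze_xy) - win + 1, win):
        w = gaze_xy[s:s + win]
        step = np.linalg.norm(np.diff(w, axis=0), axis=1)
        feats.append(np.concatenate([w.mean(0), w.std(0), [step.mean()]]))
    return np.asarray(feats)

rng = np.random.default_rng(1)
gaze = rng.normal(size=(3000, 2)).cumsum(axis=0)   # synthetic gaze trace
X = window_features(gaze, win=100)
y = rng.integers(0, 3, size=len(X))                # 3 interaction types (synthetic)
print(cross_val_score(RandomForestClassifier(), X, y, cv=3, scoring="f1_macro"))
```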
Figure 1: Schematic diagram of the construction and key components within the Adaptive Transparency Design Framework (ATDF).
Figure 2: Flowchart of the specific implementation steps of the Heu–Kano method.
Figure 3: Visualization of system logs and real-time interaction data for labeling in the study. (In Step 1, the yellow circle on the “Eye-tracking data” block represents the operator’s gaze point; the yellow line segment on the “real-time interface” block represents the mouse click path; Step 2 shows the duration and execution order of different tasks under the experimental processes, and different colored squares represent different tasks.)
Figure 4: Interface of the aircraft status monitoring task experiment. (Different shapes represent various monitoring targets: the airplane symbolizes dynamic aircraft; red, yellow, and blue colors indicate different scheduling priorities; green triangles represent static signal base stations; and blue circles represent static ground radar.)
Figure 5: Interface of the scheduling priority assessment task experiment. (Legend as in Figure 4.)
Figure 6: Interface of the parameter adjustment and anomaly management task experiment. (Legend as in Figure 4.)
Figure 7: Data collection methods. (The yellow circles in the figure represent the visualization of the operator’s gaze points.)
Figure 8: Kano model result: four-quadrant distribution map of better–worse scores.
Figure 9: Model performance (F1 scores) across different window lengths.
Figure 10: Feature importance in classification tasks.
30 pages, 3938 KiB  
Article
Cognitive Method for Synthesising a Fuzzy Controller Mathematical Model Using a Genetic Algorithm for Tuning
by Serhii Vladov
Big Data Cogn. Comput. 2025, 9(1), 17; https://doi.org/10.3390/bdcc9010017 - 20 Jan 2025
Viewed by 499
Abstract
In this article, a fuzzy controller mathematical model synthesising method that uses cognitive computing and a genetic algorithm for automated tuning and adaptation to changing environmental conditions has been developed. The technique consists of 12 stages, including creating the control objects’ mathematical model and tuning the controller coefficients using classical methods. The research pays special attention to the error parameters and their derivative fuzzification, which simplifies the development of logical rules and helps increase the stability of the systems. The fuzzy controller parameters were tuned using a genetic algorithm in a computational experiment based on helicopter flight data. The results show an increase in the integral quality criterion from 85.36 to 98.19%, which confirms an increase in control efficiency by 12.83%. Use of the fuzzy controller made it possible to significantly improve the helicopter turboshaft engines’ gas-generator rotor speed control performance, reducing the first and second types of errors by 2.06–12.58 times compared to traditional methods.
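The tuning loop described above follows the standard select-crossover-mutate pattern of a genetic algorithm searching controller coefficients against an integral quality criterion. A compact sketch on a toy first-order plant; the fitness function and GA hyperparameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains: np.ndarray) -> float:
    """Integral of absolute error for a PI-like controller on a toy plant dx/dt = -x + u."""
    kp, ki = gains
    x, integ, err_sum, dt, ref = 0.0, 0.0, 0.0, 0.01, 1.0
    for _ in range(500):
        e = ref - x
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-x + u)
        err_sum += abs(e) * dt
    return err_sum

pop = rng.uniform(0.0, 10.0, size=(30, 2))          # initial population of (kp, ki)
for gen in range(40):
    fit = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]             # truncation selection
    children = parents[rng.integers(0, 10, size=(30, 2)), [0, 1]]  # uniform crossover
    pop = children + rng.normal(0.0, 0.2, size=children.shape)     # Gaussian mutation
print("best gains:", pop[np.argmin([cost(ind) for ind in pop])])
```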
Figure 1: Structural diagram of the closed-loop control system with a forward-loop controller.
Figure 2: The fuzzy controllers’ structure and a term set for describing the fuzzy controllers’ input and output variables.
Figure 3: The fuzzy controllers’ generalised structure.
Figure 4: The gas-generator rotor speed parameter recorded onboard the helicopter during the 256 s study interval: (a) input diagram; (b) reconstructed diagram (author’s research).
Figure 5: Cluster analysis results: (a) training dataset; (b) test dataset.
Figure 6: Membership function type for each input and output variable of the gas-generator rotor speed fuzzy controllers.
Figure 7: Resulting diagram of the Nyquist hodograph and the unit-radius circle.
Figure 8: Diagram of random disturbance to the control object in the gas-generator rotor speed stabilisation system.
Figure 9: Resulting oscillograms of the gas-generator rotor r.p.m. changes (in absolute values).
29 pages, 619 KiB  
Review
Depression Detection and Diagnosis Based on Electroencephalogram (EEG) Analysis: A Comprehensive Review
by Kholoud Elnaggar, Mostafa M. El-Gayar and Mohammed Elmogy
Diagnostics 2025, 15(2), 210; https://doi.org/10.3390/diagnostics15020210 - 17 Jan 2025
Viewed by 432
Abstract
Background: Mental disorders are disturbances of brain functions that cause cognitive, affective, volitional, and behavioral functions to be disrupted to varying degrees. One of these disorders is depression, a significant factor contributing to the increase in suicide cases worldwide. Consequently, depression has become a significant public health issue globally. Electroencephalogram (EEG) data can be utilized to diagnose mild depression disorder (MDD), offering valuable insights into the pathophysiological mechanisms underlying mental disorders and enhancing the understanding of MDD. Methods: This survey emphasizes the critical role of EEG in advancing artificial intelligence (AI)-driven approaches for depression diagnosis. By focusing on studies that integrate EEG with machine learning (ML) and deep learning (DL) techniques, we systematically analyze methods utilizing EEG signals to identify depression biomarkers. The survey highlights advancements in EEG preprocessing, feature extraction, and model development, showcasing how these approaches enhance the diagnostic precision, scalability, and automation of depression detection. Results: This survey is distinguished from prior reviews by addressing their limitations and providing researchers with valuable insights for future studies. It offers a comprehensive comparison of ML and DL approaches utilizing EEG and an overview of the five key steps in depression detection. The survey also presents existing datasets for depression diagnosis and critically analyzes their limitations. Furthermore, it explores future directions and challenges, such as enhancing diagnostic robustness with data augmentation techniques and optimizing EEG channel selection for improved accuracy. The potential of transfer learning and encoder-decoder architectures to leverage pre-trained models and enhance diagnostic performance is also discussed. Advancements in feature extraction methods for automated depression diagnosis are highlighted as avenues for improving ML and DL model performance. Additionally, integrating Internet of Things (IoT) devices with EEG for continuous mental health monitoring and distinguishing between different types of depression are identified as critical research areas. Finally, the review emphasizes improving the reliability and predictability of computational intelligence-based models to advance depression diagnosis. Conclusions: This study will serve as a well-organized and helpful reference for researchers working on detecting depression using EEG signals and provide insights into the future directions outlined above, guiding further advancements in the field.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
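The pipeline such surveys describe (acquisition, preprocessing, feature extraction, classification, evaluation) is straightforward to prototype. A sketch of the feature-extraction and classification steps using Welch band power on synthetic EEG; the channel count, band edges, and classifier are illustrative choices, not recommendations from the review:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250          # sampling rate (Hz), assumed
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch: np.ndarray) -> np.ndarray:
    """Per-channel band power of a (channels, samples) EEG epoch via Welch PSD."""
    f, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)
    return np.concatenate(
        [psd[:, (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    )

rng = np.random.default_rng(0)
epochs = rng.normal(size=(60, 8, FS * 4))       # 60 synthetic 4 s epochs, 8 channels
X = np.stack([band_powers(e) for e in epochs])
y = rng.integers(0, 2, size=60)                 # 1 = depressed, 0 = control (synthetic)
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```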
Figure 1: The diversity of DL and ML algorithms used in prior studies for EEG-based depression detection.
Figure 2: The detailed mapping of interconnections between survey sections.
Figure 3: The paper selection methodology flow chart.
Figure 4: The general steps to detect depression using EEG signals.
36 pages, 3744 KiB  
Review
A Review of Cognitive Control: Advancement, Definition, Framework, and Prospect
by Zhenfei Liu and Xunhe Yin
Actuators 2025, 14(1), 32; https://doi.org/10.3390/act14010032 - 15 Jan 2025
Viewed by 459
Abstract
The operational environments of engineering systems are becoming increasingly complex and require automatic control systems to be more intelligent. Cognitive control extends the domain of intelligent control, whereby cognitive science theories are applied to guide the design of automatic control systems to make them conform to the human cognition paradigm and behave like a real person, hence improving the performance of physical systems. Cognitive control has been investigated in several fields, but a comprehensive review covering all these fields has yet to be provided in any paper. This paper first presents a review of cognitive control development and related works. Then, the relationship between cognitive control and cognitive science is analyzed, based on which the definition and framework of cognitive control are summarized from the perspective of automation and control. Cognitive control is then compared with similar concepts, such as cognitive radio and cognitive radar, and similar control methods, such as intelligent control, robust control, and adaptive control. Finally, the main issues, research directions, and development prospects are discussed. We expect that this paper will contribute to the development of cognitive control.
(This article belongs to the Section Control Systems)
Figure 1: The timeline of the symbolized results of cognitive control, as surveyed in this paper.
Figure 2: The statistics of cognitive control papers investigated in this paper.
Figure 3: The architecture of CTS [9,10].
Figure 4: The cognitive control unit based on SOAR [46].
Figure 5: The cognitive agent based on BDI [47].
Figure 6: Architecture of the cognitive safety controller [52].
Figure 7: 5C architecture for the implementation of CPS [56].
Figure 8: Human operator model [73].
Figure 9: IMA-based ISAC cognitive robot architecture [84].
Figure 10: Behavior control process in ISAC [81].
Figure 11: Levels of interaction engagement [84].
Figure 12: SAS for cognitive robots [86].
Figure 13: Conflicting tasks in WM (left) and the associated network representation (right). Concrete behaviors (dark-gray ovals) compete for shared resources (robot actuators). Their emphasis values (e3 and e4) are affected by object proximity (bottom-up) and (top-down) aroused by task continuation drives (teleology). In the associated network (right), b nodes are for behaviors enabled by provided inputs along with regulations tel, gla, bot, and emphasis values ei are the outputs [86].
Figure 14: A schematic representation of the CRAM cognitive architecture [94].
Figure 15: The cognitive control system architecture based on CDS [105].
Figure 16: The CRC system architecture [108].
Figure 17: Fundamental framework of the cognitive control system.
Figure 18: The application of cognitive control in WNCS [159].
20 pages, 2328 KiB  
Article
Work Roles in Human–Robot Collaborative Systems: Effects on Cognitive Ergonomics for the Manufacturing Industry
by Pablo Segura, Odette Lobato-Calleros, Isidro Soria-Arguello and Eduardo Gamaliel Hernández-Martínez
Appl. Sci. 2025, 15(2), 744; https://doi.org/10.3390/app15020744 - 14 Jan 2025
Viewed by 551
Abstract
Human–robot collaborative systems have been adopted by manufacturing organizations with the objective of relieving the human factor of physical workload. However, the roles and responsibilities of human operators in these semi-automated systems have not been properly analyzed. This might carry important consequences in the cognitive dimension of ergonomics, which then contradicts the main well-being goals of collaborative work. Therefore, we designed a series of collaborative scenarios where we shifted the assignment of work responsibilities between humans and robots while executing a quality inspection task. Variations in the state of cognitive ergonomics were estimated with subjective and objective techniques, via workload tests and physiological responses, respectively. Furthermore, we introduced a work design framework based on 50 state-of-the-art applications for a structured implementation of human–robot collaborative systems that contemplates the underlying organizational and technological components necessary to fulfill its basic functionalities. Human operators who possessed responsibility roles over collaborative robots presented better results in terms of cognitive workload and spare mental capacity alike. In this regard, mental demand is seen as a key workload variable to consider when designing collaborative work in current manufacturing settings.
(This article belongs to the Special Issue Advances in Manufacturing Ergonomics)
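Because the workload comparison above rests on NASA-TLX, it is worth recalling how the weighted score is formed: six subscale ratings (0-100) are combined with weights obtained by tallying wins across the 15 pairwise subscale comparisons. A minimal sketch with hypothetical ratings, not study data:

```python
# NASA-TLX weighted workload: six subscale ratings (0-100) combined with
# weights from counting wins in the 15 pairwise subscale comparisons.
ratings = {               # hypothetical operator ratings
    "mental": 70, "physical": 20, "temporal": 55,
    "performance": 35, "effort": 60, "frustration": 25,
}
weights = {               # pairwise-comparison tallies; must sum to 15
    "mental": 5, "physical": 0, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}
assert sum(weights.values()) == 15
overall = sum(ratings[k] * weights[k] for k in ratings) / 15.0
print(f"overall weighted workload: {overall:.1f} / 100")
```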
Figure 1: Work design framework for human–robot collaborative systems.
Figure 2: Collaboration flowchart for the quality inspection task.
Figure 3: Experimental setting for the collaborative task: (a) collaborative robot with peripheral devices; (b) workpiece-related elements; (c) sensing and communication features.
Figure 4: Results of NASA-TLX’s workload factors by collaborative scenario.
Figure 5: Results of overall workload and spare mental capacity by collaborative scenario.
Figure A1: NASA Task Load Index and Bedford Workload Scale formats.
28 pages, 2683 KiB  
Article
GDT Framework: Integrating Generative Design and Design Thinking for Sustainable Development in the AI Era
by Yongliang Chen, Zhongzhi Qin, Li Sun, Jiantao Wu, Wen Ai, Jiayuan Chao, Huaixin Li and Jiangnan Li
Sustainability 2025, 17(1), 372; https://doi.org/10.3390/su17010372 - 6 Jan 2025
Viewed by 775
Abstract
The ability of AI to process vast datasets can enhance creativity, but its rigid knowledge base and lack of reflective thinking limit sustainable design. Generative Design Thinking (GDT) integrates human cognition and machine learning to enhance design automation. This study aims to explore the cognitive mechanisms underlying GDT and their impact on design efficiency. Using behavioral coding and quantitative analysis, we developed a three-tier cognitive model comprising a macro-cycle (knowledge acquisition and expression), meso-cycle (creative generation, intelligent evaluation, and feedback adjustment), and micro-cycle (knowledge base and model optimization). The findings reveal that increased task complexity elevates cognitive load, supporting the hypothesis that designers need to allocate more cognitive resources for complex problems. Knowledge base optimization impacts design efficiency significantly more than generative model refinement. Moreover, creative generation, evaluation, and feedback adjustment are interdependent, highlighting the importance of a dynamic knowledge base for creativity. This study challenges traditional design automation approaches by advocating for an adaptive framework that balances cognitive processes and machine capabilities. The results suggest that improving knowledge management and reducing cognitive load can enhance design outcomes. Future research should focus on developing flexible, real-time knowledge repositories and optimizing generative models for interdisciplinary and sustainable design contexts.
Figure 1: Generative Design Cognitive Model.
Figure 2: Double Diamond Modeling and Generative Design Thinking.
Figure 3: Von Neumann Model and Generative Design Thinking.
Figure 4: Structure of the Generative Design Thinking Model.
Figure 5: Generative design thinking model mechanism reasoning process.
Figure 6: Product design process framework driven by generative design thinking.
21 pages, 3734 KiB  
Article
Towards Dynamic Human–Robot Collaboration: A Holistic Framework for Assembly Planning
by Fabian Schirmer, Philipp Kranz, Chad G. Rose, Jan Schmitt and Tobias Kaupp
Electronics 2025, 14(1), 190; https://doi.org/10.3390/electronics14010190 - 5 Jan 2025
Viewed by 587
Abstract
The combination of human cognitive skills and dexterity with the endurance and repeatability of robots is a promising approach to modern assembly. However, efficiently allocating tasks and planning an assembly sequence between humans and robots is a manual, complex, and time-consuming activity. This work presents a framework named “Extract–Enrich–Assess–Plan–Review” that facilitates holistic planning of human–robot assembly processes. The framework automatically Extracts data from heterogeneous sources, Assesses the suitability of each assembly step to be performed by the human or robot, and Plans multiple assembly sequence plans (ASP) according to boundary conditions. Those sequences allow for a dynamic adaptation at runtime and incorporate different human–robot interaction modalities that are Synchronized, Cooperative, or Collaborative. An expert remains in the loop to Enrich the extracted data, and Review the results of the Assess and Plan steps with options to modify the process. To experimentally validate this framework, we compare the achieved degree of automation using three different CAD formats. We also demonstrate and analyze multiple assembly sequence plans that are generated by our system according to process time and the interaction modalities used.
(This article belongs to the Special Issue Recent Advances in Robotics and Automation Systems)
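The core combinatorial problem in the Plan step, enumerating assembly sequences that respect precedence constraints between steps, can be prototyped with a brute-force feasibility filter. The toy-truck steps and precedence pairs below are loose assumptions modeled on the paper's example, not its actual data:

```python
from itertools import permutations

steps = ["base", "front_axle", "rear_axle", "SA2", "SA3"]
precedence = {("base", "front_axle"), ("base", "rear_axle"),
              ("front_axle", "SA2"), ("rear_axle", "SA3")}

def valid(seq: tuple[str, ...]) -> bool:
    """True if every precedence pair (a, b) has a occurring before b."""
    pos = {s: i for i, s in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in precedence)

plans = [seq for seq in permutations(steps) if valid(seq)]
for p in plans:
    print(" -> ".join(p))
```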
Figure 1: Left: Overview of our human–robot collaboration (HRC) workstation where humans and robots work together in close proximity to assemble a toy truck. Right: Example of the interaction modality Collaboration. Here, the robot acts as a third hand to support the human.
Figure 2: A review of related work [4,7,8,10,19,23,24,26,27,28,29,30,31,32,39] organized into the five components of our E²APR framework, adapted from Schirmer et al. [22], showing the missing holistic perspective.
Figure 3: Exploded view of the final product consisting of a base (cabin, load carrier, and chassis), a front axle, a rear axle, and four sub-assembly 1 units (axle holder and two screws), adapted from Schirmer et al. [22].
Figure 4: The E²APR framework introduced in this paper is composed of three layers: Input, Application, and Output, adapted from Schirmer et al. [22].
Figure 5: The data model of our truck assembly, extracted from three input sources: (1) CAD files, (2) DXF files, and (3) assembly instructions for manual assembly (PDF or Excel). The output provides information about assembly steps and the required components [21].
Figure 6: Dashboard for structuring and streamlining the planning process. The extracted data are structured into specific areas listed on the left. A detailed view of the areas is seen on the right. The Relationship Matrix indicates whether there is a relationship (marked with X) or no relationship (marked with O) between the components.
Figure 7: Various output options for the assembly step information with different levels of granularity, based on the subdivision in MTM into Basic Movements, Movement Sequences, and Basic Operations [37]. An additional Basic Movement “hold” was added to enable the robot to act as a third hand. Below the dashed line, an exemplary assembly step from the toy truck use case is given to illustrate the levels of granularity.
Figure 8: Phase one of the Planning Unit involves determining the sequence of assembly steps without yet assigning tasks to either humans or robots. The double-headed arrow indicates that SA2 and SA3 can be interchanged, adapted from Schirmer et al. [22].
Figure 9: Phase two of the Planning Unit involves assigning tasks to humans and robots, as well as defining the interaction modality (Synchronization, Cooperation, or Collaboration). Sub-assemblies SA2 and SA3 are interchangeable. This stage generates six ASP options, therefore enabling the framework to adapt dynamically to changes, adapted from Schirmer et al. [22].
Figure 10: Assembly sequence plans and cycle times for four different human–robot interaction modalities. Manual assembly (no robot) acts as a baseline, adapted from Schirmer et al. [22].
33 pages, 3507 KiB  
Article
Cognitive Agents Powered by Large Language Models for Agile Software Project Management
by Konrad Cinkusz, Jarosław A. Chudziak and Ewa Niewiadomska-Szynkiewicz
Electronics 2025, 14(1), 87; https://doi.org/10.3390/electronics14010087 - 28 Dec 2024
Viewed by 894
Abstract
This paper investigates the integration of cognitive agents powered by Large Language Models (LLMs) within the Scaled Agile Framework (SAFe) to reinforce software project management. By deploying virtual agents in simulated software environments, this study explores their potential to fulfill fundamental roles in IT project development, thereby optimizing project outcomes through intelligent automation. Particular emphasis is placed on the adaptability of these agents to Agile methodologies and their transformative impact on decision-making, problem-solving, and collaboration dynamics. The research leverages the CogniSim ecosystem, a platform designed to simulate real-world software engineering challenges, such as aligning technical capabilities with business objectives, managing interdependencies, and maintaining project agility. Through iterative simulations, cognitive agents demonstrate advanced capabilities in task delegation, inter-agent communication, and project lifecycle management. By employing natural language processing to facilitate meaningful dialogues, these agents emulate human roles and improve the efficiency and precision of Agile practices. Key findings from this investigation highlight the ability of LLM-powered cognitive agents to deliver measurable improvements in various metrics, including task completion times, quality of deliverables, and communication coherence. These agents exhibit scalability and adaptability, ensuring their applicability across diverse and complex project environments. This study underscores the potential of integrating LLM-powered agents into Agile project management frameworks as a means of advancing software engineering practices. This integration not only refines the execution of project management tasks but also sets the stage for a paradigm shift in how teams collaborate and address emerging challenges. By integrating the capabilities of artificial intelligence with the principles of Agile, the CogniSim framework establishes a foundation for more intelligent, efficient, and adaptable software development methodologies.
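At its simplest, the inter-agent communication the paper emphasizes is role-tagged messages exchanged between agents, with an LLM call behind each reply. A structural sketch with the LLM stubbed out; the roles, message fields, and respond hook are illustrative, not the CogniSim API:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class Agent:
    role: str                      # e.g., "Product Owner", "Developer"
    inbox: list[Message] = field(default_factory=list)

    def respond(self, msg: Message) -> Message:
        # Stub: a real system would prompt an LLM with the role and message here.
        reply = f"[{self.role}] ack: {msg.content[:40]}"
        return Message(sender=self.role, content=reply)

agents = {a.role: a for a in (Agent("Product Owner"), Agent("Developer"))}
msg = Message("Scrum Master", "Sprint goal: reduce task completion time by 10%")
for agent in agents.values():
    print(agent.respond(msg).content)
```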
Figure 1: Scrum framework with key artifacts, meetings, and processes [7].
Figure 2: Software engineering layers [5].
Figure 3: A generic agile iteration cycle illustrating planning, development, review, stakeholder feedback, and continuous improvement.
Figure 4: Conceptual scaled Agile iteration flow: multiple teams coordinating increments, integrating continuously, and aligning with strategic objectives.
Figure 5: Single cognitive agent and its components [17].
Figure 6: (a) Agent overview [38]; (b) cognitive agent architecture represented as a cyclic process with four components.
Figure 7: Applications of Multi-Agent Systems in software engineering.
Figure 8: Layered ecosystem of CogniSim.
Figure 9: Integration of CogniSim with the SAFe framework [52].
Figure 10: Project structure.
Figure 11: Simulation workflow in CogniSim, showing the iterative Agile process from setup through data analysis.
Figure 12: Simulation results.
Figure 13: Conceptual enterprise-scale Agile layers with cognitive agents and MASs (inspired by principles in frameworks such as SAFe 6.0 [25,69]).
Figure 14: MAS concept diagram.
11 pages, 1653 KiB  
Article
Neurofilament Light Chain Levels in Serum and Cerebrospinal Fluid Do Not Correlate with Survival Times in Patients with Prion Disease
by Mika Shimamura, Kong Weijie, Toshiaki Nonaka, Koki Kosami, Ryusuke Ae, Koji Fujita, Taiki Matsubayashi, Tadashi Tsukamoto, Nobuo Sanjo and Katsuya Satoh
Biomolecules 2025, 15(1), 8; https://doi.org/10.3390/biom15010008 - 25 Dec 2024
Viewed by 384
Abstract
Prion diseases, including Creutzfeldt–Jakob disease (CJD), are deadly neurodegenerative disorders characterized by the buildup of abnormal prion proteins in the brain. This accumulation disrupts neuronal functions, leading to the rapid onset of psychiatric symptoms, ataxia, and cognitive decline. The urgency of timely diagnosis for effective treatment necessitates the identification of strongly correlated biomarkers in bodily fluids, which makes our research crucial. In this study, we employed a fully automated multiplex ELISA (Ella®) to measure the concentrations of 14-3-3 protein, total tau protein, and neurofilament light chain (NF-L) in cerebrospinal fluid (CSF) and serum samples from patients with prion disease and analyzed their link to disease prognosis. However, in North American and European cases, we did not confirm a correlation between NF-L levels and survival time. This discrepancy is believed to stem from differences in treatment policies and measurement methods between Japan and the United States. Nonetheless, our findings suggest that NF-L concentrations could, with further enhancements, serve as an early diagnostic marker for CJD patients. The potential impact of our findings on the early diagnosis of CJD patients is significant. Future research should focus on increasing the number of sCJD cases studied in Japan and gathering additional evidence using next-generation measurement techniques.
(This article belongs to the Section Molecular Medicine)
Figure 1: Correlation between disease duration and CSF/serum NF-L. (A) Correlation between disease duration and CSF NF-L (R² = 0.0042, Pearson’s CC = 0.0064). (B) Correlation between disease duration and serum NF-L (R² = 0.0022, Pearson’s CC = 0.047). (C) Correlation between CSF and serum NF-L levels (R² = 0.3298, Pearson’s CC = 0.576).
Figure 2: Correlation between duration of illness from onset to akinetic mutism and NF-L concentration: (A) CSF; (B) serum (R² = 3 × 10⁻⁵, Pearson’s CC = −0.005); (C) average NF-L in CSF and serum (median ± S.D.).
Figure 3: Correlation between cerebrospinal fluid NF-L/serum NF-L levels and disease duration (short-term vs. long-term). (A,B) 0–20 months (short-term): (A) R² = 0.0189, Pearson’s CC = 0.137; (B) R² = 0.0397, Pearson’s CC = −0.0199. (C,D) ≥20 months (long-term): (C) R² = 0.006, Pearson’s CC = 0.077; (D) R² = 0.0003, Pearson’s CC = −0.016. (E) Correlation between CSF NF-L levels and serum NF-L levels in the short term (0–20 months) (R² = 0.053, Pearson’s CC = 0.2303).
Figure 4: Relationship between age at onset and CSF NF-L. (A,B) Age at onset and NF-L concentration ((A): CSF; (B): serum). (C,D) Correlation between age at onset and CSF/serum NF-L: (C) correlation between age at onset and CSF NF-L (R² = 0.0184, Pearson’s CC = 0.135); (D) correlation between age at onset and serum NF-L (R² = 0.0557, Pearson’s CC = 0.236).
15 pages, 2935 KiB  
Article
Integrated Decision and Motion Planning for Highways with Multiple Objects Using a Naturalistic Driving Study
by Feng Gao, Xu Zheng, Qiuxia Hu and Hongwei Liu
Sensors 2025, 25(1), 26; https://doi.org/10.3390/s25010026 - 24 Dec 2024
Viewed by 427
Abstract
With the rise in the intelligence levels of automated vehicles, increasing numbers of modules of automated driving systems are being combined to achieve better performance and adaptability by reducing information loss. In this study, an integrated decision and motion planning system is designed for multi-object highways. A two-layer structure is presented to decouple the influence of the traffic environment and the dynamic control of ego vehicles using the cognitive safety area, the size of which is determined by naturalistic driving behavior. The artificial potential field method is used to comprehensively describe the influence of all external objects on the cognitive safety area, the lateral motion dynamics of which are determined by the attention mechanism of the human driver during lane changes. Then, the interaction between the designed cognitive safety area and the ego vehicle can be simplified into a spring-damping system, and the desired dynamic states of the ego vehicle can be obtained analytically for better computational efficiency. The effectiveness of this approach in improving traffic efficiency, driving comfort, safety, and real-time performance was validated using several comparative tests utilizing complicated scenarios with multiple vehicles.
(This article belongs to the Special Issue Intelligent Control Systems for Autonomous Vehicles)
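The analytic shortcut described above, modeling the coupling between the cognitive safety area and the ego vehicle as a spring-damping system so that desired states follow in closed form, is easy to illustrate in one dimension. The gains and the simplified lateral model below are assumptions for demonstration only:

```python
K, C = 2.0, 1.5        # assumed spring stiffness and damping coefficients
DT = 0.05              # integration step (s)

def step(y: float, v: float, y_ref: float) -> tuple[float, float]:
    """One Euler step of the lateral spring-damper: a = K*(y_ref - y) - C*v."""
    a = K * (y_ref - y) - C * v
    v += a * DT
    y += v * DT
    return y, v

y, v, y_ref = 0.0, 0.0, 3.5       # lane change: 3.5 m lateral offset target
for _ in range(200):              # simulate 10 s
    y, v = step(y, v, y_ref)
print(f"lateral offset after 10 s: {y:.2f} m")
```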
Figure 1: Fundamental aspects of the two-layered decision and planning system.
Figure 2: Decision strategy of lateral motion.
Figure 3: Results of the selected lane change dataset: (a) typical lane change scenarios; (b) distribution of relative speed; (c) distribution of lateral speed of EV.
Figure 4: Statistical results of converting factor α.
Figure 5: Statistical results of lane change time.
Figure 6: Statistical results of evaluation indexes: (a) safety performance; (b) efficiency performance; (c) comfort performance; (d) running time.
Figure 7: Diagram of dynamic multi-task scenario.
Figure 8: Results in multi-task and dynamic scenario: (a) bird’s-eye view of trajectory; (b) longitudinal speed; (c) lateral speed; (d) longitudinal acceleration; (e) lateral acceleration.
36 pages, 2037 KiB  
Article
Contextual Fine-Tuning of Language Models with Classifier-Driven Content Moderation for Text Generation
by Matan Punnaivanam and Palani Velvizhy
Entropy 2024, 26(12), 1114; https://doi.org/10.3390/e26121114 - 20 Dec 2024
Viewed by 867
Abstract
In today’s digital age, ensuring the appropriateness of content for children is crucial for their cognitive and emotional development. The rise of automated text generation technologies, such as Large Language Models like LLaMA, Mistral, and Zephyr, has created a pressing need for effective tools to filter and classify suitable content. However, the existing methods often fail to effectively address the intricate details and unique characteristics of children’s literature. This study aims to bridge this gap by developing a robust framework that utilizes fine-tuned language models, classification techniques, and contextual story generation to generate and classify children’s stories based on their suitability. Employing a combination of fine-tuning techniques on models such as LLaMA, Mistral, and Zephyr, alongside a BERT-based classifier, we evaluated the generated stories against established metrics like ROUGE, METEOR, and BERT Scores. The fine-tuned Mistral-7B model achieved a ROUGE-1 score of 0.4785, significantly higher than the base model’s 0.3185, while Zephyr-7B-Beta achieved a METEOR score of 0.4154 compared to its base counterpart’s score of 0.3602. The results indicated that the fine-tuned models outperformed base models, generating content more aligned with human standards. Moreover, the BERT Classifier exhibited high precision (0.95) and recall (0.97) for identifying unsuitable content, further enhancing the reliability of content classification. These findings highlight the potential of advanced language models in generating age-appropriate stories and enhancing content moderation strategies. This research has broader implications for educational technology, content curation, and parental control systems, offering a scalable approach to ensuring children’s exposure to safe and enriching narratives.
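ROUGE-1, one of the headline metrics above, is unigram overlap between a generated story and a reference. A self-contained sketch of ROUGE-1 F1 with clipped unigram counts; a real evaluation would use a maintained package such as rouge-score, so treat this as illustrative:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between candidate and reference texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())       # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the brave little fox helped a lost bird",
                "a brave fox helped the lost little bird home"))
```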
Figure 1: Evolution of LLMs.
Figure 2: Types of fine-tuning.
Figure 3: Traditional story generation using LLM.
Figure 4: Pipeline of the proposed system.
Figure 5: Prompt generation module.
Figure 6: Supervised fine-tuning.
Figure 7: Gradient normalization of LLaMA, Mistral, and Zephyr models: (a) LLaMA gradient normalization; (b) Mistral gradient normalization; (c) Zephyr gradient normalization.
Figure 8: Learning rate of LLaMA, Mistral, and Zephyr models: (a) LLaMA learning rate; (b) Mistral learning rate; (c) Zephyr learning rate.
Figure 9: Training loss of LLaMA, Mistral, and Zephyr models: (a) LLaMA training loss; (b) Mistral training loss; (c) Zephyr training loss.
Figure 10: BERT Classifier.
Figure 11: Types of classifiers.
Figure 12: Confusion matrix of the BERT Classifier.