Search Results (17,622)

Search Parameters:
Keywords = language

17 pages, 535 KiB  
Review
Feeding Problems Assessment Tools in Children: A Scoping Review
by Suci Destriatania, Judhiastuty Februhartanty, Fariz Nurwidya and Rini Sekartini
Children 2025, 12(1), 37; https://doi.org/10.3390/children12010037 (registering DOI) - 28 Dec 2024
Abstract
‘Feeding problems’ is a term used to describe problems that typically present in children. Problems with feeding during infancy can result in significant negative consequences for a child’s nutrition, growth, and brain development. This scoping review aims to map current research, provide a summary of the available feeding problem assessment tools for children, and review current implications and the gaps between tools, providing information that academics, practitioners, and parents may find useful. Three electronic databases (PubMed, Science Direct, and ProQuest) were searched using terms related to feeding problem assessment tools in children, which included, but were not limited to, “feeding difficult*”, “eating problem”, “eating difficult*”, “tool”, “child*”, and “pediatric”. The following limits were applied to the search: English language, age limit (<18 years old), and publication period (last 10 years). Data management and analysis were carried out manually through discussion with the team. Authors 1 and 2 screened titles and abstracts, then full texts were discussed with the full team to identify articles that met the inclusion and exclusion criteria. Data were charted into a matrix table based on these categories: author, year, population, assessment tools, usage, and aspects. Thematic analysis was carried out to summarize the characteristics of the studies. There were 47 papers included in the study and analysis, in which 23 assessment tools were found. Pedi-EAT was the most frequently used assessment tool in the studies, with nine papers covering it. MCH-FS was the second most frequently chosen tool for quantifying children’s feeding problems, covered in seven papers, as was BPFAS, also with seven papers. In this review, 23 assessment tools were validated and tested for reliability. Pedi-EAT, MCH-FS, and BPFAS were the most commonly used instruments. However, no single instrument comprehensively covers all aspects of feeding problems in children. In addition, the varied usage of the tools and the wide age range they cover indicate that further research is needed to fill the gaps. Full article
(This article belongs to the Section Pediatric Neonatology)
Figure 1: Flow diagram of article collection process.
28 pages, 2499 KiB  
Article
Optimizing Aspect-Based Sentiment Analysis Using BERT for Comprehensive Analysis of Indonesian Student Feedback
by Ahmad Jazuli, Widowati and Retno Kusumaningrum
Appl. Sci. 2025, 15(1), 172; https://doi.org/10.3390/app15010172 (registering DOI) - 28 Dec 2024
Abstract
Evaluating the learning process requires a platform for students to express feedback and suggestions openly through online reviews. Sentiment analysis is often used to analyze review texts but typically captures only overall sentiment without identifying specific aspects. This study develops an aspect-based sentiment analysis (ABSA) model using IndoBERT, a pre-trained model tailored for the Indonesian language. The research uses 10,000 student reviews from Indonesian universities, processed through data labeling, text preprocessing, and splitting, followed by model training and performance evaluation. The model demonstrated superior performance with an aspect extraction accuracy of 0.973, an F1-score of 0.952, a sentiment classification accuracy of 0.979, and an F1-score of 0.974. Experimental results indicate that the proposed ABSA model surpasses previous state-of-the-art models in analyzing sentiment related to specific aspects of educational evaluation. By leveraging IndoBERT, the model effectively handles linguistic complexities and provides detailed insights into student experiences. These findings highlight the potential of the ABSA model in enhancing learning evaluations by offering precise, aspect-focused feedback, contributing to strategies for improving the quality of higher education. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence and Semantic Mining Technology)
Figure 1: The stages of this research.
Figure 2: Krippendorff’s Alpha calculation results.
Figure 3: Confusion matrix.
Figure 4: The training process of aspect extraction.
Figure 5: The training process of sentiment classification.
Figure 6: Confusion matrix of model performance.
Figure 7: Precision, recall, and F1-scores assessing classification accuracy for the lecturer aspect.
Figure 8: Precision, recall, and F1-scores assessing classification accuracy for the curriculum aspect.
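The two-stage ABSA task structure this abstract describes (extract the aspects a review mentions, then classify sentiment per aspect) can be illustrated with a deliberately simple lexicon-based sketch. This is not the authors' IndoBERT model; the aspect and sentiment word lists below are invented, and a real system learns these signals from data rather than from hand-written lists:

```python
# Toy illustration of the two ABSA sub-tasks: aspect extraction followed by
# per-aspect sentiment classification. Lexicons are invented for the example.

ASPECT_TERMS = {
    "lecturer": {"lecturer", "teacher", "instructor"},
    "curriculum": {"curriculum", "syllabus", "course"},
}
POSITIVE = {"clear", "helpful", "excellent", "engaging"}
NEGATIVE = {"confusing", "boring", "outdated", "unhelpful"}

def analyze(review: str) -> list[tuple[str, str]]:
    """Return (aspect, polarity) pairs found in a review."""
    tokens = review.lower().split()
    results = []
    for aspect, terms in ASPECT_TERMS.items():
        positions = [i for i, t in enumerate(tokens) if t in terms]
        if not positions:
            continue  # aspect not mentioned in this review
        # Score sentiment words within a small window around each mention,
        # so each aspect gets its own polarity rather than one overall score.
        window = {t for i in positions for t in tokens[max(0, i - 2):i + 3]}
        score = len(window & POSITIVE) - len(window & NEGATIVE)
        polarity = ("positive" if score > 0
                    else "negative" if score < 0 else "neutral")
        results.append((aspect, polarity))
    return results

pairs = analyze("The lecturer was engaging but the curriculum is outdated")
```

A transformer-based model replaces both the keyword matching and the window heuristic with learned representations, which is what lets it handle the linguistic complexities the abstract mentions.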
35 pages, 2817 KiB  
Article
A Knowledge Graph Framework to Support Life Cycle Assessment for Sustainable Decision-Making
by Lucas Greif, Svenja Hauck, Andreas Kimmig and Jivka Ovtcharova
Appl. Sci. 2025, 15(1), 175; https://doi.org/10.3390/app15010175 (registering DOI) - 28 Dec 2024
Abstract
This study introduces a comprehensive knowledge graph (KG)-based framework designed to support sustainable decision-making by integrating, enriching, and analyzing heterogeneous data sources. The proposed methodology leverages domain expertise, real-world data, and synthetic data generated through language models to address challenges in life cycle assessment (LCA), particularly data scarcity and inconsistency. By modeling the entire product lifecycle, including engineering, production, usage, and disposal phases, the framework facilitates early-stage design decision-making and provides actionable insights for sustainability improvements. The methodology is validated through a case study on 3D printing (3DP), demonstrating its ability to manage complex data, highlight relationships between engineering decisions and environmental impacts, and mitigate data scarcity in the early phases of product development in the context of LCAs. In conclusion, the results demonstrate the framework’s potential to drive sustainable innovation in manufacturing. Full article
(This article belongs to the Special Issue Holistic AI Technologies and Applications)
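The core idea of the framework above, heterogeneous lifecycle data integrated as a graph that can be traversed for sustainability insights, can be sketched with plain subject-relation-object triples. All entity names and emission factors below are hypothetical illustrations, not values from the paper:

```python
# Minimal knowledge-graph sketch for lifecycle data: triples link a product's
# lifecycle phases to resources and their (assumed, illustrative) emission
# factors, so environmental impact can be derived by graph traversal.

TRIPLES = [
    ("3dp_part",     "has_phase",    "production"),
    ("3dp_part",     "has_phase",    "usage"),
    ("production",   "uses",         "pla_filament"),
    ("pla_filament", "emits_kg_co2", 2.0),   # assumed factor
    ("production",   "uses",         "electricity"),
    ("electricity",  "emits_kg_co2", 0.5),   # assumed factor
]

def objects(subject, relation):
    """All objects o such that (subject, relation, o) is in the graph."""
    return [o for s, r, o in TRIPLES if s == subject and r == relation]

def phase_emissions(phase):
    """Traverse phase -> resources used -> emission factors and sum them."""
    return sum(e for res in objects(phase, "uses")
                 for e in objects(res, "emits_kg_co2"))

total = phase_emissions("production")
```

In a full framework the same traversal pattern extends across engineering, usage, and disposal phases, which is what links early design decisions to their downstream impacts.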
15 pages, 1090 KiB  
Article
Using Large Language Models to Retrieve Critical Data from Clinical Processes and Business Rules
by Yunguo Yu, Cesar A. Gomez-Cabello, Svetlana Makarova, Yogesh Parte, Sahar Borna, Syed Ali Haider, Ariana Genovese, Srinivasagam Prabha and Antonio J. Forte
Bioengineering 2025, 12(1), 17; https://doi.org/10.3390/bioengineering12010017 (registering DOI) - 28 Dec 2024
Abstract
Current clinical care relies heavily on complex, rule-based systems for tasks like diagnosis and treatment. However, these systems can be cumbersome and require constant updates. This study explores the potential of the large language model (LLM) LLaMA 2 to address these limitations. We tested LLaMA 2’s performance in interpreting complex clinical process models, such as Mayo Clinic Care Pathway Models (CPMs), and providing accurate clinical recommendations. The LLM was trained on pathway versions encoded in the DOT language and embedded with SentenceTransformer, and was then presented with hypothetical patient cases. We compared the token-level accuracy between the LLM output and the ground truth by measuring both node and edge accuracy. LLaMA 2 accurately retrieved the diagnosis, suggested further evaluation, and delivered appropriate management steps, all based on the pathways. The average node accuracy across the different pathways was 0.91 (SD ± 0.045), while the average edge accuracy was 0.92 (SD ± 0.122). This study highlights the potential of LLMs for healthcare information retrieval, especially when relevant data are provided. Future research should focus on improving these models’ interpretability and their integration into existing clinical workflows. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Complex Diseases)
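The node and edge accuracy metrics the abstract reports can be sketched as a set-overlap computation over the pathway graph. The study's exact matching procedure is not given in this listing, so the set-based version below is one plausible reading, and the DOT-style pathway edges are hypothetical:

```python
# Sketch of node/edge accuracy: compare an LLM's reconstructed clinical
# pathway against the ground-truth graph, as fractions of ground-truth
# nodes and edges that the prediction recovered.

def graph_accuracy(truth_edges, pred_edges):
    """Return (node_accuracy, edge_accuracy)."""
    truth_nodes = {n for e in truth_edges for n in e}
    pred_nodes = {n for e in pred_edges for n in e}
    node_acc = len(truth_nodes & pred_nodes) / len(truth_nodes)
    edge_acc = len(set(truth_edges) & set(pred_edges)) / len(truth_edges)
    return node_acc, edge_acc

# Hypothetical DOT-style pathway edges (node -> node).
truth = [("symptoms", "triage"), ("triage", "imaging"),
         ("imaging", "diagnosis")]
pred = [("symptoms", "triage"), ("triage", "imaging"),
        ("imaging", "treatment")]

node_acc, edge_acc = graph_accuracy(truth, pred)
```

Here the prediction recovers three of four nodes and two of three edges, mirroring how a mostly-correct pathway with one wrong terminal step would score.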
23 pages, 7572 KiB  
Article
Arabic Temporal Common Sense Understanding
by Reem Alqifari, Hend Al-Khalifa and Simon O’Keefe
Computation 2025, 13(1), 5; https://doi.org/10.3390/computation13010005 (registering DOI) - 28 Dec 2024
Abstract
Natural language understanding (NLU) includes temporal text understanding, which can be complex and encompasses temporal common sense understanding. There are many challenges in comprehending common sense within a text. Currently, there is a limited number of datasets containing temporal common sense in English and there is an absence of such datasets specifically for the Arabic language. In this study, an Arabic dataset was constructed based on an available English dataset. This dataset is considered a valuable resource for the Arabic community. Consequently, different multilingual pre-trained language models (PLMs) were applied to both the English and new Arabic datasets. Based on this, the effectiveness of these models in Arabic and English is compared and discussed. After analyzing the errors, a new categorization of errors was proposed. Finally, the ability of the PLMs to understand the input text and predict temporal features was evaluated. Through this detailed categorization of errors and classification of temporal elements, this study establishes a comprehensive framework aimed at clarifying the specific challenges encountered by PLMs in temporal common sense understanding (TCU). This methodology underscores the urgent need for further research on PLMs’ capabilities for TCU tasks. Full article
Figure 1: Example of a TCU challenge in which the model fails to validate the correct answer. The table shows the scenario description, the question, the candidate answers, the correct label (✔), and the model’s incorrect prediction (✕), illustrating the limits of the model’s temporal commonsense reasoning.
Figure 2: Percentage of unique question–answer pairs in each temporal category.
Figure 3: Sample of the dataset. Each row targets one of the five temporal aspects covered by the original dataset, with an example context from the English (MC-TACO) and the translated Arabic datasets. Each context includes its question and all candidate answers, with correct answers in bold; a question may have more than one correct answer, and the number of answers varies.
Figure 4: Model results: F1 score for each temporal aspect.
Figure 5: Predictions of AraBERT vs. CAMeLBERT.
Figure 6: Results of TCU and temporal classification for Arabic and English.
Figure 7: Results of Arabic temporal classification in comparison with TCU.
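The per-aspect F1 scores behind Figure 4 can be reproduced in miniature: each example carries a temporal category plus gold and predicted plausibility labels, and F1 is computed per category. The category names follow MC-TACO's aspect naming, but the example data below are invented:

```python
# Per-temporal-category F1 from binary plausibility judgments.

from collections import defaultdict

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def per_category_f1(examples):
    """examples: (category, gold, predicted) with binary labels."""
    counts = defaultdict(lambda: [0, 0, 0])  # tp, fp, fn per category
    for cat, gold, pred in examples:
        if gold and pred:
            counts[cat][0] += 1
        elif pred and not gold:
            counts[cat][1] += 1
        elif gold and not pred:
            counts[cat][2] += 1
    return {cat: f1(*c) for cat, c in counts.items()}

# Invented judgments for two of the five temporal aspects.
data = [
    ("duration", 1, 1), ("duration", 1, 0), ("duration", 0, 1),
    ("frequency", 1, 1), ("frequency", 0, 0),
]
scores = per_category_f1(data)
```

Breaking the score down by category, as the paper does, is what exposes which temporal aspects a PLM handles poorly rather than hiding them in one aggregate number.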
27 pages, 2436 KiB  
Article
Seeing the Sound: Multilingual Lip Sync for Real-Time Face-to-Face Translation
by Amirkia Rafiei Oskooei, Mehmet S. Aktaş and Mustafa Keleş
Computers 2025, 14(1), 7; https://doi.org/10.3390/computers14010007 (registering DOI) - 28 Dec 2024
Abstract
Imagine a future where language is no longer a barrier to real-time conversations, enabling instant and lifelike communication across the globe. As cultural boundaries blur, the demand for seamless multilingual communication has become a critical technological challenge. This paper addresses the lack of robust solutions for real-time face-to-face translation, particularly for low-resource languages, by introducing a comprehensive framework that not only translates language but also replicates voice nuances and synchronized facial expressions. Our research tackles the primary challenge of achieving accurate lip synchronization across culturally diverse languages, filling a significant gap in the literature by evaluating the generalizability of lip sync models beyond English. Specifically, we develop a novel evaluation framework combining quantitative lip sync error metrics and qualitative assessments by human observers. This framework is applied to assess two state-of-the-art lip sync models with different architectures for Turkish, Persian, and Arabic, using a newly collected dataset. Based on these findings, we propose and implement a modular system that integrates language-agnostic lip sync models with neural networks to deliver a fully functional face-to-face translation experience. Inference time analysis shows that this system achieves highly realistic, face-translated talking heads in real time, with an end-to-end inference time as low as 0.381 s. This transformative framework is primed for deployment in immersive environments such as VR/AR, Metaverse ecosystems, and advanced video conferencing platforms. It offers substantial benefits to developers and businesses aiming to build next-generation multilingual communication systems for diverse applications. While this work focuses on three languages, its modular design allows scalability to additional languages. However, further testing in broader linguistic and cultural contexts is required to confirm its universal applicability, paving the way for a more interconnected and inclusive world where language ceases to hinder human connection. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
Figure 1: A high-level overview of the audio-driven talking head generation concept, in which a generative model takes audio and a reference identity as input and generates a talking head as output.
Figure 2: A modular face-to-face translation workflow.
Figure 3: Stacked bar chart showing the approximate percentage contribution of each component to the total inference time, with and without the Super Resolution component.
Figure 4: Illustration of the impact of Super Resolution on lip shape representation across both system variations, captured from our ablation study.
33 pages, 3507 KiB  
Article
Cognitive Agents Powered by Large Language Models for Agile Software Project Management
by Konrad Cinkusz, Jarosław A. Chudziak and Ewa Niewiadomska-Szynkiewicz
Electronics 2025, 14(1), 87; https://doi.org/10.3390/electronics14010087 (registering DOI) - 28 Dec 2024
Abstract
This paper investigates the integration of cognitive agents powered by Large Language Models (LLMs) within the Scaled Agile Framework (SAFe) to reinforce software project management. By deploying virtual agents in simulated software environments, this study explores their potential to fulfill fundamental roles in IT project development, thereby optimizing project outcomes through intelligent automation. Particular emphasis is placed on the adaptability of these agents to Agile methodologies and their transformative impact on decision-making, problem-solving, and collaboration dynamics. The research leverages the CogniSim ecosystem, a platform designed to simulate real-world software engineering challenges, such as aligning technical capabilities with business objectives, managing interdependencies, and maintaining project agility. Through iterative simulations, cognitive agents demonstrate advanced capabilities in task delegation, inter-agent communication, and project lifecycle management. By employing natural language processing to facilitate meaningful dialogues, these agents emulate human roles and improve the efficiency and precision of Agile practices. Key findings from this investigation highlight the ability of LLM-powered cognitive agents to deliver measurable improvements in various metrics, including task completion times, quality of deliverables, and communication coherence. These agents exhibit scalability and adaptability, ensuring their applicability across diverse and complex project environments. This study underscores the potential of integrating LLM-powered agents into Agile project management frameworks as a means of advancing software engineering practices. This integration not only refines the execution of project management tasks but also sets the stage for a paradigm shift in how teams collaborate and address emerging challenges. 
By integrating the capabilities of artificial intelligence with the principles of Agile, the CogniSim framework establishes a foundation for more intelligent, efficient, and adaptable software development methodologies. Full article
Figure 1: Scrum framework with key artifacts, meetings, and processes [7].
Figure 2: Software engineering layers [5].
Figure 3: A generic Agile iteration cycle illustrating planning, development, review, stakeholder feedback, and continuous improvement.
Figure 4: Conceptual scaled Agile iteration flow: multiple teams coordinating increments, integrating continuously, and aligning with strategic objectives.
Figure 5: Single cognitive agent and its components [17].
Figure 6: (a) Agent overview [38]; (b) cognitive agent architecture represented as a cyclic process with four components.
Figure 7: Applications of Multi-Agent Systems in software engineering.
Figure 8: Layered ecosystem of CogniSim.
Figure 9: Integration of CogniSim with the SAFe framework [52].
Figure 10: Project structure.
Figure 11: Simulation workflow in CogniSim, showing the iterative Agile process from setup through data analysis.
Figure 12: Simulation results.
Figure 13: Conceptual enterprise-scale Agile layers with cognitive agents and MASs (inspired by principles in frameworks such as SAFe 6.0 [25,69]).
Figure 14: MAS concept diagram.
10 pages, 473 KiB  
Article
Parental Compliance with Preschool Vision Screening Test
by Hilit Kerner Lavi, Tal Koval, Ilanit Trifonov, Olga Reitablat and Oriel Spierer
J. Clin. Med. 2025, 14(1), 107; https://doi.org/10.3390/jcm14010107 (registering DOI) - 28 Dec 2024
Abstract
Objective: To assess the barriers to parental compliance with preschool vision screening tests and the recommended follow-up eye care. Methods: This prospective study included children aged 3–6 years attending 46 preschools. Parents were asked for consent for their children to participate in a vision screening test. Parents whose child did not participate due to lack of parental consent and parents whose child failed the screening test were contacted by telephone and given a standardized questionnaire to identify potential barriers to compliance. Results: A total of 1511 children (mean age 4.76 ± 0.76 years, 51.3% boys) were eligible for vision screening. Consent was given by the parents of 1295 children (85.7%). Lack of consent in children who had never been examined by an ophthalmologist was primarily due to unawareness of the screening test or other logistical reasons (117 cases, 92.1%). Of the children screened, 140 (11.1%) failed the test, and 80.0% of their parents adhered to the recommended follow-up eye care. Parents who followed the vision screening test recommendations were more likely to be native language speakers (82.8% vs. 58.8% of mothers and 88.9% vs. 60.0% of fathers; p = 0.049 and 0.015, respectively). There was a higher chance of at least one parent being native-born if recommendations were followed (90.6% vs. 58.8%, p = 0.004). All other factors tested were insignificant. Conclusions: Parental consent and cooperation with the vision screening test and its recommendations were high. Migrant families are more likely to face challenges in following vision screening test recommendations, underscoring the need for tailored approaches for specific populations. Full article
(This article belongs to the Section Ophthalmology)
Figure 1: Parental consent and children’s participation in the vision screening test. * Exclusion was due to inaccuracies in the council’s registry.
1932 KiB  
Proceeding Paper
Research on Text Information Extraction and Analysis of Civil Transport Aircraft Accidents Based on Large Language Model
by Jianzhong Yang, Tao Su and Xiyuan Chen
Eng. Proc. 2024, 80(1), 4; https://doi.org/10.3390/engproc2024080004 (registering DOI) - 27 Dec 2024
Abstract
Civil aviation safety is crucial to the airline transportation industry, and the effective prevention and analysis of accidents are essential. This paper delves into the mining of unstructured textual information within accident reports, tracing the evolution from manual rules to machine learning and then to advanced deep learning techniques. We particularly highlight the advantages of text extraction methods that leverage large language models. We propose an innovative approach that integrates TF-IDF keyword extraction with large language model prompted filtering to scrutinize the causes of accidents involving civil transport aircraft. By analyzing the keywords before and after filtering, this method significantly enhances the efficiency of information extraction, minimizes the need for manual annotation, and thus improves the overall effectiveness of accident prevention and analysis. This research is not only pivotal in preventing similar incidents in the future but also introduces new perspectives for conducting aviation accident investigations and promotes the sustainable development of the civil aviation industry. Full article
Figure 1: Methods for keyword extraction using large model information extraction.
Figure 2: Large language model prompt response process.
Figure 3: High-frequency words in mechanical accident texts.
Figure 4: High-frequency words derived from mechanical accident texts after filtering with ChatGLM3-6b.
Figure 5: High-frequency words in aircraft-crew-related accidents.
Figure 6: High-frequency words derived from aircraft-crew-related accident texts after filtering with ChatGLM3-6b.
Figure 7: High-frequency words in maintenance-related accidents.
Figure 8: High-frequency words derived from maintenance-related accident texts after filtering with ChatGLM3-6b.
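The first stage of the pipeline described above, TF-IDF keyword extraction from accident-report texts, can be sketched in a few lines. The second stage (prompting a large language model such as ChatGLM3-6b to filter the keywords) is omitted here, and the sample reports are invented:

```python
# Pure-Python TF-IDF keyword extraction: score each term by its frequency in
# a document, discounted by how many documents contain it, then keep the top
# few terms per document as candidate keywords.

import math

def tfidf_keywords(docs, top_k=3):
    """Return the top_k TF-IDF-scored terms for each document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = {}                       # document frequency of each term
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    results = []
    for toks in tokenized:
        tf = {t: toks.count(t) / len(toks) for t in set(toks)}
        scores = {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return results

reports = [
    "engine failure during climb engine fire warning",
    "hydraulic leak found during maintenance inspection",
    "crew fatigue during long haul flight crew error",
]
keywords = tfidf_keywords(reports)
```

Words shared across many reports ("during") score near zero, while report-specific terms ("engine", "crew") rise to the top; the LLM filtering stage then prunes the remaining noise.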
15 pages, 820 KiB  
Article
Design and Implementation of a Compiled Declarative Language for Game AI Control
by Christopher Cromer, Martin Araneda and Clemente Rubio-Manzano
Appl. Sci. 2025, 15(1), 157; https://doi.org/10.3390/app15010157 (registering DOI) - 27 Dec 2024
Abstract
Video games have become one of the most popular forms of entertainment around the world. Currently, agents (bots or non-player characters) are predominantly programmed using procedural and deterministic imperative techniques, which pose significant drawbacks in terms of cost and time efficiency. An interesting and alternative line of work is to develop declarative scripting languages which align the programming task closer to human logic. This allows programmers to intuitively implement agents’ behaviors using straightforward rules. In this regard, most of these languages are interpreted, which may impact performance. Hence, this article presents the design and implementation of a new declarative and compiled scripting language called Obelysk for controlling agents. To test and evaluate the language, a video game was created using the Godot game engine, which allowed us to demonstrate the correct functionality of our scripting language to program the AIs participating in the video game. Finally, an analytics platform was also developed to evaluate the correct behavior of the programmed agents. Full article
Figure 1: 2D Alai computer game structure.
Figure 2: 2D Alai computer game: players, opponents, and coins.
Figure 3: The software architecture comprises three modules: compiler, knowledge base, and library. It also shows how the language communicates with the video game engine for a hypothetical case of jumping.
Figure 4: Normal probability distribution for coins.
Figure 5: Normal probability distribution for time.
Figure 6: Time series summarizing the game session of an agent.
Figure 7: Time series summarizing the game session of a human player.
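The declarative style the article advocates, agent behavior expressed as straightforward condition-action rules rather than imperative control flow, can be illustrated with a toy interpreter. Obelysk itself is compiled and its actual syntax does not appear in this listing, so the rules and state fields below are invented:

```python
# Toy rule-driven agent control: behavior is a declarative list of
# (condition, action) rules evaluated top to bottom; the first matching
# rule decides the agent's next action.

RULES = [
    (lambda s: s["enemy_distance"] < 2, "attack"),
    (lambda s: s["health"] < 20, "flee"),
    (lambda s: s["coin_visible"], "collect_coin"),
    (lambda s: True, "patrol"),  # default behavior
]

def decide(state):
    """Evaluate rules in order and return the first matching action."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "idle"

action = decide({"enemy_distance": 5, "health": 15, "coin_visible": True})
```

Adding or reordering rules changes behavior without touching any control flow, which is the cost and maintenance advantage the abstract claims over procedural, deterministic imperative scripting.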
16 pages, 359 KiB  
Article
EduDCM: A Novel Framework for Automatic Educational Dialogue Classification Dataset Construction via Distant Supervision and Large Language Models
by Changyong Qi, Longwei Zheng, Yuang Wei, Haoxin Xu, Peiji Chen and Xiaoqing Gu
Appl. Sci. 2025, 15(1), 154; https://doi.org/10.3390/app15010154 (registering DOI) - 27 Dec 2024
Abstract
Educational dialogue classification is a critical task for analyzing classroom interactions and fostering effective teaching strategies. However, the scarcity of annotated data and the high cost of manual labeling pose significant challenges, especially in low-resource educational contexts. This article presents the EduDCM framework for the first time, offering an original approach to addressing these challenges. EduDCM innovatively integrates distant supervision with the capabilities of Large Language Models (LLMs) to automate the construction of high-quality educational dialogue classification datasets. EduDCM reduces the noise typically associated with distant supervision by leveraging LLMs for context-aware label generation and incorporating heuristic alignment techniques. To validate the framework, we constructed the EduTalk dataset, encompassing diverse classroom dialogues labeled with pedagogical categories. Extensive experiments on EduTalk and publicly available datasets, combined with expert evaluations, confirm the superior quality of EduDCM-generated datasets. Models trained on EduDCM data achieved a performance comparable to that of manually annotated datasets. Expert evaluations using a 5-point Likert scale show that EduDCM outperforms Template-Based Generation and Few-Shot GPT in terms of annotation accuracy, category coverage, and consistency. These findings emphasize EduDCM’s novelty and its effectiveness in generating high-quality, scalable datasets for low-resource educational NLP tasks, thus reducing manual annotation efforts. Full article
(This article belongs to the Special Issue Intelligent Systems and Tools for Education)
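The distant-supervision idea described in the abstract above can be illustrated with a minimal sketch: weak keyword heuristics assign provisional pedagogical labels to classroom utterances, which an LLM would then refine with context-aware relabeling. The category names and keyword lists below are hypothetical illustrations, not taken from the EduTalk dataset, and the LLM refinement step is deliberately left out.

```python
# Minimal distant-supervision sketch: heuristic keyword rules assign
# provisional pedagogical labels to classroom utterances. In a framework
# like EduDCM an LLM would then refine these noisy labels; that step is
# omitted here. Category names and keyword cues are illustrative assumptions.

HEURISTICS = {
    "question": ("why", "how", "what", "?"),
    "feedback": ("well done", "good", "correct", "try again"),
    "instruction": ("open your", "write", "read", "please turn"),
}

def weak_label(utterance: str) -> str:
    """Return a provisional label, or 'other' if no rule fires."""
    text = utterance.lower()
    for label, cues in HEURISTICS.items():
        if any(cue in text for cue in cues):
            return label
    return "other"

def build_dataset(utterances):
    """Pair each utterance with its heuristic label (the 'distant' signal)."""
    return [(u, weak_label(u)) for u in utterances]

if __name__ == "__main__":
    sample = [
        "Why does the ice melt faster in salt water?",
        "Well done, that answer is correct.",
        "Please turn to page twelve.",
    ]
    for utt, lab in build_dataset(sample):
        print(f"{lab:12s} {utt}")
```

The point of the sketch is that such rules are cheap but noisy, which is exactly why the framework pairs them with LLM-based label refinement.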
15 pages, 329 KiB  
Article
AI-Powered Neurogenetics: Supporting Patient’s Evaluation with Chatbot
by Stefania Zampatti, Juliette Farro, Cristina Peconi, Raffaella Cascella, Claudia Strafella, Giulia Calvino, Domenica Megalizzi, Giulia Trastulli, Carlo Caltagirone and Emiliano Giardina
Genes 2025, 16(1), 29; https://doi.org/10.3390/genes16010029 (registering DOI) - 27 Dec 2024
Viewed by 132
Abstract
Background/Objectives: Artificial intelligence and large language models like ChatGPT and Google’s Gemini are promising tools with remarkable potential to assist healthcare professionals. This study explores ChatGPT and Gemini’s potential utility in assisting clinicians during the first evaluation of patients with suspected neurogenetic disorders. Methods: By analyzing the model’s performance in identifying relevant clinical features, suggesting differential diagnoses, and providing insights into possible genetic testing, this research seeks to determine whether these AI tools could serve as a valuable adjunct in neurogenetic assessments. Ninety questions were posed to ChatGPT (Versions 4o, 4, and 3.5) and Gemini: four questions about clinical diagnosis, seven about genetic inheritance, estimable recurrence risks, and available tests, and four questions about patient management, each for six different neurogenetic rare disorders (Hereditary Spastic Paraplegia type 4 and type 7, Huntington Disease, Fragile X-associated Tremor/Ataxia Syndrome, Becker Muscular Dystrophy, and FacioScapuloHumeral Muscular Dystrophy). Results: According to the results of this study, GPT chatbots demonstrated significantly better performance than Gemini. Nonetheless, all AI chatbots showed notable gaps in diagnostic accuracy and a concerning level of hallucinations. Conclusions: As expected, these tools can empower clinicians in assessing neurogenetic disorders, yet their effective use demands meticulous collaboration and oversight from both neurologists and geneticists.
33 pages, 9196 KiB  
Article
Generic Representation Language for Modeling Transport and Material Handling Systems in Smart Manufacturing Systems
by Micael Gonçalves, Paulo Martins, Guilherme Pereira and Rui Sousa
Processes 2025, 13(1), 43; https://doi.org/10.3390/pr13010043 - 27 Dec 2024
Viewed by 236
Abstract
This paper introduces a generic representation language that organizations can use to represent the physical and behavioral characteristics of Transport and Material Handling Systems (TMHS). This work involved systematic observation, analysis and interpretation of several TMHS to ensure that most behaviors were covered. The generic representation language consists of three main types of elements: (i) objects transported, (ii) workstations and (iii) transport/handling equipment (devices), plus a small set of simple, easy-to-use properties defined by users of each organization to characterize each element of a TMHS. Each property is not tied to any specific device and can be used to represent the behavior of different devices. A graphic representation for each element is proposed to make communication between users simpler and more effective, as well as to reduce the time needed to learn and apply the representation language. The representation of three concrete TMHS (with different behaviors, rules and restrictions) is shown, demonstrating the ability, flexibility and comprehensiveness of the developed representation language. These results point to the potential of implementing the developed generic representation language in IT (Information Technology) support systems, in particular in Smart Manufacturing Systems, to control most TMHS.
(This article belongs to the Special Issue Process Automation and Smart Manufacturing in Industry 4.0/5.0)
Show Figures
Figure 1: Overview of the research methodology.
Figure 2: Stages of the representation language.
Figure 3: Graphic representation of a workstation.
Figure 4: Examples of workstations with different characteristics: (a) Traditional storage structure; (b) Transportable storage structure; (c) Storage structure organized by hooks.
Figure 5: Example of a TMHS with three devices.
Figure 6: Example of the definition of a go-to template.
Figure 7: Graphic representation of (a) Device request link; (b) Device request link of each object type that can be transported.
Figure 8: Representation of the device route links for a TMHS that consists of three devices (DV1, DV2 and DV3).
Figure 9: Examples of AGVs: (a) AGV with handling equipment (arm robot); (b) AGV without handling equipment.
Figure 10: Example of the definition of a handling template.
Figure 11: Graphic representation of a device route.
Figure 12: Representation of the device route links and activities for a TMHS that consists of three devices (DV1, DV2 and DV3).
Figure 13: Representation of device DV1.
Figure 14: Representation of a TMHS.
Figure 15: Automated material delivery system with ATV on a shop floor (adapted from [28]).
Figure 16: Representation of an automated material delivery system.
Figure 17: Graph for (a) Alternatives 1, 2 and 3 (conventional equipment and AGV); (b) Alternative 4 (AGW) [10].
Figure 18: Representation of a freight interchange site.
Figure 19: Scheme of routes: (a) Hamiltonian and (b) Double-path (adapted from [25]).
Figure 20: Representation of two different vehicle routes: (a) Hamiltonian distribution strategy; (b) Double-path distribution strategy.
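">
The three element types named in the abstract above (transported objects, workstations, and transport/handling devices) can be sketched as plain data records. The field names and the `add_stop` helper below are illustrative placeholders, not the paper's actual property set.

```python
from dataclasses import dataclass, field

# Sketch of the three element types of a generic TMHS representation
# language: transported objects, workstations, and transport/handling
# devices. Field names here are illustrative assumptions, not the
# paper's own property set.

@dataclass
class TransportedObject:
    name: str

@dataclass
class Workstation:
    name: str
    accepts: list  # object types this station can receive

@dataclass
class Device:
    name: str
    route: list = field(default_factory=list)  # ordered workstation names

    def add_stop(self, station: Workstation) -> None:
        """Append a workstation to this device's route."""
        self.route.append(station.name)

if __name__ == "__main__":
    pallet = TransportedObject("pallet")
    ws1 = Workstation("WS1", accepts=[pallet.name])
    ws2 = Workstation("WS2", accepts=[pallet.name])
    agv = Device("DV1")
    agv.add_stop(ws1)
    agv.add_stop(ws2)
    print(agv.route)  # ['WS1', 'WS2']
```

Keeping properties generic, as in the paper's proposal, means the same `Device` record can describe an AGV, a conveyor, or a manual cart without device-specific fields.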
18 pages, 4041 KiB  
Article
Efficiency Analysis of NIST-Standardized Post-Quantum Cryptographic Algorithms for Digital Signatures in Various Environments
by Dominik Dziechciarz and Marcin Niemiec
Electronics 2025, 14(1), 70; https://doi.org/10.3390/electronics14010070 - 27 Dec 2024
Viewed by 247
Abstract
The advent of quantum computing presents a significant threat to the security of asymmetric cryptographic algorithms, necessitating the adoption of new cryptographic mechanisms resilient to quantum-based attacks. This need is particularly critical for applications that rely exclusively on public-key cryptography, such as digital signatures. This paper presents a comprehensive analysis of the performance of various post-quantum cryptographic algorithms, focusing specifically on NIST-standardized digital signature algorithms—SPHINCS+ and Dilithium—and their practical implementations. The study evaluates these algorithms across different programming languages to identify optimal environments for diverse applications. Comparative analyses with the widely used RSA algorithm reveal that the computational cost of adopting post-quantum cryptographic systems is relatively low. Notably, some post-quantum algorithms demonstrate performance advantages over classical RSA in specific scenarios.
(This article belongs to the Section Computer Science & Engineering)
Show Figures
Figure 1: Performance measurement without ‘warmup’ phase (Dilithium2 algorithm), showcasing significant anomalies.
Figure 2: Performance analysis of C implementation for 1 KB file.
Figure 3: Performance analysis of C implementation for 1 KB file (only NIST level 5 algorithms).
Figure 4: Performance analysis of C implementation for 1 MB file (NIST level 5 algorithms).
Figure 5: Performance analysis of C implementation for 50 MB file (NIST level 5 algorithms).
Figure 6: Performance analysis of C implementation for 1 GB file (NIST level 5 algorithms).
Figure 7: Performance analysis of C++ implementation for 1 KB file (NIST level 5 algorithms).
Figure 8: Performance analysis of Golang implementation for 1 KB file (NIST level 5 algorithms).
Figure 9: Performance analysis of Python implementation for 1 KB file (NIST level 5 algorithms).
Figure 10: Performance analysis of Dilithium5 algorithm across languages for 1 KB file size.
Figure 11: Performance analysis of Dilithium5 algorithm across languages for 1 GB file size.
Figure 12: Comparison between post-quantum algorithms and RSA for 1 KB file size.
Figure 13: Comparison between post-quantum algorithms and RSA for 1 GB file size.
Figure 14: Comparison between post-quantum algorithms with hashed input data and RSA for 1 GB file size.
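">
Figure 1 of this article notes that measuring without a warmup phase produces significant anomalies. The standard timing pattern behind that observation can be sketched as follows; this harness is a generic benchmarking idiom, not the authors' measurement code, and the hashed-input function merely stands in for an actual Dilithium/SPHINCS+/RSA signing call.

```python
import hashlib
import time

def bench(fn, arg, warmup=50, runs=200):
    """Time fn(arg) after a warmup phase, returning mean seconds per call.

    The warmup loop lets caches, branch predictors, and any lazy
    initialization settle before measurement starts, avoiding the
    anomalies seen when timing cold code.
    """
    for _ in range(warmup):
        fn(arg)
    start = time.perf_counter()
    for _ in range(runs):
        fn(arg)
    return (time.perf_counter() - start) / runs

def hash_input(data: bytes) -> bytes:
    # Stand-in workload: hashing the input, as in the hash-then-sign
    # variant compared in Figure 14. A real measurement would invoke a
    # signature routine here instead.
    return hashlib.sha256(data).digest()

if __name__ == "__main__":
    payload = b"x" * 1024  # 1 KB message, mirroring the smallest test size
    mean = bench(hash_input, payload)
    print(f"mean time per call: {mean:.2e} s")
```

Hash-then-sign also explains the Figure 14 comparison: once the input is reduced to a fixed-size digest, signing cost no longer scales with file size.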
18 pages, 558 KiB  
Article
In-Service Teacher Professional Development: Challenges and Opportunities for Innovating the Trichronous Modality of Delivery in Vietnam’s EFL Education
by Tuyen Van Nguyen and Helena Sit
Educ. Sci. 2025, 15(1), 19; https://doi.org/10.3390/educsci15010019 - 27 Dec 2024
Viewed by 186
Abstract
The evolving landscape of educational technology has affected not only the design of teaching and learning content but also the choice of delivery methods. In Vietnam’s language education discipline, research indicates that the integration of educational technology has significantly expanded the range of delivery modalities available to educators. However, whether the existing modalities can effectively cater to diverse learning styles remains uncertain. To bridge this research gap, this study first assesses the effectiveness of commonly utilized delivery modalities in K-12 EFL education. Thirty volunteer EFL teachers from across Vietnam, representing the north, central, and south regions, participated in in-depth interviews. These teachers teach English at primary, secondary, and high schools. The main findings include their current ICT competence levels and preferences for instructional design regarding diverse modalities of delivery. Then, grounded in an in-depth analysis of their choices and perspectives, a trichronous model is proposed to accommodate diverse learning preferences and maximize learning potential. The research findings and proposal are significant for professional development trainers and teacher educators, providing valuable insights for decision-making regarding the increasing use of technology in current EFL research and practice. By addressing challenges and identifying more practical approaches in language teacher education, this study can contribute to shaping a forward-thinking approach to EFL education in an increasingly digitalized world.
Show Figures
Figure 1: Teachers’ preferences for MODs in ICT-related professional in-service training.
Figure 2: Trichronous modality of delivery for in-service teacher training.