Search Results (98)

Search Parameters:
Keywords = automated repositories

24 pages, 9722 KiB  
Article
Automation Applied to the Collection and Generation of Scientific Literature
by Nadia Paola Valadez-de la Paz, Jose Antonio Vazquez-Lopez, Aidee Hernandez-Lopez, Jaime Francisco Aviles-Viñas, Jose Luis Navarro-Gonzalez, Alfredo Valentin Reyes-Acosta and Ismael Lopez-Juarez
Publications 2025, 13(1), 11; https://doi.org/10.3390/publications13010011 - 6 Mar 2025
Abstract
Preliminary activities of searching and selecting relevant articles are crucial in scientific research to determine the state of the art (SOTA) and enhance overall outcomes. While there are automatic tools for keyword extraction, these algorithms are often computationally expensive, storage-intensive, and reliant on institutional subscriptions for metadata retrieval. Most importantly, they still require manual selection of literature. This paper introduces a framework that automates keyword searching in article abstracts to help select relevant literature for the SOTA by identifying matches of key terms that we hereafter call source words. A case study in the food and beverage industry is provided to demonstrate the algorithm’s application. In the study, five relevant knowledge areas were defined to guide literature selection. The database from scientific repositories was categorized using six classification rules based on impact factor (IF), Open Access (OA) status, and JCR journal ranking. This classification revealed the knowledge area with the highest presence and highlighted the effectiveness of the selection rules in identifying articles for the SOTA. The approach included a panel of experts who confirmed the algorithm’s effectiveness in identifying source words in high-quality articles. The algorithm’s performance was evaluated using the F1 Score, which reached 0.83 after filtering out non-relevant articles. This result validates the algorithm’s ability to extract significant source words and demonstrates its usefulness in building the SOTA by focusing on the most scientifically impactful articles.
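
As a rough illustration of the matching step described in this abstract, the sketch below (not the authors' code) counts occurrences of a small set of hypothetical source words in abstracts and scores the resulting relevance decisions with the F1 measure; the word list, threshold, and expert labels are invented for the example.

```python
# Minimal sketch (not the authors' code): count "source word" matches in
# abstracts and score the resulting relevant/non-relevant decision with F1.
# The source words, abstracts, threshold, and expert labels are hypothetical.
import re
from collections import Counter

SOURCE_WORDS = {"fermentation", "traceability", "quality control"}  # assumed key terms

def count_source_words(abstract: str) -> Counter:
    """Count occurrences of each source word (case-insensitive) in an abstract."""
    text = abstract.lower()
    return Counter({w: len(re.findall(re.escape(w), text)) for w in SOURCE_WORDS})

def f1_score(predicted: list[bool], expert: list[bool]) -> float:
    """F1 = 2PR/(P+R) over binary relevant/non-relevant decisions."""
    tp = sum(p and e for p, e in zip(predicted, expert))
    fp = sum(p and not e for p, e in zip(predicted, expert))
    fn = sum(e and not p for p, e in zip(predicted, expert))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

abstracts = ["Fermentation quality control in beverage plants ...",
             "A survey of unrelated topics ..."]
expert_labels = [True, False]                                  # hypothetical panel judgement
predicted = [sum(count_source_words(a).values()) >= 2 for a in abstracts]
print(f1_score(predicted, expert_labels))
```
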
Figure 1: Process for enhancing entity recognition and relationship extraction in language models. Source: Panayi et al. (2023).
Figure 2: Patent research to identify related documents at various stages of their life cycles. Source: Ali et al. (2024).
Figure 3: Proposed framework (created by the authors).
Figure 4: Behavior of the article population applying the classification rules.
Figure 5: Diagram of relationships between the areas of knowledge of the topic to be investigated.
Figure 6: Mamdani’s inference rules for determining the degree of interest of each article.
Figure 7: Analysis of the literature by areas of knowledge and classification rules.
Figure 8: Descriptive analysis of the repetitiveness of the source words in each article in R1 (R1-1).
Figure 9: Descriptive analysis of the repetitiveness of source words in each article in R1 (R1-2).
Figure 10: Descriptive analysis of the repetitiveness of source words in each article in R1 (R1-3).
Figure 11: Descriptive analysis of the repetitiveness of source words in each article in R1 (R1-4).
Figure 12: Descriptive analysis of the repetitiveness of source words in each article in R1 (R1-5).
Figure 13: Descriptive analysis of the repetitiveness of the source words in each article in R3.
Figure 14: Descriptive analysis of the repetitiveness of source words in each article in R4 (first set).
Figure 15: Descriptive analysis of the repetitiveness of source words in each R4 article (second set).
Figure 16: Descriptive analysis of the repetitiveness of source words in each article of R4 (third set).
Figure 17: Descriptive analysis of the repetitiveness of the source words in each article of R5.
Figure 18: Descriptive analysis of the repetitiveness of the source words in each article of R6 (first set).
Figure 19: Descriptive analysis of the repetitiveness of the source words in each R6 article (second set).
29 pages, 3281 KiB  
Article
An Automated Repository for the Efficient Management of Complex Documentation
by José Frade and Mário Antunes
Information 2025, 16(3), 205; https://doi.org/10.3390/info16030205 - 5 Mar 2025
Abstract
The accelerating digitalization of the public and private sectors has made information technologies (IT) indispensable in modern life. As services shift to digital platforms and technologies expand across industries, the complexity of legal, regulatory, and technical requirement documentation is growing rapidly. This increase presents significant challenges in managing, gathering, and analyzing documents, as their dispersion across various repositories and formats hinders accessibility and efficient processing. This paper presents the development of an automated repository designed to streamline the collection, classification, and analysis of cybersecurity-related documents. By harnessing the capabilities of natural language processing (NLP) models—specifically Generative Pre-Trained Transformer (GPT) technologies—the system automates text ingestion, extraction, and summarization, providing users with visual tools and organized insights into large volumes of data. The repository facilitates the efficient management of evolving cybersecurity documentation, addressing issues of accessibility, complexity, and time constraints. This paper explores the potential applications of NLP in cybersecurity documentation management and highlights the advantages of integrating automated repositories equipped with visualization and search tools. By focusing on legal documents and technical guidelines from Portugal and the European Union (EU), this applied research seeks to enhance cybersecurity governance, streamline document retrieval, and deliver actionable insights to professionals. Ultimately, the goal is to develop a scalable, adaptable platform capable of extending beyond cybersecurity to serve other industries that rely on the effective management of complex documentation.
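
A minimal sketch of the kind of ingest–summarize–store pipeline the abstract describes, assuming the OpenAI Python client and pymongo are available; the model name, prompt, and document schema are illustrative assumptions, not the paper's implementation.

```python
# A rough sketch, not the paper's implementation: ingest a document's text,
# ask a GPT model for a summary, and store the result in a MongoDB collection.
# Model name, prompt, and document schema are assumptions for illustration.
from openai import OpenAI          # pip install openai
from pymongo import MongoClient    # pip install pymongo

llm = OpenAI()                                          # reads OPENAI_API_KEY from the environment
db = MongoClient("mongodb://localhost:27017")["repository"]

def ingest(title: str, text: str, area: str) -> None:
    """Summarize a cybersecurity document and persist it with its metadata."""
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize legal/technical documents in 5 sentences."},
            {"role": "user", "content": text[:12000]},  # naive truncation for long texts
        ],
    )
    db.documents.insert_one({
        "title": title,
        "area": area,                                   # e.g. "national law", "technical guideline"
        "summary": response.choices[0].message.content,
        "full_text": text,
    })

ingest("Example decree on network security", "Article 1. ...", "national law")
```
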
Figure 1: Automated repository implementation structure.
Figure 2: Example of a call to GPT-4o API.
Figure 3: Update page.
Figure 4: Messages sent to GPT models.
Figure 5: Simplification of document collection and classification processes.
Figure 6: Example of a document stored in MongoDB’s documents collection.
Figure 7: Flowchart for the process of adding documents to the automated repository.
Figure 8: Repository’s main page.
Figure 9: Regenerate page.
Figure 10: Regenerate document POST request.
Figure 11: Relation graph.
Figure 12: Relations graph detail.
Figure 13: Number of documents grouped by area.
Figure 14: Documents issued over time and grouped by origin.
Figure 15: Cumulative count of documents present in the repository.
Figure 16: Number of documents by type.
Figure 17: Number of documents issued by year and by area.
Figure 18: New Documents page.
27 pages, 3723 KiB  
Article
SESAME: Automated Security Assessment of Robots and Modern Multi-Robot Systems
by Manos Papoutsakis, George Hatzivasilis, Emmanouil Michalodimitrakis, Sotiris Ioannidis, Maria Michael, Antonis Savva, Panagiota Nikolaou, Eftychia Stokkou and Gizem Bozdemir
Electronics 2025, 14(5), 923; https://doi.org/10.3390/electronics14050923 - 26 Feb 2025
Viewed by 182
Abstract
As robotic systems become more integrated into our daily lives, there is growing concern about cybersecurity. Robots used in areas such as autonomous driving, surveillance, surgery, home assistance, and industrial automation can be vulnerable to cyber-attacks, which could have serious real-world consequences. Modern robotic systems face a unique set of threats due to their evolving characteristics. This paper outlines the SESAME project’s methodology for the automated security analysis of multi-robot systems (MRS) and the production of Executable Digital Dependability Identities (EDDIs). Addressing security challenges in MRS involves overcoming complex factors such as increased connectivity, human–robot interactions, and a lack of risk awareness. The proposed methodology encompasses a detailed process, starting from system description and vulnerability identification and moving to the generation of attack trees and security EDDIs. The SESAME security methodology leverages structured repositories like Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), and Common Attack Pattern Enumeration and Classification (CAPEC) to identify potential vulnerabilities and associated attacks. The introduction of Template Attack Trees facilitates modeling potential attacks, helping security experts develop effective mitigation strategies. This approach not only identifies, but also connects, specific vulnerabilities to possible exploits, thereby generating comprehensive security assessments. By merging safety and security assessments, this methodology ensures the overall dependability of MRS, providing a robust framework to mitigate cyber–physical threats.
(This article belongs to the Special Issue Cyber-Physical Systems: Recent Developments and Emerging Trends)
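
The vulnerability-to-attack-pattern linkage described above can be pictured with a small sketch like the following; the CVE, CWE, and CAPEC entries are placeholders rather than real data, and this is not the SESAME tooling itself.

```python
# Illustrative sketch only (not the SESAME tooling): map a robot component's
# known vulnerabilities (CVE -> CWE) to candidate attack patterns (CAPEC),
# the kind of linkage used to seed Template Attack Trees. All entries are
# hypothetical placeholders, not real CVE/CAPEC records.
CVE_DB = {
    "ros-comms 1.2": [("CVE-XXXX-0001", "CWE-306")],    # missing authentication (placeholder)
    "uav-firmware 3.0": [("CVE-XXXX-0002", "CWE-787")], # out-of-bounds write (placeholder)
}
CWE_TO_CAPEC = {
    "CWE-306": ["CAPEC-36 (Using Unpublished Interfaces)"],
    "CWE-787": ["CAPEC-100 (Overflow Buffers)"],
}

def candidate_attacks(components: list[str]) -> dict[str, list[str]]:
    """For each component, collect attack patterns reachable via its CVEs' CWEs."""
    result = {}
    for comp in components:
        patterns = []
        for cve, cwe in CVE_DB.get(comp, []):
            for capec in CWE_TO_CAPEC.get(cwe, []):
                patterns.append(f"{cve} -> {cwe} -> {capec}")
        result[comp] = patterns
    return result

print(candidate_attacks(["ros-comms 1.2", "uav-firmware 3.0"]))
```
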
Figure 1: SESAME security methodology.
Figure 2: Example graph that can be produced utilizing the CanFollow relationship of CAPEC.
Figure 3: Questionnaire wizard.
Figure 4: RVD Java classes of the custom RVD parser.
Figure 5: CAPEC classes of the custom CAPEC identifier.
Figure 6: Template Attack Tree with cyber and physical vulnerabilities.
Figure 7: Area mapping with two UAVs (leader UAV and another UAV).
Figure 8: (A) TIAGo Family and (B) TIAGo BASE Family.
Figure 9: Example Template Attack Tree.
15 pages, 668 KiB  
Article
PenQA: A Comprehensive Instructional Dataset for Enhancing Penetration Testing Capabilities in Language Models
by Xiaofeng Zhong, Yunlong Zhang and Jingju Liu
Appl. Sci. 2025, 15(4), 2117; https://doi.org/10.3390/app15042117 - 17 Feb 2025
Viewed by 332
Abstract
Large language models’ domain-specific capabilities can be enhanced through specialized datasets, yet constructing comprehensive cybersecurity datasets remains challenging due to the field’s multidisciplinary nature. We present PenQA, a novel instructional dataset for penetration testing that integrates theoretical and practical knowledge. Leveraging authoritative sources like MITRE ATT&CK™ and Metasploit, we employ online large language models to generate approximately 50,000 question–answer pairs. We demonstrate PenQA’s efficacy by fine-tuning language models with fewer than 10 billion parameters. Evaluation metrics, including BLEU, ROUGE, and BERTScore, show significant improvements in the models’ penetration testing capabilities. PenQA is designed to be compatible with various model architectures and updatable as new techniques emerge. This work has implications for automated penetration testing tools, cybersecurity education, and decision support systems. The PenQA dataset is available in our GitHub repository.
(This article belongs to the Special Issue AI Technology and Security in Cloud/Big Data)
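
A simplified sketch of the dataset-construction idea outlined in the abstract: segment text, wrap each segment in a role/skills/restrictions prompt, and collect question–answer pairs as JSONL. The prompt wording and the `ask_llm` stub are assumptions standing in for the online LLM calls the paper uses; this is not the released PenQA code.

```python
# A simplified sketch of the pipeline the abstract describes (not the released
# PenQA code): wrap each cleaned text segment in a role/skills/restrictions
# prompt and collect the generated question-answer pairs as JSONL.
# `ask_llm` is a placeholder for whatever online LLM API is used.
import json

PROMPT_TEMPLATE = """Role: penetration-testing instructor.
Skills: explain tool usage and attack techniques precisely.
Restrictions: answer only from the segment below; output JSON with "question" and "answer".

Segment:
{segment}"""

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an online LLM; returns a JSON string."""
    return json.dumps({"question": "What does the segment describe?",
                       "answer": "A stub answer for illustration."})

def build_dataset(segments: list[str], path: str = "penqa_sample.jsonl") -> None:
    """Write one question-answer pair per text segment to a JSONL file."""
    with open(path, "w", encoding="utf-8") as f:
        for seg in segments:
            pair = json.loads(ask_llm(PROMPT_TEMPLATE.format(segment=seg)))
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")

build_dataset(["Usage of an example exploitation module ..."])
```
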
Figure 1: The pipeline of our method for dataset construction and model augmentation. We have sanitized and organized the usage manuals of open-source cybersecurity tools and open-source cybersecurity knowledge documents into coherent text segments. Subsequently, we have extracted specific question-and-answer pairs from several of these segments, which include the usage commands for specific cybersecurity tools and concepts related to network security. Following this, we have designed an enhancement project that enables more capable large language models to learn the correspondences between our crafted questions and answers and the original text segments, thereby generating question-and-answer data for each segmented piece. Finally, we have split the resulting question-and-answer dataset and fine-tuned it on large language models under 10 B parameters to enhance their performance in penetration testing Q&A tasks.
Figure 2: Examples of techniques and some sub-techniques in ATT&CK. These techniques symbolize the distinct phases and objectives of the penetration process, constituting a valuable knowledge base for penetration testing. Within each technique, the subtechniques offer a more granular classification and narrative, allowing cybersecurity practitioners, as well as large language models, to systematically absorb and comprehend the intricacies of the penetration testing process.
Figure 3: The quantitative relationships between different modules. Exploits, Payloads, and Auxiliary modules constitute the majority of all modules by number, with each module typically corresponding to a specific vulnerability. As vulnerabilities continue to be discovered, the modules available in the Metasploit open-source community are continually updated to reflect these findings.
Figure 4: Basic template for generating penetration testing Q&A prompts. By utilizing Roles, Skills, and Restrictions, we assist large language models in comprehending the requirements of a task and the boundaries of their capabilities within that task. This approach ensures the generation of data that are uniformly formatted and aligned with the domain-specific criteria.
Figure 5: An example demonstrating that a model fine-tuned on the dataset answers this question better than one that is not fine-tuned.
14 pages, 8539 KiB  
Article
Responsible Artificial Intelligence Hyper-Automation with Generative AI Agents for Sustainable Cities of the Future
by Daswin De Silva, Nishan Mills, Harsha Moraliyage, Prabod Rathnayaka, Sam Wishart and Andrew Jennings
Smart Cities 2025, 8(1), 34; https://doi.org/10.3390/smartcities8010034 - 17 Feb 2025
Viewed by 316
Abstract
Smart cities are Hyper-Connected Digital Environments (HCDEs) that transcend the boundaries of natural, human-made, social, virtual, and artificial environments. Human activities are no longer confined to a single environment as our presence and interactions are represented and interconnected across HCDEs. The data streams and repositories of HCDEs provide opportunities for the responsible application of Artificial Intelligence (AI) that generates unique insights into the constituent environments and the interplay across constituents. The translation of data into insights poses several complex challenges originating in data generation and then propagating through the computational layers to decision outcomes. To address these challenges, this article presents the design and development of a Hyper-Automated AI framework with Generative AI agents for sustainable smart cities. The framework is empirically evaluated in the living lab setting of a ‘University City of the Future’. The developed AI framework is grounded on the core capabilities of acquisition, preparation, orchestration, dissemination, and retrospection, with an independent cognitive engine for hyper-automation of these AI capabilities using Generative AI. Hyper-automation output feeds into a human-in-the-loop process prior to decision-making outcomes. More broadly, this framework aims to provide a validated pathway for university cities of the future to take up the role of prototypes that deliver evidence-based guidelines for the development and management of sustainable smart cities.
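
A purely conceptual sketch of the capability chain named in the abstract, with a human-in-the-loop gate before a decision is released; the stage bodies are stubs and do not reflect the framework's actual logic.

```python
# Conceptual sketch only: the capabilities named in the abstract chained as
# simple stages, with a human-in-the-loop gate before a decision is released.
# Stage bodies are placeholders invented for illustration.
def acquire():            return {"sensor": "building_energy", "values": [12.4, 13.1]}
def prepare(raw):         return {**raw, "values": [v for v in raw["values"] if v > 0]}
def orchestrate(data):    return {"insight": "demand peak expected", "evidence": data}
def disseminate(insight): print("Draft recommendation:", insight["insight"])

def human_approval(insight) -> bool:
    """Human-in-the-loop gate; here a stub that always approves."""
    return True

def retrospect(insight, approved):
    """Log the outcome so the framework can learn from past decisions."""
    print("Logged for retrospection:", insight["insight"], "approved =", approved)

insight = orchestrate(prepare(acquire()))
disseminate(insight)
retrospect(insight, human_approval(insight))
```
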
Figure 1: The Proposed Responsible AI Framework for Hyper-automation.
Figure 2: Schematic representation of the structure and function of the cognitive engine. The arrows in red are indicative of bi-directional information and instruction flows; for instance, the human agent engages the Council with information and instructions on complex tasks that are deconstructed and assigned to agents, with feedback loops to the human following execution and delivery.
Figure 3: Functional Codification: from active computing to retrieval and execution of pre-established code.
Figure 4: Implementation of the framework with a cognitive engine and agentic AI capabilities.
Figure 5: Human mobility prediction for indoor and outdoor activities.
Figure 6: Energy Consumption and Generation Forecasting for Time-based Decisions on Demand.
Figure 7: Evaluating the impact of events in the building on energy consumption.
Figure 8: PAMAP2 Results 1: Segmentation of High vs. Low Intensity Activities.
Figure 9: PAMAP2 Results 2: Incrementally Learned Sequential Flow of Activities.
39 pages, 4211 KiB  
Review
Comprehensive Review of Robotics Operating System-Based Reinforcement Learning in Robotics
by Mohammed Aljamal, Sarosh Patel and Ausif Mahmood
Appl. Sci. 2025, 15(4), 1840; https://doi.org/10.3390/app15041840 - 11 Feb 2025
Viewed by 626
Abstract
Common challenges in the area of robotics include issues such as sensor modeling, dynamic operating environments, and limited on-board computational resources. To improve decision making, robots need a dependable framework to facilitate communication between different modules and to select the optimal action for real-world applications. The Robotics Operating System (ROS) and Reinforcement Learning (RL) are two promising approaches that help accomplish precise control, seamless integration of sensors and actuators, and learned behavior. The ROS enables seamless communication between heterogeneous components, while RL focuses on learning optimal behaviors through trial-and-error scenarios. Combining the ROS and RL offers superior decision making, improved perception, enhanced automation, and reliability. This work focuses on investigating ROS-based RL applications across various domains, aiming to enhance understanding through comprehensive discussion, analysis, and summarization. We base our evaluation on the application area, type of RL algorithm used, and degree of ROS–RL integration. Additionally, we provide a summary of seminal works that define the current state of the art, along with GitHub repositories and resources for research purposes. Based on the review of successfully implemented projects, we make recommendations highlighting the advantages and limitations of RL techniques for specific applications and environments. The ultimate goal of this work is to advance the robotics field by providing a comprehensive overview of the recent important works that incorporate both the ROS and RL, thereby improving the adaptability of these emerging techniques.
(This article belongs to the Special Issue Artificial Intelligence and Its Application in Robotics)
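
To make the RL half of the ROS–RL loop concrete, here is a minimal tabular Q-learning example written against a toy environment whose reset/step interface mirrors the observe–act cycle a ROS-connected simulator would expose; it is illustrative only and uses no ROS APIs.

```python
# Not from the review: a minimal tabular Q-learning loop against a toy
# environment whose reset/step interface mirrors the observe/act cycle a
# ROS node would expose (subscribe to observations, publish actions).
import random

class CorridorEnv:
    """Toy stand-in for a ROS/Gazebo environment: 5 cells, goal at the end."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                       # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        return self.pos, (1.0 if done else -0.01), done

env = CorridorEnv()
q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}   # state-action value table
alpha, gamma, eps = 0.5, 0.95, 0.1
for _ in range(500):
    s, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection
        a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda x: q[s, x])
        s2, r, done = env.step(a)
        q[s, a] += alpha * (r + gamma * max(q[s2, 0], q[s2, 1]) - q[s, a])
        s = s2
print("Learned policy:", [max((0, 1), key=lambda x: q[s, x]) for s in range(5)])
```
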
Figure 1: Overview of the reinforcement learning process.
Figure 2: Classification of popular RL algorithms.
Figure 3: ABB RobotStudio workflow (see Table 4).
Figure 4: Multi-ROS modular framework (see Table 4).
Figure 5: Hybrid control system architecture for collision avoidance (see Table 4).
Figure 6: Simulation-in-loop block diagram for RL-learned staircase navigation (see Table 4).
Figure 7: ROS-based information flow architecture (see Table 4).
Figure 8: System architecture for simulation-in-loop training (see Table 4).
Figure 9: RL architecture for pick-and-place application (see Table 4).
Figure 10: ROS-based multirobot DRL framework (see Table 4).
Figure 11: Structure of indoor robotics, including five main parts: "world, robot, ROS components, monitor, and learning" (see Table 4).
Figure 12: Architecture of the autonomous navigation system (see Table 4).
Figure 13: ROS nodes and topics, hardware, and software architecture of the embedded robot (see Table 4).
Figure 14: Framework for autonomous robot navigation (see Table 4).
Figure 15: Architecture of off-policy RL framework (see Table 4).
Figure 16: Simulated learning environment using OpenAI Gym, ROS, and Gazebo (see Table 4).
Figure 17: Sensor fusion for RL-based autonomous driving (see Table 4).
Figure 18: ROS publisher–subscriber model (see Table 4).
21 pages, 4087 KiB  
Article
Enhanced Bug Priority Prediction via Priority-Sensitive Long Short-Term Memory–Attention Mechanism
by Geunseok Yang, Jinfeng Ji and Jaehee Kim
Appl. Sci. 2025, 15(2), 633; https://doi.org/10.3390/app15020633 - 10 Jan 2025
Viewed by 389
Abstract
The rapid expansion of software applications has led to an increase in the frequency of bugs, which are typically reported through user-submitted bug reports. Developers prioritize these reports based on severity and project schedules. However, the manual process of assigning bug priorities is time-consuming and prone to inconsistencies. To address these limitations, this study presents a Priority-Sensitive LSTM–Attention mechanism for automating bug priority prediction. The proposed approach extracts features such as product and component details from bug repositories and preprocesses the data to ensure consistency. Priority-based feature selection is applied to align the input data with the task of bug prioritization. These features are processed through a Long Short-Term Memory (LSTM) network to capture sequential dependencies, and the outputs are further refined using an Attention mechanism to focus on the most relevant information for prediction. The effectiveness of the proposed model was evaluated using datasets from the Eclipse and Mozilla open-source projects. Compared to baseline models such as Naïve Bayes, Random Forest, Decision Tree, SVM, CNN, LSTM, and CNN-LSTM, the proposed model achieved a superior performance. It recorded an accuracy of 93.00% for Eclipse and 84.11% for Mozilla, representing improvements of 31.11% and 40.39%, respectively, over the baseline models. Statistical verification confirmed that these performance gains were significant. This study distinguishes itself by integrating priority-based feature selection with a hybrid LSTM–Attention architecture, which enhances prediction accuracy and robustness compared to existing methods. The results demonstrate the potential of this approach to streamline bug prioritization, improve project management efficiency, and assist developers in resolving high-priority issues.
(This article belongs to the Section Computing and Artificial Intelligence)
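
A generic LSTM plus additive-attention classifier in PyTorch, sketched only to illustrate the architecture family the paper builds on; the layer sizes and vocabulary are placeholders, and this is not the authors' Priority-Sensitive model.

```python
# A generic LSTM + attention classifier sketch in PyTorch, illustrating the
# kind of architecture the paper describes; not the authors' exact model,
# and all sizes are placeholders.
import torch
import torch.nn as nn

class LSTMAttentionClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, num_priorities=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)           # one attention score per time step
        self.out = nn.Linear(hidden_dim, num_priorities)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))        # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention weights over time steps
        context = (weights * h).sum(dim=1)             # weighted sum of hidden states
        return self.out(context)                       # logits for priority classes P1..P5

model = LSTMAttentionClassifier()
dummy_reports = torch.randint(0, 20000, (2, 50))       # two tokenized bug reports
print(model(dummy_reports).shape)                      # torch.Size([2, 5])
```
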
Figure 1: Example of Android Report (#240016030).
Figure 2: Overview of our approach.
Figure 3: Overview of the feature selection.
Figure 4: Overview of the LSTM–Attention algorithm.
Figure 5: Summary of our model.
Figure 6: Performance of proposed model.
Figure 7: Comparison of performance based on dropout parameter for Eclipse.
Figure 8: Comparison of performance based on dropout parameter for Mozilla.
Figure 9: Performance of the proposed model based on layer parameters.
Figure 10: Comparison of proposed model and non-feature-selection algorithm.
Figure 11: Comparison of LSTM–Attention and baseline models for Eclipse.
Figure 12: Comparison of LSTM–Attention and baseline models for Mozilla.
Figure 13: Comparison of baseline models’ (ML) performance with proposed model for Eclipse.
Figure 14: Comparison of baseline models’ (DL) performance with proposed model for Eclipse.
Figure 15: Comparison of baseline models’ (ML) performance with proposed model for Mozilla.
Figure 16: Comparison of baseline models’ (DL) performance with proposed model for Mozilla.
17 pages, 19075 KiB  
Article
A Channel Attention-Driven Optimized CNN for Efficient Early Detection of Plant Diseases in Resource Constrained Environment
by Sana Parez, Naqqash Dilshad and Jong Weon Lee
Agriculture 2025, 15(2), 127; https://doi.org/10.3390/agriculture15020127 - 8 Jan 2025
Viewed by 524
Abstract
Agriculture is a cornerstone of economic prosperity, but plant diseases can severely impact crop yield and quality. Identifying these diseases accurately is often difficult due to limited expert availability and ambiguous information. Early detection and automated diagnosis systems are crucial to mitigate these challenges. To address this, we propose a lightweight convolutional neural network (CNN) designed for resource-constrained devices, termed LeafNet. LeafNet draws inspiration from the block-wise VGG19 architecture but incorporates several optimizations, including a reduced number of parameters, smaller input size, and faster inference time while maintaining competitive accuracy. The proposed LeafNet leverages small, uniform convolutional filters to capture fine-grained details of plant disease features, with an increasing number of channels to enhance feature extraction. Additionally, it integrates channel attention mechanisms to prioritize disease-related features effectively. We evaluated the proposed method on four datasets: the benchmark plant village (PV), the data repository of leaf images (DRLIs), the newly curated plant composite (PC) dataset, and the BARI Sunflower (BARI-Sun) dataset, which includes diverse and challenging real-world images. The results show that the proposed LeafNet performs comparably to state-of-the-art methods in terms of accuracy, false positive rate (FPR), model size, and runtime, highlighting its potential for real-world applications.
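
The channel attention mentioned in the abstract can be sketched as a standard squeeze-and-excitation style block; the reduction ratio and layer sizes below are assumptions, not LeafNet's actual configuration.

```python
# A standard squeeze-and-excitation style channel-attention block in PyTorch,
# shown only to illustrate the mechanism; sizes and reduction ratio are
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)             # global average pooling (GAP)
        self.fc = nn.Sequential(                        # two fully connected layers
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                               # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        scale = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * scale                                # re-calibrated feature maps

features = torch.randn(4, 64, 32, 32)                   # dummy convolutional feature maps
print(ChannelAttention(64)(features).shape)             # torch.Size([4, 64, 32, 32])
```
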
Figure 1: The proposed optimized LeafNet for efficient plant disease detection.
Figure 2: Two fully connected layers, a multiplication operation, and GAP make up the channel attention module. This module can re-calibrate the input feature maps.
Figure 3: Confusion matrices of the proposed LeafNet for every dataset included in the experiment. (a) PV. (b) DRLI. (c) PC. (d) BARI-Sun.
Figure 4: The accuracy and loss of the proposed LeafNet method during training and validation on the PC dataset. (a) Accuracy. (b) Loss.
Figure 5: Qualitative evaluation of LeafNet using the included datasets. Accurate predictions of the input images are highlighted in blue, while red represents inaccurate ones.
28 pages, 2683 KiB  
Article
GDT Framework: Integrating Generative Design and Design Thinking for Sustainable Development in the AI Era
by Yongliang Chen, Zhongzhi Qin, Li Sun, Jiantao Wu, Wen Ai, Jiayuan Chao, Huaixin Li and Jiangnan Li
Sustainability 2025, 17(1), 372; https://doi.org/10.3390/su17010372 - 6 Jan 2025
Viewed by 775
Abstract
The ability of AI to process vast datasets can enhance creativity, but its rigid knowledge base and lack of reflective thinking limit sustainable design. Generative Design Thinking (GDT) integrates human cognition and machine learning to enhance design automation. This study aims to explore the cognitive mechanisms underlying GDT and their impact on design efficiency. Using behavioral coding and quantitative analysis, we developed a three-tier cognitive model comprising a macro-cycle (knowledge acquisition and expression), meso-cycle (creative generation, intelligent evaluation, and feedback adjustment), and micro-cycle (knowledge base and model optimization). The findings reveal that increased task complexity elevates cognitive load, supporting the hypothesis that designers need to allocate more cognitive resources for complex problems. Knowledge base optimization impacts design efficiency more significantly than generative model refinement. Moreover, creative generation, evaluation, and feedback adjustment are interdependent, highlighting the importance of a dynamic knowledge base for creativity. This study challenges traditional design automation approaches by advocating for an adaptive framework that balances cognitive processes and machine capabilities. The results suggest that improving knowledge management and reducing cognitive load can enhance design outcomes. Future research should focus on developing flexible, real-time knowledge repositories and optimizing generative models for interdisciplinary and sustainable design contexts.
Figure 1: Generative Design Cognitive Model.
Figure 2: Double Diamond Modeling and Generative Design Thinking.
Figure 3: Von Neumann Model and Generative Design Thinking.
Figure 4: Structure of the Generative Design Thinking Model.
Figure 5: Generative design thinking model mechanism reasoning process.
Figure 6: Product design process framework driven by generative design thinking.
17 pages, 860 KiB  
Review
Artificial Intelligence in Wound Care: A Narrative Review of the Currently Available Mobile Apps for Automatic Ulcer Segmentation
by Davide Griffa, Alessio Natale, Yuri Merli, Michela Starace, Nico Curti, Martina Mussi, Gastone Castellani, Davide Melandri, Bianca Maria Piraccini and Corrado Zengarini
BioMedInformatics 2024, 4(4), 2321-2337; https://doi.org/10.3390/biomedinformatics4040126 - 11 Dec 2024
Viewed by 953
Abstract
Introduction: Chronic ulcers significantly burden healthcare systems, requiring precise measurement and assessment for effective treatment. Traditional methods, such as manual segmentation, are time-consuming and error-prone. This review evaluates the potential of artificial intelligence (AI)-powered mobile apps for automated ulcer segmentation and their application in clinical settings. Methods: A comprehensive literature search was conducted across the PubMed, CINAHL, Cochrane, and Google Scholar databases. The review focused on mobile apps that use fully automatic AI algorithms for wound segmentation. Apps requiring additional hardware or lacking sufficient technical documentation were excluded. Key technological features, clinical validation, and usability were analysed. Results: Ten mobile apps were identified, showing varying levels of segmentation accuracy and clinical validation. However, many apps did not publish sufficient information on the segmentation methods or algorithms used, and most lacked details on the databases employed for training their AI models. Additionally, several apps were unavailable in public repositories, limiting their accessibility and independent evaluation. These factors challenge their integration into clinical practice despite promising preliminary results. Discussion: AI-powered mobile apps offer significant potential for improving wound care by enhancing diagnostic accuracy and reducing the burden on healthcare professionals. Nonetheless, the lack of transparency regarding segmentation techniques, unpublished databases, and the limited availability of many apps in public repositories remain substantial barriers to widespread clinical adoption. Conclusions: AI-driven mobile apps for ulcer segmentation could revolutionise chronic wound management. However, overcoming limitations related to transparency, data availability, and accessibility is essential for their successful integration into healthcare systems.
(This article belongs to the Section Imaging Informatics)
Graphical abstract
Figure 1: Example of automated wound assessment process using AI-powered mobile app.
Figure 2: The radar charts provide a visual comparison of various wound monitoring apps based on six key criteria, namely platform availability, regulatory approval, inter-reliability, peer-reviewed studies, the disclosure of methods/algorithms, and the use of public datasets. Each chart represents a single app, highlighting strengths and weaknesses across these categories.
40 pages, 20840 KiB  
Article
Facial Biosignals Time–Series Dataset (FBioT): A Visual–Temporal Facial Expression Recognition (VT-FER) Approach
by João Marcelo Silva Souza, Caroline da Silva Morais Alves, Jés de Jesus Fiais Cerqueira, Wagner Luiz Alves de Oliveira, Orlando Mota Pires, Naiara Silva Bonfim dos Santos, Andre Brasil Vieira Wyzykowski, Oberdan Rocha Pinheiro, Daniel Gomes de Almeida Filho, Marcelo Oliveira da Silva and Josiane Dantas Viana Barbosa
Electronics 2024, 13(24), 4867; https://doi.org/10.3390/electronics13244867 - 10 Dec 2024
Viewed by 642
Abstract
Visual biosignals can be used to analyze human behavioral activities and serve as a primary resource for Facial Expression Recognition (FER). FER computational systems face significant challenges, arising from both spatial and temporal effects. Spatial challenges include deformations or occlusions of facial geometry, while temporal challenges involve discontinuities in motion observation due to high variability in poses and dynamic conditions such as rotation and translation. To enhance the analytical precision and validation reliability of FER systems, several datasets have been proposed. However, most of these datasets focus primarily on spatial characteristics, rely on static images, or consist of short videos captured in highly controlled environments. These constraints significantly reduce the applicability of such systems in real-world scenarios. This paper proposes the Facial Biosignals Time–Series Dataset (FBioT), a novel dataset providing temporal descriptors and features extracted from common videos recorded in uncontrolled environments. To automate dataset construction, we propose Visual–Temporal Facial Expression Recognition (VT-FER), a method that stabilizes temporal effects using normalized measurements based on the principles of the Facial Action Coding System (FACS) and generates signature patterns of expression movements for correlation with real-world temporal events. To demonstrate feasibility, we applied the method to create a pilot version of the FBioT dataset. This pilot resulted in approximately 10,000 s of public videos captured under real-world facial motion conditions, from which we extracted 22 direct and virtual metrics representing facial muscle deformations. During this process, we preliminarily labeled and qualified 3046 temporal events representing two emotion classes. As a proof of concept, these emotion classes were used as input for training neural networks, with results summarized in this paper and available in an open-source online repository.
(This article belongs to the Section Artificial Intelligence)
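
As a small illustration of the normalized distance measurements such a dataset is built from, the sketch below computes a mouth-opening value from two facial landmarks and normalizes it by inter-ocular distance; the landmark indices follow the 68-point Dlib convention, the frame data are fabricated, and this is not the FBioT pipeline.

```python
# Illustrative computation (not the FBioT pipeline): a "mouth opening" style
# measurement as the distance between two facial landmarks, normalized by
# inter-ocular distance so the time series is less sensitive to scale changes.
# Landmark indices (62/66 inner lips, 36/45 eye corners) follow the 68-point
# Dlib convention; the frame data here are fabricated.
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_opening(landmarks):
    """landmarks: dict index -> (x, y); returns normalized vertical mouth opening."""
    opening = dist(landmarks[62], landmarks[66])
    inter_ocular = dist(landmarks[36], landmarks[45])
    return opening / inter_ocular

# Two fabricated frames of a widening mouth
frames = [
    {62: (100, 150), 66: (100, 158), 36: (70, 100), 45: (130, 100)},
    {62: (100, 148), 66: (100, 166), 36: (70, 100), 45: (130, 100)},
]
series = [mouth_opening(f) for f in frames]
print(series)   # increasing values indicate the mouth opening over time
```
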
Figure 1: An illustration of the complete process, from face detection to the semantic level, where each face image is correlated with labeled events. The steps include: (1) face cropping, (2) facial landmark detection, (3) landmark normalization, (4) feature extraction, and (5) analysis and event correlation. Illustration created by the authors. Image of the person generated by AI [55].
Figure 2: The proposed pipeline for generating the FBioT dataset consists of the following modules: Flow [A] (Indexer, Feature Extractor L1, Video Adjuster, Measure Maker), and Flow [B] (Manual and Automatic Labelers). Each module produces its own dataset as output. Indexing can be performed using streaming videos or local videos.
Figure 3: The Feature Extractor L1 module extracts image features, including (1)–(2) the region of interest, (3) the main facial features, and (4) facial landmarks. These features are utilized to (5) identify and standardize the biosignals, where each point has X and Y coordinates. Illustration created by the authors. Image of the person generated by AI [55].
Figure 4: An example illustrating how changes in image dimensionality occur due to camera movement along the Z axis in (2) and (3), with (1) demonstrating the effect of dimensionality normalization. Illustration created by the authors. Image of the person generated by AI [55].
Figure 5: Example of the same open mouth seen from different poses, resulting in distortions in the absolute pixel-by-pixel measurements (red line). With the Video Adjuster module it is possible to estimate these distortions and determine that the measurements belong to the same mouth opening (blue line). Three-dimensional model by [60].
Figure 6: (a) Example of a schematic representation of the theoretical landmarks of the FACS system for action unit detection. Image of the person generated by AI [55]. (b) Diagram of the landmarks detected by the Dlib model, where the enumeration corresponds to the following: face contours (1–17); eyebrows left (18–22); eyebrows right (23–27); nose top (28–31); nose base (32–36); eye left (37–42); eye right (43–48); mouth and lips (49–68). Illustration adapted from [61].
Figure 7: Example of measurement acquisition of vertical mouth opening (d_n = distance between points 15–19, see Figure 6) over time, which results in time–series measurements.
Figure 8: The schematic flow of the manual labeling process consists of three steps. In step (1), representative measures are selected to label an expression. In (2), the start and end frames of the movement related to the expression are identified. In (3), the class name and the selected measurements are added to the labeling file on the rows corresponding to the selected interval frames. The facial expression images are adapted from [10].
Figure 9: To identify subseries similar to a given pattern, the Euclidean distance was calculated. In step (1), the measures of interest that characterize the expression are selected, and sequences that are the most similar to the patterns for each measure are identified individually. In step (2), a similarity filter is applied to select intervals where the patterns occurred in both measures of interest. In step (3), the class name and selected measures are added to the labeling file for the frames corresponding to the identified intervals.
Figure 10: Schematic representation of the AU1 measurement process, which involves raising the eyebrows. Motion detection measures the distance between the landmarks on the eyebrows and the nasal bone. The facial expression images are adapted from [10].
Figure 11: Temporal evolution of AU9 movement. The facial expression images are adapted from [10].
Figure 12: The graph shows the rotation angles of the BIWI ground truth annotations compared to the results estimated using the developed model (Biosignals) and the OpenFace framework. The values correspond to the processing of video 22 from the BIWI dataset.
Figure 13: Normalization result of the M3 measurement from a video, where it is possible to observe that the face remains stable even during translation movement. The facial expression images are adapted from [10].
Figure 14: Result of the dynamic effect of rotation around the Z axis throughout video ID = 2 of the dataset. Values before and after normalization are as follows: mean = −0.06, STD = 5.80; mean = −2.12, STD = 1.06, respectively.
Figure 15: Mean and standard deviation for rotation variations across all videos in the dataset.
Figure 16: Percentage of face Z axis translation normalization over time for video ID = 21. Values before and after normalization are as follows: mean = 0.36, STD = 0.20; mean = 0.46, STD = 0.07, respectively.
Figure 17: Percentage of normalization (translation in Z axis), in terms of mean and standard deviation, for all videos in the dataset.
Figure 18: A smiling signature of video ID = 21 from the CK dataset with the main measurements over time. The facial expression images are adapted from [10].
Figure 19: Z rotation normalization for video 24 in the CK+ dataset.
Figure 20: Z translation normalization for video 24 in the CK+ dataset.
Figure 21: Similarity matrix obtained from the cross-test between the M1 (left) and M12 (right) measurements of each CK+ happy sample.
Figure 22: Architecture of the network with reference data, consisting of four layers: sequential, LSTM, dropout, and dense.
Figure 23: Training results: accuracy and loss curves for the reference network.
Figure 24: ROC curve and confusion matrix.
Figure 25: Z rotation normalization for video 70 in the AFEW dataset.
Figure 26: Z translation normalization for video 70 in the AFEW dataset.
Figure 27: Training process for arousal neural network. Left: accuracy; right: loss.
Figure 28: Training process for valence neural network. Left: accuracy; right: loss.
Figure 29: ROC curves for the AFEW reference neural network. Left: arousal; right: valence.
Figure 30: The Manual Labeler L0 module comprises the following processes: (a) graphical analysis of time–series measures, (b) selection of the start and end frames of the expression through visualization of each frame, and (c) visualization of annotated data [10]. Sample video accessible at [65].
Figure 31: Similarity matrix generated by cross-testing measurements M1 (left) and M12 (right) of each automatically found seed from the FBioT dataset.
Figure 32: Coincident samples found based on seed search from the Automatic Labeler module.
Figure 33: Summarized results of the seed search, grouped in blocks of five units, versus the number of occurrences.
Figure 34: Neural network architecture for the proposed dataset prototype.
Figure 35: Training results: accuracy and loss curves for the neural network prototype of biosignals.
Figure 36: ROC curve (left) and confusion matrix (right). Happy samples are represented by 0 and neutral samples by 1.
Figure 37: Prototype for visualizing measurements from local video, with respective graphs of measurements over time. Legend of the measurements: Red (M3), Blue (M13), Purple (M8), and Yellow (E1). The facial expression image is adapted from [10]. To exemplify the application's functionality, a mirrored video of the expression acquired from the CK+ dataset was used to obtain a complete smile expression (onset-apex-offset).
Figure 38: Prototype for visualizing time–series pattern inference from local video, for the measurement M12. The red sequence represents the corresponding neural network inference, while the blue represents other temporal measurements. Three-dimensional model by [60].
25 pages, 14192 KiB  
Article
A Low-Cost Remotely Configurable Electronic Trap for Insect Pest Dataset Generation
by Fernando León-García, Jose M. Palomares, Meelad Yousef-Yousef, Enrique Quesada-Moraga and Cristina Martínez-Ruedas
Appl. Sci. 2024, 14(22), 10307; https://doi.org/10.3390/app142210307 - 9 Nov 2024
Viewed by 775
Abstract
The precise monitoring of insect pest populations is the foundation of Integrated Pest Management (IPM) for pests of plants, humans, and animals. Digital technologies can be employed to address the main challenges, such as reducing the IPM workload and enhancing decision-making accuracy. In this study, digital technologies are used to deploy an automated trap for capturing images of insects and generating centralized repositories on a server. Subsequently, advanced computational models can be applied to analyze the collected data. The study provides a detailed description of the prototype, designed with a particular focus on its remote reconfigurability to optimize repository quality, and the server, accessible via an API interface to enhance system interoperability and scalability. Quality metrics are presented through an experimental study conducted on the constructed demonstrator, emphasizing trap reliability, stability, performance, and energy consumption, along with an objective analysis of image quality using metrics such as RMS contrast, Image Entropy, image sharpness, the Natural Image Quality Evaluator (NIQE), and the Modulation Transfer Function (MTF). This study contributes to the optimization of the current knowledge regarding automated insect pest monitoring techniques and offers advanced solutions for the current systems.
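
Two of the cited image quality metrics, RMS contrast and image entropy, can be computed as in the short NumPy sketch below; it illustrates the metrics themselves and is not the paper's evaluation code.

```python
# A small sketch (not the paper's evaluation code) of two of the cited image
# quality metrics computed with NumPy: RMS contrast and Shannon image entropy
# on a grayscale image. The image here is synthetic.
import numpy as np

def rms_contrast(gray: np.ndarray) -> float:
    """Standard deviation of intensities normalized to [0, 1]."""
    g = gray.astype(np.float64) / 255.0
    return float(g.std())

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of the 256-bin intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # synthetic frame
print(rms_contrast(img), image_entropy(img))
```
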
Figure 1: ESP32-CAM.
Figure 2: OV2640 and OV5640 (with autofocus) cameras.
Figure 3: Trap-embedded device structure.
Figure 4: Central server structure.
Figure 5: Database entities and relationships. * The <configuration> aggregation in DeviceConfiguration is described in Table 4.
Figure 6: Internal working states of the trap.
Figure 7: Detailed procedure of the trap states. The figure illustrates the concurrent processes programmed in the electronic trap device through various flow diagrams. Each black circle symbolizes the start of execution for each process, each diamond represents a conditional situation that adapts the logic, and the solid arrows indicate procedural flow. The dashed arrows represent events that trigger the initiation of a task.
Figure 8: Sequence diagram for user–trap interaction via the server API in Monitoring mode. The diagram shows the chronological sequence of data transmission and reception events from top to bottom, along with the computational load triggered at the receiving end. Three types of lines are used: bold vertical dashed lines represent the passage of time for each task, smaller dashed segments along these bold lines mark specific time periods within each task, and solid horizontal lines indicate the sending and receiving of events. The diagram includes the processing threads for the trap processes (Configuration Task and Camera Task, enclosed within the green square on the left), the processing thread for the server API (within the red square in the center), and the user's interaction with the API (displayed on the right side of the figure).
Figure 9: Sequence diagram for user–trap interaction via the server API in WebServer mode. The diagram is interpreted in the same way as described in Figure 8.
Figure 10: Trap device prototype.
Figure 11: Web API configuration interface.
Figure 12: WebServer configuration interface.
Figure 13: Energy consumption of the trap with relevant intervals labeled. From left to right: (a) system start-up, (b)–(f) frame upload to the server, (g) activation of the web server without clients, (h) active web server with a connected client, and (i) active web server with a client in streaming mode.
Figure 14: Example of an image captured by the OV-2640 (RYS model) camera.
24 pages, 5723 KiB  
Article
Cloud-Based Automatic Configuration and Disaster Recovery of Communication Systems Applied in Engineering Training
by J. D. Morillo Reina and T. J. Mateo Sanguino
Electronics 2024, 13(21), 4203; https://doi.org/10.3390/electronics13214203 - 26 Oct 2024
Viewed by 775
Abstract
Network management and troubleshooting require not only a grasp of advanced concepts but also the development of analytical and problem-solving skills. To bridge this gap, this paper introduces a novel network administration system, DRACSC (Spanish acronym for device for automatic recovery and configuration of communication systems), designed for the automatic configuration and disaster recovery of communication equipment. The system addresses the limitations of current hardware and software solutions by combining their advantages, with portability, automated functions, and a cloud-based repository as its main features. The DRACSC system underwent a comprehensive large-scale evaluation across multiple institutions with 89 users, including students and teachers at educational centers as well as ICT (Information and Communication Technology) professionals. The benefits of the system were assessed through a training program based on simulated real-world ICT environments, focusing both on quantitative results for the reduction in time to complete user tasks and on qualitative results for the interface and usability of the system. Statistical analysis, including Welch’s t-test on opinion surveys, indicated a significant increase in knowledge and understanding, demonstrating the system’s potential to enhance education and practice. Moreover, the evaluation shed light on the user experience, with positive impacts observed on learning and teaching. As a result, the study verified that the system can significantly influence network management practices, enhancing both learning and professional application through improved efficiency and usability. Full article
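To make the statistical comparison named in the abstract concrete, the following minimal sketch runs Welch's t-test on two groups of survey scores with SciPy. The scores, group labels, and interpretation are made-up placeholders, not data from the study.

# Minimal sketch (hypothetical data): Welch's t-test between two survey groups.
from scipy import stats

group_a = [3, 4, 4, 5, 3, 4, 5, 4]  # hypothetical scores, e.g. before training
group_b = [4, 5, 5, 5, 4, 5, 5, 4]  # hypothetical scores, e.g. after training

# equal_var=False selects Welch's t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {t_stat:.3f}, p = {p_value:.4f}")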
Show Figures

Figure 1. Interaction between DRACSC, cloud repository, and managed devices.
Figure 2. DRACSC device.
Figure 3. XML structure of a MACRO function.
Figure 4. View of the web interface of the DRACSC device: (a) list of managed equipment, (b) execution of a MACRO function, and (c) editing of a preset MACRO function.
Figure 5. View of the repository web interface: (a) list of MACRO functions, (b) creation of metadata with “standard MACRO”, and (c) creation of script with “standard MACRO”.
Figure 6. Sequence diagram of Task 1 (red) and Task 2 (blue).
Figure 7. Sequence diagram of Task 4.
Figure 8. Keystroke comparison between manual and DRACSC settings.
Figure 9. Time comparison between setting up manually vs. DRACSC.
Figure 10. Comparative results between groups for knowledge/learning topics.
Figure 11. Comparative results between groups for interest/motivation topics.
Figure 12. Comparative results between groups for usability/practicality topics.
Figure 13. Comparative results between groups for results/feasibility topics.
Figure 14. Comparative results between students, teachers, and professionals.
Figure 15. Trend line analysis among students from different high schools.
Figure 16. Questions grouped by learning topic.
Figure 17. Time taken to complete use cases.
Figure 18. Symmetric plot of the FCA about the evaluation questionnaire.
17 pages, 2974 KiB  
Article
TreeSeg—A Toolbox for Fully Automated Tree Crown Segmentation Based on High-Resolution Multispectral UAV Data
by Sönke Speckenwirth, Melanie Brandmeier and Sebastian Paczkowski
Remote Sens. 2024, 16(19), 3660; https://doi.org/10.3390/rs16193660 - 1 Oct 2024
Viewed by 1286
Abstract
Single-tree segmentation on multispectral UAV images shows significant potential for effective forest management, such as automating forest inventories or, when combined with an additional classifier, detecting damage and diseases. We propose an automated workflow for segmentation on high-resolution data and provide our trained models in a Toolbox for ArcGIS Pro on our GitHub repository for other researchers. The database used for this study consists of multispectral UAV data (RGB, NIR, and red edge bands) of a forest area in Germany containing a mix of five deciduous and three conifer tree species in the mature closed-canopy stage at approximately 90 years. Information from the NIR and red edge bands is evaluated for tree segmentation using different vegetation indices (VIs) in comparison to using RGB information only. We trained Faster R-CNN, Mask R-CNN, TensorMask, and SAM in several experiments and evaluated model performance on different data combinations. All models except SAM show good performance on our test data, with the Faster R-CNN model trained on the red and green bands and the Normalized Difference Red Edge Index (NDRE) achieving the best results, with an F1-Score of 83.5% and an Intersection over Union of 65.3% on highly detailed labels. All models are provided in our TreeSeg toolbox, which allows the user to apply the pre-trained models to new data. Full article
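As a hedged illustration of the band math and evaluation metric named in the abstract, the sketch below shows one plausible way to compute the NDRE input band and the Intersection over Union score with NumPy. It is a minimal example under assumed array inputs, not code from the TreeSeg toolbox.

# Minimal sketch (assumed inputs, not the toolbox code): NDRE band and IoU score.
import numpy as np

def ndre(nir: np.ndarray, red_edge: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """NDRE = (NIR - RedEdge) / (NIR + RedEdge), computed per pixel."""
    nir = nir.astype(np.float64)
    red_edge = red_edge.astype(np.float64)
    return (nir - red_edge) / (nir + red_edge + eps)

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union of two boolean segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, true).sum() / union)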
(This article belongs to the Section Forest Remote Sensing)
Show Figures

Figure 1. Flowchart showing the overall workflow of our study.
Figure 2. Study areas in Germany: (a) The sampling locations near Freiburg im Breisgau; (b) Location of the study area in Germany; (c) Example of some trees with segmentation masks.
18 pages, 483 KiB  
Article
An Algorithm for Nutrient Mixing Optimization in Aquaponics
by Alexander Kobelski, Patrick Nestler, Mareike Mauerer, Thorsten Rocksch, Uwe Schmidt and Stefan Streif
Appl. Sci. 2024, 14(18), 8140; https://doi.org/10.3390/app14188140 - 10 Sep 2024
Viewed by 1355
Abstract
Controlled environment agriculture (CEA) is a promising alternative to conventional production methods, as it is less affected by climate change and is often more sustainable, especially in circular and recycling frameworks such as aquaponics. A major cost factor in such facilities, however, is the need for skilled labor. Depending on the available resources, there are countless ways to choose ingredients to realize a desired nutrient solution. At the same time, the composition of the desired solution is subject to fluctuations in fish water quality, fertilizer availability, weather, and plant development. In high-evaporation scenarios, e.g., in summer, nutrient solutions might be mixed multiple times per day. The result is a complex, multi-variable task that is time-consuming to solve manually yet requires frequent resolution. This work aims to help solve this challenge by providing methods to automate the nutrient mixing procedure. A simple mass-balance-based model of a nutrient mixing tank with connections to different water sources, drains, and fertilizers is provided. Using static optimization methods, a program was developed that, taking various process constraints and optimization variables into account, calculates the steps necessary to mix the desired solution. The program code is provided in an open-source repository, and the flexibility of the method is demonstrated in simulation scenarios. The program is easy to use and adapt, and all necessary steps are explained in this paper. This work contributes to a higher level of automation in CEA. Full article
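As a rough sketch of the mass-balance idea described in the abstract (not the paper's open-source program), the example below poses one mixing step as a non-negative least-squares problem: choose ingredient amounts so that the nutrient masses delivered match the target concentrations for the tank volume. The nutrient matrix, target values, tank volume, and units are hypothetical placeholders.

# Minimal sketch (hypothetical numbers): one mixing step as non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Nutrient content per unit of each ingredient (rows: N, P, K; columns: ingredients
# such as fish water and two fertilizers). All values are placeholders.
A = np.array([
    [40.0, 120.0,  0.0],   # nitrogen
    [ 5.0,  50.0,  0.0],   # phosphorus
    [10.0,   0.0, 80.0],   # potassium
])

target_conc = np.array([150.0, 40.0, 200.0])  # desired concentrations in the mixed tank
tank_volume = 2.0                              # tank volume in the same unit basis

b = target_conc * tank_volume                  # total mass of each nutrient required
amounts, residual = nnls(A, b)                 # ingredient amounts, constrained to be >= 0

for j, x in enumerate(amounts):
    print(f"ingredient {j}: {x:.2f} units")
print(f"residual mass-balance error: {residual:.2f}")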
Show Figures

Figure 1. Schematic description of the multi-tank system for mixing nutrient solutions.
Figure 2. Flow chart of the program.
Figure 3. Resulting normalized concentrations of nutrients in the solution when only the reference error is minimized.
Figure 4. Resulting normalized concentrations of nutrients in the solution when the reference error and consumption of resources are optimized.
Figure 5. Resulting normalized concentrations of nutrients in the solution when the concentrations of the final solution are constrained by Equation (25).
Figure 6. Resulting normalized concentrations of nutrients in the solution in a sunnier and drier environment.