Search Results (90)

Search Parameters:
Keywords = computer aided decision support system

20 pages, 468 KiB  
Article
Toward 6G: Latency-Optimized MEC Systems with UAV and RIS Integration
by Abdullah Alshahrani
Mathematics 2025, 13(5), 871; https://doi.org/10.3390/math13050871 - 5 Mar 2025
Viewed by 156
Abstract
Multi-access edge computing (MEC) has emerged as a cornerstone technology for deploying 6G network services, offering efficient computation and ultra-low-latency communication. The integration of unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) further enhances wireless propagation, capacity, and coverage, presenting a transformative paradigm for next-generation networks. This paper addresses the critical challenge of task offloading and resource allocation in an MEC-based system, where a massive MIMO base station, serving multiple macro-cells, hosts the MEC server with support from a UAV-equipped RIS. We propose an optimization framework to minimize task execution latency for user equipment (UE) by jointly optimizing task offloading and communication resource allocation within this UAV-assisted, RIS-aided network. By modeling this problem as a Markov decision process (MDP) with a discrete-continuous hybrid action space, we develop a deep reinforcement learning (DRL) algorithm leveraging a hybrid space representation to solve it effectively. Extensive simulations validate the superiority of the proposed method, demonstrating significant latency reductions compared to state-of-the-art approaches, thereby advancing the feasibility of MEC in 6G networks.
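As a sketch of the discrete-continuous hybrid action space mentioned in the abstract, consider an agent that jointly picks a discrete offloading decision (local vs. edge) and a continuous bandwidth fraction. The toy latency model and every constant below are illustrative assumptions, not the paper's system model; a coarse grid search over the continuous part stands in for the learned DRL policy.

```python
# Hypothetical sketch of a discrete-continuous hybrid action, in the spirit
# of parameterized-action DRL: the agent picks a discrete offloading target
# together with a continuous bandwidth fraction. The latency model and all
# constants are illustrative, not taken from the paper.

def task_latency(offload: int, bw_fraction: float,
                 task_bits: float = 1e6, local_cps: float = 1e8,
                 edge_cps: float = 1e10, rate_bps: float = 5e7,
                 cycles_per_bit: float = 600.0) -> float:
    """Latency of one task under a toy local-vs-offload model (seconds)."""
    if offload == 0:                      # execute locally
        return task_bits * cycles_per_bit / local_cps
    # offload: upload over the allotted bandwidth share, then compute at edge
    upload = task_bits / (bw_fraction * rate_bps)
    compute = task_bits * cycles_per_bit / edge_cps
    return upload + compute

def greedy_hybrid_action(bw_grid=(0.1, 0.25, 0.5, 0.75, 1.0)):
    """Enumerate the discrete choice, coarsely search the continuous one."""
    candidates = [(0, 1.0)] + [(1, b) for b in bw_grid]
    return min(candidates, key=lambda a: task_latency(*a))

best = greedy_hybrid_action()  # here: offload with the full bandwidth share
```

In a DRL formulation the agent would learn this joint choice from latency rewards rather than enumerating it; the hybrid representation simply keeps the discrete and continuous parts of one action together.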
Figures:
Figure 1. Framework of the proposed algorithm.
Figure 2. Average rewards vs. number of episodes.
Figure 3. Total time delay under different schemes vs. F_{m,k}, with K = 100 and ζ_max = 30 Gigacycles/s.
Figure 4. Total time delay under different schemes vs. ζ_max, with K = 100 and F_{m,k} = 600 cycles/bit.
Figure 5. Total time delay vs. number of UEs.
Figure 6. Task completion ratio vs. number of UEs.
23 pages, 6296 KiB  
Article
Dynamic Patch-Based Sample Generation for Pulmonary Nodule Segmentation in Low-Dose CT Scans Using 3D Residual Networks for Lung Cancer Screening
by Ioannis D. Marinakis, Konstantinos Karampidis, Giorgos Papadourakis and Mostefa Kara
Appl. Biosci. 2025, 4(1), 14; https://doi.org/10.3390/applbiosci4010014 - 5 Mar 2025
Viewed by 99
Abstract
Lung cancer is by far the leading cause of cancer death among both men and women, making up almost 25% of all cancer deaths. Each year, more people die of lung cancer than colon, breast, and prostate cancer combined. The early detection of lung cancer is critical for improving patient outcomes, and automation through advanced image analysis techniques can significantly assist radiologists. This paper presents the development and evaluation of a computer-aided diagnostic system for lung cancer screening, focusing on pulmonary nodule segmentation in low-dose CT images, by employing HighRes3DNet. HighRes3DNet is a specialized 3D convolutional neural network (CNN) architecture based on ResNet principles which uses residual connections to efficiently learn complex spatial features from 3D volumetric data. To address the challenges of processing large CT volumes, an efficient patch-based extraction pipeline was developed. This method dynamically extracts 3D patches during training with a probabilistic approach, prioritizing patches likely to contain nodules while maintaining diversity. Data augmentation techniques, including random flips, affine transformations, elastic deformations, and swaps, were applied in the 3D space to enhance the robustness of the training process and mitigate overfitting. Using a public low-dose CT dataset, this approach achieved a Dice coefficient of 82.65% on the testing set for 3D nodule segmentation, demonstrating precise and reliable predictions. The findings highlight the potential of this system to enhance efficiency and accuracy in lung cancer screening, providing a valuable tool to support radiologists in clinical decision-making.
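The probabilistic patch-extraction idea described above can be sketched as follows. With some probability the patch is centred on a known nodule voxel, otherwise on a uniformly random voxel, so training sees nodules often while retaining background diversity. The shapes, the 0.8 probability, and all names are assumptions for illustration, not the paper's settings.

```python
import random

# Illustrative sketch of probabilistic 3D patch sampling for CT volumes:
# with probability p_pos the patch is centred on a known nodule voxel,
# otherwise on a uniformly random voxel. Parameters are assumptions.

def sample_patch_center(shape, nodule_voxels, p_pos=0.8, rng=random):
    """Return a (z, y, x) patch centre inside a volume of the given shape."""
    if nodule_voxels and rng.random() < p_pos:
        return rng.choice(nodule_voxels)           # nodule-centred patch
    return tuple(rng.randrange(s) for s in shape)  # uniform background patch

def clamp_patch(center, shape, patch=(64, 64, 64)):
    """Clamp the patch's start indices so it fits entirely in the volume."""
    return tuple(min(max(c - p // 2, 0), s - p)
                 for c, p, s in zip(center, patch, shape))

shape = (128, 256, 256)                      # toy volume dimensions
nodules = [(40, 100, 120), (90, 30, 200)]    # hypothetical nodule voxels
start = clamp_patch(sample_patch_center(shape, nodules), shape)
```

The clamping step guarantees every sampled patch lies fully inside the volume, which keeps the training loader free of boundary special cases.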
(This article belongs to the Special Issue Neural Networks and Deep Learning for Biosciences)
Figures:
Figure 1. Preprocessed data sample.
Figure 2. Annotations of a nodule and the consensus mask.
Figure 3. LDCT preprocessing pipeline.
Figure 4. Patch extraction pipeline.
Figure 5. Segmentation predictions (I) for two test patients: axial CT views with the ground-truth mask (top left) and predicted mask (top right), and 3D renderings of each mask below, highlighting the nodules across all CT slices.
Figure 6. Underperforming segmentation on a ground-glass opacity (GGO) case.
Figure A1. LIDC-IDRI slice-spacing histogram.
Figure A2. LIDC-IDRI pixel-spacing ranges and counts.
Figure A3. Label distribution of nodule characteristics in the LIDC-IDRI database.
Figure A4. Nodule segmentation predictions (II).
Figure A5. Nodule segmentation predictions (III).
Figure A6. Nodule segmentation predictions (IV).
Figure A7. Distribution of nodule characteristics in the testing set (X-axis: annotation label; Y-axis: number of annotations).
Figure A8. Histogram of nodule diameters (X-axis: diameter groups; Y-axis: number of annotations).
25 pages, 6991 KiB  
Article
A Comprehensive AI Framework for Superior Diagnosis, Cranial Reconstruction, and Implant Generation for Diverse Cranial Defects
by Mamta Juneja, Ishaan Singla, Aditya Poddar, Nitin Pandey, Aparna Goel, Agrima Sudhir, Pankhuri Bhatia, Gurzafar Singh, Maanya Kharbanda, Amanpreet Kaur, Ira Bhatia, Vipin Gupta, Sukhdeep Singh Dhami, Yvonne Reinwald, Prashant Jindal and Philip Breedon
Bioengineering 2025, 12(2), 188; https://doi.org/10.3390/bioengineering12020188 - 16 Feb 2025
Viewed by 383
Abstract
Cranioplasty enables the restoration of cranial defects caused by traumatic injuries, brain tumour excisions, or decompressive craniectomies. Conventional methods rely on Computer-Aided Design (CAD) for implant design, which requires significant resources and expertise. Recent advancements in Artificial Intelligence (AI) have improved Computer-Aided Diagnostic systems for accurate and faster cranial reconstruction and implant generation procedures. However, these systems face inherent limitations, including the limited availability of diverse datasets covering defects of different shapes and locations, and the absence of a comprehensive pipeline that integrates medical-image preprocessing, cranial reconstruction, and implant generation with mechanical testing and validation. The proposed framework incorporates a robust preprocessing pipeline for easier processing of Computed Tomography (CT) images through data conversion, denoising, Connected Component Analysis (CCA), and image alignment. At its core is CRIGNet (Cranial Reconstruction and Implant Generation Network), a novel deep learning model rigorously trained on a diverse dataset of 2160 images, which was prepared by simulating cylindrical, cubical, spherical, and triangular prism-shaped defects across five skull regions, ensuring robustness in diagnosing a wide variety of defect patterns. CRIGNet achieved an exceptional reconstruction accuracy with a Dice Similarity Coefficient (DSC) of 0.99, Jaccard Similarity Coefficient (JSC) of 0.98, and Hausdorff distance (HD) of 4.63 mm. The generated implants showed superior geometric accuracy, load-bearing capacity, and gap-free fitment in the defected skull compared to CAD-generated implants. This framework also reduced the implant generation processing time from 40–45 min (CAD) to 25–30 s, a turnaround fast enough to support decisive clinical decision-making.
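The Connected Component Analysis (CCA) step in the preprocessing pipeline can be sketched as keeping only the largest foreground component of a binary mask, discarding extraneous artefacts. A 2-D toy version is shown for brevity (the pipeline operates on 3-D CT volumes); the mask and all names are illustrative assumptions.

```python
from collections import deque

# Hedged sketch of the CCA cleanup step: flood-fill each foreground
# component of a binary mask and keep only the largest one, discarding
# small disconnected artefacts. 2-D here; real CT masks are 3-D.

def largest_component(mask):
    rows, cols = len(mask), len(mask[0])
    seen, best = set(), set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                comp, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:               # BFS flood fill, 4-connectivity
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return [[1 if (r, c) in best else 0 for c in range(cols)]
            for r in range(rows)]

mask = [[1, 1, 0, 0],      # a 3-voxel component (kept)...
        [1, 0, 0, 1],      # ...and a 2-voxel artefact (dropped)
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
clean = largest_component(mask)
```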
(This article belongs to the Section Biosignal Processing)
Figures:
Figure 1. Proposed end-to-end framework for cranial reconstruction and implant generation and its assessment: (a) data preprocessing, (b) defect generation, (c) AI modelling, and (d) assessment of CRIGNet-generated implants.
Figure 2. Regions for defect generation.
Figure 3. Architecture of the proposed AI model, CRIGNet.
Figure 4. Results of the preprocessing steps: (a) raw DICOM data, (b) noisy NRRD data found in the MUG500+ dataset, (c) denoised NRRD file containing extraneous artefacts, (d) clean NRRD file with artefacts removed through CCA, and (e) aligned NRRD file.
Figure 5. Defect types created using four defect mask shapes in the five identified regions on a representative MUG500+ case: (a) masks, (b) regions.
Figure 6. Visual results of CRIGNet-based reconstruction and implant generation across regions on five arbitrarily chosen test cases: (a) defected skull, (b) reconstructed skull, (c) ground-truth implant, and (d) predicted implant.
Figure 7. Box plots comparing CRIGNet with existing AI models for cranial reconstruction by (a) DSC, (b) JSC, (c) HD, (d) precision, (e) recall, and (f) specificity; * denotes the modified architecture described in Section 3.5.
Figure 8. CAD-generated implant (cubical, left).
Figure 9. Views of the skull (grey) with the CAD-generated implant (white) and four linear fixture plates.
Figure 10. Views of the skull (grey) with the CRIGNet-generated implant (green) and four linear fixture plates.
Figure 11. Cavity (red boundary) of the defected skull.
Figure 12. Edge gap between the cavity boundaries of the defected skull for (a) the CAD-generated implant and (b) the CRIGNet-generated implant.
Figure 13. Simulation of the CRIGNet-generated implant in ANSYS: (a) equivalent von Mises stress distribution and (b) total deformation.
28 pages, 3337 KiB  
Article
Lung and Colon Cancer Classification Using Multiscale Deep Features Integration of Compact Convolutional Neural Networks and Feature Selection
by Omneya Attallah
Technologies 2025, 13(2), 54; https://doi.org/10.3390/technologies13020054 - 1 Feb 2025
Viewed by 979
Abstract
The automated and precise classification of lung and colon cancer from histopathological images continues to pose a significant challenge in medical diagnosis, as current computer-aided diagnosis (CAD) systems are frequently constrained by their dependence on singular deep learning architectures, elevated computational complexity, and their ineffectiveness in utilising multiscale features. To this end, the present research introduces a CAD system that integrates several lightweight convolutional neural networks (CNNs) with dual-layer feature extraction and feature selection to overcome the aforementioned constraints. Initially, it extracts deep attributes from two separate layers (pooling and fully connected) of three pre-trained CNNs (MobileNet, ResNet-18, and EfficientNetB0). Second, the system uses the benefits of canonical correlation analysis for dimensionality reduction in pooling layer attributes to reduce complexity. In addition, it integrates the dual-layer features to encapsulate both high- and low-level representations. Finally, to benefit from multiple deep network architectures while reducing classification complexity, the proposed CAD merges the dual deep-layer variables of the three CNNs and then applies analysis of variance (ANOVA) and Chi-Squared tests to select the most discriminative features from the integrated CNN architectures. The CAD is assessed on the LC25000 dataset leveraging eight distinct classifiers, encompassing various Support Vector Machine (SVM) variants, Decision Trees, Linear Discriminant Analysis, and k-nearest neighbours. The experimental results exhibited outstanding performance, attaining 99.8% classification accuracy with cubic SVM classifiers employing merely 50 ANOVA-selected features, exceeding the performance of individual CNNs while markedly diminishing computational complexity. The framework’s capacity to sustain exceptional accuracy with a limited feature set renders it especially advantageous for clinical applications where diagnostic precision and efficiency are critical. These findings confirm the efficacy of the multi-CNN, multi-layer methodology in enhancing cancer classification precision while mitigating the computational constraints of current systems.
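The ANOVA-based selection step works by scoring each feature with a one-way F-statistic (between-class variance over within-class variance) and keeping the top scorers. A minimal pure-Python version of that score is sketched below on toy data; in practice this is what `sklearn.feature_selection.f_classif` with `SelectKBest` computes.

```python
from statistics import mean

# Minimal sketch of one-way ANOVA F-scores for feature selection, as used
# to rank deep features before classification. Toy data; real pipelines
# would use sklearn.feature_selection.f_classif / SelectKBest.

def anova_f(feature, labels):
    """F = between-class mean square / within-class mean square."""
    groups = {}
    for x, y in zip(feature, labels):
        groups.setdefault(y, []).append(x)
    grand = mean(feature)
    k, n = len(groups), len(feature)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

labels = [0, 0, 0, 1, 1, 1]
feat_a = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8]   # cleanly separates the classes
feat_b = [2.0, 5.0, 3.0, 2.5, 4.5, 3.5]   # class-independent noise
scores = {"A": anova_f(feat_a, labels), "B": anova_f(feat_b, labels)}
keep = max(scores, key=scores.get)         # the discriminative feature wins
```

Selecting only high-F features is what lets the reported system feed a classifier just 50 features instead of the full merged deep representation.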
Figures:
Figure 1. Example images from the LC25000 dataset.
Figure 2. Summary of the phases of the proposed CAD.
Figure 3. Confusion matrices of the SVM classifiers fed with the optimal deep features selected by ANOVA from the combined dual-layer features of the three networks: (a) QSVM, (b) CSVM, (c) MGSVM.
Figure 4. ROC curves of the SVM classifiers fed with the optimal deep features selected by ANOVA from the combined dual-layer features of the three networks: (a) QSVM, (b) CSVM, (c) MGSVM.
18 pages, 1575 KiB  
Article
MammoViT: A Custom Vision Transformer Architecture for Accurate BIRADS Classification in Mammogram Analysis
by Abdullah G. M. Al Mansour, Faisal Alshomrani, Abdullah Alfahaid and Abdulaziz T. M. Almutairi
Diagnostics 2025, 15(3), 285; https://doi.org/10.3390/diagnostics15030285 - 25 Jan 2025
Viewed by 790
Abstract
Background: Breast cancer screening through mammography interpretation is crucial for early detection and improved patient outcomes. However, the manual classification of mammograms using the BIRADS (Breast Imaging-Reporting and Data System) remains challenging due to subtle imaging features, inter-reader variability, and increasing radiologist workload. Traditional computer-aided detection systems often struggle with complex feature extraction and contextual understanding of mammographic abnormalities. To address these limitations, this study proposes MammoViT, a novel hybrid deep learning framework that leverages both ResNet50’s hierarchical feature extraction capabilities and Vision Transformer’s ability to capture long-range dependencies in images. Methods: We implemented a multi-stage approach utilizing a pre-trained ResNet50 model for initial feature extraction from mammogram images. To address the significant class imbalance in our four-class BIRADS dataset, we applied SMOTE (Synthetic Minority Over-sampling Technique) to generate synthetic samples for minority classes. The extracted feature arrays were transformed into non-overlapping patches with positional encodings for Vision Transformer processing. The Vision Transformer employs multi-head self-attention mechanisms to capture both local and global relationships between image patches, with each attention head learning different aspects of spatial dependencies. The model was optimized using Keras Tuner and trained using 5-fold cross-validation with early stopping to prevent overfitting. Results: MammoViT achieved 97.4% accuracy in classifying mammogram images across different BIRADS categories. The model’s effectiveness was validated through comprehensive evaluation metrics, including a classification report, confusion matrix, probability distribution, and comparison with existing studies. Conclusions: MammoViT effectively combines ResNet50 and Vision Transformer architectures while addressing the challenge of imbalanced medical imaging datasets. The high accuracy and robust performance demonstrate its potential as a reliable tool for supporting clinical decision-making in breast cancer screening.
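The SMOTE step described in the Methods can be sketched in a few lines: each synthetic minority sample is drawn on the line segment between a minority point and one of its nearest minority neighbours. The 2-D toy below illustrates the interpolation rule only; real use would call `imblearn.over_sampling.SMOTE` on the extracted feature arrays.

```python
import math
import random

# Hedged sketch of the SMOTE idea used to balance the BIRADS classes: a
# synthetic sample is a random convex combination of a minority point and
# one of its k nearest minority neighbours. Toy 2-D data for illustration.

def nearest_neighbors(point, points, k=2):
    others = [p for p in points if p != point]
    return sorted(others, key=lambda q: math.dist(point, q))[:k]

def smote_sample(minority, k=2, rng=random):
    x = rng.choice(minority)
    neighbor = rng.choice(nearest_neighbors(x, minority, k))
    gap = rng.random()  # position along the segment, in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(x, neighbor))

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
synthetic = [smote_sample(minority) for _ in range(10)]
```

Because every synthetic point is a convex combination of existing minority points, the oversampled class stays inside the convex hull of the real data rather than duplicating samples outright.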
Figures:
Figure 1. Spectrum of breast density and architectural distortion across BIRADS categories.
Figure 2. Sample training images from the BIRADS dataset.
Figure 3. t-SNE visualization of the original and SMOTE-balanced training and validation datasets: (a) training features, (b) SMOTE training features, (c) validation features, and (d) SMOTE validation features.
Figure 4. Step-by-step visualization of the MammoViT model, highlighting key elements.
Figure 5. Training vs. validation accuracy throughout training, with periods where training accuracy surpasses validation accuracy (light green) and where validation accuracy exceeds training accuracy (light pink), indicating model behavior across epochs.
Figure 6. Training vs. validation loss over training epochs, with intervals where training loss is lower than validation loss (light pink) and where validation loss is lower (light green), providing insight into convergence and generalization.
Figure 7. Confusion matrix of classification performance across all classes, with the diagonal representing true positives and off-diagonal values indicating misclassifications.
Figure 8. Predicted-probability distributions for each class, showing the spread and concentration of predicted probabilities, the model’s confidence levels, and any overlapping predictions among the four classes.
Figure 9. Grid view of ten sample predictions, each image labeled with its true and predicted class, including both correct and incorrect predictions.
30 pages, 5917 KiB  
Article
Boston Consulting Group Matrix-Based Equilibrium Optimizer for Numerical Optimization and Dynamic Economic Dispatch
by Lin Yang, Zhe Xu, Fenggang Yuan, Yanting Liu and Guozhong Tian
Electronics 2025, 14(3), 456; https://doi.org/10.3390/electronics14030456 - 23 Jan 2025
Viewed by 736
Abstract
Numerous optimization problems exist in the design and operation of power systems, critical for efficient energy use, cost minimization, and system stability. With increasing energy demand and diversifying energy structures, these problems grow increasingly complex. Metaheuristic algorithms have been highlighted for their flexibility and effectiveness in addressing such complex problems. To further explore the theoretical support of metaheuristic algorithms for optimization problems in power systems, this paper proposes a novel algorithm, the Boston Consulting Group Matrix-based Equilibrium Optimizer (BCGEO), which integrates the Equilibrium Optimizer (EO) with the classic economic decision-making model, the Boston Consulting Group Matrix. This matrix is utilized to construct a model for evaluating the potential of individuals, aiding in the rational allocation of computational resources, thereby achieving a better balance between exploration and exploitation. In comparative experiments across various dimensions on CEC2017, the BCGEO demonstrated superior search performance over its peers. Furthermore, in dynamic economic dispatch, the BCGEO has shown strong optimization capabilities and potential in power system optimization problems. Additionally, the experimental results in the spacecraft trajectory optimization problem suggest its potential for broader application across various fields.
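One plausible reading of the BCG-matrix idea behind the BCGEO is sketched below: individuals are placed in quadrants by current fitness ("market share") and recent improvement ("growth rate"), and the quadrant decides how search effort is allocated. The quadrant rules, median cut-offs, and toy population are all assumptions for illustration; the paper's exact evaluation model may differ.

```python
from statistics import median

# Illustrative sketch (not the paper's exact rules) of a BCG-matrix-style
# classification of population members: quadrant by fitness ("share") and
# recent improvement ("growth"), used to budget search effort per member.

def bcg_quadrant(fitness, improvement, fit_cut, imp_cut):
    high_fit = fitness <= fit_cut      # minimization: lower is better
    high_imp = improvement >= imp_cut
    if high_fit and high_imp:
        return "star"           # exploit and keep refining
    if high_fit:
        return "cash cow"       # exploit with little extra budget
    if high_imp:
        return "question mark"  # explore: promising but unproven
    return "dog"                # re-initialize or give minimal budget

population = [  # (fitness, improvement over the last generation)
    (0.8, 0.30), (0.9, 0.01), (5.0, 0.40), (6.0, 0.00),
]
fit_cut = median(f for f, _ in population)
imp_cut = median(i for _, i in population)
quadrants = [bcg_quadrant(f, i, fit_cut, imp_cut) for f, i in population]
```

Budgeting more exploitation to "stars" and more exploration to "question marks" is one concrete way such a matrix can balance exploration against exploitation.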
Figures:
Figure 1. BCG Matrix.
Figure 2. Descriptive process of the BCGEO.
Figure 3. Flowchart of the BCGEO.
Figure 4. Test images and segmented images.
Figure 5. Box-and-whisker diagrams of optimization errors on CEC2017 with 30 dimensions.
Figure 6. Box-and-whisker diagrams of optimization errors on CEC2017 with 50 dimensions.
Figure 7. Box-and-whisker diagrams of optimization errors on CEC2017 with 100 dimensions.
Figure 8. Convergence graphs of average optimization errors on CEC2017 with 30 dimensions.
Figure 9. Convergence graphs of average optimization errors on CEC2017 with 50 dimensions.
Figure 10. Convergence graphs of average optimization errors on CEC2017 with 100 dimensions.
Figure 11. Visualization of experimental results for the DED problem.
Figure 12. Trajectory of Cassini 2 from Earth to Saturn.
Figure 13. Visualization of experimental results for the STO problem.
Figure 14. Comparison of population diversity between the BCGEO and EO.
33 pages, 1773 KiB  
Article
Energy-Efficient Aerial STAR-RIS-Aided Computing Offloading and Content Caching for Wireless Sensor Networks
by Xiaoping Yang, Quanzeng Wang, Bin Yang and Xiaofang Cao
Sensors 2025, 25(2), 393; https://doi.org/10.3390/s25020393 - 10 Jan 2025
Viewed by 735
Abstract
Unmanned aerial vehicle (UAV)-based wireless sensor networks (WSNs) hold great promise for supporting ground-based sensors due to the mobility of UAVs and the ease of establishing line-of-sight links. UAV-based WSNs equipped with mobile edge computing (MEC) servers effectively mitigate challenges associated with long-distance transmission and the limited coverage of edge base stations (BSs), emerging as a powerful paradigm for both communication and computing services. Furthermore, incorporating simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) as passive relays significantly enhances the propagation environment and service quality of UAV-based WSNs. However, most existing studies place STAR-RISs in fixed positions, ignoring the flexibility of STAR-RISs. Some other studies equip UAVs with STAR-RISs, and UAVs act as flight carriers, ignoring the computing and caching capabilities of UAVs. To address these limitations, we propose an energy-efficient aerial STAR-RIS-aided computing offloading and content caching framework, where we formulate an energy consumption minimization problem to jointly optimize content caching decisions, computing offloading decisions, UAV hovering positions, and STAR-RIS passive beamforming. Given the non-convex nature of this problem, we decompose it into a content caching decision subproblem, a computing offloading decision subproblem, a hovering position subproblem, and a STAR-RIS resource allocation subproblem. We propose a deep reinforcement learning (DRL)–successive convex approximation (SCA) combined algorithm to iteratively achieve near-optimal solutions with low complexity. The numerical results demonstrate that the proposed framework effectively utilizes resources in UAV-based WSNs and significantly reduces overall system energy consumption.
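The decompose-and-iterate pattern behind such subproblem splits is alternating (block-coordinate) minimization: fix all variable blocks but one, solve that subproblem, and cycle. The coupled quadratic objective below is a stand-in for the energy model, chosen so each subproblem has a closed-form solution; it is not the paper's formulation.

```python
# Hedged sketch of the alternating (block-coordinate) scheme that the
# paper's subproblem decomposition follows. Objective (a stand-in for the
# energy model): f(x, y) = (x - 1)^2 + (y - 2)^2 + 0.5*x*y, which couples
# the two "blocks" x and y but is solvable in closed form per block.

def alternating_minimize(iters=100):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 1.0 - 0.25 * y   # argmin over x with y fixed (set df/dx = 0)
        y = 2.0 - 0.25 * x   # argmin over y with x fixed (set df/dy = 0)
    return x, y

x, y = alternating_minimize()   # converges to the joint minimizer (8/15, 28/15)
```

Because the stand-in objective is jointly convex, the alternation converges to the global optimum; for the paper's non-convex problem the same loop only reaches a near-optimal stationary point, which is why SCA and DRL are brought in for the harder blocks.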
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)
Figures:
Figure 1. System model of the aerial STAR-RIS-aided WSN.
Figure 2. Task caching and offloading in the STAR-RIS-aided UAV system.
Figure 3. Time allocation for task processing in the STAR-RIS-aided UAV system.
Figure 4. Proposed optimization framework for the energy consumption minimization problem.
Figure 5. Workflow of the PPO algorithm.
Figure 6. Energy consumption versus the number of iterations.
Figure 7. Energy consumption versus network bandwidth.
Figure 8. Energy consumption versus CPU cycles required for computing 1 bit of task data.
Figure 9. Energy consumption versus computation task size.
Figure 10. Energy consumption versus number of elements.
Figure 11. Energy consumption versus sensors’ transmit power.
Figure 12. Energy consumption versus SINR.
Figure 13. Convergence of the average weighted reward sum for various caching DRL learning rates.
19 pages, 12083 KiB  
Article
An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection
by Flavia Grignaffini, Enrico De Santis, Fabrizio Frezza and Antonello Rizzi
Information 2024, 15(12), 783; https://doi.org/10.3390/info15120783 - 5 Dec 2024
Viewed by 842
Abstract
Computer-aided diagnosis (CAD) systems, which combine medical image processing with artificial intelligence (AI) to support experts in diagnosing various diseases, emerged from the need to solve some of the problems associated with medical diagnosis, such as long timelines and operator-related variability. The most explored medical application is cancer detection, for which several CAD systems have been proposed. Among them, deep neural network (DNN)-based systems for skin cancer diagnosis have demonstrated comparable or superior performance to that of experienced dermatologists. However, the lack of transparency in the decision-making process of such approaches makes them “black boxes” and, therefore, not directly incorporable into clinical practice. Explaining and interpreting the reasons for DNNs’ decisions can be performed by the emerging explainable AI (XAI) techniques. XAI has been successfully applied to DNNs for skin lesion image classification but never when additional information is incorporated during network training. This field is still unexplored; thus, in this paper, we aim to provide a method to explain, qualitatively and quantitatively, a convolutional neural network model with feature injection for melanoma diagnosis. The gradient-weighted class activation mapping and layer-wise relevance propagation methods were used to generate heat maps, highlighting the image regions and pixels that contributed most to the final prediction. In contrast, the Shapley additive explanations method was used to perform a feature importance analysis on the additional handcrafted information. To integrate DNNs successfully into the clinical and diagnostic workflow, their maximum reliability and transparency must be ensured in whatever variant they are used.
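The Shapley-value idea behind the SHAP analysis of the injected handcrafted features can be made concrete with an exact toy computation: each feature's importance is its average marginal contribution over all orderings. The two feature names and the subset-value function below are hypothetical stand-ins for a trained model; real SHAP approximates this average for large feature sets.

```python
from itertools import permutations

# Toy exact Shapley computation illustrating the feature-importance idea
# behind SHAP. The value function over feature subsets is a hypothetical
# stand-in for model performance with those features available.

def shapley_values(features, value):
    """Average marginal contribution of each feature over all orderings."""
    contrib = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = set()
        for f in order:
            before = value(frozenset(coalition))
            coalition.add(f)
            contrib[f] += value(frozenset(coalition)) - before
    return {f: c / len(orderings) for f, c in contrib.items()}

# Assumed subset values: 'asymmetry' carries most of the predictive signal.
scores = {frozenset(): 0.0, frozenset({"asymmetry"}): 0.6,
          frozenset({"border"}): 0.2, frozenset({"asymmetry", "border"}): 0.7}
phi = shapley_values(["asymmetry", "border"], scores.__getitem__)
```

A useful sanity check is the efficiency property: the Shapley values sum exactly to the value of the full feature set, so the importances partition the model's performance.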
Show Figures

Figure 1: General organization of XAI methods.
Figure 2: CNN proposed in our previous work.
Figure 3: Proposed architecture.
Figure 4: Proposed interpretability workflow.
Figure 5: Classification performance of the proposed model obtained by five-fold cross-validation.
Figure 6: Local XAI methods.
Figure 7: Overlapping local XAI methods.
Figure 8: Some spurious correlations identified by Grad-CAM.
Figure 9: Decomposition of the starting model into sub-models.
Figure 10: Global XAI method. ‘CNN’: features automatically extracted from the network. ‘LBP’: handcrafted features.
29 pages, 1297 KiB  
Article
Scatter Search Algorithm for a Waste Collection Problem in an Argentine Case Study
by Diego Rossit, Begoña González Landín, Mariano Frutos and Máximo Méndez Babey
Urban Sci. 2024, 8(4), 240; https://doi.org/10.3390/urbansci8040240 - 2 Dec 2024
Viewed by 939
Abstract
Increasing urbanization and rising consumption rates are putting pressure on urban systems to efficiently manage Municipal Solid Waste (MSW). Waste collection, in particular, is one of the most challenging aspects of MSW management. Therefore, developing computer-aided tools to support decision-makers is crucial. In this paper, a Scatter Search algorithm is proposed to address the waste collection problem. The literature on applying this algorithm, which has proven efficient in other routing problems, to real waste management problems is relatively scarce. Results from real-world instances of an Argentine city demonstrate that the algorithm is competitive: on small instances it obtains the same outcomes as an exact solver enhanced by valid inequalities, although it requires more computational time (as expected), while on larger instances it significantly improves on the exact solver’s results while requiring much less computational time. Thus, Scatter Search proves to be a competitive algorithm for addressing waste collection problems. Full article
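For readers unfamiliar with the metaheuristic, a highly simplified Scatter Search loop on a toy routing instance might look like the sketch below. It is not the paper's algorithm: the reference set here is maintained by solution quality only (real implementations also enforce diversity), and a plain swap-based improvement stands in for the EXC/INS/INV moves the study evaluates. The five-point instance is purely illustrative.

```python
import itertools
import math
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def improve(tour, dist):
    # Swap-based local search: a stand-in for the EXC/INS/INV improvement methods.
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            if tour_cost(cand, dist) < tour_cost(best, dist):
                best, improved = cand, True
    return best

def combine(a, b):
    # Keep a prefix of parent a, fill the remaining points in parent b's order.
    head = a[:len(a) // 2]
    return head + [x for x in b if x not in head]

def scatter_search(dist, pop=10, ref=4, iters=20, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    # Diversification: random tours, each brought to a local optimum.
    sols = [improve(rng.sample(range(n), n), dist) for _ in range(pop)]
    refset = sorted(sols, key=lambda s: tour_cost(s, dist))[:ref]
    for _ in range(iters):
        for a, b in itertools.combinations(refset, 2):
            cand = improve(combine(a, b), dist)
            if tour_cost(cand, dist) < tour_cost(refset[-1], dist):
                refset = sorted(refset + [cand], key=lambda s: tour_cost(s, dist))[:ref]
    return refset[0], tour_cost(refset[0], dist)

# Five collection points on a unit circle: the optimal tour is the perimeter.
pts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5)) for k in range(5)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best, cost = scatter_search(dist)
print(best, round(cost, 4))
```

The combination and improvement operators are exactly the knobs the paper's experiments vary (combination methods, improvement methods, local search sizes).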
Show Figures

Figure 1: Scatter Search algorithm flowchart.
Figure 2: Comparison of solvers and VIs for the exact method.
Figure 3: Box-and-whisker plots associated with the results of the three instances of 15 collection points grouped by combination methods, improvement methods, and local search sizes.
Figure 4: Box-and-whisker plots associated with the results of the three instances of 30 collection points grouped by combination methods, improvement methods, and local search sizes.
Figure A1: Instance 15-1. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
Figure A2: Instance 15-2. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
Figure A3: Instance 15-3. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
Figure A4: Instance 30-1. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
Figure A5: Instance 30-2. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
Figure A6: Instance 30-3. Box-and-whisker plots of the results grouped by combination methods, improvement method: EXC (white), INS (blue), and INV (gray), and local search sizes: 10, 20, and 30, from left to right.
14 pages, 2039 KiB  
Article
Deep Learning Based Breast Cancer Detection Using Decision Fusion
by Doğu Manalı, Hasan Demirel and Alaa Eleyan
Computers 2024, 13(11), 294; https://doi.org/10.3390/computers13110294 - 14 Nov 2024
Viewed by 1758
Abstract
Breast cancer, which has the highest mortality and morbidity rates among diseases affecting women, poses a significant threat to their lives and health. Early diagnosis is crucial for effective treatment. Recent advancements in artificial intelligence have enabled innovative techniques for early breast cancer detection. Convolutional neural networks (CNNs) and support vector machines (SVMs) have been used in computer-aided diagnosis (CAD) systems to identify breast tumors from mammograms. However, existing methods often face challenges in accuracy and reliability across diverse diagnostic scenarios. This paper proposes a three-channel parallel artificial intelligence-based system. First, an SVM distinguishes between different tumor types using local binary pattern (LBP) features. Second, a pre-trained CNN extracts features, and an SVM identifies potential tumors. Third, a newly developed CNN is trained and used to classify mammogram images. Finally, decision fusion, which combines the results from the three channels using different rules, is implemented to enhance system performance. The proposed decision fusion-based system outperforms state-of-the-art alternatives with an overall accuracy of 99.1% using the product rule. Full article
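The product rule mentioned in the abstract simply multiplies the per-channel class posteriors and picks the class with the highest combined score. A minimal sketch of such rule-based fusion follows; the channel probabilities are made-up illustrative numbers, not results from the paper.

```python
import numpy as np

def fuse(posteriors, rule="product"):
    """Fuse per-channel class posteriors with a fixed combination rule.

    posteriors: (n_channels, n_classes) array; each row is one classifier's
    class-probability estimate. Returns the index of the winning class.
    """
    p = np.asarray(posteriors, dtype=float)
    if rule == "product":
        scores = p.prod(axis=0)
    elif rule == "sum":
        scores = p.sum(axis=0)
    elif rule == "max":
        scores = p.max(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(scores.argmax())

# Three channels (e.g. LBP+SVM, CNN-features+SVM, end-to-end CNN) voting on
# benign (class 0) vs malignant (class 1); numbers are hypothetical.
channels = [[0.40, 0.60],
            [0.55, 0.45],
            [0.20, 0.80]]
print(fuse(channels, "product"))  # product scores 0.044 vs 0.216 -> prints 1
```

Note how the product rule lets one confident channel dominate, while the sum rule averages disagreement more gently; which rule wins is an empirical question, as the paper's comparison suggests.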
Show Figures

Figure 1: Side view of a healthy breast (left) and a breast with malignant tumor (right).
Figure 2: Examples from the DDSM dataset [27] of benign (top row) and malignant (bottom row) images.
Figure 3: Block diagram of the proposed decision fusion-based breast cancer detection model.
Figure 4: The developed CNN model architecture.
Figure 5: Confusion matrices for the LBP + SVM, ResNet50 + SVM, and CNN models (B: benign, M: malignant).
Figure 6: ROC curves for the LBP + SVM, ResNet50 + SVM, and CNN models.
24 pages, 1013 KiB  
Review
Part-Prototype Models in Medical Imaging: Applications and Current Challenges
by Lisa Anita De Santi, Franco Italo Piparo, Filippo Bargagna, Maria Filomena Santarelli, Simona Celi and Vincenzo Positano
BioMedInformatics 2024, 4(4), 2149-2172; https://doi.org/10.3390/biomedinformatics4040115 - 28 Oct 2024
Cited by 1 | Viewed by 1241
Abstract
Recent developments in Artificial Intelligence have increasingly focused on explainability research. The potential of Explainable Artificial Intelligence (XAI) in producing trustworthy computer-aided diagnosis systems and its usage for knowledge discovery are gaining interest in the medical imaging (MI) community to support the diagnostic process and the discovery of image biomarkers. Most of the existing XAI applications in MI are focused on interpreting the predictions made using deep neural networks, typically including attribution techniques with saliency map approaches and other feature visualization methods. However, these are often criticized for providing incorrect and incomplete representations of the black-box models’ behaviour. This highlights the importance of proposing models intentionally designed to be self-explanatory. In particular, part-prototype (PP) models are interpretable-by-design computer vision (CV) models that base their decision process on learning and identifying representative prototypical parts from input images, and they are attracting increasing interest and achieving promising results in MI applications. However, the medical field has unique characteristics that could benefit from more advanced implementations of these types of architectures. This narrative review summarizes existing PP networks, their application in MI analysis, and current challenges. Full article
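The part-prototype reasoning described here scores an image by how closely its latent patches match each learned prototype. As a rough illustration of that core step (not any specific architecture from the review), the sketch below uses the ProtoPNet-style activation log((d² + 1) / (d² + ε)), where d² is the minimum squared distance between a prototype and the image's patch embeddings; the random embeddings are purely illustrative.

```python
import numpy as np

def prototype_scores(patch_embeddings, prototypes, eps=1e-4):
    """ProtoPNet-style similarity scores for one image.

    patch_embeddings: (H*W, D) latent patches of the image.
    prototypes:       (P, D) learned prototype vectors.
    For each prototype, take the minimum squared distance over all patches
    and map it to log((d2 + 1) / (d2 + eps)), so a perfect match (d2 = 0)
    gives the strongest activation.
    """
    d2 = ((patch_embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (HW, P)
    d2_min = d2.min(axis=0)                                                      # (P,)
    return np.log((d2_min + 1.0) / (d2_min + eps))

# Illustrative 7x7 latent grid with 16-dim embeddings; the first "prototype"
# is copied from a patch so it matches exactly, the second is random.
rng = np.random.default_rng(0)
patches = rng.standard_normal((49, 16))
protos = np.vstack([patches[10], rng.standard_normal(16)])
scores = prototype_scores(patches, protos)
print(scores[0] > scores[1])  # exact match activates more strongly -> True
```

In a full network these activations feed a final linear layer, and the location of the best-matching patch is what gets highlighted as the prototypical region in local explanations.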
(This article belongs to the Special Issue Advances in Quantitative Imaging Analysis: From Theory to Practice)
Show Figures

Figure 1: Part-prototype network reasoning process during prediction (normal vs. pneumonia classification task from RX images). These models learn prototypes in terms of representative image regions for the predicted class from the training set and perform the classification based on their detection of new images (prototypical regions marked with yellow boxes).
Figure 2: Global and local explanations of part-prototype network (classification of Alzheimer’s disease from MR images). The global explanation shows all the learned prototypes. The local explanation shows the model’s reasoning for a specific instance.
Figure 3: ProtoPNet architecture.
Figure 4: Prototypical part visualization in a normal vs. pneumonia classification task for a normal test image (marked with yellow box). Radiological images were displayed in the standard grayscale (windowing over the entire signal range) while activation maps and heatmaps were visualized using the same color map.
14 pages, 1316 KiB  
Review
The Integration of Radiomics and Artificial Intelligence in Modern Medicine
by Antonino Maniaci, Salvatore Lavalle, Caterina Gagliano, Mario Lentini, Edoardo Masiello, Federica Parisi, Giannicola Iannella, Nicole Dalia Cilia, Valerio Salerno, Giacomo Cusumano and Luigi La Via
Life 2024, 14(10), 1248; https://doi.org/10.3390/life14101248 - 1 Oct 2024
Cited by 5 | Viewed by 3376
Abstract
With profound effects on patient care, the role of artificial intelligence (AI) in radiomics has become a disruptive force in contemporary medicine. Radiomics, the quantitative extraction and analysis of features from medical images, offers useful imaging biomarkers that can reveal important information about the nature of diseases, how well patients respond to treatment, and patient outcomes. The use of AI techniques in radiomics, such as machine learning and deep learning, has made it possible to create sophisticated computer-aided diagnostic systems, predictive models, and decision support tools. This review examines the many uses of AI in radiomics, encompassing quantitative feature extraction from medical images; machine learning, deep learning, and computer-aided diagnostic (CAD) approaches in radiomics; and the effect of radiomics and AI on improving workflow automation and efficiency and on optimizing clinical trials and patient stratification. This review also covers the improvements that machine learning brings to predictive modeling in radiomics, multimodal integration and enhanced deep learning architectures, and the regulatory and clinical adoption considerations for radiomics-based CAD. Particular emphasis is given to the enormous potential for enhancing diagnostic precision, treatment personalization, and overall patient outcomes. Full article
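Radiomics pipelines start by extracting quantitative features from a segmented region of interest. The sketch below computes a few standard first-order features (intensity statistics and histogram entropy); it is only a toy illustration of that extraction step on synthetic data, not any specific pipeline from the review, which would also include texture, shape, and wavelet features.

```python
import numpy as np

def first_order_features(roi, bins=16):
    """A few first-order radiomic features from a region of interest.

    roi: array of voxel/pixel intensities inside the segmented region.
    """
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    mu, sigma = x.mean(), x.std()
    return {
        "mean": float(mu),
        "variance": float(sigma ** 2),
        "skewness": float(((x - mu) ** 3).mean() / sigma ** 3) if sigma > 0 else 0.0,
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Synthetic 32x32 ROI with roughly Gaussian intensities, purely illustrative.
rng = np.random.default_rng(0)
feats = first_order_features(rng.normal(100.0, 15.0, size=(32, 32)))
print(sorted(feats))  # -> ['entropy', 'mean', 'skewness', 'variance']
```

These scalar features are what downstream machine learning models consume when building the predictive and CAD systems the review surveys.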
(This article belongs to the Special Issue New Insights Into Artificial Intelligence in Medical Imaging)
Show Figures

Figure 1: Current and future applications of Radiomics-AI.
Figure 2: Steps of radiomics-AI processes in medical imaging. Step 1A: Various modalities; Step 1B: Radiomics feature extraction; Step 2A: Automated image analysis; Step 2B: Lesion Detection; Step 2C: Diagnostic decision support; Step 3: Clinical decision making.
15 pages, 3397 KiB  
Article
Multi-UAV Area Coverage Track Planning Based on the Voronoi Graph and Attention Mechanism
by Jubo Wang and Ruixin Wang
Appl. Sci. 2024, 14(17), 7844; https://doi.org/10.3390/app14177844 - 4 Sep 2024
Viewed by 1340
Abstract
Drone area coverage primarily involves using unmanned aerial vehicles (UAVs) for extensive monitoring, surveying, communication, and other tasks over specific regions. The significance and value of this technology are multifaceted. Firstly, UAVs can rapidly and efficiently reach remote or inaccessible areas to perform tasks such as terrain mapping, disaster monitoring, or search and rescue, significantly enhancing response speed and execution efficiency. Secondly, drone area coverage in agricultural monitoring, forestry conservation, and urban planning offers high-precision data support, aiding scientists and decision-makers in making more accurate judgments and decisions. Additionally, drones can serve as temporary communication base stations in areas with poor communication, ensuring the transfer of crucial information. Drone area coverage technology is vital in improving work efficiency, reducing costs, and strengthening decision support. This paper aims to solve the optimization problem of multi-UAV area coverage flight path planning to enhance system efficiency and task execution capability. For multi-center optimization problems, a region decomposition method based on the Voronoi graph is designed, transforming the multi-UAV area coverage issue into the single-UAV area coverage problem and greatly reducing the complexity of the computation. For the single-UAV area coverage problem and its corresponding area, this paper designs a convolutional neural network with a channel and spatial attention mechanism (CSAM) to enhance feature fusion capability, enabling the model to focus on core features for solving single-UAV path selection and ultimately generating the optimal path. Simulation results demonstrate that the proposed method achieves excellent performance. Full article
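The Voronoi decomposition used to split the multi-UAV problem into single-UAV subproblems amounts to assigning each coverage cell to its nearest UAV center. A minimal discrete sketch of that assignment follows; the grid and UAV positions are hypothetical, and the paper's actual decomposition operates on the continuous region.

```python
import numpy as np

def voronoi_partition(cells, centers):
    """Assign each coverage cell to the nearest center.

    cells:   (N, 2) coordinates of the area cells to be covered.
    centers: (K, 2) UAV positions. Each cell goes to its closest center,
    which is exactly a discrete Voronoi decomposition of the area.
    Returns an (N,) array of UAV indices.
    """
    d2 = ((cells[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    return d2.argmin(axis=1)

# A 10x10 grid of cells split between two UAVs on the left and right edges.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
cells = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
uavs = np.array([[0.0, 4.5], [9.0, 4.5]])
labels = voronoi_partition(cells, uavs)
print(np.bincount(labels))  # symmetric split -> [50 50]
```

After this step, each UAV plans a coverage path only inside its own Voronoi region, which is what makes the single-UAV formulation (and the CSAM-based path selector) applicable.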
(This article belongs to the Special Issue Application of Machine Vision and Deep Learning Technology)
Show Figures

Figure 1: Schematic diagram of regional decomposition.
Figure 2: Schematic diagram of residual structure.
Figure 3: Schematic diagram of drone flight selection.
Figure 4: Network structure integrating CSAM.
Figure 5: Schematic diagram of the integrated reinforcement learning strategy.
Figure 6: Schematic diagram of pentagonal area division.
Figure 7: Schematic diagram of octagonal area division.
Figure 8: Visual results of basic optimization algorithm.
Figure 9: Visual results of trajectory optimization algorithm integrating CSAM.
25 pages, 5685 KiB  
Article
Deep Learning Techniques for Enhanced Flame Monitoring in Cement Rotary Kilns Using Petcoke and Refuse-Derived Fuel (RDF)
by Jorge Arroyo, Christian Pillajo, Jorge Barrio, Pedro Compais and Valter Domingos Tavares
Sustainability 2024, 16(16), 6862; https://doi.org/10.3390/su16166862 - 9 Aug 2024
Viewed by 1769
Abstract
The use of refuse-derived fuel (RDF) in cement kilns offers a multifaceted approach to sustainability, addressing environmental, economic, and social aspects. By converting waste into a valuable energy source, RDF reduces landfill use, conserves natural resources, lowers greenhouse gas emissions, and promotes a circular economy. This sustainable practice not only supports the cement industry in meeting regulatory requirements but also advances global efforts toward more sustainable waste management and energy production systems. This research promotes the integration of RDF as fuel in cement kilns to reduce the use of fossil fuels by improving combustion control. Addressing the variable composition of RDF requires continuous monitoring to ensure operational stability and product quality, traditionally managed by operators through visual inspections. This study introduces a real-time, computer vision- and deep learning-based monitoring system to aid in decision-making, utilizing existing kiln imaging devices for a non-intrusive, cost-effective solution applicable across various facilities. The system generates two detailed datasets from the kiln environment, undergoing extensive preprocessing to enhance image quality. The YOLOv8 algorithm was chosen for its real-time accuracy, with the final model demonstrating strong performance and domain adaptation. In an industrial setting, the system identifies critical elements like flame and clinker with high precision, achieving 25 frames per second (FPS) and a mean average precision (mAP50) of 98.8%. The study also develops strategies to improve the adaptability of the model to changing operational conditions. This advancement marks a significant step towards more energy-efficient and quality-focused cement production practices. By leveraging technological innovations, this research contributes to the industry's move towards sustainability and operational efficiency. Full article
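The mAP50 metric reported above counts a detection as correct when its intersection over union (IoU) with the ground-truth box exceeds 0.5 (that is the "50"). A minimal IoU computation for axis-aligned boxes follows; the example boxes are hypothetical, not taken from the paper's datasets.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Predicted vs ground-truth flame box (hypothetical coordinates).
pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
print(round(iou(pred, truth), 4))  # 900 / 2300 -> prints 0.3913
```

For instance segmentation, the same ratio is computed over mask pixels instead of box areas, as the paper's Figure 11 illustrates.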
Show Figures

Figure 1: Image analysis techniques: classification, object detection, and segmentation.
Figure 2: Released versions of the YOLO algorithm throughout the years.
Figure 3: Classes predicted in the model developed.
Figure 4: Scheme of a rotary kiln with a video system for flame monitoring.
Figure 5: (a) Location of the video system in the rotary kiln. (b) Sample image of the combustion inside the rotary kiln.
Figure 6: Image captures of the flame in the rotary kiln under different boundary conditions.
Figure 7: Clustering using K-means method.
Figure 8: Sample images from the labeled dataset, where the Flame class is outlined in blue, the Plume class in violet, and the Clinker class in orange tone.
Figure 9: Application of horizontal flip to the original image.
Figure 10: Application of rotation to the original image.
Figure 11: Example of flame detection in an image. The predicted bounding box is drawn in red, while the actual bounding box is drawn in blue. Areas of overlap and union for the IoU calculation are shown in green. On the right is the equivalent calculation for instance segmentation masks.
Figure 12: Training summary. Lower box_loss suggests more accurate predictions in the location and size of boxes, lower seg_loss indicates greater similarity between predicted and actual masks in segmentation, and lower cls_loss reflects more accurate object classification.
Figure 13: Comparison between the ground truth and the prediction of the model on validation images.
Figure 14: Architecture of the real-time monitoring system.
Figure 15: Comparison between the ground truth and the prediction of the model on dataset 2.
26 pages, 12122 KiB  
Article
Large-Scale Solar Potential Analysis in a 3D CAD Framework as a Use Case of Urban Digital Twins
by Evgeny Shirinyan and Dessislava Petrova-Antonova
Remote Sens. 2024, 16(15), 2700; https://doi.org/10.3390/rs16152700 - 23 Jul 2024
Cited by 1 | Viewed by 2677
Abstract
Solar radiation impacts diverse aspects of city life, such as harvesting energy with PV panels, passive heating of buildings in winter, the cooling loads of air-conditioning systems in summer, and the urban microclimate. Urban digital twins and 3D city models can support solar studies in the process of urban planning and provide valuable insights for data-driven decision support. This study examines the calculation of solar incident radiation at the city scale in Sofia using remote sensing data for the large shading context in a mountainous region and 3D building data. It explores the geometry-optimisation methods, limitations, and performance issues of a 3D computer-aided design (CAD) tool dedicated to small-scale solar analysis when employed at the city scale. Two cases were considered at the city and district scales, respectively. The total face count of meshes for the simulations constituted approximately 2,000,000 faces. A total of 64,379 roofs for the whole city and 4796 buildings for one district were selected. All calculations were performed in one batch and visualised in a 3D web platform. The use of a 3D CAD environment establishes a seamless process of updating 3D models and simulations, while preprocessing in Geographic Information System (GIS) ensures working with large-scale datasets. The proposed method showed a moderate computation time for both cases and could be extended to include reflected radiation and dense photogrammetric meshes in the future. Full article
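Cumulative-sky methods such as the one in Ladybug Tools estimate incident radiation by summing, over discretised sky patches (e.g. the Tregenza sky's 145 patches or the finer Reinhart subdivision mentioned in the figures), each patch's radiation weighted by the cosine of its incidence angle and a visibility mask for terrain/building shading. The sketch below is a toy illustration under those assumptions; the two-patch sky is purely hypothetical.

```python
import numpy as np

def incident_radiation(normal, patch_dirs, patch_radiation, sky_mask):
    """Cumulative-sky estimate of incident radiation at one surface point.

    normal:          (3,) unit surface normal.
    patch_dirs:      (P, 3) unit vectors toward the sky patches.
    patch_radiation: (P,) cumulative radiation per patch (e.g. kWh/m2).
    sky_mask:        (P,) 1 if the patch is visible, 0 if shaded by the
                     terrain/building context meshes.
    """
    cos_t = np.clip(patch_dirs @ normal, 0.0, None)  # ignore patches behind the surface
    return float((patch_radiation * cos_t * sky_mask).sum())

# Toy sky: one zenith patch and one horizon patch; a horizontal roof surface.
dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
rad = np.array([100.0, 100.0])
up = np.array([0.0, 0.0, 1.0])
print(incident_radiation(up, dirs, rad, np.ones(2)))          # only zenith contributes: 100.0
print(incident_radiation(up, dirs, rad, np.array([0.0, 1.0])))  # zenith shaded: 0.0
```

The expensive part at city scale is computing the visibility mask per analysis point against millions of context faces, which is why the study's timings depend so strongly on face count and sky subdivision.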
(This article belongs to the Section Urban Remote Sensing)
Show Figures

Graphical abstract
Figure 1: UDT functionality utilisation across different urban scenarios, including solar potential use case.
Figure 2: The scheme of the workflow.
Figure 3: Effective shading terrain surface and 3D terrain generation: (a) viewshed analysis of the terrain shading in QGIS; (b) clipping the terrain with the viewshed (terrain surface clipped by the viewshed is presented in green and terrain surface outside the viewshed is presented in blue).
Figure 4: Shading mask according to Global Solar Atlas.
Figure 5: Calculation of incident radiation in Ladybug Tools.
Figure 6: Texture baking in Blender for the study mesh (a), 3134 faces, and the simplification of the mesh, 292 faces (b).
Figure 7: Location of the sample buildings.
Figure 8: Terrain shading in Global Solar Atlas of Building 1 (a) and Building 2 (b).
Figure 9: Input geometry for the study.
Figure 10: Solar incident radiation without shading (a) and with shading (b).
Figure 11: Calculation time and face count: (a) Tregenza sky; (b) Tregenza sky, Reinhart sky, and Reinhart sky with grafting; (c) Tregenza sky without shading geometry and with shading geometry.
Figure 12: (a) Solar potential in Rhino; (b) solar potential in ArcGIS Online.
Figure 13: (a) Solar potential in Rhino; (b) solar potential in ArcGIS Online.
Figure 14: Solar potential visualisation in CesiumJS; (a) rooftops of residential buildings in Sofia; (b) building surfaces in the district of Lozenets.
Figure 15: Comparison PC1 and PC2.
Figure 16: Solar analysis of dense photogrammetric meshes.