Search Results (217)

Search Parameters:
Keywords = hybrid detectors

40 pages, 6809 KiB  
Review
TCNQ and Its Derivatives as Electrode Materials in Electrochemical Investigations—Achievement and Prospects: A Review
by Tetiana Starodub and Slawomir Michalkiewicz
Materials 2024, 17(23), 5864; https://doi.org/10.3390/ma17235864 - 29 Nov 2024
Viewed by 771
Abstract
7,7′,8,8′-tetracyanoquinodimethane (TCNQ) is one of the most widely used effective surface electron acceptors in organic electronics and sensors, which opens up a very interesting field in electrochemical applications. In this review article, we outline the historical context of electrochemically stable selective electrode materials based on TCNQ and its derivatives and their development, their electrochemical characteristics, and the experimental aspects of their electrochemical applications. TCNQ-modified electrodes are characterized by long-term stability, reproducibility, and a low detection limit compared to other sensors; thus, their use can increase determination speed and flexibility and reduce investigation costs. TCNQ and its derivatives can also be successfully combined with other detector materials for cancer-related clinical diagnostic testing. Examples of simple, rapid, and sensitive detection procedures for various analytes are provided. Applications of new electrochemically stable TCNQ-based metal/covalent–organic hybrid frameworks, with exceptionally large surface areas, tunable pore sizes, diverse functionality, and high electrical conductivity, are also presented. As a result, they also offer enormous potential as revolutionary catalysts, drug carrier systems, and smart materials, as well as for use in gas storage. The use of TCNQ compounds as promising active electrode materials in high-power organic batteries/energy storage devices is discussed. We hope that the information featured in this review will provide readers with a good understanding of the chemistry of TCNQ and, more importantly, help to find good ways to prepare new micro-/nanoelectrode materials for rational sensor design. Full article
(This article belongs to the Special Issue Progress in Carbon-Based Materials)
Show Figures

Figure 1: The molecular structure of 7,7′,8,8′-tetracyanoquinodimethane (TCNQ).
Figure 2: The number of published works on the application of TCNQ and its derivatives in electrochemistry from 1976 to September 2024.
Figure 3: Two-electron reversible reaction of TCNQ.
Figure 4: General scheme for obtaining electrode mediators based on TCNQ or its CTC.
Figure 5: Schematic illustration of the formation of MOF based on TCNQ.
Figure 6: Three major applications of TCNQ and its derivatives in electrochemical techniques.
Figure 7: The optimized structure of a tetrathiafulvalene molecule as superconductor and the scheme of TTF oxidation.
Figure 8: Schematic illustration of a glucose sensor, with functionalizing TTF-TCNQ on the PPW working electrode and the electrochemical processes near the working electrode.
Figure 9: Supramolecular organization of multi-walled CNTs–TCNQ at (I) low (<1 mM) and (II) high (>10 mM) concentrations of TCNQ.
Figure 10: The structure of F4TCNQ molecules and e⁻ transferring from graphene to F4TCNQ.
Figure 11: Schematic dual doping process of Au (50 nm) for bilayer graphene consisting of both n-doping by aminopropyltriethoxysilane self-assembled monolayers (NH2-SAMs) and modified SiO2/Si substrate (bottom).
Figure 12: A schematic representation of the formation of a continuous chain through the MOF crystal in a conductive TCNQ@Cu3(BTC)2 MOF (the yellow dashed line indicates the pathway for charge conduction).
Figure 13: Schematic illustration of properties of TCNQ-MOFs for chemiresistive sensors.
Figure 14: A schematic illustration of the construction of stable solid-state Li/TCNQ batteries.
Scheme 1: The formation of two series of TCNQ salts: Kt⁺[TCNQ•⁻], simple salts, and Kt⁺[TCNQ•⁻][TCNQ⁰], complex salts.
19 pages, 6931 KiB  
Article
A Hybrid Deep Learning Framework for OFDM with Index Modulation Under Uncertain Channel Conditions
by Md Abdul Aziz, Md Habibur Rahman, Rana Tabassum, Mohammad Abrar Shakil Sejan, Myung-Sun Baek and Hyoung-Kyu Song
Mathematics 2024, 12(22), 3583; https://doi.org/10.3390/math12223583 - 15 Nov 2024
Viewed by 547
Abstract
Index modulation (IM) is considered a promising approach for fifth-generation wireless systems due to its spectral efficiency and reduced complexity compared to conventional modulation techniques. However, IM faces difficulties in environments with unpredictable channel conditions, particularly in accurately detecting index values and dynamically adjusting index assignments. Deep learning (DL) offers a potential solution by improving detection performance and resilience through the learning of intricate patterns in varying channel conditions. In this paper, we introduce a robust detection method based on a hybrid DL (HDL) model designed specifically for orthogonal frequency-division multiplexing with IM (OFDM-IM) in challenging channel environments. Our proposed HDL detector leverages a one-dimensional convolutional neural network (1D-CNN) for feature extraction, followed by a bidirectional long short-term memory (Bi-LSTM) network to capture temporal dependencies. Before feeding data into the network, the channel matrix and received signals are preprocessed using domain-specific knowledge. We evaluate the bit error rate (BER) performance of the proposed model using different optimizers and equalizers, then compare it with other models. Moreover, we evaluate the throughput and spectral efficiency across varying SNR levels. Simulation results demonstrate that the proposed hybrid detector surpasses traditional and other DL-based detectors in terms of performance, underscoring its effectiveness for OFDM-IM under uncertain channel conditions. Full article
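The (N, A, M) setup used throughout the paper's figures — N subcarriers per subblock with A active and an M-ary constellation — can be illustrated with a minimal index-modulation mapper. The lookup-table pattern order and the toy PSK constellation below are illustrative assumptions, not the authors' exact implementation:

```python
import math
from itertools import combinations

def ofdm_im_map(index_bits, symbol_bits, n=4, a=1, m=4):
    """Map bits to one OFDM-IM subblock with (N, A, M) = (n, a, m):
    index bits choose which subcarriers are active, symbol bits pick
    an M-ary (toy PSK) symbol for each active subcarrier."""
    patterns = list(combinations(range(n), a))        # activation patterns
    p1 = int(math.floor(math.log2(len(patterns))))    # index bits per subblock
    bps = int(math.log2(m))                           # bits per symbol
    assert len(index_bits) == p1 and len(symbol_bits) == a * bps
    idx = int("".join(map(str, index_bits)), 2)
    active = patterns[idx]
    block = [0j] * n                                  # inactive carriers stay 0
    for j, k in enumerate(active):
        sym = int("".join(map(str, symbol_bits[j * bps:(j + 1) * bps])), 2)
        phase = 2 * math.pi * sym / m
        block[k] = complex(math.cos(phase), math.sin(phase))
    return block

# One subblock under (N, A, M) = (4, 1, 4): 2 index bits + 2 symbol bits
blk = ofdm_im_map([1, 0], [1, 1])
```

A detector (DL-based or classical) must recover both the activation pattern and the symbols, which is why index errors under uncertain channels dominate the BER figures above.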
Show Figures

Figure 1: Generalized data transmission process for an OFDM-IM system.
Figure 2: Structure of the proposed HDL detector for OFDM-IM systems.
Figure 3: The internal configuration of an LSTM cell.
Figure 4: Training loss of the proposed HDL model for different equalizers with data setup (N, A, M) = (4, 1, 4): (a) training loss for the ZF equalizer, (b) training loss for the MMSE equalizer, and (c) training loss for the DFE equalizer.
Figure 5: Training loss of the proposed HDL model for different modulation orders and data combinations with the ZF equalizer: (a) training loss for the (N, A, M) = (4, 1, 4) setup, (b) training loss for the (N, A, M) = (8, 2, 8) setup, and (c) training loss for the (N, A, M) = (8, 4, 16) setup.
Figure 6: The confusion matrix of the proposed HDL-based model.
Figure 7: Performance of the HDL-based detector with (a) different learning rates and (b) different batch sizes for the (4, 1, 4) data combination.
Figure 8: Performance of the HDL-based detector at various training SNRs for the (4, 1, 4) data configuration.
Figure 9: Performance of the proposed HDL-based detector with various equalizers for the (4, 1, 4) data configuration.
Figure 10: BER performance of the proposed HDL-based detector utilizing different optimizers for the (4, 1, 4) data configuration.
Figure 11: BER performance of the proposed HDL-based detector for various modulation orders and data setups.
Figure 12: BER performance comparison of the proposed HDL-based detector with other detectors under imperfect CSI conditions for the (4, 1, 4) data combination.
Figure 13: Throughput and SE of the proposed HDL-based OFDM-IM system: (a) throughput performance and (b) SE performance.
10 pages, 2849 KiB  
Article
Effects of 10 keV Electron Irradiation on the Performance Degradation of SiC Schottky Diode Radiation Detectors
by Jinlu Ruan, Liang Chen, Leidang Zhou, Xue Du, Fangbao Wang, Yapeng Zhang, Penghui Zhao and Xiaoping Ouyang
Micromachines 2024, 15(11), 1331; https://doi.org/10.3390/mi15111331 - 30 Oct 2024
Viewed by 520
Abstract
The silicon carbide (SiC) Schottky diode (SBD) detector in a SiC hybrid photomultiplier tube (HPMT) generates signals by receiving photocathode electrons with an energy of 10 keV. The performance of the SiC SBD under 10 keV electron irradiation is therefore of practical importance for SiC-HPMT applications, yet studies of 10 keV radiation effects on SiC SBDs have rarely been reported. In this paper, the performance degradation of SiC SBDs irradiated by 10 keV electrons at different fluences was investigated. After irradiation, the forward current of the SiC SBDs increased, and the turn-on voltage decreased with irradiation fluence up to 1.6 × 10¹⁶ cm⁻². According to the capacitance–voltage (C-V) curves, the effective doping concentration increased slightly after irradiation, and an obvious discrepancy in the C-V curves occurred below 5 V. Moreover, as a radiation detector, the peak position of the α-particle amplitude spectrum changed only slightly, and the energy resolution was only slightly degraded after irradiation, since the charge collection efficiency (CCE) remained larger than 99.5%. In addition, the time response of the SiC SBD to a 50 ns pulsed X-ray was almost unaffected by irradiation. The results indicate that the performance degradation of a SiC SBD irradiated at a fluence of 1.5 × 10¹⁷ cm⁻² would not deteriorate the properties of the SiC-HPMT, and they supplement our understanding of the radiation resistance of SiC SBD radiation detectors. Full article
Show Figures

Figure 1: Schematic of the SiC Schottky diode.
Figure 2: Layout of the α-particle spectra measurement system.
Figure 3: Schematic of the measurement system of responses to the pulsed X-ray source.
Figure 4: (a) D1 at an irradiation fluence of 3.2 × 10¹⁵ cm⁻². (b) D2 at an irradiation fluence of 1.6 × 10¹⁶ cm⁻². (c) D3 at different irradiation fluences. (d) Forward I-V characteristics of D1, D2, and D3 before and after irradiation. (e) Reverse I-V characteristics of D3 at different irradiation fluences.
Figure 5: C-V curves and effective dopant concentrations of the three detectors before and after irradiation: (a,b) D1 after irradiation at a fluence of 3.2 × 10¹⁵ cm⁻²; (c,d) D2 after irradiation at a fluence of 1.6 × 10¹⁶ cm⁻²; (e,f) D3 after irradiation at a fluence of 1.5 × 10¹⁷ cm⁻².
Figure 6: α-particle amplitude spectra of the three detectors before and after irradiation at different fluences: (a) 3.2 × 10¹⁵ cm⁻²; (b) 1.6 × 10¹⁶ cm⁻²; (c) 1.5 × 10¹⁷ cm⁻².
Figure 7: CCEs of the three detectors before and after irradiation.
Figure 8: Responses of the three detectors to the pulsed X-ray source at 200 V: (a) D1 detector; (b) D2 detector; (c) D3 detector.
18 pages, 1300 KiB  
Article
XAI-Based Accurate Anomaly Detector That Is Robust Against Black-Box Evasion Attacks for the Smart Grid
by Islam Elgarhy, Mahmoud M. Badr, Mohamed Mahmoud, Maazen Alsabaan, Tariq Alshawi and Muteb Alsaqhan
Appl. Sci. 2024, 14(21), 9897; https://doi.org/10.3390/app14219897 - 29 Oct 2024
Viewed by 1006
Abstract
In the realm of smart grids, machine learning (ML) detectors—both binary (or supervised) and anomaly (or unsupervised)—have proven effective in detecting electricity theft (ET). However, binary detectors are designed for specific attacks, making their performance unpredictable against new attacks. Anomaly detectors, conversely, are trained on benign data and identify deviations from benign patterns as anomalies, but their performance is highly sensitive to the selected threshold values. Additionally, ML detectors are vulnerable to evasion attacks, where attackers make minimal changes to malicious samples to evade detection. To address these limitations, we introduce a hybrid anomaly detector that combines a Deep Auto-Encoder (DAE) with a One-Class Support Vector Machine (OCSVM). This detector not only enhances classification performance but also mitigates the threshold sensitivity of the DAE. Furthermore, we evaluate the vulnerability of this detector to benchmark evasion attacks. Lastly, we propose an accurate and robust cluster-based DAE+OCSVM ET anomaly detector, trained using Explainable Artificial Intelligence (XAI) explanations generated by the Shapley Additive Explanations (SHAP) method on consumption readings. Our experimental results demonstrate that the proposed XAI-based detector achieves superior classification performance and exhibits enhanced robustness against various evasion attacks, including gradient-based and optimization-based methods, under a black-box threat model. Full article
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)
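The two-stage DAE+OCSVM idea — score samples by reconstruction error, then let a one-class model calibrated on benign scores draw the boundary instead of a hand-picked threshold — can be sketched with simple stand-ins: a linear (PCA) reconstruction in place of the deep auto-encoder and a quantile radius rule in place of the OCSVM. All data, dimensions, and parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Benign training data: correlated "consumption readings" (toy, assumed)
X = rng.normal(0, 1, (500, 8)) @ rng.normal(0, 1, (8, 8))

# Stage 1: reconstruction model (linear PCA standing in for the DAE)
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3                                  # latent dimension (assumed)
P = Vt[:k].T                           # principal subspace, shape (8, k)

def recon_error(x):
    z = (x - mu) @ P                   # encode into the latent space
    return np.linalg.norm((x - mu) - z @ P.T)  # decode and take the residual

# Stage 2: one-class rule on benign errors (standing in for the OCSVM),
# so the decision boundary is learned rather than manually thresholded
errs = np.array([recon_error(x) for x in X])
radius = np.quantile(errs, 0.99)

def is_anomaly(x):
    return recon_error(x) > radius

attack = mu + 100 * rng.normal(0, 1, 8)  # large off-manifold perturbation
```

The paper's contribution goes further: it feeds SHAP explanations of readings, clustered per consumption profile, into this pipeline to harden it against evasion attacks.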
Show Figures

Figure 1: Black-box threat model.
Figure 2: The proposed XAI-based robust and accurate ET anomaly detector.
Figure 3: SHAP values: top-20 features (readings).
Figure 4: PCA applied to global consumption readings.
Figure 5: PCA applied to cluster-based consumption readings (e.g., Cluster 1).
Figure 6: PCA applied to the cluster-based SHAP explanations of consumption readings (e.g., Cluster 1).
Figure 7: Comparison of the robustness of DAE+OCSVM with and without the proposed defense against CNN-based evasion attacks in terms of DR.
Figure 8: Comparison of the robustness of DAE+OCSVM with and without the proposed defense against FFNN-based evasion attacks in terms of DR.
Figure 9: PR and ROC curves of DAE+OCSVM with and without the proposed defense (XAI and clustering).
20 pages, 19948 KiB  
Article
Seasonal Variations of PM2.5 Pollution in the Chengdu–Chongqing Urban Agglomeration, China
by Kun Wang, Yuan Yao and Kun Mao
Sustainability 2024, 16(21), 9242; https://doi.org/10.3390/su16219242 - 24 Oct 2024
Viewed by 681
Abstract
During the development of the Chengdu–Chongqing Urban Agglomeration (CCUA) in China, PM2.5 pollution severely threatened public health, presenting a significant environmental challenge. This study employs a novel spatial interpolation method known as High Accuracy Surface Modeling (HASM), along with the geographical detector method, a local and regional contributions calculation model, and the Hybrid Single-Particle Lagrangian Integrated Trajectory model, to analyze the seasonal spatial distribution of PM2.5 concentrations and their anthropogenic driving factors from 2014 to 2023. The transport pathways and potential sources of seasonal PM2.5 concentrations were also examined. The results showed the following: (1) HASM was identified as the most suitable interpolation method for monitoring PM2.5 concentrations in the CCUA; (2) PM2.5 concentrations exhibited a decreasing trend across all seasons, with the highest values in winter and the lowest in summer; spatially, concentrations were higher in the southwest and lower in the southeast; (3) Industrial soot (dust) emissions (ISEs) and industry structure (IS) were the most important anthropogenic driving factors influencing PM2.5 pollution; (4) The border area between the eastern part of the Tibet Autonomous Region and western Sichuan Province in China contributed significantly to PM2.5 pollution in the CCUA, especially during winter. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
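IDW is one of the baseline interpolators the study compares against HASM. A minimal sketch, assuming the common power-2 weighting; the station locations and PM2.5 values below are invented for illustration:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted (IDW) interpolation, one of the baselines
    the study compares against HASM. power=2 is an assumed default."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = []
    for q in np.atleast_2d(xy_query).astype(float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a station
            out.append(z_known[d.argmin()])
            continue
        w = 1.0 / d ** power              # nearer stations weigh more
        out.append(float(w @ z_known / w.sum()))
    return np.array(out)

# Toy PM2.5 stations (ug/m3); locations and values are illustrative
stations = [(0, 0), (1, 0), (0, 1), (1, 1)]
values = [40.0, 60.0, 50.0, 70.0]
center = idw(stations, values, [(0.5, 0.5)])
```

HASM improves on such baselines by treating the surface as a solution of a differential-geometric model rather than a pure weighted average, which is why the study validates it against spline, IDW, and Kriging.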
Show Figures

Figure 1: PM2.5 monitoring sites in the Chengdu–Chongqing Urban Agglomeration.
Figure 2: PM2.5 concentration interpolation results for the Chengdu–Chongqing Urban Agglomeration in January 2023: (a) spline; (b) IDW; (c) Kriging; (d) HASM.
Figure 3: Interpolation accuracy validation of PM2.5 concentrations for the Chengdu–Chongqing Urban Agglomeration in January 2023. (a) Spline. (b) IDW. (c) Kriging. (d) HASM.
Figure 4: Seasonal variations in PM2.5 concentrations in the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023.
Figure 5: Spatial distribution of seasonal PM2.5 concentrations in the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023.
Figure 6: Cluster-mean back trajectories in the western center of the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023 across the four seasons. The percentage indicates the proportion of back trajectories within each cluster relative to the total number of back trajectories. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
Figure 7: Cluster-mean back trajectories in the eastern center of the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023 across the four seasons. The percentage indicates the proportion of back trajectories within each cluster relative to the total number of back trajectories. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
Figure 8: Spatial distribution of seasonal industrial soot (dust) emissions in the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023.
Figure 9: The influence of anthropogenic driving factors on seasonal PM2.5 concentrations in the Chengdu–Chongqing Urban Agglomeration from 2014 to 2023.
Figure 10: Statistics of four meteorological factors for each season from 2014 to 2021 within the coverage area of the primary pollution pathways for the Chengdu–Chongqing Urban Agglomeration: (a) precipitation; (b) air temperature; (c) relative humidity; (d) wind speed.
Figure 11: Weighted potential source contribution function (WPSCF) maps for PM2.5 in the western center of the Chengdu–Chongqing Urban Agglomeration during 2014–2023 across different seasons. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
Figure 12: Weighted potential source contribution function (WPSCF) maps for PM2.5 in the eastern center of the Chengdu–Chongqing Urban Agglomeration during 2014–2023 across different seasons. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
Figure 13: Weighted concentration weighted trajectory (WCWT) maps of PM2.5 in the western center of the Chengdu–Chongqing Urban Agglomeration during 2014–2023 across different seasons. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
Figure 14: Weighted concentration weighted trajectory (WCWT) maps of PM2.5 in the eastern center of the Chengdu–Chongqing Urban Agglomeration during 2014–2023 across different seasons. (a) Spring. (b) Summer. (c) Autumn. (d) Winter.
14 pages, 1292 KiB  
Communication
Covert Communications in a Hybrid DF/AF Relay System
by Jihwan Moon
Sensors 2024, 24(20), 6518; https://doi.org/10.3390/s24206518 - 10 Oct 2024
Viewed by 744
Abstract
In this paper, we study covert communications in a hybrid decode-and-forward (DF)/amplify-and-forward (AF) relay system. The considered relay in normal operation forwards messages from a source node to a destination node in either DF or AF mode on request. Meanwhile, the source and destination nodes also attempt to secretly exchange covert messages, such as confidential or sensitive information, and avoid detection by the covert message detector embedded on the relay. We first establish an optimal DF/AF mode selection criterion to maximize the covert rate based on the analyses of delay-aware achievable covert rates of individual DF and AF modes. To further reduce the time complexity, we propose a low-complexity selection criterion as well for practical use. The numerical results demonstrate a covert rate gain as high as 50% and a running time gain as high as 20% for particular system parameters, which verifies the effectiveness of the proposed criteria. Full article
(This article belongs to the Special Issue Secure Communication for Next-Generation Wireless Networks)
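The rate-maximizing mode selection idea can be sketched with textbook two-hop rate expressions and a crude delay penalty δ on DF's decoding stage. These formulas are illustrative simplifications, not the paper's delay-aware covert-rate analysis:

```python
import math

def df_rate(snr_sr, snr_rd, delta=0.0):
    """Decode-and-forward: limited by the weaker hop; the (1 - delta)
    factor models a decoding-delay penalty (an assumed simplification)."""
    return (1 - delta) * 0.5 * math.log2(1 + min(snr_sr, snr_rd))

def af_rate(snr_sr, snr_rd):
    """Amplify-and-forward with the usual cascaded end-to-end SNR."""
    return 0.5 * math.log2(1 + snr_sr * snr_rd / (snr_sr + snr_rd + 1))

def select_mode(snr_sr, snr_rd, delta=0.0):
    """Evaluate both modes and pick the one with the higher rate."""
    return "DF" if df_rate(snr_sr, snr_rd, delta) >= af_rate(snr_sr, snr_rd) else "AF"

mode_no_delay = select_mode(10, 10, delta=0.0)
mode_with_delay = select_mode(10, 10, delta=0.3)
```

Under these textbook expressions DF never loses when δ = 0 (its end-to-end SNR upper-bounds AF's), so the selection only becomes nontrivial once a processing-delay cost is accounted for, which is the regime the paper's delay-aware criterion targets.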
Show Figures

Figure 1: System model.
Figure 2: An optimal DF/AF mode selection criterion based on r̄_P. (a) r̄_{P,boundary,DF} ≥ r̄_{P,boundary,AF}. (b) r̄_{P,boundary,DF} < r̄_{P,boundary,AF}.
Figure 3: The average worst-case covert rate r_C versus the source transmit power P_S.
Figure 4: The average time elapsed versus the source transmit power P_S.
Figure 5: The average worst-case covert rate r_C versus the relay transmit power P_R.
Figure 6: The average worst-case covert rate r_C versus the public rate threshold r̄_P.
Figure 7: The average worst-case covert rate r_C versus the processing delay factor δ.
25 pages, 60939 KiB  
Article
DETR-ORD: An Improved DETR Detector for Oriented Remote Sensing Object Detection with Feature Reconstruction and Dynamic Query
by Xiaohai He, Kaiwen Liang, Weimin Zhang, Fangxing Li, Zhou Jiang, Zhengqing Zuo and Xinyan Tan
Remote Sens. 2024, 16(18), 3516; https://doi.org/10.3390/rs16183516 - 22 Sep 2024
Viewed by 1076
Abstract
Optical remote sensing images often feature high resolution, dense target distribution, and uneven target sizes. While transformer-based detectors like DETR reduce the number of manually designed components, DETR does not support arbitrary-oriented object detection and suffers from high computational costs and slow convergence when handling large sequences of images. Additionally, bipartite graph matching and the limit on the number of queries cause transformer-based detectors to perform poorly in scenarios with many objects and small object sizes. We propose an improved DETR detector for oriented remote sensing object detection with feature reconstruction and dynamic queries, termed DETR-ORD. It introduces rotation into the transformer architecture for oriented object detection, reduces computational cost with a hybrid encoder, and includes an IFR (image feature reconstruction) module to address the loss of positional information caused by the flattening operation. It also uses ATSS to select auxiliary dynamic training queries for the decoder. This improved DETR-based detector enhances detection performance in challenging oriented optical remote sensing scenarios with similar backbone network parameters. Our approach achieves superior results on most optical remote sensing datasets, such as DOTA-v1.5 (72.07% mAP) and DIOR-R (66.60% mAP), surpassing the baseline detector. Full article
(This article belongs to the Section AI Remote Sensing)
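ATSS, which the method uses to pick auxiliary dynamic training queries, selects the top-k anchors closest to each ground-truth center and then thresholds their IoUs adaptively at mean + std. A minimal axis-aligned sketch (the paper applies the idea to oriented boxes; the anchor and box values here are invented):

```python
import numpy as np

def iou(boxes, gt):
    """Axis-aligned IoU between boxes (N x 4, as x1,y1,x2,y2) and one gt box."""
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    b = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (a + b - inter)

def atss_select(anchors, gt, k=9):
    """ATSS-style positive selection: take the top-k anchors by center
    distance, then keep those whose IoU exceeds mean + std of the
    candidates' IoUs (the adaptive threshold)."""
    centers = (anchors[:, :2] + anchors[:, 2:]) / 2
    gt_center = np.array([(gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2])
    d = np.linalg.norm(centers - gt_center, axis=1)
    cand = np.argsort(d)[:k]
    ious = iou(anchors[cand], gt)
    thr = ious.mean() + ious.std()
    return cand[ious >= thr]

# Toy anchors: one matches the gt exactly, the others overlap partially or not
anchors = np.array([[0, 0, 10, 10], [20, 20, 30, 30],
                    [5, 5, 15, 15], [0, 0, 5, 5]], float)
gt = np.array([0, 0, 10, 10], float)
positives = atss_select(anchors, gt, k=3)
```

Because the threshold adapts per ground-truth box, small and densely packed objects still obtain positive queries, which is the property the paper exploits when supervising its auxiliary dynamic queries.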
Show Figures

Figure 1

Figure 1
<p>Illustration of our proposed framework. DETR-ORD adapts the standard deformable DETR for the AOOD task by (1) introducing rotation into the transformer architecture, (2) reducing the computational cost by employing a hybrid encoder, (3) proposing an IFR module to supervise the feature memory obtained from encoder interactions, and (4) using ATSS [<a href="#B36-remotesensing-16-03516" class="html-bibr">36</a>] to select auxiliary dynamic training queries for the decoder.</p>
Full article ">Figure 2
<p>Illustration of the iterative decoder of the proposed method DETR-ORD. Given the memory selected by the encoder and the predicted proposals, treat the memory as <math display="inline"><semantics> <mrow> <mi>Q</mi> <mo>,</mo> <mo> </mo> <mi>K</mi> <mo>,</mo> <mo> </mo> <mi>V</mi> </mrow> </semantics></math> for self-attention. Then, treat the proposals as reference points for cross-attention, and pass the output of this layer to the next layer. The output of this layer, after passing through a feed-forward network (FFN), obtains correction values to adjust the reference points. Iteratively, this process continues layer by layer to achieve the final result.</p>
Full article ">Figure 3
<p>Illustration of the IFR module of the proposed DETR-ORD method. Given the feature maps that have been processed by the backbone and hybrid encoder and restored to multiple scales, we adopt an architecture similar to the FPN to fully integrate multi-scale features, obtaining the feature map with the smallest downsampling factor. Then, through <span class="html-italic">N</span> RUsample modules, we obtain features of the original image size. The RUsample modules consist of transposed convolution, batch normalization (BN), and ReLU activation.</p>
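The caption above describes the RUsample stage as a transposed convolution followed by batch normalization and ReLU, stacked N times to restore the original image size. A hedged PyTorch sketch of that stage is shown below; the channel widths and the number of stages are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class RUsample(nn.Module):
    """One upsampling stage as described for the IFR module:
    transposed convolution (2x upsampling) + BatchNorm + ReLU.
    Channel sizes are illustrative assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# N stacked RUsample stages restore the feature map to the input resolution,
# e.g. three stages turn an 80x80 map (stride 8) into a 640x640 reconstruction.
ifr_tail = nn.Sequential(RUsample(256, 128), RUsample(128, 64), RUsample(64, 3))
x = torch.randn(1, 256, 80, 80)
print(ifr_tail(x).shape)  # torch.Size([1, 3, 640, 640])
```

With kernel 4, stride 2, and padding 1, each transposed convolution exactly doubles the spatial size, so the stage count N is fixed by the backbone's downsampling factor.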
Full article ">Figure 4
<p>Distribution of ground-truth per image in DOTA-v1.5.</p>
Full article ">Figure 5
<p>DIOR-R ground-truth distribution per image.</p>
Full article ">Figure 6
<p>HRSC2016 distribution of ground-truth per image.</p>
Full article ">Figure 7
<p>DOTA-v1.0 inference results pre- and post-improvement.</p>
Full article ">Figure 8
<p>DOTA-v1.5 inference results pre- and post-improvement.</p>
Full article ">Figure 8 Cont.
<p>DOTA-v1.5 inference results pre- and post-improvement.</p>
Full article ">Figure 9
<p>Inference results pre- and post-improvement on DIOR-R.</p>
Full article ">Figure 9 Cont.
<p>Inference results pre- and post-improvement on DIOR-R.</p>
Full article ">Figure 10
<p>HRSC2016 inference results pre- and post-improvement.</p>
Full article ">Figure A1
<p>In each sub-figure, the figures (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) show the results without using dynamic queries, while the figures (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) show the results with dynamic queries on the DOTA-v1.0 dataset. The red dots in the images represent the centers of the rotated bounding boxes selected by ATSS, which are used for query position encoding and as reference points in the decoder iterations. The blue numbers in the top left corner indicate the number of center points located within the ground truth.</p>
Full article ">Figure A2
<p>The inference results of the IFR module. In each sub-figure, the figures (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) are the original images, and the figures (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) are the images reconstructed and restored.</p>
Full article ">
31 pages, 23384 KiB  
Article
A Hybrid Approach for Image Acquisition Methods Based on Feature-Based Image Registration
by Anchal Kumawat, Sucheta Panda, Vassilis C. Gerogiannis, Andreas Kanavos, Biswaranjan Acharya and Stella Manika
J. Imaging 2024, 10(9), 228; https://doi.org/10.3390/jimaging10090228 - 14 Sep 2024
Viewed by 1304
Abstract
This paper presents a novel hybrid approach to feature detection designed specifically for enhancing Feature-Based Image Registration (FBIR). Through an extensive evaluation involving state-of-the-art feature detectors such as BRISK, FAST, ORB, Harris, MinEigen, and MSER, the proposed hybrid detector demonstrates superior performance in [...] Read more.
This paper presents a novel hybrid approach to feature detection designed specifically for enhancing Feature-Based Image Registration (FBIR). Through an extensive evaluation involving state-of-the-art feature detectors such as BRISK, FAST, ORB, Harris, MinEigen, and MSER, the proposed hybrid detector demonstrates superior performance in terms of keypoint detection accuracy and computational efficiency. Three image acquisition methods (i.e., rotation, scene-to-model, and scaling transformations) are considered in the comparison. Applied across a diverse set of remote-sensing images, the proposed hybrid approach has shown marked improvements in match points and match rates, proving its effectiveness in handling varied and complex imaging conditions typical in satellite and aerial imagery. The experimental results have consistently indicated that the hybrid detector outperforms conventional methods, establishing it as a valuable tool for advanced image registration tasks. Full article
(This article belongs to the Section Image and Video Processing)
Show Figures

Figure 1

Figure 1
<p>Flow diagram of the proposed methodology.</p>
Full article ">Figure 2
<p>Diagonal approach for hybrid feature-detection method.</p>
Full article ">Figure 3
<p>Sampled color images from AID database [<a href="#B46-jimaging-10-00228" class="html-bibr">46</a>].</p>
Full article ">Figure 4
<p>Grayscale conversion of sampled color images from AID database [<a href="#B46-jimaging-10-00228" class="html-bibr">46</a>].</p>
Full article ">Figure 5
<p>Various rotation angles applied on park and railway station grayscale aerial images.</p>
Full article ">Figure 6
<p>Scaling transformations applied to VSSUT gate and Hirakud dam images. (<b>a</b>) 0.7 scaling factor on VSSUT gate. (<b>b</b>) 0.7 scaling factor on Hirakud dam. (<b>c</b>) 2.0 scaling factor on VSSUT gate. (<b>d</b>) 2.0 scaling factor on Hirakud dam.</p>
Full article ">Figure 7
<p>Detection of feature keypoints in the park image under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation, showcasing the performance of different detectors. Green markers highlight the keypoints detected, with each subfigure corresponding to the output using a different feature-detection method.</p>
Full article ">Figure 8
<p>Detection of feature keypoints in the railway station image under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation, showcasing the performance of different detectors. Green markers indicate the keypoints, and each subfigure corresponds to the output using a different feature-detection method.</p>
Full article ">Figure 9
<p>Extraction of feature keypoints from the park image under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation. Green markers demonstrate the keypoints extracted, emphasizing the nuances of each algorithm with each subfigure showing results using a different feature extraction method.</p>
Full article ">Figure 10
<p>Extraction of feature keypoints from the railway station image under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation. Each subfigure demonstrates the results using a different feature extraction method, with green markers used to emphasize keypoint locations and algorithmic nuances.</p>
Full article ">Figure 11
<p>Matching of feature keypoints in the park image across different rotational views under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation. Subfigures (<b>a</b>–<b>f</b>) display the matched keypoints separately to illustrate individual detector performance clearly. Subfigure (<b>g</b>) shows an overlaid result of the hybrid detector to demonstrate the integration of multiple detection outcomes, providing a comprehensive view of the keypoints matched by the proposed method. Each image aims to highlight the effectiveness of each feature detector in achieving consistent matching across transformations.</p>
Full article ">Figure 12
<p>Matching of feature keypoints in the railway station image across different rotational views under <math display="inline"><semantics> <msup> <mn>150</mn> <mo>∘</mo> </msup> </semantics></math> rotation. Each subfigure highlights the effectiveness of each feature detector in achieving consistent matching.</p>
Full article ">Figure 13
<p>Sequential presentation of detection, extraction, and matching phases for various feature detectors on two sets of airport aerial images. Each row represents a different detector and showcases the process from detection to matching.</p>
Full article ">Figure 14
<p>Sequential presentation of detection, extraction, and matching phases for various feature detectors on two sets of bridge aerial images. Each row represents a different detector and showcases the process from detection to matching.</p>
Full article ">Figure 15
<p>Comparison of feature-detection performance using MSER, BRISK, and Hybrid detectors on two different images under scaling transformations. Each row demonstrates the response of the detectors at scaling factors of the original, 0.7, and 2.0, highlighting the adaptability of these algorithms to changes in image scale. (<b>a</b>) MSER: VSSUT Gate Image. (<b>b</b>) BRISK: VSSUT Gate Image. (<b>c</b>) MSER: HD Image. (<b>d</b>) BRISK: HD Image. (<b>e</b>) Hybrid: VSSUT Gate Image. (<b>f</b>) Hybrid: HD Image. (<b>g</b>) MSER: VSSUT Gate Image, Scale 0.7. (<b>h</b>) BRISK: VSSUT Gate Image, Scale 0.7. (<b>i</b>) MSER: HD Image, Scale 0.7. (<b>j</b>) BRISK: HD Image, Scale 0.7. (<b>k</b>) Hybrid: VSSUT Gate Image, Scale 0.7. (<b>l</b>) Hybrid: HD Image, Scale 0.7. (<b>m</b>) MSER: VSSUT Gate Image, Scale 2.0. (<b>n</b>) BRISK: VSSUT Gate Image, Scale 2.0. (<b>o</b>) MSER: HD Image, Scale 2.0. (<b>p</b>) BRISK: HD Image, Scale 2.0. (<b>q</b>) Hybrid: VSSUT Gate Image, Scale 2.0. (<b>r</b>) Hybrid: HD Image, Scale 2.0.</p>
Full article ">Figure 16
<p>Extraction of feature keypoints using various extractors based on scaling factors of <math display="inline"><semantics> <mrow> <mn>0.7</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>2.0</mn> </mrow> </semantics></math>. Each row demonstrates the impact of scaling on the effectiveness of feature extraction across different images and detectors. (<b>a</b>) MSER: VSSUT Gate Image. (<b>b</b>) BRISK: VSSUT Gate Image. (<b>c</b>) MSER: HD Image. (<b>d</b>) BRISK: HD Image. (<b>e</b>) Hybrid: VSSUT Gate Image. (<b>f</b>) Hybrid: HD Image. (<b>g</b>) MSER: VSSUT Gate Image, Scale 0.7. (<b>h</b>) BRISK: VSSUT Gate Image, Scale 0.7. (<b>i</b>) MSER: HD Image, Scale 0.7. (<b>j</b>) BRISK: HD Image, Scale 0.7. (<b>k</b>) Hybrid: VSSUT Gate Image, Scale 0.7. (<b>l</b>) Hybrid: HD Image, Scale 0.7. (<b>m</b>) MSER: VSSUT Gate Image, Scale 2.0. (<b>n</b>) BRISK: VSSUT Gate Image, Scale 2.0. (<b>o</b>) MSER: HD Image, Scale 2.0. (<b>p</b>) BRISK: HD Image, Scale 2.0. (<b>q</b>) Hybrid: VSSUT Gate Image, Scale 2.0. (<b>r</b>) Hybrid: HD Image, Scale 2.0.</p>
Full article ">Figure 17
<p>Matching of feature keypoints using various detectors on VSSUT gate and Hirakud dam images under two scaling factors, 0.7 and 2.0. Each image series demonstrates the effect of scaling on feature matching performance. (<b>a</b>) MSER: VSSUT Gate Image, Scale 0.7. (<b>b</b>) BRISK: VSSUT Gate Image, Scale 0.7. (<b>c</b>) MSER: Hirakud Dam Image, Scale 0.7. (<b>d</b>) BRISK: Hirakud Dam Image, Scale 0.7. (<b>e</b>) Hybrid: VSSUT Gate Image, Scale 0.7. (<b>f</b>) Hybrid: Hirakud Dam Image, Scale 0.7. (<b>g</b>) MSER: VSSUT Gate Image, Scale 2.0. (<b>h</b>) BRISK: VSSUT Gate Image, Scale 2.0. (<b>i</b>) MSER: Hirakud Dam Image, Scale 2.0. (<b>j</b>) BRISK: Hirakud Dam Image, Scale 2.0. (<b>k</b>) Hybrid: VSSUT Gate Image, Scale 2.0. (<b>l</b>) Hybrid: Hirakud Dam Image, Scale 2.0.</p>
Full article ">Figure 18
<p>Registered images of different scenes using the Hybrid feature detector. Each subfigure shows a different aerial or scene image, highlighting the detailed synthesis achieved through the registration process.</p>
Full article ">Figure 19
<p>Performance comparison of various feature detectors on park scene images. Each subplot visually demonstrates how each feature detector identifies keypoints within the same environmental setting. This provides insights into the adaptability and precision of each method under similar conditions, highlighting their strengths and limitations in detecting significant image features effectively.</p>
Full article ">
25 pages, 6231 KiB  
Article
Physical Properties of an Efficient MAPbBr3/GaAs Hybrid Heterostructure for Visible/Near-Infrared Detectors
by Tarek Hidouri, Maura Pavesi, Marco Vaccari, Antonella Parisini, Nabila Jarmouni, Luigi Cristofolini and Roberto Fornari
Nanomaterials 2024, 14(18), 1472; https://doi.org/10.3390/nano14181472 - 10 Sep 2024
Cited by 1 | Viewed by 783
Abstract
Semiconductor photodetectors can work only in specific material-dependent light wavelength ranges, connected with the bandgaps and absorption capabilities of the utilized semiconductors. This limitation has driven the development of hybrid devices that exceed the capabilities of individual materials. In this study, for the [...] Read more.
Semiconductor photodetectors can work only in specific material-dependent light wavelength ranges, connected with the bandgaps and absorption capabilities of the utilized semiconductors. This limitation has driven the development of hybrid devices that exceed the capabilities of individual materials. In this study, for the first time, a hybrid heterojunction photodetector based on a methylammonium lead bromide (MAPbBr3) polycrystalline film deposited on gallium arsenide (GaAs) was presented, along with comprehensive morphological, structural, optical, and photoelectrical investigations. The MAPbBr3/GaAs heterojunction photodetector exhibited wide spectral responsivity, from 540 to 900 nm. The fabrication steps of the prototype device, including a new preparation recipe for the MAPbBr3 solution and spinning, will be disclosed and discussed. It will be shown that extending the soaking time and refining the precursor solution’s stoichiometry may enhance surface coverage, adhesion to the GaAs, and film uniformity, as well as provide a new way to integrate MAPbBr3 on GaAs. X-ray Diffraction (XRD) confirmed the enhanced structural purity of the optimized perovskite film on GaAs relative to films on conventional glass substrates. Scanning Electron Microscopy (SEM) revealed the formation of microcube-like structures on top of an otherwise continuous MAPbBr3 polycrystalline film, with increased grain size and reduced grain-boundary effects indicated by Energy-Dispersive Spectroscopy (EDS) and cathodoluminescence (CL). Enhanced absorption was demonstrated in the visible range, together with broadened photoluminescence (PL) emission at room temperature and signs of reduced orthorhombic tilting revealed by temperature-dependent PL. The average carrier lifetime was reduced to 13.8 ns, as revealed by time-resolved PL (TRPL). The dark current was typically around 8.8 × 10⁻⁸ A.
Broad photoresponsivity between 540 and 875 nm reached maxima of 3 mA/W and 16 mA/W, corresponding to detectivities of 6 × 10¹⁰ and 1 × 10¹¹ Jones at −1 V and 50 V, respectively. In on/off measurements, the rise and fall times were 0.40 s and 0.61 s under 500 nm illumination, and 0.62 s and 0.89 s under 875 nm illumination, respectively. A long-term stability test at room temperature in air confirmed the optical and structural stability of the proposed hybrid structure. This work provides insights into the physical mechanisms of new hybrid junctions for high-performance photodetectors. Full article
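Detectivity figures like those quoted above are conventionally derived from responsivity and dark current via the shot-noise-limited relation D* = R·√A / √(2·q·I_dark). The sketch below applies that relation to the abstract's responsivity and dark-current values; the active area is not given in the abstract, so the 0.01 cm² used here is a purely illustrative assumption and the result is not expected to reproduce the reported Jones values:

```python
from math import sqrt

Q = 1.602176634e-19  # elementary charge (C)

def detectivity(responsivity, dark_current, area_cm2):
    """Shot-noise-limited specific detectivity D* in Jones (cm·Hz^0.5/W):
    D* = R * sqrt(A) / sqrt(2 * q * I_dark)."""
    return responsivity * sqrt(area_cm2) / sqrt(2 * Q * dark_current)

# R = 16 mA/W and I_dark = 8.8e-8 A come from the abstract; the 0.01 cm^2
# active area is an illustrative assumption (it is not stated there).
print(f"{detectivity(16e-3, 8.8e-8, 0.01):.2e} Jones")  # ~9.5e9 Jones
```

A larger assumed active area or a lower dark current raises D*, which is why reported detectivities depend strongly on device geometry.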
(This article belongs to the Special Issue Physical Properties of Semiconductor Nanostructures and Devices)
Show Figures

Figure 1

Figure 1
<p>Schematic presentation of the multistep process for the deposition of MAPbBr<sub>3</sub> on GaAs. Acronyms of the chemical compounds are provided in the article text.</p>
Full article ">Figure 2
<p>(<b>a</b>) XRD spectrum of MAPbBr<sub>3</sub>/GaAs (sample S *, see <a href="#nanomaterials-14-01472-t001" class="html-table">Table 1</a>) with a zoomed portion of the spectrum to highlight the MAPbBr<sub>3</sub> peaks; (<b>b</b>) SEM images showing poor (upper image) and improved coverage (lower image) by the perovskite microcubes and the formation of an underlying layer (red circles).</p>
Full article ">Figure 3
<p>(<b>a</b>) Top-view SEM images showing the effect of the antisolvent soaking time on the morphology of MAPbBr<sub>3</sub>/GaAs, with reference to <a href="#nanomaterials-14-01472-t001" class="html-table">Table 1</a> (image 1, sample S *); (image 2, sample S1); (image 3, sample S2); (image 4, sample S3); and (image 5, sample S4). The corresponding insets are photos of the sample surfaces. (<b>b</b>) Top-view and cross-section SEM of S4 (image 5) with 1 cm<sup>2</sup> full surface coverage, as seen in the surface photo in the inset; (<b>c</b>) XRD diffractogram of S4.</p>
Full article ">Figure 4
<p>Typical energy-dispersive X-ray spectroscopy (EDX) elemental mapping images showing the EDS layered image of S4 and corresponding element analysis.</p>
Full article ">Figure 5
<p>SEM-CL images of the MAPbBr<sub>3</sub>/GaAs heterostructure (sample S4). (<b>a</b>) The original CL image. (<b>b</b>,<b>c</b>) Treated SEM-CL images for better clarity.</p>
Full article ">Figure 6
<p>(<b>a</b>) Absorption spectra; (<b>b</b>) PL spectra taken at 300 K, showing that the broad MAPbBr<sub>3</sub>/GaAs emission is deconvoluted using the Gaussian function (dashed green lines); (<b>c</b>) power-dependent PL spectra of MAPbBr<sub>3</sub>/GaAs; inset shows the power-dependent PL spectra of the control (S0 *); (<b>d</b>) integrated PL intensity vs. excitation power (black dots) fitted with power law (red solid line); (<b>e</b>) time-resolved PL spectra of the control (S0 *) and MAPbBr<sub>3</sub>/GaAs (S4), showing the experimental time trace (dots) with bi-exponential fit (solid lines) at a fixed density of excited carriers; (<b>f</b>) time-resolved PL spectra of MAPbBr<sub>3</sub>/GaAs (sample S4) acquired at different densities of excited carriers (different photon density) and the corresponding time constants, using bi-exponential fit.</p>
Full article ">Figure 6 Cont.
<p>(<b>a</b>) Absorption spectra; (<b>b</b>) PL spectra taken at 300 K, showing that the broad MAPbBr<sub>3</sub>/GaAs emission is deconvoluted using the Gaussian function (dashed green lines); (<b>c</b>) power-dependent PL spectra of MAPbBr<sub>3</sub>/GaAs; inset shows the power-dependent PL spectra of the control (S0 *); (<b>d</b>) integrated PL intensity vs. excitation power (black dots) fitted with power law (red solid line); (<b>e</b>) time-resolved PL spectra of the control (S0 *) and MAPbBr<sub>3</sub>/GaAs (S4), showing the experimental time trace (dots) with bi-exponential fit (solid lines) at a fixed density of excited carriers; (<b>f</b>) time-resolved PL spectra of MAPbBr<sub>3</sub>/GaAs (sample S4) acquired at different densities of excited carriers (different photon density) and the corresponding time constants, using bi-exponential fit.</p>
Full article ">Figure 7
<p>(<b>a</b>) Temperature dependence of the PL emission energy of the high-energy peak of MAPbBr<sub>3</sub>/GaAs (S4) (red dots) and its control sample (blue dots). (<b>b</b>) Temperature-dependent PL intensity of MAPbBr<sub>3</sub>/GaAs (red dots) fitted by Arrhenius law.</p>
Full article ">Figure 8
<p>Structural, optical, and cycling stability of the MAPbBr<sub>3</sub>/GaAs heterojunction (sample S4).</p>
Full article ">Figure 8 Cont.
<p>Structural, optical, and cycling stability of the MAPbBr<sub>3</sub>/GaAs heterojunction (sample S4).</p>
Full article ">Figure 9
<p>(<b>a</b>) I–V response in semi-log scale in the dark (black symbols) and under 500 nm light illumination (blue symbols). (<b>b</b>) Room-temperature dark forward current–voltage characteristics in a semi-log scale. The linear fitting (red solid line) of ln(I)–V is shown. Inset: the obtained ideality factor extracted from the low-injection region.</p>
Full article ">Figure 9 Cont.
<p>(<b>a</b>) I–V response in semi-log scale in the dark (black symbols) and under 500 nm light illumination (blue symbols). (<b>b</b>) Room-temperature dark forward current–voltage characteristics in a semi-log scale. The linear fitting (red solid line) of ln(I)–V is shown. Inset: the obtained ideality factor extracted from the low-injection region.</p>
Full article ">Figure 10
<p>(<b>a</b>) Schematic energy band diagrams of the junction (S4) under illumination and reverse bias. (<b>b</b>) Schematic design of the final device and (<b>c</b>) the extracted responsivity and detectivity under 500 nm illumination and bias of −1 V.</p>
Full article ">Figure 11
<p>(<b>a</b>) Responsivity of MAPbBr<sub>3</sub>/GaAs under −0.5 V (green symbols) and −1 V (black symbols) for illumination with 500 nm photons. (<b>b</b>) Time-dependent photoresponse showing the on/off switching cycles for 500 nm illumination at −1 V of the sample S4 (red dots) and the control sample S * (blue dots). (<b>c</b>) Time-dependent photoresponse of MAPbBr<sub>3</sub>/GaAs under 500 nm and 875 nm illumination at a constant bias (−1 V).</p>
Full article ">
18 pages, 3730 KiB  
Article
Temporal Monitoring of Simulated Burials in an Arid Environment Using RGB/Multispectral Sensor Unmanned Aerial Vehicles
by Abdullah Alawadhi, Constantine Eliopoulos and Frederic Bezombes
Drones 2024, 8(9), 444; https://doi.org/10.3390/drones8090444 - 29 Aug 2024
Viewed by 608
Abstract
For the first time, RGB and multispectral sensors deployed on UAVs were used to facilitate grave detection in a desert location. The research sought to monitor surface anomalies caused by burials using manual and enhanced detection methods; detection remained possible for up to 18 [...] Read more.
For the first time, RGB and multispectral sensors deployed on UAVs were used to facilitate grave detection in a desert location. The research sought to monitor surface anomalies caused by burials using manual and enhanced detection methods; detection remained possible for up to 18 months post-burial. Near-IR (NIR) and Red-Edge bands were the most suitable for manual detection, with 69% and 31% success rates, respectively. Meanwhile, the results of the enhanced method varied depending on the sensor. The standard Reed–Xiaoli Detector (RXD) algorithm and Uniform Target Detector (UTD) algorithm were the most suitable for RGB data, with 56% and 43% detection rates, respectively. For the multispectral data, the percentages varied between the algorithms: a hybrid of the RXD and UTD algorithms yielded a 56% detection rate, the UTD algorithm 31%, and the RXD algorithm 13%. Moreover, the research explored identifying grave mounds using the normalized digital surface model (nDSM) and evaluated the use of the normalized difference vegetation index (NDVI) in grave detection. The nDSM successfully located grave mounds at heights as low as 1 cm. A noticeable difference in NDVI values was observed between the graves and their surroundings, regardless of the extreme weather conditions. The results support the potential of using RGB and multispectral sensors mounted on UAVs for detecting burial sites in an arid environment. Full article
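The Reed–Xiaoli Detector named above scores each pixel by its Mahalanobis distance from the scene's background statistics, so pixels that deviate from the background spectrum stand out as anomalies. A minimal global-RXD sketch on a synthetic multiband cube follows; the function name and toy data are illustrative:

```python
import numpy as np

def rx_detector(cube):
    """Global Reed-Xiaoli (RXD) anomaly score for each pixel of an
    H x W x B image cube: Mahalanobis distance from the background
    mean/covariance estimated over the whole scene."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.pinv(cov)              # pseudo-inverse for robustness
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, inv, d)
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(size=(40, 40, 5))
cube[10, 10] += 8.0                        # synthetic anomaly in all bands
scores = rx_detector(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # (10, 10)
```

Variants such as UTD, or the RXD-UTD hybrid mentioned in the abstract, change how the background reference spectrum is formed while keeping the same distance-based scoring idea.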
Show Figures

Figure 1

Figure 1
<p>Aerial images of the research area using (<b>a</b>) RGB and (<b>b</b>) NIR wavelength bands, one week post-burial. G1–G6: Grave 1–Grave 6.</p>
Full article ">Figure 2
<p>The nDSM map of the research area calculated two months post-burial. G1–G6: Grave 1–Grave 6.</p>
Full article ">Figure 3
<p>The (<b>a</b>) manual and (<b>b</b>) enhanced anomaly detection methods 15 months post-burial using an RGB sensor. G1–G6: Grave1–Grave6.</p>
Full article ">Figure 4
<p>The application of the RXD-UTD anomaly detection method on the MSI image, twelve months post-burial. G1–G6: Grave 1–Grave 6.</p>
Full article ">Figure 5
<p>The NDVI map of the surveyed area, seven months post-burial. G1–G6: Grave 1–Grave 6.</p>
Full article ">
23 pages, 5761 KiB  
Article
FFA: Foreground Feature Approximation Digitally against Remote Sensing Object Detection
by Rui Zhu, Shiping Ma, Linyuan He and Wei Ge
Remote Sens. 2024, 16(17), 3194; https://doi.org/10.3390/rs16173194 - 29 Aug 2024
Viewed by 755
Abstract
In recent years, research on adversarial attack techniques for remote sensing object detection (RSOD) has made great progress. Still, most current research focuses on end-to-end attacks, which mainly design adversarial perturbations based on the prediction information of the object detectors (ODs) [...] Read more.
In recent years, research on adversarial attack techniques for remote sensing object detection (RSOD) has made great progress. Still, most current research focuses on end-to-end attacks, which mainly design adversarial perturbations based on the prediction information of the object detectors (ODs) to achieve the attack. These methods do not uncover the common vulnerabilities of the ODs, and thus their transferability is weak. Motivated by this, this paper proposes a foreground feature approximation (FFA) method that generates adversarial examples (AEs) exploiting the common vulnerabilities of the ODs by changing the feature information carried by the image itself. Specifically, high-quality predictions are first filtered as attacked objects using the detector, after which a hybrid image without any target is made, and a hybrid foreground is created based on the attacked targets. The images’ shallow features are extracted using the backbone network, and the features of the input foreground are approximated towards the hybrid foreground to implement the attack, while the model predictions are used to assist in realizing it. In addition, we have found FFA effective for targeted attacks: replacing the hybrid foreground with a targeted foreground realizes a targeted attack. Extensive experiments are conducted on the remote sensing object detection datasets DOTA and UCAS-AOD with seven rotated object detectors. The results show that the mAP of FFA under an IoU threshold of 0.5 for untargeted attacks is 3.4% lower than that of the advanced method, and the mAP of FFA under targeted attacks is 1.9% lower than that of the advanced method. Full article
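The central step, approximating input-foreground features toward the hybrid-foreground features, can be sketched as a KL-divergence loss between normalized feature responses. Treating the flattened feature maps as probability distributions is an assumption of this sketch, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def feature_approx_loss(fg_feat, hybrid_feat):
    """Pull input-foreground features toward hybrid-foreground features via
    KL divergence over softmax-normalized channel responses. Normalizing
    flattened feature maps into distributions is an assumption of this sketch."""
    p = F.log_softmax(fg_feat.flatten(1), dim=1)   # log-probs of input foreground
    q = F.softmax(hybrid_feat.flatten(1), dim=1)   # target: hybrid foreground
    return F.kl_div(p, q, reduction="batchmean")

fg = torch.randn(2, 64, 32, 32, requires_grad=True)   # stand-in backbone features
hybrid = torch.randn(2, 64, 32, 32)
loss = feature_approx_loss(fg, hybrid)
loss.backward()                                       # gradient drives the perturbation
print(loss.item() >= 0.0)  # True: KL divergence is non-negative
```

Because the loss is taken on shallow backbone features shared across detector architectures, its gradient perturbs image content rather than detector-specific predictions, which is the intuition behind the stronger transferability claimed for FFA.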
Show Figures

Figure 1

Figure 1
<p>Illustration of the FFA realizing the targetless attack and the targeted attack. The purple box indicates the part that joins the perturbation and succeeds in the attack, and the green box indicates the part that does not join the perturbation and is recognized correctly. The orange dashed box part indicates the targetless attack, and the blue dashed box part indicates the targeted attack.</p>
Full article ">Figure 2
<p>Visualization of the feature extraction process. The left column shows the input image, the middle column shows the foreground portion of various images, and the right column represents the corresponding shallow features of the foreground image. Blue arrows indicate the foreground extraction process, and purple arrows indicate the feature extraction process. The orange dashed box portion indicates an targetless attack, and the blue dashed box portion indicates a targeted attack.</p>
Full article ">Figure 3
<p>Flowchart of FFA for generating AEs. The bottommost arrow in the figure represents the adversarial example generation process. First, we need to generate a hybrid image <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">x</mi> <mo>˜</mo> </mover> </semantics></math>. Input image <math display="inline"><semantics> <mi mathvariant="bold-italic">x</mi> </semantics></math> after the detector to obtain the predicted foreground box <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>e</mi> </mrow> </msub> </semantics></math> and predicted labels <math display="inline"><semantics> <msub> <mi>l</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>e</mi> </mrow> </msub> </semantics></math>. Use <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>e</mi> </mrow> </msub> </semantics></math> to intercept the input image <math display="inline"><semantics> <mi mathvariant="bold-italic">x</mi> </semantics></math> and hybrid image <math display="inline"><semantics> <mover accent="true"> <mi mathvariant="bold-italic">x</mi> <mo>˜</mo> </mover> </semantics></math> to obtain the input foreground <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">x</mi> <mi mathvariant="bold-italic">f</mi> </msub> </semantics></math> and hybrid foreground <math display="inline"><semantics> <mover accent="true"> <msub> <mi mathvariant="bold-italic">x</mi> <mi mathvariant="bold-italic">f</mi> </msub> <mo>˜</mo> </mover> </semantics></math>. After extracting the features through the backbone network, the KL divergence of the two is calculated as the feature loss. 
The Smooth L1 loss between <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>e</mi> </mrow> </msub> </semantics></math> and the real object box <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>u</mi> <mi>t</mi> <mi>h</mi> </mrow> </msub> </semantics></math> is also calculated as the object box loss, and the cross-entropy loss between <math display="inline"><semantics> <msub> <mi>l</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>e</mi> </mrow> </msub> </semantics></math> and the background label <math display="inline"><semantics> <msub> <mi>l</mi> <mrow> <mi>b</mi> <mi>g</mi> </mrow> </msub> </semantics></math> is calculated as the classification loss, which together constitute the detector prediction loss. The sum of the L2 distance between the <span class="html-italic">i</span>-1st generated AE <math display="inline"><semantics> <msubsup> <mi mathvariant="bold-italic">x</mi> <mrow> <mi>a</mi> <mi>d</mi> <mi>v</mi> </mrow> <mrow> <mi>i</mi> <mo>−</mo> <mn>1</mn> </mrow> </msubsup> </semantics></math> and <math display="inline"><semantics> <mi mathvariant="bold-italic">x</mi> </semantics></math> is calculated as the perception loss. The weighted combination of feature loss, prediction loss, and perception loss yields the total loss, which is iterated to generate the adversarial perturbation <math display="inline"><semantics> <msub> <mi>δ</mi> <mi>i</mi> </msub> </semantics></math>, which is summed with <math display="inline"><semantics> <mi mathvariant="bold-italic">x</mi> </semantics></math> to obtain the adversarial example <math display="inline"><semantics> <msubsup> <mi mathvariant="bold-italic">x</mi> <mrow> <mi>a</mi> <mi>d</mi> <mi>v</mi> </mrow> <mi>i</mi> </msubsup> </semantics></math> after iteration.</p>
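The total loss described in this caption drives an iterative perturbation update. One illustrative gradient-based step (an FGSM/PGD-style update; the step size, perturbation budget, and stand-in loss are assumptions, not values from the paper) could look like:

```python
import torch

def ffa_step(x_adv, total_loss_fn, x_orig, step=2/255, eps=8/255):
    """One illustrative iteration of adversarial-example generation:
    ascend the total loss, then project the perturbation back into an
    L-infinity budget. Step and budget values are assumptions."""
    x_adv = x_adv.detach().requires_grad_(True)
    loss = total_loss_fn(x_adv)
    loss.backward()
    with torch.no_grad():
        x_next = x_adv + step * x_adv.grad.sign()              # move along the gradient
        x_next = x_orig + (x_next - x_orig).clamp(-eps, eps)   # keep within budget
        return x_next.clamp(0, 1)                              # stay a valid image

x = torch.rand(1, 3, 64, 64)
loss_fn = lambda z: (z ** 2).sum()   # stand-in for the weighted total loss
x_adv = ffa_step(x.clone(), loss_fn, x_orig=x)
print(float((x_adv - x).abs().max()) <= 8/255 + 1e-6)  # True: within budget
```

In the actual method, `total_loss_fn` would be the weighted sum of the feature (KL), prediction (Smooth L1 + cross-entropy), and perception (L2) terms from the caption.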
Full article ">Figure 4
<p>Visualization of FFA targetless attack detection results. There are four layers in total. The top two layers represent the original images and AEs detection results for small objects, and the bottom two layers represent the original images and AEs detection results for large objects. The orange dashed section represents the detection outcomes of the two-stage ODs, while the blue dashed section represents the detection outcomes of the single-stage ODs.</p>
Full article ">Figure 5
<p>Visualization of object detection results for targeted attacks against planes. The part indicated by the red arrow is the target category, the orange color is the detection result of the two-stage OD, and the blue color is the detection result of the single-stage OD.</p>
Full article ">Figure 6
<p>Visualization of object detection results for targeted attacks against vehicles. The part indicated by the red arrow is the target category, the orange color is the detection result of the two-stage OD, and the blue color is the detection result of the single-stage OD.</p>
Full article ">Figure 7
<p>The impact of the iteration number on the attack. The trained and attacked ODs for both targetless attack and targeted attack are OR, and the backbone network is ReSNet50.</p>
Full article ">Figure 8
<p>Visualization results of perturbations in ablation experiments. The upper layer is the targetless attack, and the lower layer is the targeted attack. (<b>a</b>,<b>e</b>) are the original images; (<b>b</b>,<b>f</b>) are the perturbations with feature loss only; (<b>c</b>,<b>g</b>) are the perturbations with prediction loss and feature loss; and (<b>d</b>,<b>h</b>) are the perturbations with feature loss, prediction loss, and perception loss.</p>
Full article
16 pages, 8197 KiB  
Article
DAN-YOLO: A Lightweight and Accurate Object Detector Using Dilated Aggregation Network for Autonomous Driving
by Shuwan Cui, Feiyang Liu, Zhifu Wang, Xuan Zhou, Bo Yang, Hao Li and Junhao Yang
Electronics 2024, 13(17), 3410; https://doi.org/10.3390/electronics13173410 - 27 Aug 2024
Cited by 1 | Viewed by 975
Abstract
Object detection is becoming increasingly critical in autonomous driving. However, the accuracy and effectiveness of object detectors are often constrained by the obscuration of object features and details in adverse weather conditions. Therefore, this paper presents the DAN-YOLO vehicle object detector specifically designed for driving conditions in adverse weather. Building on the YOLOv7-Tiny network, SPP was replaced with SPPF, resulting in the SPPFCSPC structure, which enhances processing speed. The concept of Hybrid Dilated Convolution (HDC) was also introduced to improve the SPPFCSPC and ELAN-T structures, expanding the network’s receptive field (RF) while maintaining a lightweight design. Furthermore, an efficient multi-scale attention (EMA) mechanism was introduced to enhance the effectiveness of feature fusion. Finally, the Wise-IoUv1 loss function was employed as a replacement for CIoU to enhance the localization accuracy of the bounding box (bbox) and the convergence speed of the model. With an input size of 640 × 640, the DAN-YOLO algorithm proposed in this study achieved an increase in mAP0.5 values of 3.4% and 6.3% compared to the YOLOv7-Tiny algorithm in the BDD100K and DAWN benchmark tests, respectively, while achieving real-time detection (142.86 FPS). When compared with other state-of-the-art detectors, it achieves a better trade-off between detection accuracy and speed under adverse driving conditions, indicating its suitability for autonomous driving applications. Full article
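The Hybrid Dilated Convolution idea in this abstract (and the gridding effect illustrated in Figures 2 and 3 below) can be checked with a small script: stacking 3×3 convolutions with equal dilation rates leaves unused holes in the receptive field, while rates such as [1, 2, 5] cover it densely. A 1-D sketch under those assumptions; the function names are illustrative, not from the paper:

```python
from itertools import product

def covered_offsets(dilation_rates, kernel_size=3):
    # 1-D analogue of stacked dilated convs: the set of input offsets that
    # feed one output position is the Minkowski sum of each layer's taps.
    half = kernel_size // 2
    taps = [[k * r for k in range(-half, half + 1)] for r in dilation_rates]
    return {sum(combo) for combo in product(*taps)}

def has_gridding(dilation_rates, kernel_size=3):
    # Gridding occurs when the stack leaves holes inside its receptive
    # field, as with equal rates such as [2, 2, 2].
    cov = covered_offsets(dilation_rates, kernel_size)
    span = max(cov)
    return any(i not in cov for i in range(-span, span + 1))
```

For rates [1, 2, 5] the farthest tap sits at offset 8, i.e. a 17-pixel receptive field with no holes, which is the motivation for HDC-style rate schedules.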
Show Figures

Figure 1
<p>YOLOv7-Tiny network architecture diagram.</p>
Full article ">Figure 2
<p>Illustration of the gridding effect. The numbers in the graph represent the frequency of utilization of input pixels by the convolution, and the heatmap reflects the extent to which input pixels are utilized.</p>
Full article ">Figure 3
<p>The RF effect diagram for dilation factors r = [1, 2, 5]. The numbers in the graph represent the frequency of utilization of input pixels by the convolution, and the heatmap reflects the extent to which input pixels are utilized.</p>
Full article ">Figure 4
<p>DAN-YOLO network architecture diagram.</p>
Full article ">Figure 5
<p>Structure of the SPPF and SPPFCSPC.</p>
Full article ">Figure 6
<p>Structure of the SPPFCSPC-E.</p>
Full article ">Figure 7
<p>Structure of the DAN.</p>
Full article ">Figure 8
<p>Efficient multi-scale attention mechanism.</p>
Full article ">Figure 9
<p>Comparison of network model P–R curves before and after improvement. (<b>a</b>) Based on the BDD100K dataset; (<b>b</b>) based on the DAWN dataset.</p>
Full article ">Figure 10
<p>Confusion matrix difference heatmap. (<b>a</b>) Based on the BDD100K test dataset; (<b>b</b>) based on the DAWN test dataset.</p>
Full article ">Figure 11
<p>Detection results of YOLOv7-Tiny and DAN-YOLO. (<b>a</b>) Based on the BDD100K test dataset; (<b>b</b>) based on the DAWN test dataset.</p>
Full article
24 pages, 44227 KiB  
Article
Assessment of Trees’ Structural Defects via Hybrid Deep Learning Methods Used in Unmanned Aerial Vehicle (UAV) Observations
by Qiwen Qiu and Denvid Lau
Forests 2024, 15(8), 1374; https://doi.org/10.3390/f15081374 - 6 Aug 2024
Viewed by 1266
Abstract
Trees’ structural defects are responsible for the reduction in forest product quality and for accidents of tree collapse under extreme environmental conditions. Although manual visual inspection for assessing tree health is reliable, it is inefficient in discriminating, locating, and quantifying defects with various features (i.e., crack and hole). There is a general need for efficient ways to assess these defects to enhance the sustainability of trees. In this study, the deep learning algorithms of lightweight You Only Look Once (YOLO) and the encoder-decoder network named DeepLabv3+ are combined in unmanned aerial vehicle (UAV) observations to evaluate trees’ structural defects. Experimentally, we found that the state-of-the-art detector YOLOv7-tiny offers real-time (i.e., 50–60 fps) and long-range sensing (i.e., 5 m) of tree defects but has limited capacity to acquire the patterns of defects at the millimeter scale. To address this limitation, we further utilized DeepLabv3+ cascaded with different network architectures of ResNet18, ResNet50, Xception, and MobileNetv2 to obtain the actual morphology of defects through close-range and pixel-wise image semantic segmentation. Moreover, the proposed hybrid scheme YOLOv7-tiny_DeepLabv3+_UAV assesses tree defect size with an averaged accuracy of 92.62% (±6%). Full article
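The hybrid two-stage scheme (detect at long range, then segment each crop pixel-wise to size the defect) can be sketched as below. Here `detect` and `segment` are hypothetical stand-ins for the YOLOv7-tiny and DeepLabv3+ models, and the millimetre calibration is an assumed constant, not a value from the paper:

```python
import numpy as np

def assess_defects(image, detect, segment, mm_per_pixel=1.0):
    # Stage 1: the detector proposes defect bounding boxes on the full frame.
    # Stage 2: each crop is segmented pixel-wise and the defect area measured.
    report = []
    for (x0, y0, x1, y1) in detect(image):
        crop = image[y0:y1, x0:x1]
        mask = segment(crop)                        # boolean defect mask
        area_mm2 = mask.sum() * mm_per_pixel ** 2   # pixel count -> physical area
        report.append({"box": (x0, y0, x1, y1), "area_mm2": float(area_mm2)})
    return report
```

In a real deployment the detector would run on the long-range UAV frame and the segmenter on close-range re-acquisitions of each box, but the data flow is the same.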
(This article belongs to the Special Issue UAV Application in Forestry)
Show Figures

Figure 1
<p>Network architecture of YOLOv7-tiny.</p>
Full article ">Figure 2
<p>Overall network architecture of DeepLabv3+ used for trees’ defect segmentation.</p>
Full article ">Figure 3
<p>Scheme of YOLO-tiny_DeepLabv3+_UAV: (<b>a</b>) training and testing workflow and (<b>b</b>) “two-stage” approach for trees’ defect assessment.</p>
Full article ">Figure 4
<p>Tree defect detection by YOLO-tiny models: (<b>a</b>) tree hole and (<b>b</b>) tree crack.</p>
Full article ">Figure 5
<p>Performance metrics of detecting tree hole and crack by YOLO-tiny: (<b>a</b>) accuracy, (<b>b</b>) precision, (<b>c</b>) recall, and (<b>d</b>) <span class="html-italic">F</span>1 score. A specific color is assigned to each YOLO-tiny model (i.e., light yellow for YOLOv2-tiny, light green for YOLOv3-tiny, light blue for YOLOv4-tiny, and dark blue for YOLOv7-tiny). The dotted line refers to the averaged value, within its colored band (i.e., error interval).</p>
Full article ">Figure 6
<p>Confusion matrix of data classes between tree hole and tree crack: (<b>a</b>) YOLOv2-tiny, (<b>b</b>) YOLOv3-tiny, (<b>c</b>) YOLOv4-tiny, and (<b>d</b>) YOLOv7-tiny. The percentage denotes the ratio of the number of predicted objects to the number of original samples.</p>
Full article ">Figure 7
<p>Segmentation of tree hole and tree crack by DeepLabv3+ models. The segmented images consist of the foreground of defect and the background. The defect is masked by the light blue color, while the background is represented by the light grey color with transparency.</p>
Full article ">Figure 8
<p>Effect of epoch on the tree defect segmentation by ResNet50-based DeepLabv3+.</p>
Full article ">Figure 9
<p><span class="html-italic">mIoU</span> versus epoch in the semantic segmentation of tree defects by DeepLabv3+: (<b>a</b>) tree hole and (<b>b</b>) tree crack. A specific color is assigned to each DeepLabv3+ model (i.e., light blue for ResNet18-based DeepLabv3+, dark blue for ResNet50-based DeepLabv3+, light purple for MobileNetv2-based DeepLabv3+, and dark purple for Xception-based DeepLabv3+). The dotted line means the averaged <span class="html-italic">mIOU</span>, within its colored band (i.e., error interval).</p>
Full article ">Figure 10
<p>Effect of environmental darkness on the performance of tree defect detection by YOLO-tiny and DeepLabv3+: (<b>a</b>) YOLOv7-tiny, (<b>b</b>) ResNet18-based DeepLabv3+, (<b>c</b>) ResNet50-based DeepLabv3+, (<b>d</b>) Xception-based DeepLabv3+, and (<b>e</b>) MobileNetv2-based DeepLabv3+.</p>
Full article ">Figure 11
<p>Tree defect detection in a dynamic distance change experiment. YOLO-tiny successfully detects the presence of a tree hole at distances from 0.5 to 5 m and a tree crack from 0.5 to 1 m. DeepLabv3+ effectively segments the tree hole at drone-to-target distances of 0.5–2 m and the tree crack at around 0.5 m.</p>
Full article ">Figure 12
<p>Practical application of tree defect characterization by YOLOv7-tiny_DeepLabv3+_UAV system: (<b>a</b>) photograph of flying quadcopter drone over the study area, (<b>b</b>) remote sensing of tree defect by YOLOv7-tiny, and (<b>c</b>) close-range assessment of tree defect by DeepLabv3+.</p>
Full article ">Figure 13
<p>Comparison of pattern recognition of a tree’s defect by YOLO-tiny and DeepLabv3+.</p>
Full article ">Figure 14
<p>Comparison of defect quantification by different deep learning models.</p>
Full article
25 pages, 8740 KiB  
Article
Open-Source Optimization of Hybrid Monte Carlo Methods for Fast Response Modeling of NaI (Tl) and HPGe Gamma Detectors
by Matthew Niichel and Stylianos Chatzidakis
J. Nucl. Eng. 2024, 5(3), 274-298; https://doi.org/10.3390/jne5030019 - 5 Aug 2024
Viewed by 892
Abstract
Modeling the response of gamma detectors has long been a challenge within the nuclear community. Significant research has been conducted to digitally replicate instruments that can cost over USD 100,000 and are difficult to operate outside of a laboratory setting. The high cost and limited availability prevent many from making use of such equipment. Subsequently, there have been multiple attempts to create cost-effective codes that replicate the response of sodium-iodide and high-purity germanium detectors for data derivation related to gamma-ray interaction with matter. While robust programs do exist, they are often subject to export controls and/or they are not intuitive to use. Through the use of hybrid Monte Carlo methods, MATLAB can be used to produce a fast first-order response of various gamma-ray detectors. The combination of a graphical user interface with a numerical-based script allows for open-source and intuitive code. When benchmarked with experimental data from Co-60, Cs-137, and Na-22, the code can numerically calculate a response comparable to experimental and industry-standard response codes. Evidence supports both savings in computational requirements and the inclusion of an intuitive user experience that does not heavily compromise data when compared to other standard codes, such as MCNP and GADRAS, or experimental results. When the application is installed on a Dell Intel i7 computer with 16 cores, the average time to simulate the benchmarked isotopes is 0.26 s. Installation on an HP Intel i7 four-core machine runs the same isotopes in 1.63 s. The results indicate that simple gamma detectors can be modeled in an open-source format. The anticipation for the MATLAB application is to be a tool that can be easily accessible and provide datasets for use in an academic setting requiring gamma-ray detectors.
Ultimately, this article provides evidence that hybrid Monte Carlo codes in an open-source format can benefit the nuclear community in both computational time and up-front cost for access. Full article
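One building block of such a hybrid Monte Carlo code is sampling the Compton scattering angle from the Klein–Nishina distribution (cf. Figure 4 below). A minimal rejection-sampling sketch in Python, not taken from the paper's MATLAB application; the grid resolution for the envelope maximum is an arbitrary choice:

```python
import math
import random

def klein_nishina(theta, e_mev):
    # Unnormalised Klein-Nishina differential cross-section shape;
    # e_mev is the incident photon energy in MeV.
    alpha = e_mev / 0.511                                   # E / (m_e c^2)
    ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))   # E'/E after scatter
    return ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)

def sample_scatter_angle(e_mev, rng=random.random):
    # Rejection sampling: draw theta uniformly on [0, pi] and accept with
    # probability f(theta) / f_max.
    f_max = max(klein_nishina(i * math.pi / 1000, e_mev) for i in range(1001))
    while True:
        theta = rng() * math.pi
        if rng() * f_max <= klein_nishina(theta, e_mev):
            return theta
```

A full detector model would follow each sampled scatter with an energy update and a comparator step choosing photoelectric absorption versus further Compton scattering from tabulated cross-sections.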
(This article belongs to the Special Issue Monte Carlo Simulation in Reactor Physics)
Show Figures

Figure 1
<p>Block diagram of the hybrid Monte Carlo method.</p>
Full article ">Figure 2
<p>Block diagram of the algorithm comparator steps leading to gamma–matter interaction events.</p>
Full article ">Figure 3
<p>Comparison of NaI empirical relations between Knoll’s textbook and the model [<a href="#B9-jne-05-00019" class="html-bibr">9</a>,<a href="#B10-jne-05-00019" class="html-bibr">10</a>].</p>
Full article ">Figure 4
<p>Distribution of scattering angles based on the Klein–Nishina formula for 0.5 MeV.</p>
Full article ">Figure 5
<p>Comparison of the resolution for the benchmarking detector and model with the user input of 0% corresponding to a correction factor of 1 in Equation (9).</p>
Full article ">Figure 6
<p>Absolute efficiency of the HPGe and NaI detectors as a function of the gamma energy.</p>
Full article ">Figure 7
<p>The energy resolution of the HPGe and NaI detectors as a function of the gamma energy.</p>
Full article ">Figure 8
<p>The energy resolution of the HPGe detector as a function of the gamma energy.</p>
Full article ">Figure 9
<p>Experimental setup for the NaI detector placed within the lead well and the dimensions of the printed ABS collimator and lead collimator.</p>
Full article ">Figure 10
<p>Experimental setup for the HPGe detector placed within the lead well and the dimensions of the printed ABS collimator and lead collimator. Note the change in the orientation of the setup.</p>
Full article ">Figure 11
<p>Dimensional views of the 3D-printed ABS collimator holder. Two copies are made with slight modifications to the upper detector holding portion for each detector, denoted in <a href="#jne-05-00019-f009" class="html-fig">Figure 9</a> and <a href="#jne-05-00019-f010" class="html-fig">Figure 10</a>.</p>
Full article ">Figure 12
<p>Comparison of the NaI Co-60 simulation in the GUI for “600 s” with the experimental data.</p>
Full article ">Figure 13
<p>Relative error between the NaI Co-60 simulation for “600 s” and the experimental data.</p>
Full article ">Figure 14
<p>Comparison of the HPGe Co-60 simulation in the GUI for “100 s” with the experimental data.</p>
Full article ">Figure 15
<p>Relative error between the HPGe Co-60 simulation for “100 s” and the experimental data.</p>
Full article ">Figure 16
<p>Graphical user interface view of user input variables for HPGe (100% resolution).</p>
Full article ">Figure 17
<p>Results for Cs-137 at the user-specified 100% resolution. Used to show a “perfect” HPGe response.</p>
Full article ">Figure 18
<p>Results for Cs-137 at the user-specified 20% resolution. Used to show the benchmarked NaI response.</p>
Full article ">Figure 19
<p>Results for Ba-133 at the user-specified 5-year-old sample and 600-s count time.</p>
Full article ">Figure 20
<p>Results for Ba-133 at the user-specified 50-year-old sample and 600-s count time.</p>
Full article ">Figure 21
<p>NaI response for 1 microcurie of Ba-133, Co-60, and Na-22 at an age of 1 year and a count time of 600 s.</p>
Full article ">Figure 22
<p>NaI response for 1 microcurie of Ba-133, Co-60, and Na-22 at 100 years of age and a count time of 600 s.</p>
Full article ">Figure 23
<p>GADRAS and Purdue code comparison for a 1-microcurie Co-60, 1-year-old sample, and 10-s count time.</p>
Full article ">Figure 24
<p>GADRAS and Purdue code comparison for a 1 µCi Co-60, 1-year-old sample, and 10 s count time. Adjusted to align the 1.17 MeV peaks.</p>
Full article ">Figure 25
<p>GADRAS and Purdue code HPGe comparison for a 1-microcurie Co-60, 1-year-old sample, and 10-s count time.</p>
Full article ">Figure 26
<p>GADRAS and Purdue code HPGe comparison for a 1-microcurie Co-60, 1-year-old sample, and 10-s count time.</p>
Full article ">Figure 27
<p>GADRAS and Purdue code HPGe comparison for 1 µCi Co-60, Cs-137, Na-22, 1-year-old samples, and 10 s count times.</p>
Full article ">Figure 28
<p>GADRAS and Purdue code HPGe comparison for 1-microcurie Co-60, Cs-137, Na-22, 1-year-old samples, and 10-s count times. Adjusted to be aligned at 662 keV.</p>
Full article
28 pages, 26533 KiB  
Article
End-to-End Deep Learning Framework for Arabic Handwritten Legal Amount Recognition and Digital Courtesy Conversion
by Hakim A. Abdo, Ahmed Abdu, Mugahed A. Al-Antari, Ramesh R. Manza, Muhammed Talo, Yeong Hyeon Gu and Shobha Bawiskar
Mathematics 2024, 12(14), 2256; https://doi.org/10.3390/math12142256 - 19 Jul 2024
Viewed by 1101
Abstract
Arabic handwriting recognition and conversion are crucial for financial operations, particularly for processing handwritten amounts on cheques and financial documents. Compared to other languages, research in this area is relatively limited, especially concerning Arabic. This study introduces an innovative AI-driven method for simultaneously recognizing and converting Arabic handwritten legal amounts into numerical courtesy forms. The framework consists of four key stages. First, a new dataset of Arabic legal amounts in handwritten form (“.png” image format) is collected and labeled by natives. Second, a YOLO-based AI detector extracts individual legal amount words from the entire input sentence images. Third, a robust hybrid classification model is developed, sequentially combining ensemble Convolutional Neural Networks (CNNs) with a Vision Transformer (ViT) to improve the prediction accuracy of single Arabic words. Finally, a novel conversion algorithm transforms the predicted Arabic legal amounts into digital courtesy forms. The framework’s performance is fine-tuned and assessed using 5-fold cross-validation tests on the proposed novel dataset, achieving a word-level detection accuracy of 98.6% and a recognition accuracy of 99.02% at the classification stage. The conversion process yields an overall accuracy of 90%, with an inference time of 4.5 s per sentence image. These results demonstrate promising potential for practical implementation in diverse Arabic financial systems. Full article
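The legal-to-courtesy conversion (cf. Figure 6 below) amounts to a word-to-number evaluation over the recognised tokens. A sketch of that idea with an English stand-in vocabulary, since the paper's LegalToCourtesy algorithm and its Arabic token set are not reproduced here; the token values and grouping rules are assumptions:

```python
# English tokens stand in for the recognised Arabic legal-amount words.
VALUES = {
    "one": 1, "two": 2, "three": 3, "five": 5, "thirty": 30,
    "hundred": 100, "thousand": 1000,
}

def words_to_courtesy(tokens):
    # Accumulate unit/ten words into `current`; "hundred" multiplies the
    # accumulator and "thousand" flushes it into the running total.
    total, current = 0, 0
    for t in tokens:
        v = VALUES[t]
        if v == 100:
            current = (current or 1) * 100
        elif v >= 1000:
            total += (current or 1) * v
            current = 0
        else:
            current += v
    return total + current
```

For example, the token sequence "three thousand two hundred" evaluates to 3200; the same flush-and-multiply structure applies to an Arabic vocabulary once each word is mapped to its value.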
Show Figures

Figure 1
<p>Proposed legal amount recognition end-to-end framework. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 2
<p>Samples of legal amount sentences with English explanation for non-Arabic native speaker. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 3
<p>YOLOv5 model structure for Arabic handwritten word detection. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 4
<p>The proposed hybrid classification pipeline for Arabic handwritten word recognition. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 5
<p>Sample of legal amount image outputted from word detection phase. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 6
<p>Sample of applying LegalToCourtesy algorithm to calculate the courtesy amount value. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 7
<p>Training and validation convergence in terms of loss function for the YOLOv5s-based word extraction model.</p>
Full article ">Figure 8
<p>Evaluation prediction performance of the YOLOv5s-based word extraction model through training.</p>
Full article ">Figure 9
<p>Confusion matrix of the YOLOV5-based word extraction model.</p>
Full article ">Figure 10
<p>Sample of Arabic legal amount words detection results with English explanation for non-Arabic native speaker. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 11
<p>Performance assessment using confusion matrices for the Hybrid A model.</p>
Full article ">Figure 12
<p>Performance assessment using confusion matrices for the Hybrid B model.</p>
Full article ">Figure 13
<p>Samples of the proposed method results with correctly generated courtesy amounts. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 14
<p>Samples of the ability of the proposed approach to detect and classify in some complex cases: (<b>a</b>) word detection in overlapping letters case and (<b>b</b>) spelling mistake word classification. The English explanation is provided specifically for non-Arabic speakers.</p>
Full article ">Figure 15
<p>Samples of improperly generated courtesy amounts: (<b>a</b>) incorrect word recognition and (<b>b</b>) inaccurate word detection.</p>
Full article
Back to Top