Search Results (98)

Search Parameters:
Keywords = black pixel

23 pages, 3484 KiB  
Article
Gully Erosion Susceptibility Prediction Using High-Resolution Data: Evaluation, Comparison, and Improvement of Multiple Machine Learning Models
by Heyang Li, Jizhong Jin, Feiyang Dong, Jingyao Zhang, Lei Li and Yucheng Zhang
Remote Sens. 2024, 16(24), 4742; https://doi.org/10.3390/rs16244742 - 19 Dec 2024
Viewed by 519
Abstract
Gully erosion is one of the significant environmental issues facing the black soil regions in Northeast China, and its formation is closely related to various environmental factors. This study employs multiple machine learning models to assess gully erosion susceptibility in this region. The primary objective is to evaluate and optimize the top-performing model under high-resolution UAV data conditions, utilize the optimized best model to identify key factors influencing the occurrence of gully erosion from 11 variables, and generate a local gully erosion susceptibility map. Using 0.2 m resolution DEM and DOM data obtained from high-resolution UAVs, 2,554,138 pixels from 64 gully and 64 non-gully plots were analyzed and compiled into the research dataset. Twelve models, including Logistic Regression, K-Nearest Neighbors, Classification and Regression Trees, Random Forest, Boosted Regression Trees, Adaptive Boosting, Extreme Gradient Boosting, an Artificial Neural Network, a Convolutional Neural Network, as well as optimized XGBOOST, a CNN with a Multi-Head Attention mechanism, and an ANN with a Multi-Head Attention Mechanism, were utilized to evaluate gully erosion susceptibility in the Dahewan area. The performance of each model was evaluated using ROC curves, and the model fitting performance and robustness were validated through Accuracy and Cohen’s Kappa statistics, as well as RMSE and MAE indicators. The optimized XGBOOST model achieved the highest performance with an AUC-ROC of 0.9909, and through SHAP analysis, we identified roughness as the most significant factor affecting local gully erosion, with a relative importance of 0.277195. Additionally, the Gully Erosion Susceptibility Map generated by the optimized XGBOOST model illustrated the distribution of local gully erosion risks. Full article
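The evaluation metrics named in the abstract (AUC-ROC, Accuracy, Cohen's Kappa, RMSE, MAE) can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic labels and scores, not the study's data or its XGBOOST pipeline; the rank-based AUC below ignores tied scores.

```python
import numpy as np

def auc_roc(y_true, scores):
    # Rank-based AUC: probability that a random positive outranks a random negative.
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def cohens_kappa(y_true, y_pred):
    # Observed agreement corrected for chance agreement.
    po = np.mean(y_true == y_pred)
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in (0, 1))
    return (po - pe) / (1 - pe)

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.6, 0.2])
y_pred = (scores >= 0.5).astype(int)

print("AUC  :", auc_roc(y_true, scores))
print("Acc  :", np.mean(y_true == y_pred))
print("Kappa:", cohens_kappa(y_true, y_pred))
print("RMSE :", np.sqrt(np.mean((scores - y_true) ** 2)))
print("MAE  :", np.mean(np.abs(scores - y_true)))
```

On this toy example every positive outranks every negative, so AUC, Accuracy, and Kappa all reach 1.0, while RMSE and MAE measure how far the raw scores sit from the hard labels.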
Figure 1: (a) The schematic location of the study area; (b) a display of the study area; (c) a field photograph of the gully; and (d) a UAV-captured gully image with the highlighted gully area.
Figure 2: Flowchart of the methodology used in this study.
Figure 3: Maps of geo-environmental factors (GEFs): (a) altitude, (b) slope, (c) aspect, (d) profile curvature, (e) plan curvature, (f) Topographic Ruggedness Index, (g) Topographic Position Index, (h) roughness, (i) LS Factor, (j) Topographic Wetness Index, and (k) Stream Power Index.
Figure 4: Multicollinearity analysis of the geo-environmental factors.
Figure 5: Relative importance of different geo-environmental factors.
Figure 6: Gully erosion susceptibility mapping using the optimized XGBOOST model.
19 pages, 12083 KiB  
Article
An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection
by Flavia Grignaffini, Enrico De Santis, Fabrizio Frezza and Antonello Rizzi
Information 2024, 15(12), 783; https://doi.org/10.3390/info15120783 - 5 Dec 2024
Viewed by 664
Abstract
Computer-aided diagnosis (CAD) systems, which combine medical image processing with artificial intelligence (AI) to support experts in diagnosing various diseases, emerged from the need to solve some of the problems associated with medical diagnosis, such as long timelines and operator-related variability. The most explored medical application is cancer detection, for which several CAD systems have been proposed. Among them, deep neural network (DNN)-based systems for skin cancer diagnosis have demonstrated comparable or superior performance to that of experienced dermatologists. However, the lack of transparency in the decision-making process of such approaches makes them “black boxes” and, therefore, not directly incorporable into clinical practice. Trying to explain and interpret the reasons for DNNs’ decisions can be performed by the emerging explainable AI (XAI) techniques. XAI has been successfully applied to DNNs for skin lesion image classification but never when additional information is incorporated during network training. This field is still unexplored; thus, in this paper, we aim to provide a method to explain, qualitatively and quantitatively, a convolutional neural network model with feature injection for melanoma diagnosis. The gradient-weighted class activation mapping and layer-wise relevance propagation methods were used to generate heat maps, highlighting the image regions and pixels that contributed most to the final prediction. In contrast, the Shapley additive explanations method was used to perform a feature importance analysis on the additional handcrafted information. To successfully integrate DNNs into the clinical and diagnostic workflow, ensuring their maximum reliability and transparency in whatever variant they are used is necessary. Full article
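The gradient-weighted class activation mapping (Grad-CAM) step described above reduces to a few array operations: channel weights from globally averaged gradients, a weighted sum of feature maps, and a ReLU. The sketch below runs on toy arrays rather than the paper's CNN, and the activation/gradient tensors are assumed inputs you would extract from a real network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from one conv layer.
    feature_maps, gradients: (channels, H, W) arrays, where `gradients`
    is d(class score)/d(feature_maps)."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalise to [0, 1] for overlay
    return cam

rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))    # toy activations of a conv layer
dA = rng.random((8, 7, 7))   # toy gradients w.r.t. those activations
heat = grad_cam(A, dA)
print(heat.shape, heat.min() >= 0.0)
```

The resulting low-resolution map is then upsampled to the input image size and overlaid to highlight the regions that contributed most to the prediction.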
Figure 1: General organization of XAI methods.
Figure 2: CNN proposed in our previous work.
Figure 3: Proposed architecture.
Figure 4: Proposed interpretability workflow.
Figure 5: Classification performance of the proposed model obtained by five-fold cross-validation.
Figure 6: Local XAI methods.
Figure 7: Overlapping local XAI methods.
Figure 8: Some spurious correlations identified by Grad-CAM.
Figure 9: Decomposition of the starting model into sub-models.
Figure 10: Global XAI method. ‘CNN’: features automatically extracted from the network. ‘LBP’: handcrafted features.
22 pages, 7112 KiB  
Article
A New Encryption Algorithm Utilizing DNA Subsequence Operations for Color Images
by Saeed Mirzajani, Seyed Shahabeddin Moafimadani and Majid Roohi
AppliedMath 2024, 4(4), 1382-1403; https://doi.org/10.3390/appliedmath4040073 - 4 Nov 2024
Viewed by 809
Abstract
The computer network has fundamentally transformed modern interactions, enabling the effortless transmission of multimedia data. However, the openness of these networks necessitates heightened attention to the security and confidentiality of multimedia content. Digital images, being a crucial component of multimedia communications, require robust protection measures, as their security has become a global concern. Traditional color image encryption/decryption algorithms, such as DES, IDEA, and AES, are unsuitable for image encryption due to the diverse storage formats of images, highlighting the urgent need for innovative encryption techniques. Chaos-based cryptosystems have emerged as a prominent research focus due to their properties of randomness, high sensitivity to initial conditions, and unpredictability. These algorithms typically operate in two phases: shuffling and replacement. During the shuffling phase, the positions of the pixels are altered using chaotic sequences or matrix transformations, which are simple to implement and enhance encryption. However, since only the pixel positions are modified and not the pixel values, the encrypted image’s histogram remains identical to the original, making it vulnerable to statistical attacks. In the replacement phase, chaotic sequences alter the pixel values. This research introduces a novel encryption technique for color images (RGB type) based on DNA subsequence operations to secure these images, which often contain critical information, from potential cyber-attacks. The suggested method includes two main components: a high-speed permutation process and adaptive diffusion. When implemented in the MATLAB software environment, the approach yielded promising results, such as NPCR values exceeding 98.9% and UACI values at around 32.9%, demonstrating its effectiveness in key cryptographic parameters. 
Security analyses, including histograms and Chi-square tests, were initially conducted, and all channels passed the Chi-square test; the correlation coefficient between adjacent pixels was also calculated. Additionally, entropy values were computed, achieving a minimum entropy of 7.0, indicating a high level of randomness. The method was tested on specific images, such as all-black and all-white images, and evaluated for resistance to noise and occlusion attacks. Finally, a comparison of the proposed algorithm’s NPCR and UACI values with those of existing methods demonstrated its superior performance and suitability. Full article
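The NPCR and UACI figures quoted above are standard differential-attack metrics and are straightforward to compute; the sketch below uses random arrays in place of real cipher images. For an ideal cipher, NPCR approaches 255/256 ≈ 99.6% and UACI approaches ≈33.5%.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions that differ between two cipher images.
    UACI: mean absolute intensity difference as a percentage of the 255 range.
    c1, c2: uint8 arrays of equal shape."""
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float))) / 255.0
    return npcr, uaci

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
b = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(npcr_uaci(a, a))   # identical images -> (0.0, 0.0)
print(npcr_uaci(a, b))   # independent random "ciphers" -> NPCR near 99.6%
```

In practice `c1` and `c2` would be the ciphertexts of two plain images differing in a single pixel.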
Figure 1: DNA subsequence elongation and truncation processes.
Figure 2: The schematic of the utilized procedure: (a) the schematic of the image encryption method, (b) the schematic of the decryption method.
Figure 3: Encryption and decryption of images: (a–d) plain images; (e–h) respective encryption of images; (i–l) respective decryption of images.
Figure 4: (a) Original color image of Daryasar; (b–d) plain image histograms for R, G, and B, respectively; (e) cipher image; (f–h) cipher image histograms, respectively.
Figure 5: Correlation histograms. (a,c,e) show the histograms for the original image, while (b,d,f) display the histograms for the encrypted image.
Figure 6: Encrypted images with correct and incorrect initial keys, and their differences from the original encrypted images: (a–e) depict five newly encrypted images using the specified keys, while (f–j) illustrate the differences between the incorrectly encrypted images and the original image.
Figure 7: Evaluation with selected plain images for uniform color patterns: (a) image with all-white pixels, (b) encrypted version of the all-white image, (c) histogram of the red channel for the all-white image, (d) image with all-black pixels, (e) encrypted version of the all-black image, (f) histogram of the red channel for the all-black image.
Figure 8: Outcomes of the noise attack evaluation for the “Guangzhou” image ((a,b): 10% noise attack; (c,d): 15% noise attack; (e,f): 20% noise attack).
13 pages, 721 KiB  
Article
Comparison of On-Sky Wavelength Calibration Methods for Integral Field Spectrograph
by Jie Song, Baichuan Ren, Yuyu Tang, Jun Wei and Xiaoxian Huang
Electronics 2024, 13(20), 4131; https://doi.org/10.3390/electronics13204131 - 21 Oct 2024
Viewed by 652
Abstract
With advancements in technology, scientists are delving deeper into their explorations of the universe. Integral field spectrographs (IFSs) play a significant role in investigating the physical properties of supermassive black holes at the centers of galaxies, the nuclei of galaxies, and the star formation processes within galaxies, including under extreme conditions such as those present in galaxy mergers, ultra-low-metallicity galaxies, and star-forming galaxies with strong feedback. An IFS transforms the spatial field into a linear field using an image slicer and obtains the spectra of targets in each spatial resolution element through a grating. Through scientific processing, two-dimensional images for each target band can be obtained. IFSs use concave gratings as dispersion systems to decompose the polychromatic light emitted by celestial bodies into monochromatic light, arranged linearly according to wavelength. In this experiment, the working environment of a star was simulated in the laboratory to facilitate the wavelength calibration of the space integral field spectrometer, and the tools necessary for the calibration process were also explored. A mercury–argon lamp was employed as the light source to extract characteristic information from each pixel in the detector, facilitating the wavelength calibration of the spatial IFS. The optimal peak-finding method was selected by comparing the center-of-weight (centroid), polynomial fitting, and Gaussian fitting methods. Ultimately, employing the 4FFT-LMG algorithm to fit Gaussian curves enabled the determination of the spectral peak positions, yielding wavelength calibration coefficients for a spatial IFS within the range of 360 nm to 600 nm. The correlation of the fitting results between the detector pixel positions and corresponding wavelengths was >99.99%, and the wavelength calibration accuracy reached 0.0067 nm. Full article
(This article belongs to the Section Circuit and Signal Processing)
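The two ingredients of the calibration pipeline described above — sub-pixel peak finding on a lamp line and a polynomial pixel-to-wavelength fit — can be sketched as follows. The line profile, dispersion (0.28 nm/pixel), and offset are illustrative assumptions, not the instrument's values; only the Hg lamp wavelengths are real. The Gaussian centre is estimated with a log-parabola fit (Caruana's method) rather than the paper's 4FFT-LMG algorithm.

```python
import numpy as np

def centroid_peak(x, y):
    # Center-of-weight method: intensity-weighted mean position.
    return np.sum(x * y) / np.sum(y)

def gaussian_peak(x, y):
    # A parabola fitted to log(y) has its vertex at the Gaussian center.
    mask = y > 1e-12
    c2, c1, _ = np.polyfit(x[mask], np.log(y[mask]), 2)
    return -c1 / (2.0 * c2)

# Synthetic emission line sampled on a pixel grid (center 10.3, sigma 2.0).
pix = np.arange(21, dtype=float)
line = np.exp(-(pix - 10.3) ** 2 / (2.0 * 2.0 ** 2))
print("centroid :", centroid_peak(pix, line))
print("gaussian :", gaussian_peak(pix, line))   # recovers 10.3 on noiseless data

# Pixel -> wavelength calibration from known lamp lines.
wav = np.array([404.66, 435.83, 546.08, 576.96])   # Hg line wavelengths, nm
pos = (wav - 360.0) / 0.28                         # assumed linear dispersion model
coef = np.polyfit(pos, wav, 1)                     # fitted nm/pixel and offset
print("calibration coefficients:", coef)
```

With noisy data the Gaussian fit generally outperforms the centroid near the profile edges, which is why fitted peak positions are preferred for calibration.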
Figure 1: Schematic diagram of the components of an integral field spectrograph. IFU—integral field unit; CCD—charge-coupled device.
Figure 2: Schematic diagram of the image slicer. The integral field unit consists of an image splitter and a pupil. The image splitter cuts the focal plane of the telescope, while the pupil rearranges the sub-image planes after cutting them into lines, arranging the two-dimensional images into one dimension and coupling them with the entrance slit of the spectrometer.
Figure 3: Diagram of the wavelength calibration system. IFU—integral field unit; CCD—charge-coupled device.
Figure 4: Flow chart of the wavelength calibration factor calculation.
Figure 5: The image after threshold segmentation of partial slices on the detector array. The vertical direction represents the spectral dimension, and the horizontal direction represents the geometric spatial dimension.
Figure 6: Wavelength calibration curve for the 473rd column pixel fitted using the 4FFT-LMG method. The blue points represent the positions of several characteristic wavelengths of the mercury–argon lamp in the 473rd column of the spectrometer. The horizontal axis indicates pixel position, while the vertical axis represents wavelength information.
Figure 7: The distribution of spectral calibration coefficient errors calculated using the 4FFT-LMG method. The horizontal axis represents the wavelength errors corresponding to each spectral line in the wavelength calibration equations.
Figure 8: Column 473 at 546.08 nm when using the LM algorithm to fit Gaussian curves with 4FFT-LMG. The orange points represent the measured data from the spectrometer, while the blue points correspond to those obtained through Fourier upsampling. The green line illustrates the Gaussian curve fitted using the 4FFT-LMG method. The correlation coefficient quantifies the degree to which the fitted Gaussian model aligns with the observed upsampled data. The root mean square error (RMSE) measures the average magnitude of the deviations between the observed data and the model predictions.
11 pages, 4237 KiB  
Article
Optimized Driving Scheme for Three-Color Electrophoretic Displays Based on the Elimination of Red Ghost Images
by Mouhua Jiang, Zichuan Yi, Jiashuai Wang, Feng Li, Boyuan Lai, Liangyu Li, Li Wang, Liming Liu, Feng Chi and Guofu Zhou
Micromachines 2024, 15(10), 1260; https://doi.org/10.3390/mi15101260 - 15 Oct 2024
Viewed by 881
Abstract
Three-color electrophoretic display (EPD) is emerging as a display technology due to its extremely low energy consumption and excellent reflective properties. However, during transitions between black and white images, the different driving characteristics of the red particles mean that the particles within the three-color EPD cannot be ideally driven to the target position, resulting in the appearance of a red ghost image. For this reason, this study utilized the COMSOL 5.6 finite element simulation method to construct a three-dimensional simulation model to explore the motion characteristics of electrophoretic particles, and then proposed a new driving scheme. The driving scheme aimed to drive the red particles to the target position and eliminate the red ghost image by optimizing the pixel erasing stage and employing a high-frequency oscillating voltage. The final experimental results showed that after adopting the proposed driving scheme, the red ghost image was reduced by 8.57% and the brightness of the white image was increased by 17.50%. This method effectively improved the display performance of three-color EPDs and contributes to their application in high-reflectivity, high-quality displays. Full article
(This article belongs to the Special Issue Photonic and Optoelectronic Devices and Systems, Second Edition)
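The structure of such a driving scheme — an erasing stage, a zero-mean high-frequency oscillation, and an activation stage — can be sketched as a voltage-versus-time sequence. All amplitudes and stage lengths below are illustrative assumptions, not the paper's optimized parameters.

```python
import numpy as np

def driving_waveform(v_drive=15.0, frame=0.02,
                     frames_erase=10, frames_osc=8, frames_activate=12):
    """Illustrative three-stage EPD driving sequence: erase, then a zero-mean
    high-frequency oscillation (to shake particles loose), then activation."""
    erase = np.full(frames_erase, -v_drive)           # push particles toward one side
    osc = v_drive * np.array([1, -1] * (frames_osc // 2), dtype=float)
    activate = np.full(frames_activate, v_drive)      # drive to the target color state
    v = np.concatenate([erase, osc, activate])
    t = np.arange(len(v)) * frame                     # one voltage value per frame
    return t, v

t, v = driving_waveform()
print(len(v), v[10:18].mean())   # oscillation segment averages to zero
```

The key property is that the oscillation stage is DC-balanced (its mean voltage is zero), so it agitates the particles without a net displacement before the activation stage takes over.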
Figure 1: Principle of the driving scheme and the red ghost image of three-color EPDs. (a) Schematic diagram of the three-color EPD structure; (b) traditional driving scheme diagram; (c) driving images of three-color EPD grayscale transition; (d) particle motion state diagram of the red ghost image.
Figure 2: Three-color EPD simulation model diagram and force analysis.
Figure 3: The simulated y-axis position curves of three-color EPD particle motion. (a) (i) the Y-axis position curve of the three colored particles; (ii,iii) the particle position maps at t = 0.25 s and t = 0.50 s under the traditional driving scheme. (b) (i) the Y-axis position curve of the three colored particles; (ii,iii) the particle position maps at t = 0.25 s and t = 0.50 s under the oscillation driving voltage proposed in this paper. (c) (i) the Y-axis position curve of the three colored particles; (ii,iii) the particle position maps at t = 0.25 s and t = 0.50 s under the oscillation driving voltage applied in advance.
Figure 4: The driving scheme and optimization results of this study. (a) The driving scheme proposed in this paper; (b) red saturation varying with parameters of the erasing stage; (c) the relationship between red saturation and activation period; (d) the relationship between brightness and activation period when a white-colored image was displayed.
Figure 5: Driving variation of three-color EPDs under two driving schemes. (a) The chromaticity changes in the traditional driving scheme [17,22,33]; (b) the chromaticity changes in the proposed driving scheme; (c) diagram of red saturation during color transition; (d) diagram of brightness during color transition; (e) the driving process diagram of the traditional driving scheme; (f) the driving process diagram of the proposed driving scheme.
16 pages, 5033 KiB  
Article
Sex Differences in Fat Distribution and Muscle Fat Infiltration in the Lower Extremity: A Retrospective Diverse-Ethnicity 7T MRI Study in a Research Institute Setting in the USA
by Talon Johnson, Jianzhong Su, Johnathan Andres, Anke Henning and Jimin Ren
Diagnostics 2024, 14(20), 2260; https://doi.org/10.3390/diagnostics14202260 - 10 Oct 2024
Viewed by 1263
Abstract
Background: Fat infiltration in skeletal muscle is related to declining muscle strength, whereas excess subcutaneous fat is implicated in the development of metabolic diseases. Methods: Using multi-slice axial T2-weighted (T2w) MR images, this retrospective study characterized muscle fat infiltration (MFI) and fat distribution in the lower extremity of 107 subjects (64M/43F, age 11–79 years) with diverse ethnicities (including White, Black, Latino, and Asian subjects). Results: MRI data analysis shows that MFI, evaluated by the relative intensities of the pixel histogram profile in the calf muscle, tends to increase with both age and BMI. However, statistical significance was found only for the age correlation in women (p < 0.002), and the BMI correlation in men (p = 0.04). Sex disparities were also seen in the fat distribution, which was assessed according to subcutaneous fat thickness (SFT) and the fibula bone marrow cross-sectional area (BMA). SFT tends to decrease with age in men (p < 0.01), whereas SFT tends to increase with BMI only in women (p < 0.01). In contrast, BMA tends to increase with age in women (p < 0.01) and with BMI in men (p = 0.04). Additionally, MFI is positively correlated with BMA but not with SFT, suggesting that compromised bone structure may contribute to fat infiltration in the surrounding skeletal muscle. Conclusions: The findings of this study highlight a sex factor affecting MFI and fat distribution, which may offer valuable insights into effective strategies to prevent and treat MFI in women versus men. Full article
(This article belongs to the Special Issue Imaging of Musculoskeletal Diseases: New Advances and Future Trends)
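The two MFI indexes used in the study — the mean and mode intensity of the muscle-ROI pixel histogram — can be computed directly from the pixel values. The intensity distributions below are synthetic stand-ins for the T2w images, and all numeric parameters are assumptions for illustration.

```python
import numpy as np

def histogram_indexes(pixels, bins=256, value_range=(0.0, 256.0)):
    """Mean and mode intensity of a muscle-ROI pixel histogram.
    Fat-infiltrated pixels are bright on T2w, so fat pulls the mean up
    while the mode tracks the dominant lean-muscle intensity."""
    hist, edges = np.histogram(pixels, bins=bins, range=value_range)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return pixels.mean(), centers[np.argmax(hist)]

rng = np.random.default_rng(2)
muscle = rng.normal(60, 10, 5000)    # synthetic lean-muscle intensities
fat = rng.normal(180, 15, 500)       # synthetic bright fat-infiltrated pixels
mean_i, mode_i = histogram_indexes(np.concatenate([muscle, fat]))
print(round(mean_i, 1), mode_i)      # mean is pulled up by fat; mode stays near 60
```

Clustering subjects on the (mean, mode) pair, as in Figure 5 of the study, then separates normal from progressively infiltrated muscle.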
Figure 1: Segmentation of fibula bone marrow (green), subcutaneous fat (yellow), and calf muscle ROI (red) in a T2w MR image. Fat infiltration in the calf muscle was evaluated by an analysis of the pixel histogram after correction of spatial intensity inhomogeneity based on the N4ITK algorithm.
Figure 2: Group averages of (A) subcutaneous fat thickness (SFT) and (B) bone marrow cross-sectional area (BMA) in men (n = 64) and women (n = 43). The symbol * denotes statistical significance.
Figure 3: SFT linear univariate correlation with age (A) and BMI (B), and multivariate correlation with age and BMI for the entire group (C), and the male (D) and female (E) subgroups.
Figure 4: BMA linear univariate correlation with age (A) and BMI (B), and multivariate correlation with age and BMI for the entire group (C), and the male (D) and female (E) subgroups.
Figure 5: Analysis of the pixel histogram for characterizing the severity of fat infiltration in the calf muscle. (A) Pixel intensity distribution profiles, showing mean pixel intensity (black dashed line) and mode pixel intensity (magenta dashed line). (B) Subject clustering based on the measurements of pixel mean intensity and mode intensity. Muscle fat infiltration (MFI) in 107 subjects clustered into four groups: normal (45/107, 26M/19F), mild MFI (45/107, 26M/19F), moderate MFI (15/107, 11M/4F), and severe MFI (2/107, 1M/1F). Note that the trend in MFI is reflected by the increase in mean intensity, mode intensity, and linewidth (profile dispersion).
Figure 6: Comparison of the averaged BMA (A,C) and SFT (B,D) for the three MFI groups (normal, mild, and moderate), categorized by MFI mean and mode indexes (A,B) and by the mean index alone (C,D).
Figure 7: Analysis of linear correlations between muscle fat infiltration (MFI) indexes and BMA and SFT. (A) BMA vs. mean pixel intensity; (B) BMA vs. mode pixel intensity; (C) SFT vs. mean pixel intensity; (D) SFT vs. mode pixel intensity.
Figure 8: Analysis of linear correlations between muscle fat infiltration (MFI) indexes and demographic factors. (A) Age vs. mean pixel intensity; (B) BMI vs. mean pixel intensity; (C) age vs. mode pixel intensity; (D) BMI vs. mode pixel intensity.
23 pages, 6251 KiB  
Article
Explainable Encoder–Prediction–Reconstruction Framework for the Prediction of Metasurface Absorption Spectra
by Yajie Ouyang, Yunhui Zeng and Xiaoxiang Liu
Nanomaterials 2024, 14(18), 1497; https://doi.org/10.3390/nano14181497 - 14 Sep 2024
Viewed by 1120
Abstract
The correlation between metasurface structures and their corresponding absorption spectra is inherently complex due to intricate physical interactions. Additionally, the reliance on Maxwell’s equations for simulating these relationships leads to extensive computational demands, significantly hindering rapid development in this area. Numerous researchers have employed artificial intelligence (AI) models to predict absorption spectra. However, these models often act as black boxes. Despite training high-performance models, it remains challenging to verify if they are fitting to rational patterns or merely guessing outcomes. To address these challenges, we introduce the Explainable Encoder–Prediction–Reconstruction (EEPR) framework, which separates the prediction process into feature extraction and spectra generation, facilitating a deeper understanding of the physical relationships between metasurface structures and spectra and unveiling the model’s operations at the feature level. Our model achieves a 66.23% reduction in average Mean Square Error (MSE), with an MSE of 2.843 × 10⁻⁴ compared to the average MSE of 8.421 × 10⁻⁴ for mainstream networks. Additionally, our model operates approximately 500,000 times faster than traditional simulations based on Maxwell’s equations, with a time of 3 × 10⁻³ seconds per sample, and demonstrates excellent generalization capabilities. By utilizing the EEPR framework, we achieve feature-level explainability and offer insights into the physical properties and their impact on metasurface structures, going beyond the pixel-level explanations provided by existing research. Additionally, we demonstrate the capability to adjust absorption by changing the metasurface at the feature level. These insights potentially empower designers to refine structures and enhance their trust in AI applications. Full article
(This article belongs to the Section Nanophotonics Materials and Devices)
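The headline MSE comparison is easy to reproduce: with the two average MSEs quoted in the abstract (interpreting the exponents as 10⁻⁴), the relative reduction follows from a one-line calculation. The spectra arrays below are synthetic illustrations, not the paper's data.

```python
import numpy as np

def mse(pred, truth):
    # Mean squared error between a predicted and a ground-truth spectrum.
    return np.mean((pred - truth) ** 2)

def reduction_pct(mse_new, mse_baseline):
    # Relative MSE reduction of a new model over a baseline, in percent.
    return 100.0 * (1.0 - mse_new / mse_baseline)

# Averages quoted in the abstract: EEPR 2.843e-4 vs. mainstream 8.421e-4,
# consistent with the reported ~66.23% reduction up to rounding.
print(round(reduction_pct(2.843e-4, 8.421e-4), 2))

# On synthetic spectra: small prediction noise gives a correspondingly small MSE.
rng = np.random.default_rng(3)
truth = rng.random(100)
pred = truth + rng.normal(0, 0.01, 100)
print(mse(pred, truth))
```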
Figure 1: The EEPR framework. (a) Flowchart of the EEPR framework. (b) Overview of explanation at the feature level, where M is the metasurface structure used as the input of the three networks, M^R is the metasurface structure obtained by reconstruction with the ED Network or the ER Network, S is the corresponding absorption spectrum predicted by the EP Network, and M^M is the modified metasurface structure obtained by adjusting the embedding vectors based on the original metasurface structure (structure for analysis) M^O.
Figure 2: Architecture of the EP Network. Conv: convolutional layer; BN: batch normalization layer; MaxPool: max pooling layer; AdaptiveAvgPool: adaptive average pooling layer; and ConvT: transposed convolutional layer.
Figure 3: Comparison between predicted and ground truth spectra.
Figure 4: The pixel-level explanation process.
Figure 5: The meaning of the features extracted from the model and their effect on the structure: (a,b) show the ten dimensions with the largest absolute SHAP value in the embedding vectors of Struct A and Struct B, respectively; (c,d) show the absolute SHAP value heatmaps and the effect of modifying a dimension in the embedding vectors on Struct A and Struct B, respectively.
Figure 6: Absorption spectra of the metasurface structure obtained by modifying the embedding vector. (a) The embedding vectors of the MIM structure are modified to obtain Struct C and Struct D, and simulations are performed to verify the predicted spectra. (b) The embedding vectors of the hybrid dielectric structure are modified to obtain Struct E and Struct F, and simulations are performed to verify the predicted spectra.
Figure 7: Predicted spectra and ground truth spectra of Struct D: (a,b) show the predicted spectra and ground truth spectra of UNet and ResNet, respectively.
Figure 8: Comparison of feature-level explanation and pixel-level explanation.
Figure 9: Adjusting absorption by changing the original struct (Struct A) at the feature level. (a) Modify the embedding vectors of the original struct (Struct A) to obtain Struct I1 and Struct I2; (b) absorption spectra of the original struct (Struct A), Struct I1, and Struct I2; (c,d) show the comparison of predicted spectra and ground truth spectra of the modified structures.
Figure 10: Adjusting absorption by changing the original struct (Struct B) at the feature level. (a) Modify the embedding vectors of the original struct (Struct B) to obtain Struct D1 and Struct D2; (b) absorption spectra of the original struct (Struct B), Struct D1, and Struct D2; (c,d) show the comparison of predicted spectra and ground truth spectra of the modified structures.
Full article ">Figure 11
<p>Pixel-level modifications on Struct A and Struct B.</p>
17 pages, 17092 KiB  
Article
Detection and Assessment of White Flowering Nectar Source Trees and Location of Bee Colonies in Rural and Suburban Environments Using Deep Learning
by Atanas Z. Atanasov, Boris I. Evstatiev, Asparuh I. Atanasov and Ivaylo S. Hristakov
Diversity 2024, 16(9), 578; https://doi.org/10.3390/d16090578 - 13 Sep 2024
Viewed by 706
Abstract
Environmental pollution with pesticides as a result of intensive agriculture harms the development of bee colonies. Bees are one of the most important pollinating insects on our planet. One of the ways to protect them is to relocate and build apiaries in populated areas. An important condition for the development of bee colonies is the rich species diversity of flowering plants and the size of the areas occupied by them. In this study, a methodology for detecting and distinguishing white flowering nectar source trees and counting bee colonies is developed and demonstrated, applicable in populated environments. It is based on UAV-obtained RGB imagery and two convolutional neural networks—a pixel-based one for identification of flowering areas and an object-based one for beehive identification, which achieved accuracies of 93.4% and 95.2%, respectively. Based on an experimental study near the village of Yuper (Bulgaria), the productive potential of black locust (Robinia pseudoacacia) areas in rural and suburban environments was determined. The obtained results showed that the identified blooming area corresponds to 3.654 m2, out of 89.725 m2 that were scanned with the drone, and the number of identified beehives was 149. The proposed methodology will facilitate beekeepers in choosing places for the placement of new apiaries and planning activities of an organizational nature. Full article
(This article belongs to the Special Issue Ecology and Diversity of Bees in Urban Environments)
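The pixel-based CNN in this workflow labels each pixel as blooming or not; converting the labelled pixels into ground area then requires only the ground sampling distance of the UAV imagery. A minimal sketch of that conversion (function and variable names are hypothetical, and the GSD value is an illustrative assumption, not taken from the paper):

```python
def blooming_area_m2(mask, gsd_m):
    """Ground area covered by blooming pixels (label 1), in square metres.

    mask:  2D list of 0/1 labels from a pixel-based classifier
    gsd_m: ground sampling distance in metres per pixel (assumed known)
    """
    blooming_pixels = sum(row.count(1) for row in mask)
    return blooming_pixels * gsd_m * gsd_m

# 4 blooming pixels at an assumed 0.5 m/pixel -> 4 * 0.25 m^2 = 1.0 m^2
example_mask = [
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
]
area = blooming_area_m2(example_mask, 0.5)
```

In practice the mask would come from the trained segmentation network and the GSD from the flight metadata.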
Figure 1
<p>Location of the experimental plot: (<b>a</b>) the village of Yuper; (<b>b</b>) the geographic location of the experimental area in the north-eastern part of Bulgaria.</p>
Full article ">Figure 2
<p>Summary of the proposed methodology for analysis of the honey production potential.</p>
Full article ">Figure 3
<p>Summary of the geo-referenced image in the Yuper region. The flight range of the bees is marked with yellow circles. The areas of <span class="html-italic">Robinia pseudoacacia</span> are marked in green.</p>
Full article ">Figure 4
<p>The merged images selected as reference data for recognizing blooming trees (marked in yellow).</p>
Full article ">Figure 5
<p>Training and validation loss of the DeepLabV3 CNN model for blooming areas identification.</p>
Full article ">Figure 6
<p>Image used as reference data for training the beehives recognition model (<b>a</b>) and closeup image of an area with the beehives (<b>b</b>). All beehives are marked with yellow rectangles.</p>
Full article ">Figure 7
<p>Training and validation loss of the Mask RCNN model for beehive counting.</p>
Full article ">Figure 8
<p>HQ map of the investigated area, generated using the UAV images. The yellow squares represent the locations of the UAV-obtained images.</p>
Full article ">Figure 9
<p>Examples of false positives during beehive identification and counting. The red marks represent the artificial objects, incorrectly identified as beehives.</p>
Full article ">Figure 10
<p>Identified beehives (marked in pink and green) in: (<b>a</b>) area 1; (<b>b</b>) area 2.</p>
Full article ">Figure 11
<p>Graphical results from the pixel-based identification of blooming trees (marked in blue).</p>
Full article ">Figure 12
<p>Control hive for monitoring the weight.</p>
12 pages, 3324 KiB  
Article
Detection and Tracking of Underwater Fish Using the Fair Multi-Object Tracking Model: A Comparative Analysis of YOLOv5s and DLA-34 Backbone Models
by Sang-Hyun Lee and Myeong-Hoon Oh
Appl. Sci. 2024, 14(16), 6888; https://doi.org/10.3390/app14166888 - 6 Aug 2024
Viewed by 1145
Abstract
Modern aquaculture utilizes computer vision technology to analyze underwater images of fish, contributing to optimized water quality and improved production efficiency. The purpose of this study is to efficiently perform underwater fish detection and tracking using multi-object tracking (MOT) technology. To achieve this, the FairMOT model was employed to simultaneously implement pixel-level object detection and re-identification (Re-ID) functions, comparing two backbone models: FairMOT+YOLOv5s and FairMOT+DLA-34. The study constructed a dataset targeting the popular black porgy in Korean aquaculture, using underwater video data from five different environments collected from the internet. During the training process, the FairMOT+YOLOv5s model rapidly reduced train loss and demonstrated stable performance. The FairMOT+DLA-34 model showed better results in ID tracking performance, with an accuracy of 44.1%, an IDF1 of 11.0%, an MOTP of 0.393, and an IDSW of 1. In contrast, the FairMOT+YOLOv5s model recorded an accuracy of 43.8%, an IDF1 of 14.6%, an MOTP of 0.400, and an IDSW of 10. The results of this study indicate that the FairMOT+YOLOv5s model demonstrated higher IDF1 and MOTP scores compared to the FairMOT+DLA-34 model, while the FairMOT+DLA-34 model showed superior performance in ID tracking accuracy and had fewer ID switches. Full article
(This article belongs to the Special Issue Integrating Artificial Intelligence in Renewable Energy Systems)
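The ID-based scores reported above follow the standard multi-object-tracking definitions: IDF1 is the harmonic mean of identity precision and recall, and an ID switch (IDSW) is counted whenever the tracker ID assigned to one ground-truth trajectory changes. A hedged sketch of both (names are illustrative, not from the FairMOT codebase):

```python
def idf1(idtp, idfp, idfn):
    """Identity F1 score from identity true positives, false positives,
    and false negatives."""
    return 2 * idtp / (2 * idtp + idfp + idfn)

def id_switches(assigned_ids):
    """Count ID switches along one ground-truth trajectory.

    assigned_ids: tracker IDs matched to the trajectory frame by frame
                  (None where the object was missed).
    """
    switches = 0
    last = None
    for tid in assigned_ids:
        if tid is not None:
            if last is not None and tid != last:
                switches += 1
            last = tid
    return switches
```

For example, a trajectory tracked as IDs [1, 1, None, 1, 2, 2, 1] contains two switches (1 to 2, then 2 back to 1).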
Figure 1
<p>Structure of DLA-34 backbone.</p>
Full article ">Figure 2
<p>Structure of YOLOv5s backbone.</p>
Full article ">Figure 3
<p>Structure of FairMOT network.</p>
Full article ">Figure 4
<p>Process of the FairMOT mixed model.</p>
Full article ">Figure 5
<p>Data extraction in five environments.</p>
Full article ">Figure 6
<p>Comparison of training loss between FairMOT models DLA-34 and YOLOv5s.</p>
Full article ">Figure 7
<p>Comparison of test results between FairMOT-DLA-34 and FairMOT-YOLOv5s.</p>
Full article ">Figure 8
<p>ID detection comparison for FairMOT-DLA-34 and FairMOT-YOLOv5s.</p>
18 pages, 2185 KiB  
Article
Evaluation of Optimization Algorithms for Measurement of Suspended Solids
by Daniela Lopez-Betancur, Efrén González-Ramírez, Carlos Guerrero-Mendez, Tonatiuh Saucedo-Anaya, Martín Montes Rivera, Edith Olmos-Trujillo and Salvador Gomez Jimenez
Water 2024, 16(13), 1761; https://doi.org/10.3390/w16131761 - 21 Jun 2024
Cited by 2 | Viewed by 1941
Abstract
Advances in convolutional neural networks (CNNs) provide novel and alternative solutions for water quality management. This paper evaluates state-of-the-art optimization strategies available in PyTorch to date using AlexNet, a simple yet powerful CNN model. We assessed twelve optimization algorithms: Adadelta, Adagrad, Adam, AdamW, Adamax, ASGD, LBFGS, NAdam, RAdam, RMSprop, Rprop, and SGD under default conditions. The AlexNet model, pre-trained and coupled with a Multiple Linear Regression (MLR) model, was used to estimate the quantity of black pixels (suspended solids) randomly distributed on a white background image, representing total suspended solids in liquid samples. Simulated images were used instead of real samples to maintain a controlled environment and eliminate variables that could introduce noise and optical aberrations, ensuring a more precise evaluation of the optimization algorithms. The performance of the CNN was evaluated using the accuracy, precision, recall, specificity, and F_Score metrics. Meanwhile, the MLR was evaluated with the coefficient of determination (R2) and the mean absolute and mean squared errors. The results indicate that the top five optimizers are Adagrad, Rprop, Adamax, SGD, and ASGD, with accuracy rates of 100% for each optimizer, and R2 values of 0.996, 0.959, 0.971, 0.966, and 0.966, respectively. In contrast, the three worst-performing optimizers were Adam, AdamW, and NAdam, with accuracy rates of 22.2%, 11.1%, and 11.1%, and R2 values of 0.000, 0.148, and 0.000, respectively. These findings demonstrate the significant impact of optimization algorithms on CNN performance and provide valuable insights for selecting suitable optimizers for water quality assessment, filling existing gaps in the literature. This motivates further research to test the best-performing optimizers on real data, validating the findings and demonstrating their practical applicability. Full article
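The pipeline above maps an image of randomly scattered black pixels to a suspended-solids estimate. The study does this with AlexNet features feeding an MLR model; as a far simpler stand-in, the sketch below counts thresholded pixels directly and fits a one-variable least-squares line from pixel count to TSS (the function names, the threshold, and the sample data are assumptions for illustration only):

```python
def count_black(img, threshold=128):
    """Count pixels darker than the threshold in an 8-bit grayscale image
    (2D list of intensity values)."""
    return sum(1 for row in img for v in row if v < threshold)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Invented calibration pairs: (black-pixel count, TSS reading)
counts = [0, 1, 2]
tss = [1, 3, 5]
intercept, slope = fit_line(counts, tss)
```

A multiple linear regression generalizes `fit_line` to several predictors; the single-variable version keeps the count-to-concentration idea visible.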
Figure 1
<p>Model performance sequence used.</p>
Full article ">Figure 2
<p>Classes used for training process (note: the gray frame around the image for the class with 0 black pixels is purely illustrative, framing the size and highlighting that it is a completely white image; the images used for training do not include it).</p>
Full article ">Figure 3
<p>Additional classes used to validate the optimization algorithms.</p>
Full article ">Figure 4
<p>Confusion matrix for the best optimization algorithms.</p>
Full article ">Figure 5
<p>Optimization algorithms for classification task using accuracy metric.</p>
Full article ">Figure 6
<p>Optimization algorithms for classification task using coefficient of determination (<span class="html-italic">R</span><sup>2</sup>).</p>
17 pages, 5773 KiB  
Article
Colorimetric Evaluation of a Reintegration via Spectral Imaging—Case Study: Nasrid Tiling Panel from the Alhambra of Granada (Spain)
by Miguel Ángel Martínez-Domingo, Ana Belén López-Baldomero, Maria Tejada-Casado, Manuel Melgosa and Francisco José Collado-Montero
Sensors 2024, 24(12), 3872; https://doi.org/10.3390/s24123872 - 14 Jun 2024
Viewed by 780
Abstract
Color reintegration is a restoration treatment that involves applying paint or colored plaster to an object of cultural heritage to facilitate its perception and understanding. This study examines the impact of lighting on the visual appearance of one such restored piece: a tiled skirting panel from the Nasrid period (1238–1492), permanently on display at the Museum of the Alhambra (Spain). Spectral images in the range of 380–1080 nm were obtained using a hyperspectral image scanner. CIELAB and CIEDE2000 color coordinates at each pixel were computed assuming the CIE 1931 standard colorimetric observer and considering ten relevant illuminants proposed by the International Commission on Illumination (CIE): D65 plus nine white LEDs. Four main hues (blue, green, yellow, and black) can be distinguished in the original and reintegrated areas. For each hue, mean color difference from the mean (MCDM), CIEDE2000 average distances, volumes, and overlapping volumes were computed in the CIELAB space by comparing the original and the reintegrated zones. The study reveals noticeable average color differences between the original and reintegrated areas within tiles: 6.0 and 4.7 CIEDE2000 units for the yellow and blue tiles (with MCDM values of 3.7 and 4.5 and 5.8 and 7.2, respectively), and 16.6 and 17.8 CIEDE2000 units for the black and green tiles (with MCDM values of 13.2 and 12.2 and 10.9 and 11.3, respectively). The overlapping volume of CIELAB clouds of points corresponding to the original and reintegrated areas ranges from 35% to 50%, indicating that these areas would be perceived as different by observers with normal color vision for all four tiles. However, average color differences between the original and reintegrated areas changed with the tested illuminants by less than 2.6 CIEDE2000 units. 
Our current methodology provides useful quantitative results for evaluation of the color appearance of a reintegrated area under different light sources, helping curators and museum professionals to choose optimal lighting. Full article
(This article belongs to the Section Optical Sensors)
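MCDM (mean color difference from the mean) is the average distance of a cloud of CIELAB points from its centroid. The sketch below uses the Euclidean CIE76 distance for brevity; the paper's values are computed with CIEDE2000, which weights lightness, chroma, and hue differences differently, so the numbers are not directly comparable:

```python
import math

def mcdm(lab_points):
    """Mean Color Difference from the Mean over a list of (L*, a*, b*) tuples,
    using the Euclidean (CIE76) color difference."""
    n = len(lab_points)
    centroid = tuple(sum(p[i] for p in lab_points) / n for i in range(3))
    return sum(math.dist(p, centroid) for p in lab_points) / n
```

A larger MCDM means the pixels of an area are more spread out in color space, as illustrated by the two clouds in Figure 5.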
Figure 1
<p>(<b>Right</b>) The complete art piece. (<b>Left</b>) Magnification of the studied area of the piece.</p>
Full article ">Figure 2
<p>Spectral power distributions of the nine CIE LED illuminants and the standard CIE D65 illuminant.</p>
Full article ">Figure 3
<p>False RGB renderings of the black, blue, yellow, and green captured samples. The original and reintegrated areas are highlighted in dashed white and continuous red lines, respectively.</p>
Full article ">Figure 4
<p>Workflow followed in this study.</p>
Full article ">Figure 5
<p>Scheme of two theoretical <span class="html-italic">L*a*b*</span> clouds with identical mean color and volume, but different <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>C</mi> <mi>D</mi> <mi>M</mi> </mrow> </semantics></math> values. Specifically, in this example the <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>C</mi> <mi>D</mi> <mi>M</mi> </mrow> </semantics></math> for cloud 1 is lower than the <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>C</mi> <mi>D</mi> <mi>M</mi> </mrow> </semantics></math> for cloud 2.</p>
Full article ">Figure 6
<p>Average reflectance spectra of the original (continuous lines) and reintegrated (dashed lines) areas for the four tiles (yellow, blue, black, and green).</p>
Full article ">Figure 7
<p>Average CIEDE2000 color differences (original vs. reintegrated areas) for the four tiles (yellow, blue, black, and green) under the ten CIE illuminants.</p>
Full article ">Figure 8
<p>Percentage of <span class="html-italic">L*a*b*</span> overlapping volume between the original and reintegrated areas of the yellow, blue, black, and green tiles (distinguished by the colors of the bars) under the ten CIE illuminants.</p>
Full article ">Figure 9
<p>2D projections from the <span class="html-italic">L*a*b*</span> color space representing the color gamuts of the original (light) and reintegrated (dark) areas for the yellow (<b>upper row</b>) and blue (<b>lower row</b>) tiles, using the LED-RGB1 and LED-V1 illuminants, respectively.</p>
18 pages, 4781 KiB  
Article
Fiber-Optic System for Monitoring Pit Collapse Prevention
by Yelena Neshina, Ali Mekhtiyev, Valeriy Kalytka, Nurbol Kaliaskarov, Olga Galtseva and Ilyas Kazambayev
Appl. Sci. 2024, 14(11), 4678; https://doi.org/10.3390/app14114678 - 29 May 2024
Cited by 1 | Viewed by 1174
Abstract
Currently, there are many enterprises involved in the extraction and processing of primary raw materials. The danger of working in this industry lies in the formation of cracks in the rocks of pit side slopes, which can lead to collapse. This article discusses the existing systems for pit collapse monitoring and prevention. The most promising are systems based on fiber-optic sensors. However, their use is associated with some difficulties due to high costs, low noise immunity, and, in some cases, the need for additional equipment to improve the reliability of measurements. Building on the experience of previous developments, a completely new method of processing the data from a fiber-optic sensor is proposed that simplifies the design and reduces the cost of the device. The system uses artificial intelligence, which improves the data processing. The theoretical part develops the foundations and analyzes the nonlinear properties of a physical and mathematical model of the optical processes associated with the propagation of an electromagnetic wave in a fiber-optic material. The results of experimental and theoretical applied research, which are important for the development of fiber-optic systems for pit collapse monitoring and prevention, are presented. The dependences of the optical losses and of the number of pixels on the displacement were obtained. The accuracy of the method corresponds to that of the device against which it is calibrated and is 0.001 mm. The developed hardware-software complex is able to track the rate of change of the derivative of the light-wave intensity over time, as well as changes in the shape of the spot and the transition of pixels from white to black. Full article
(This article belongs to the Section Optics and Lasers)
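The hardware-software complex described above watches the sensor spot for pixels flipping from white to black between consecutive frames. Assuming 8-bit grayscale frames and a simple fixed threshold (both illustrative choices, not the device's actual processing), the per-frame comparison can be sketched as:

```python
def white_to_black_transitions(prev_frame, cur_frame, threshold=128):
    """Count pixels that were white (>= threshold) in the previous frame
    and are black (< threshold) in the current one."""
    flips = 0
    for prev_row, cur_row in zip(prev_frame, cur_frame):
        for p, c in zip(prev_row, cur_row):
            if p >= threshold and c < threshold:
                flips += 1
    return flips
```

Tracking this count frame by frame gives a discrete proxy for the rate of change of the spot, which is then related to displacement via the calibration curve.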
Figure 1
<p>Laboratory bench with the use of the developed HSC.</p>
Full article ">Figure 2
<p>Laboratory bench with the use of a reflectometer.</p>
Full article ">Figure 3
<p>Device for measuring displacements (minimum one division value).</p>
Full article ">Figure 4
<p>Graph obtained from the developed software, where 1 is the turnbuckle, 2 is the optical connector, 3 is the spring, 4 is the tension measurement instrument, 5 is the tension stud, 6 is the fiber-optic shift sensor, 7 is the optical splitter, 8 is the optical power measurement instrument, 9 is the signal generator, 10 is the data processing device, 11 is the USB cable and 12 is the personal computer.</p>
Full article ">Figure 5
<p>Structural diagram of the monitoring system.</p>
Full article ">Figure 6
<p>The program’s appearance.</p>
Full article ">Figure 7
<p>Plot obtained from the developed software.</p>
Full article ">Figure 8
<p>Changing the number of spots with displacement of the optical fiber ends: (<b>a</b>) position 1; (<b>b</b>) position 2.</p>
Full article ">Figure 9
<p>Number of pixels dependence on displacement.</p>
Full article ">Figure 10
<p>Optical loss dependence on displacements.</p>
Full article ">Figure 11
<p>Implementation of a monitoring system in production.</p>
13 pages, 2904 KiB  
Article
The Effect of Water during the Compaction Process on Surface Characteristics of HMA Pavement
by Bingquan Dai, Lei Mao, Pan Pan, Xiaodi Hu and Ning Wang
Materials 2024, 17(9), 2146; https://doi.org/10.3390/ma17092146 - 3 May 2024
Viewed by 773
Abstract
During the compaction process of HMA pavement, it is common to spray cold water on the wheel of a road roller to prevent the mixture from sticking to the wheel, which might deteriorate the bonding strength between the asphalt binder and aggregate, and consequently lead to surface polishing of the pavement. This paper aims to determine whether the water used during the compaction process affects the surface performance of HMA pavement. In this study, the black pixel ratio and mass loss ratio were used to evaluate the water effect on the surface performance of asphalt pavement, considering the water consumption, molding temperature, and long-term ageing process. The test results indicated that the water used during the compaction process would increase the risk of surface polishing of HMA pavement. This adverse effect became more significant if the HMA samples were prepared using greater water consumption, a higher molding temperature, and a long-term ageing process. Moreover, a correlation exists between the black pixel ratio and the mass loss ratio, as demonstrated by the experimental results of this study. It is recommended that further research concentrate on the influencing mechanism and the treatment strategy for the adverse effect caused by the water used during the compaction process. The use of more types of asphalt binders, aggregates, and methodologies is also recommended in further studies. Full article
(This article belongs to the Section Construction and Building Materials)
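The correlation reported between the black pixel ratio and the mass loss ratio (Figure 10) can be quantified with the sample Pearson coefficient. A self-contained sketch (the data points in the test are invented placeholders, not measurements from the paper):

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den
```

Values near +1 indicate that samples with a higher black pixel ratio also tend to show a higher mass loss ratio.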
Figure 1
<p>Aggregate gradation.</p>
Full article ">Figure 2
<p>Image processing diagram.</p>
Full article ">Figure 3
<p>Surface images of the SGC samples molded at 120 °C with different levels of water consumption.</p>
Full article ">Figure 4
<p>The effect of water consumption on the black pixel ratio of HMA samples molded at 120 °C during the wet-wheel wearing test.</p>
Full article ">Figure 5
<p>The effect of molding temperature on the black pixel ratio of the HMA samples.</p>
Full article ">Figure 6
<p>The effect of long-term ageing on the black pixel ratio of HMA samples molded at 140 °C.</p>
Full article ">Figure 7
<p>The effect of water consumption on the mass loss ratio of HMA samples molded at 120 °C.</p>
Full article ">Figure 8
<p>The effect of molding temperature on the mass loss ratio of the HMA samples.</p>
Full article ">Figure 9
<p>The effect of long-term ageing on the mass loss ratio of HMA samples molded at 120 °C.</p>
Full article ">Figure 10
<p>Correlation analysis of black pixel ratio and mass loss ratio for the tested HMA samples.</p>
20 pages, 4630 KiB  
Article
U-Net with Coordinate Attention and VGGNet: A Grape Image Segmentation Algorithm Based on Fusion Pyramid Pooling and the Dual-Attention Mechanism
by Xiaomei Yi, Yue Zhou, Peng Wu, Guoying Wang, Lufeng Mo, Musenge Chola, Xinyun Fu and Pengxiang Qian
Agronomy 2024, 14(5), 925; https://doi.org/10.3390/agronomy14050925 - 28 Apr 2024
Viewed by 1333
Abstract
Currently, the classification of grapevine black rot disease relies on assessing the percentage of affected spots in the total area, with a primary focus on accurately segmenting these spots in images. Particularly challenging are cases in which lesion areas are small and boundaries are ill-defined, hampering precise segmentation. In our study, we introduce an enhanced U-Net network tailored for segmenting black rot spots on grape leaves. Leveraging VGG as the U-Net’s backbone, we strategically position the atrous spatial pyramid pooling (ASPP) module at the base of the U-Net to serve as a link between the encoder and decoder. Additionally, channel and spatial dual-attention modules are integrated into the decoder, alongside a feature pyramid network aimed at fusing diverse levels of feature maps to enhance the segmentation of diseased regions. Our model outperforms traditional plant disease semantic segmentation approaches like DeeplabV3+, U-Net, and PSPNet, achieving impressive pixel accuracy (PA) and mean intersection over union (MIoU) scores of 94.33% and 91.09%, respectively. Demonstrating strong performance across various levels of spot segmentation, our method showcases its efficacy in enhancing the segmentation accuracy of black rot spots on grapevines. Full article
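The PA and MIoU figures quoted above are standard semantic-segmentation metrics, both computable from a per-class confusion matrix. A minimal sketch (shown on a binary example; the paper's task distinguishes background and lesion pixels):

```python
def pixel_accuracy(cm):
    """Fraction of correctly classified pixels, where cm[i][j] is the
    number of pixels of true class i predicted as class j."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def mean_iou(cm):
    """Mean intersection over union across all classes."""
    n = len(cm)
    ious = []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[j][i] for j in range(n)) - tp
        fn = sum(cm[i]) - tp
        ious.append(tp / (tp + fp + fn))
    return sum(ious) / n
```

MIoU is the stricter of the two for small lesions, since a class's false positives and false negatives both shrink its IoU even when overall pixel accuracy stays high.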
Figure 1
<p>Image annotation status: (<b>a</b>) original image; (<b>b</b>) image marking results. Black represents the background, and red represents the lesions.</p>
Full article ">Figure 2
<p>Data augmentation: (<b>a</b>) original image; (<b>b</b>) flipped and added noise; (<b>c</b>) flipped and reduced brightness; (<b>d</b>) added noise and reduced brightness; (<b>e</b>) flipped and shifted and reduced brightness; (<b>f</b>) flipped and shifted.</p>
Full article ">Figure 3
<p>U-Net structure.</p>
Full article ">Figure 4
<p>CVU-Net network structure. The orange color block represents the location in which the ASPP module is added, and the yellow color block represents the location in which the attention mechanism is added.</p>
Full article ">Figure 5
<p>Backbone feature extraction network structure: (<b>a</b>) backbone network feature extraction model; (<b>b</b>) backbone feature extraction partial implementation approach.</p>
Full article ">Figure 6
<p>SENet structure.</p>
Full article ">Figure 7
<p>CA structure.</p>
Full article ">Figure 8
<p>Enhancement of the structure of the part of the feature extraction network. (<b>a</b>) Enhanced feature extraction partial model; (<b>b</b>) enhancement of the feature extraction component implementation approach.</p>
Full article ">Figure 9
<p>ASPP structure.</p>
Full article ">Figure 10
<p>Model average intersection ratio versus learning rate and number of iterations.</p>
Full article ">Figure 11
<p>Segmentation effect of different algorithms: (<b>a</b>) original image; (<b>b</b>) ground truth; (<b>c</b>) U-Net; (<b>d</b>) PSPNet; (<b>e</b>) DeeplabV3+; (<b>f</b>) CVU-Net. The green boxes represent areas where there is a large difference between the different methods.</p>
Full article ">Figure 12
<p>Comparison of segmentation accuracy of each model for graded lesions.</p>
19 pages, 4397 KiB  
Article
Sh-DeepLabv3+: An Improved Semantic Segmentation Lightweight Network for Corn Straw Cover Form Plot Classification
by Yueyong Wang, Xuebing Gao, Yu Sun, Yuanyuan Liu, Libin Wang and Mengqi Liu
Agriculture 2024, 14(4), 628; https://doi.org/10.3390/agriculture14040628 - 18 Apr 2024
Cited by 3 | Viewed by 1455
Abstract
Straw return is one of the main methods for protecting black soil. Efficient and accurate straw return detection is important for the sustainability of conservation tillage. In this study, a rapid straw return detection method is proposed for large areas. An optimized Sh-DeepLabv3+ model based on the aforementioned detection method and the characteristics of straw return in Jilin Province was then used to classify plots into different straw return cover types. The model used Mobilenetv2 as the backbone network to reduce the number of model parameters, and the channel-wise feature pyramid module based on channel attention (CA-CFP) and a low-level feature fusion module (LLFF) were used to enhance the segmentation of the plot details. In addition, a composite loss function was used to solve the problem of class imbalance in the dataset. The results show that the extraction accuracy is optimal when a 2048 × 2048-pixel scale image is used as the model input. The total parameters of the improved model are 3.79 M, and the mean intersection over union (MIoU) is 96.22%, which is better than other comparative models. After conducting a calculation of the form–grade mapping relationship, the error value of the area prediction was found to be less than 8%. The results show that the proposed rapid straw return detection method based on Sh-DeepLabv3+ can provide greater support for straw return detection. Full article
(This article belongs to the Special Issue Smart Mechanization and Automation in Agriculture)
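The abstract does not specify the composite loss term by term; a common recipe for countering class imbalance in segmentation, shown here purely as an assumed illustration, blends binary cross-entropy with Dice loss:

```python
import math

def composite_loss(probs, labels, alpha=0.5, eps=1e-7):
    """Blend of binary cross-entropy and Dice loss, weighted by alpha.

    probs:  predicted foreground probabilities per pixel
    labels: ground-truth labels (0 = background, 1 = foreground)
    """
    n = len(probs)
    bce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
               for p, y in zip(probs, labels)) / n
    inter = sum(p * y for p, y in zip(probs, labels))
    dice = 1 - (2 * inter + eps) / (sum(probs) + sum(labels) + eps)
    return alpha * bce + (1 - alpha) * dice
```

The Dice term depends on the overlap with the rare class rather than on per-pixel counts, which is why such blends help when one straw-cover class dominates the dataset.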
Figure 1
<p>Field research images.</p>
Full article ">Figure 2
<p>Model training and image detection process.</p>
Full article ">Figure 3
<p>Diagram of the Sh-DeepLabv3+ network model’s structure.</p>
Full article ">Figure 4
<p>(<b>a</b>) Comparison of the different models’ accuracy. (<b>b</b>) Accuracy of the various straw cover forms on the test set.</p>
Full article ">Figure 5
<p>Comparison of the recognition and segmentation effects of the different models.</p>
Full article ">Figure 5 Cont.
<p>Comparison of the recognition and segmentation effects of the different models.</p>
Full article ">Figure 6
<p>Region 1 prediction process diagram. (<b>a</b>) Stitched image; (<b>b</b>) straw cover form prediction image; (<b>c</b>) straw crush form plot extraction; (<b>d</b>) multi-threshold segmentation result; and (<b>i</b>,<b>ii</b>) multi-threshold segmentation detail map.</p>
Full article ">Figure 7
<p>Sh-DeepLabv3+-based confusion matrix for the four regions.</p>