 
 
Search Results (195)

Search Parameters: Keywords = GRNN

21 pages, 7893 KiB  
Article
Meteorological Visibility Estimation Using Landmark Object Extraction and the ANN Method
by Wai-Lun Lo, Kwok-Wai Wong, Richard Tai-Chiu Hsung, Henry Shu-Hung Chung and Hong Fu
Sensors 2025, 25(3), 951; https://doi.org/10.3390/s25030951 - 5 Feb 2025
Viewed by 453
Abstract
Visibility can be interpreted as the largest distance at which an object can be recognized or detected in a bright environment, and it serves as an environmental indicator for weather conditions and air pollution. The accuracy of the classical approach to visibility calculation, in which meteorological laws and image features extracted from digital images are used, depends on the quality of the image and its noise disturbances. Therefore, artificial intelligence (AI) and digital image approaches have been proposed for visibility estimation in the past. Image features for the whole digital image are generated by pre-trained convolutional neural networks, and an Artificial Neural Network (ANN) is designed to correlate the image features with visibilities. Instead of using the information of whole digital images, past research proposed identifying effective subregions from which image features are generated. A generalized regression neural network (GRNN) was designed to correlate the image features with the visibilities. Past research showed that this method is more accurate than the classical approach using handcrafted features. However, the selection of effective subregions of digital images is not fully automated and relies on manual selection by expert judgment. In this paper, we propose an automatic effective-subregion selection method using landmark object (LMO) extraction techniques. Image features are generated from these LMO subregions, and an ANN is designed to approximate the mapping between the LMO regions' feature values and visibility values. The experimental results show that this approach can minimize the redundant information for ANN training and improve the accuracy of visibility estimation compared to the single-image approach.
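Several of the results in this listing hinge on the generalized regression neural network (GRNN), which is equivalent to Nadaraya–Watson kernel regression: the prediction is a Gaussian-kernel-weighted average of stored training targets, governed by a single smoothing parameter. Below is a minimal Python sketch of a GRNN mapping image feature vectors to visibility values; the CNN feature extraction step is omitted, and `features` and `visibility_km` are hypothetical placeholder data, not the paper's.

```python
import numpy as np

class GRNN:
    """Minimal generalized regression neural network (Specht-style):
    Nadaraya-Watson kernel regression with a Gaussian kernel."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing parameter (kernel bandwidth)

    def fit(self, X, y):
        # A GRNN is a lazy learner: "training" just stores the samples.
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        # Squared Euclidean distance from each query to each stored pattern.
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))  # pattern-layer activations
        # Output = kernel-weighted average of the stored targets.
        return (w @ self.y) / np.clip(w.sum(1), 1e-12, None)

# Hypothetical stand-in for CNN features of landmark-object subregions.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))      # 200 images, 8 features each
visibility_km = 25 + 10 * features[:, 0] + rng.normal(0, 1.0, 200)

model = GRNN(sigma=0.8).fit(features[:150], visibility_km[:150])
print(model.predict(features[150:155]))   # estimated visibility (km)
```

The only tunable quantity is `sigma`, which is why several papers below pair a GRNN with a global optimizer (GA, PSO) that searches for it.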
Figures:
Figure 1. (a) Illustration of the visibility database systems. (b) Proposed system structure.
Figure 2. Illustration of the variation of clearness with visibility. The properties of the HOG and the intensity of the subregion change significantly when visibility decreases.
Figure 3. Distribution of data samples and the Biral SWS-100 visibility meter.
Figure 4. (a) Original image. (b) Edge image. (c) Located static regions.
Figure 5. (a) Mean and variance for different subregions of the processed image, highlighting the corresponding effective subregions (red bounding boxes) for different visibility values: (i) 0–10 km, (ii) 10–20 km, (iii) 20–30 km, (iv) 30–40 km, and (v) 40–50 km. (b) Variation of edge intensity with visibility.
Figure 6. Estimated visibilities obtained by applying the ResNet-18, ResNet-50, ResNet-101, EfficientNet-B0, EfficientNet-B7, ViT-B16, and ViT-B32 architectures with both the proposed method and the single-image method, and the CLIP model with the single-image method.
Figure 7. Examples of images with estimated and actual ground-truth (bracketed) values: (a) 7.42 (7.89); (b) 17.01 (17.75); (c) 26.38 (26.83); (d) 36.96 (36.91); (e) 50.00 (49.87).
20 pages, 7029 KiB  
Article
Three-Dimensional Reconstruction, Phenotypic Traits Extraction, and Yield Estimation of Shiitake Mushrooms Based on Structure from Motion and Multi-View Stereo
by Xingmei Xu, Jiayuan Li, Jing Zhou, Puyu Feng, Helong Yu and Yuntao Ma
Agriculture 2025, 15(3), 298; https://doi.org/10.3390/agriculture15030298 - 30 Jan 2025
Viewed by 569
Abstract
Phenotypic traits of fungi and their automated extraction are crucial for evaluating genetic diversity, breeding new varieties, and estimating yield. However, research on the high-throughput, rapid, and non-destructive extraction of fungal phenotypic traits using 3D point clouds remains limited. In this study, a smartphone was used to capture multi-view images of shiitake mushrooms (Lentinula edodes) from three different heights and angles, and the YOLOv8x model was employed to segment the primary image regions. The segmented images were reconstructed in 3D using Structure from Motion (SfM) and Multi-View Stereo (MVS). To automatically segment individual mushroom instances, we developed a CP-PointNet++ network integrated with clustering methods, achieving an overall accuracy (OA) of 97.45% in segmentation. The computed phenotypic traits correlated strongly with manual measurements, yielding R2 > 0.8 and nRMSE < 0.09 for the pileus transverse and longitudinal diameters, R2 = 0.53 and RMSE = 3.26 mm for pileus height, R2 = 0.79 and nRMSE = 0.12 for stipe diameter, and R2 = 0.65 and RMSE = 4.98 mm for stipe height. Using these parameters, yield estimation was performed with PLSR, SVR, RF, and GRNN machine learning models, with GRNN demonstrating superior performance (R2 = 0.91). This approach is also adaptable for extracting phenotypic traits of other fungi, providing valuable support for fungal breeding initiatives.
(This article belongs to the Section Digital Agriculture)
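To illustrate the final yield estimation step described in the abstract above, the sketch below compares the four regressor families named there (PLSR, SVR, RF, GRNN) on a synthetic stand-in for the extracted phenotypic traits. The real models were trained on traits measured from the 3D point clouds; the data, hyperparameters, and GRNN bandwidth here are hypothetical assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def grnn_predict(Xtr, ytr, Xte, sigma=0.5):
    """GRNN prediction = Gaussian-kernel-weighted average of training targets."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ ytr) / np.clip(w.sum(1), 1e-12, None)

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 5))       # 5 phenotypic traits (hypothetical)
y = 30 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 1.0, 120)  # yield (g)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {"PLSR": PLSRegression(n_components=3),
          "SVR": SVR(C=10.0),
          "RF": RandomForestRegressor(random_state=0)}
for name, m in models.items():
    print(name, r2_score(yte, m.fit(Xtr, ytr).predict(Xte).ravel()))
print("GRNN", r2_score(yte, grnn_predict(Xtr, ytr, Xte)))
```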
Figures:
Figure 1. Overall workflow for 3D reconstruction, point cloud segmentation, phenotypic calculation, and yield prediction of shiitake mushrooms. A novel CP-PointNet++ point cloud segmentation network was proposed based on CBAM and PConv.
Figure 2. Data collection and annotation. (a) Illustration of the data collection process. (b) Collected RGB images. (c) Example of image annotation.
Figure 3. 3D reconstruction and data preprocessing process: (a) extraction and matching of shiitake mushroom feature points; (b) SfM: structure from motion; (c) MVS: multi-view stereo; (d) data preprocessing.
Figure 4. CP-PointNet++ segmentation network structure. The CBAM module was added to the MLP layers to enhance the model's feature extraction ability, and the standard convolutions in the MLP were replaced with PConv.
Figure 5. Schematic diagram of the CBAM module.
Figure 6. Schematic diagram of the PConv module.
Figure 7. (a) Pileus point cloud; (b) shiitake mushroom OBB bounding box; (c) stipe point cloud; (d) pileus transverse and longitudinal diameters; (e) pileus height; (f) stipe diameter; (g) stipe height.
Figure 8. Segmentation results for a portion of the shiitake mushroom spawn point cloud. The first row shows the spawn point cloud; the second row presents the point cloud data segmented by CP-PointNet++, where pink, blue, and gray represent the pileus, stipe, and spawn categories, respectively; the third row shows multiple spawn instances output by the clustering algorithm.
Figure 9. Comparison of the calculated phenotypic parameters and measured values of shiitake mushrooms: (a) pileus transverse diameter; (b) pileus longitudinal diameter; (c) pileus thickness; (d) stipe diameter; (e) stipe height.
Figure 10. Analysis of phenotypic calculation errors. Histograms of errors for the sorted pileus transverse diameter (a), pileus longitudinal diameter (b), pileus thickness (c), stipe diameter (d), and stipe height (e), and the incomplete shiitake mushroom point cloud (f).
Figure 11. Some deformed shiitake mushrooms. (a–c) Deformed mushrooms whose transverse diameter is difficult to measure manually. (d–f) Deformed mushrooms whose longitudinal diameter is difficult to measure manually.
Figure 12. Correlation of yield estimation results for each model: (a) PLSR; (b) RF; (c) SVR; (d) GRNN.
19 pages, 5298 KiB  
Article
Predictive Model of Granular Fertilizer Spreading Deposition Distribution Based on GA-GRNN Neural Network
by Lilian Liu, Guobin Wang, Yubin Lan, Xinyu Xue, Suming Ding, Huizheng Wang and Cancan Song
Drones 2025, 9(1), 16; https://doi.org/10.3390/drones9010016 - 27 Dec 2024
Viewed by 573
Abstract
In this paper, we investigate the particle deposition distribution characteristics of granular fertilizer spreading, establish a relationship model between operational parameters and particle deposition distribution, and design an unmanned aerial vehicle (UAV) fertilizer particle deposition prediction system based on neural network decision making, which provides a decision-making basis for variable fertilizer application under multifactorial interactions. Particle deposition distribution data under different operating parameters were obtained by EDEM simulation and data superposition, and a generalized regression neural network (GRNN) optimized by a genetic algorithm (GA) was used to establish the particle deposition prediction model, which was validated by bench tests. The results show that the prediction accuracy and training effect of the GA-GRNN model are better than those of the plain GRNN, with a coefficient of determination of 0.839, and that the GA-GRNN model's predictions of the effective amplitude of the deposition amount are closer to the actual data and thus more accurate. The bench-scale validation test shows that the simulation is basically consistent with the actually measured deposition amount, and the deposition curve is normally distributed with a lateral error of about 3%. The results validate the reliability of the data superposition method for particle deposition distribution and the feasibility of the GA-GRNN model for multifactor prediction, providing a theoretical basis and practical guidance for precision fertilizer application with agricultural UAVs.
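A GA-GRNN of the kind described above typically means a genetic algorithm searching for the GRNN smoothing parameter that minimizes validation error. The sketch below shows that idea with a deliberately tiny GA (truncation selection plus multiplicative mutation); the operating-parameter dataset, GA settings, and one-parameter search space are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def grnn_predict(Xtr, ytr, Xte, sigma):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ ytr) / np.clip(w.sum(1), 1e-12, None)

def fitness(sigma, Xtr, ytr, Xval, yval):
    # Negative validation RMSE: larger fitness means a better sigma.
    pred = grnn_predict(Xtr, ytr, Xval, sigma)
    return -np.sqrt(np.mean((pred - yval) ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(size=(160, 4))    # operating parameters (hypothetical)
y = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(0, 0.05, 160)  # deposition
Xtr, ytr, Xval, yval = X[:120], y[:120], X[120:], y[120:]

pop = rng.uniform(0.05, 2.0, size=20)          # initial sigma population
for generation in range(30):
    fit = np.array([fitness(s, Xtr, ytr, Xval, yval) for s in pop])
    parents = pop[np.argsort(fit)[-10:]]       # selection: keep best half
    children = rng.choice(parents, 10) * rng.normal(1.0, 0.1, 10)  # mutate
    pop = np.concatenate([parents, np.clip(children, 0.01, 5.0)])

best = pop[np.argmax([fitness(s, Xtr, ytr, Xval, yval) for s in pop])]
print("best sigma:", best)
```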
Figures:
Figure 1. DJI T30 spreader. (a) Physical drawing of the spreader; (b) three-dimensional model drawing of the spreader.
Figure 2. Schematic diagram of the simulation process.
Figure 3. UAV particle deposition prediction software interface. 1. Data import; 2. sedimentation data visualization; 3. parameter adjustment; 4. sedimentation data after stacking; 5. sedimentation curves.
Figure 4. Schematic diagram of superposition.
Figure 5. Schematic diagram of the static test setup. 1. Material box; 2. support frame; 3. spreader; 4. centrifugal disk; 5. DC motor; 6. 24 V battery.
Figure 6. Test schematic. Dashed arrows indicate the direction of advance of the drone.
Figure 7. GRNN prediction model topology diagram.
Figure 8. GA-GRNN flowchart. The numbers in the figure represent the steps carried out by the network during a GA-GRNN run.
Figure 9. Data prediction page. 1. Data import; 2. individual output result visualization; 3. iteration graph; 4. overall output result visualization; 5. prediction result visualization.
Figure 10. Iteration graph of the average fitness of the GA-GRNN neural network.
Figure 11. Comparison of predicted particle deposition values for (a) GRNN and (b) GA-GRNN; in each panel, a–d show the predictions for groups 1–4, 5–8, 9–12, and 13–16, respectively.
Figure 12. Comparison of the simulation test and actual test: (a–d) comparison of GA-GRNN predictions with bench test results for group a–d conditions, respectively.
20 pages, 6292 KiB  
Article
Comparative Analysis of Supervised Learning Techniques for Forecasting PV Current in South Africa
by Ely Ondo Ekogha and Pius A. Owolawi
Forecasting 2025, 7(1), 1; https://doi.org/10.3390/forecast7010001 - 26 Dec 2024
Viewed by 798
Abstract
The fluctuations in solar irradiance and temperature throughout the year require an accurate methodology for forecasting the generated current of a PV system based on its specifications. The optimal technique must effectively manage rapid weather fluctuations while maintaining high accuracy in forecasting the performance of a PV panel. This work presents a comparative examination of supervised learning algorithms optimized with particle swarm optimization for estimating photovoltaic output current. The measured currents from the empirical formula are compared with the outputs of various neural network techniques, including feedforward neural networks (FFNNs), the general regression neural network (GRNN), cascade-forward neural networks (CFNNs), and adaptive neuro-fuzzy inference systems (ANFISs), all optimized for enhanced accuracy using the particle swarm optimization (PSO) method. The ground data utilized for these models comprise hourly irradiances and temperatures from 2023, sourced from several places in South Africa. The accuracy levels indicated by the statistical error margins from the root mean square error (RMSE), mean bias error (MBE), and mean absolute percentage error (MAPE) imply a universal enhancement in the algorithms' precision upon optimization.
(This article belongs to the Section Power and Energy Forecasting)
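Particle swarm optimization, as used throughout this paper, treats each candidate hyperparameter setting as a particle pulled toward both its own best-seen position and the swarm's best. The sketch below applies a bare-bones PSO to two continuous hyperparameters of a small feedforward network, with scikit-learn's MLPRegressor standing in for the paper's FFNN; the irradiance/temperature data, PSO coefficients, and search ranges are all hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, 2))   # [irradiance, temperature], scaled to [0, 1]
y = 8 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 0.5)) + rng.normal(0, 0.1, 300)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

def rmse(theta):
    # theta = (log10 alpha, log10 learning rate) -> validation RMSE.
    net = MLPRegressor(hidden_layer_sizes=(16,), alpha=10 ** theta[0],
                       learning_rate_init=10 ** theta[1], max_iter=800,
                       random_state=0).fit(Xtr, ytr)
    return np.sqrt(np.mean((net.predict(Xval) - yval) ** 2))

n, lo, hi = 8, np.array([-6.0, -4.0]), np.array([-1.0, -0.5])
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pcost = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[pcost.argmin()]
for it in range(10):
    r1, r2 = rng.uniform(size=(2, n, 2))
    # Inertia + cognitive pull (own best) + social pull (swarm best).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([rmse(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[pcost.argmin()]
print("best (alpha, learning rate):", 10 ** gbest, "val RMSE:", pcost.min())
```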
Figures:
Figure 1. ANN architecture.
Figure 2. University of Pretoria station.
Figure 3. CSIR station.
Figure 4. Venda station.
Figure 5. Zululand University station.
Figure 6. ANFIS diagram.
Figure 7. SAURAN station locations.
Figure 8. PSO particle position diagram.
Figure 9. Monthly irradiance and temperature in 2023.
Figure 10. ANFIS training error at CSIR.
Figure 11. Poor-season ANFIS prediction.
Figure 12. CFNN architecture diagram.
Figure 13. CFNN best validation performance.
Figure 14. CFNN regression plot diagram.
Figure 15. High-season CFNN prediction.
Figure 16. High-season GRNN prediction.
Figure 17. Six-day prediction trace of the four methods at CSIR.
Figure 18. Six-day prediction traces of the four methods at Zululand.
Figure 19. A zoomed view of traces at CSIR.
Figure 20. PSO optimization process.
Figure 21. Optimized CFNN validation performance.
13 pages, 1755 KiB  
Article
A Hybrid of Box-Jenkins ARIMA Model and Neural Networks for Forecasting South African Crude Oil Prices
by Johannes Tshepiso Tsoku, Daniel Metsileng and Tshegofatso Botlhoko
Int. J. Financial Stud. 2024, 12(4), 118; https://doi.org/10.3390/ijfs12040118 - 28 Nov 2024
Viewed by 936
Abstract
The current study aims to model South African crude oil prices using hybrids of the Box-Jenkins autoregressive integrated moving average (ARIMA) model and neural networks (NNs). This study introduces a hybrid approach to forecasting aimed at resolving the lack of precision in single-model forecasts. The proposed methodology includes two models, namely the hybridisation of ARIMA with an artificial neural network (ANN)-based Extreme Learning Machine (ELM) and of ARIMA with a general regression neural network (GRNN), to model the linear and nonlinear components simultaneously. The models were compared with the base ARIMA model. The study utilised monthly time series data spanning January 2021 to March 2023. The formal stationarity test confirmed that the crude oil price series is integrated of order one, I(1). For the linear process, the ARIMA(2,1,2) model was identified as the best fit for the series and successfully passed all diagnostic tests. The ARIMA-ANN-based ELM hybrid model outperformed both the individual ARIMA model and the ARIMA-GRNN hybrid. However, the ARIMA model also showed better performance than the ARIMA-GRNN hybrid, highlighting its strong competitiveness with the ARIMA-ANN-based ELM model. The hybrid models are recommended for use by policy makers and practitioners in general.
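The hybrid design described here follows the classic decomposition idea: an ARIMA model captures the linear structure of the series, and a nonlinear learner is trained on the ARIMA residuals to capture what remains; the final forecast is the sum of the two. A compact sketch of that pattern, with a GRNN-style kernel regressor on lagged residuals and a synthetic price series standing in for the South African crude oil data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(160)
price = 70 + 0.05 * t + 3 * np.sin(t / 6) + rng.normal(0, 0.5, 160)  # synthetic

# Step 1: linear component via ARIMA(2,1,2), the order identified in the paper.
arima = ARIMA(price, order=(2, 1, 2)).fit()
resid = arima.resid                      # nonlinear remainder

# Step 2: nonlinear component: predict resid[t] from the p previous residuals.
p = 3
Xr = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
yr = resid[p:]

def grnn_predict(Xtr, ytr, Xte, sigma=0.5):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ ytr) / np.clip(w.sum(1), 1e-12, None)

# Step 3: hybrid one-step-ahead forecast = ARIMA forecast + residual correction.
linear_part = arima.forecast(steps=1)[0]
nonlinear_part = grnn_predict(Xr, yr, resid[-p:][None, :])[0]
print("hybrid forecast:", linear_part + nonlinear_part)
```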
Figures:
Figure 1. Schematic representation of the structure of the ELM. Source: Zhang et al. (2017).
Figure 2. Schematic diagram of a GRNN architecture. Source: Cigizoglu (2005).
Figure 3. Time series plot of the crude oil price.
Figure 4. Plots of the ACF and PACF.
26 pages, 6319 KiB  
Article
A Multi-Mode Pressure Stabilization Control Method for Pump–Valve Cooperation in Liquid Supply System
by Peng Xu and Ziming Kou
Electronics 2024, 13(22), 4512; https://doi.org/10.3390/electronics13224512 - 17 Nov 2024
Viewed by 803
Abstract
To address the frequent pressure fluctuations caused by repeated action of the pump station's unloading valve, and the serious hydraulic shock caused by the variable fluid demand of the hydraulic support system at the coal mining face and the irregular load on the system, a pump–valve cooperative multi-mode pressure stabilization control method based on a digital unloading valve is proposed. Firstly, a prototype of a digital unloading valve for high-pressure, high water-based conditions was developed, and a digital control scheme was proposed in which a servo motor drives the pilot valve to adjust the system pressure in real time. Then, an experimental platform simulating the hydraulic support and a co-simulation model were constructed, and the validity of the co-simulation model was verified through experiments. Secondly, a pump–valve cooperative multi-mode pressure stabilization control method based on a GRNN (General Regression Neural Network) was established to control the flow and pressure output of the emulsion pumping station according to the actual working conditions. Finally, numerical research and experimental verification were carried out for different working conditions to prove the effectiveness of this method. The results showed that the proposed pressure stabilization control method could adaptively adjust the working state of the digital unloading valve and the liquid supply flow of the emulsion pumping station according to the working conditions of the hydraulic support, effectively reducing the frequency and amplitude of system pressure fluctuations and making the system pressure more stable.
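In a scheme like this, the GRNN effectively acts as a learned lookup from the observed working condition to liquid supply settings. Below is a much-simplified sketch of that mapping, with a two-output GRNN predicting a pump flow and a valve pressure setpoint from condition features; the condition table, units, and numbers are hypothetical stand-ins for the simulated working conditions, not the paper's values.

```python
import numpy as np

def grnn_predict(Xtr, Ytr, Xte, sigma=0.3):
    """GRNN with a vector output: every output column is the same
    kernel-weighted average, so one distance computation serves both."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ Ytr) / np.clip(w.sum(1), 1e-12, None)[:, None]

# Hypothetical condition table: [supports moving, demand fraction] ->
# [pump flow (L/min), unloading pressure setpoint (MPa)].
X = np.array([[0.0, 0.1], [0.0, 0.5], [1.0, 0.5], [1.0, 0.9], [2.0, 1.0]])
Y = np.array([[100.0, 31.5], [250.0, 31.5], [300.0, 33.0],
              [420.0, 34.5], [500.0, 35.0]])

condition = np.array([[1.0, 0.7]])    # currently observed working state
flow, pressure = grnn_predict(X, Y, condition)[0]
print(f"commanded flow {flow:.0f} L/min, setpoint {pressure:.1f} MPa")
```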
Figures:
Figure 1. Schematic diagram of the working principle of the digital unloading valve.
Figure 2. Schematic diagram of the digital unloading valve control system.
Figure 3. Principle block diagram of the PID control system.
Figure 4. Schematic diagram of the experimental system. 1: emulsion pump; 2: safety valve; 3: digital unloading valve; 4: pressure sensor; 5: energy accumulator; 6: directional valve; 7: flow meter; 8: directional valve; 9: actuator cylinder; 10: loading cylinder; 11: displacement sensor; 12: loading pump; A: emulsion pumping station; B: simulated hydraulic support system; C: control system.
Figure 5. The experimental platform.
Figure 6. Co-simulation model of the digital unloading valve. (a) AMESim model of the digital unloading valve. (b) Simulink control model.
Figure 7. Co-simulation model of the hydraulic support system.
Figure 8. Experimental equipment and principle of the hydraulic experimental system. 1: oil tank; 2: electric motor; 3: emulsion pump; 4: safety valve; 5, 9: pressure sensors; 6: unloading valve; 7: main valve; 8: pilot valve; 8a: servo motors; 10: directional valve; 11, 12: flow meters; 13: measuring instrument.
Figure 9. Simulation data, experimental results, and relative error of inlet pressure. (a) Pressure curve under the step signal. (b) Pressure curve under the ramp signal.
Figure 10. Simulation data, experimental results, and relative error of system pressure.
Figure 11. Flow chart for pressure stabilization control based on the generalized regression network.
Figure 12. Schematic diagram of the pump–valve cooperative control system.
Figure 13. The structure of the GRNN model.
Figure 14. Regression analysis diagram of neural network training (Qp).
Figure 15. Regression analysis diagram of neural network training (Ps).
Figure 16. Pressure fluctuation curve of the rated fluid supply scheme.
Figure 17. Pressure curve of the steady-pressure fluid supply scheme.
Figure 18. Pressure curve of the steady-pressure fluid supply scheme based on a digital unloading valve.
Figure 19. Load signals of the raising and descending stages.
Figure 20. Online updating of the steady-pressure fluid supply pressure curve.
Figure 21. Pressure curve of the steady-pressure fluid supply experiment.
15 pages, 11880 KiB  
Article
An Objective Evaluation Method for Image Sharpness Under Different Illumination Imaging Conditions
by Huan He, Benchi Jiang, Chenyang Shi, Yuelin Lu and Yandan Lin
Photonics 2024, 11(11), 1032; https://doi.org/10.3390/photonics11111032 - 1 Nov 2024
Viewed by 1009
Abstract
Blurriness is troublesome in digital images captured under different illumination imaging conditions. To obtain an accurate blurred image quality assessment (IQA), a machine learning-based objective evaluation method for image sharpness under different illumination imaging conditions is proposed. In this method, visual saliency, color difference, and gradient information are selected as the image features, and the relevant feature information for these three aspects is extracted from the image as the feature values for blurred image evaluation under different illumination imaging conditions. Then, a particle swarm optimization-based general regression neural network (PSO-GRNN) is established to train on the extracted feature values, and the final blurred image evaluation result is determined. The proposed method was validated on three databases, i.e., BID, CID2013, and CLIVE, which contain real blurred images under different illumination imaging conditions. The experimental results showed that the proposed method performs well in evaluating the quality of images under different imaging conditions.
(This article belongs to the Special Issue New Perspectives in Optical Design)
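Of the three feature families the method uses, the gradient feature is the simplest to show: sharper images have stronger local gradients, and blur suppresses them. Below is a sketch of one such scalar feature (mean gradient magnitude) computed with plain NumPy on a synthetic image; the real method combines visual saliency, color difference, and gradient features before the PSO-GRNN regression, and the box blur here is only a crude stand-in for real defocus.

```python
import numpy as np

def mean_gradient_magnitude(img):
    """One scalar sharpness feature: average gradient magnitude.
    Blurring removes high-frequency content, lowering this value."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

rng = np.random.default_rng(5)
sharp = rng.uniform(size=(64, 64))          # synthetic textured image

# Crude 3x3 box blur as a stand-in for real defocus/motion blur.
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

print("sharp  :", mean_gradient_magnitude(sharp))
print("blurred:", mean_gradient_magnitude(blurred))  # noticeably smaller
```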
Figures:
Figure 1. Images of the same content under different lighting conditions and the corresponding VS maps. (a,b) Images of the same content under different lighting conditions [23]; (c,d) the corresponding VS maps.
Figure 2. (a,b) Two CD pseudo-color maps of different images in CID2013.
Figure 3. Blurred images of the same content under different lighting conditions and corresponding gradient maps: (a,b) blurry images of the same content under different lighting conditions [23]; (c,d) corresponding gradient maps.
Figure 4. Flowchart for extracting image feature values.
Figure 5. The network structure of the GRNN.
Figure 6. Overall framework diagram of the PSO-GRNN algorithm.
Figure 7. Box chart results of different scenarios on CID2013.
Figure 8. Scatter plots and fitting curves for the BID, CID2013, and CLIVE databases.
17 pages, 10332 KiB  
Article
Mapping the Normalized Difference Vegetation Index for the Contiguous U.S. Since 1850 Using 391 Tree-Ring Plots
by Hang Li, Ichchha Thapa, Shuang Xu and Peisi Yang
Remote Sens. 2024, 16(21), 3973; https://doi.org/10.3390/rs16213973 - 25 Oct 2024
Viewed by 1115
Abstract
The forests and grasslands in the U.S. are vulnerable to global warming and extreme weather events. Current satellites do not provide historical vegetation density images over the long term (more than 50 years), which has restricted the documentation of key ecological processes and their resultant responses over decades, owing to the absence of large-scale, long-term monitoring studies. We performed point-by-point regression and collected data from 391 tree-ring plots to reconstruct annual normalized difference vegetation index (NDVI) time-series maps for the contiguous U.S. from 1850 to 2010. Among three machine learning approaches for the regressions—Support Vector Machine (SVM), General Regression Neural Network (GRNN), and Random Forest (RF)—we chose GRNN regression to simulate the annual NDVI, as it gave the lowest Root Mean Square Error (RMSE) and highest adjusted R2. From the Little Ice Age to the present, the NDVI increased by 6.73% across the contiguous U.S., except during some extreme events such as the Dust Bowl drought, during which the averaged NDVI decreased, particularly in New Mexico. The NDVI trend was positive in the Northern Forests, Tropical Humid Forests, Northwestern Forested Mountains, Marine West Coast Forests, and Mediterranean California ecoregions, while the other ecoregions showed a negative trend. At the state level, Washington and Louisiana had significantly positive correlations with temperature (p < 0.05). Washington had a significantly negative correlation with precipitation (p < 0.05), whereas Oklahoma had a significantly positive one (p < 0.05). This study provides insights into the spatial distribution of paleo-vegetation and its climate drivers, and it is the first to attempt a national-scale reconstruction of the NDVI over such a long period (151 years) using tree rings and machine learning.
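Model selection in this reconstruction hinged on RMSE and adjusted R², which penalizes plain R² for the number of predictors. Below is a sketch of that comparison on synthetic tree-ring-like predictors; the three regressors stand in for the paper's SVM, GRNN, and RF, and the data, bandwidth, and predictor count are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

def adjusted_r2(y, yhat, n_predictors):
    # Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - k - 1)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2, n = 1 - ss_res / ss_tot, len(y)
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def grnn_predict(Xtr, ytr, Xte, sigma=0.4):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ ytr) / np.clip(w.sum(1), 1e-12, None)

rng = np.random.default_rng(6)
X = rng.uniform(size=(250, 3))  # ring-width indices of nearby plots (hypothetical)
y = 0.3 + 0.4 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.03, 250)  # NDVI
Xtr, Xte, ytr, yte = X[:200], X[200:], y[:200], y[200:]

candidates = {
    "SVM":  SVR(C=5.0).fit(Xtr, ytr).predict(Xte),
    "GRNN": grnn_predict(Xtr, ytr, Xte),
    "RF":   RandomForestRegressor(random_state=0).fit(Xtr, ytr).predict(Xte),
}
for name, yhat in candidates.items():
    rmse = np.sqrt(np.mean((yte - yhat) ** 2))
    print(f"{name}: RMSE={rmse:.4f}, adj R2={adjusted_r2(yte, yhat, 3):.3f}")
```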
Figures:
Figure 1. Land cover and tree stand distribution of the study areas. Notes: The land cover classification is the reclassified MCD12C1 MODIS product in 2010. The tree stands are from our collection and the ITRDB. The solid lines in the map are the state boundaries. The 10 ecoregions are: 1. Northern Forests; 2. Eastern Temperate Forests; 3. Tropical Humid Forests; 4. Great Plains; 5. Northwestern Forested Mountains; 6. North American Deserts; 7. Marine West Coast Forests; 8. Mediterranean California; 9. Temperate Sierras; 10. Southern Semi-arid Highlands.
Figure 2. The relationship between tree rings and the NDVI, and the study workflow. Note: the scanned tree-ring photo is from one of our plots, whose location is shown in Figure 1. (A,B) show the relationship between tree rings and the NDVI and the research workflow.
Figure 3. Sensitivity test for all vegetation pixels with three radii (825 km, 1100 km, and 1375 km). Note: with search radii of 825, 1100, and 1375 km in the 1850 sub-period, in some extreme cases the vegetation pixels in southern Texas with the fewest surrounding plots have 0, 1, and 3 plots, respectively. (A–C) Sensitivity tests for all vegetation pixels with the three radii.
Figure 4. Scatter plots of the three approaches in two sub-periods. Note: the scatter plots show the model performances in 1985, but the metrics are the average of the five folds. The GRNN is the best in both periods, so its metrics are highlighted in red. (A–F) Performances of the three approaches in a dry year and a normal year.
Figure 5. The averaged NDVI map and NDVI change rates from the first 10 years to the last 10 years. Note: the reclassified change rates were divided into nine levels: <−20%; −20~−10%; −10~−5%; −5~−2.5%; −2.5~2.5%; 2.5~5%; 5~10%; 10~20%; and >20%. The first 10 years and last 10 years indicate 1850–1859 and 2001–2010, respectively. (A,B) The averaged NDVI map and the change rate map.
Figure 6. Contiguous NDVI maps in 1850, 1933, 1965, and 2010. Notes: national PDSI values have been available since 1895. The three big circles in 1850, 1933, and 2010 indicate substantial NDVI changes. (A–D) NDVI maps in 1850, 1933, 1965, and 2010. (E) Annual NDVI values from 1850 to 2010.
Figure 7. NDVI changes in all ecoregions and the whole U.S. using 5-year-interval NDVI values. Note: the p values in four ecoregions are less than 0.05. In the linear regressions, y and x are the NDVI value and the year, respectively. Brown and green lines indicate increasing and decreasing tendencies, respectively.
Figure 8. Correlations between the five-year NDVI and two drivers (temperature and precipitation). Note: Washington, D.C. is small and lacks vegetation, so it was excluded. Star symbols indicate correlations meeting the 0.05 significance level. (A,B) Correlation maps for temperature and precipitation, respectively.
38 pages, 16115 KiB  
Article
Neural Approach to Coordinate Transformation for LiDAR–Camera Data Fusion in Coastal Observation
by Ilona Garczyńska-Cyprysiak, Witold Kazimierski and Marta Włodarczyk-Sielicka
Sensors 2024, 24(20), 6766; https://doi.org/10.3390/s24206766 - 21 Oct 2024
Viewed by 1540
Abstract
The paper presents research related to coastal observation using a camera and LiDAR (Light Detection and Ranging) mounted on an unmanned surface vehicle (USV). Fusing data from these two sensors can provide wider and more accurate information about shore features, exploiting their synergy and combining the advantages of both systems. Fusion is used in autonomous cars and robots, despite many challenges related to spatiotemporal alignment and sensor calibration. Measurements from various sensors with different timestamps have to be aligned, and the measurement systems need to be calibrated to avoid errors related to offsets. When using data from unstable, moving platforms such as surface vehicles, it is more difficult to match sensors in time and space, and thus data acquired from different devices will be subject to some misalignment. In this article, we try to overcome these problems by proposing a point matching algorithm for coordinate transformation of data from both systems. The essence of the paper is to verify alignment algorithms based on selected basic neural networks, namely the multilayer perceptron (MLP), the radial basis function network (RBF), and the general regression neural network (GRNN). They are tested with real data recorded from the USV and verified against numerical methods commonly used for coordinate transformation. The results show that the proposed approach can be an effective alternative to numerical calculations, owing to process improvement. The image data can provide information for identifying characteristic objects, and the accuracies obtained for platform dynamics in the water environment are satisfactory (root mean square error, RMSE, smaller than 1 m in many cases). The networks provided outstanding results for the training set; however, they did not perform as well as expected in terms of the generalization capability of the model. This leads to the conclusion that processing algorithms cannot overcome the limitations of matching-point accuracy. Further research will extend the approach to include information on the position and direction of the vessel.
(This article belongs to the Special Issue Multi-Sensor Data Fusion)
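The alignment task here amounts to learning a coordinate transformation from matched point pairs: camera-frame coordinates in, LiDAR-frame coordinates out. Below is a minimal sketch using scikit-learn's MLPRegressor as the multilayer perceptron variant (the RBF and GRNN variants follow the same fit/predict pattern); the matched points are synthetic, generated from an assumed rotation plus translation, with noise standing in for point-matching error.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
cam = rng.uniform(-20, 20, size=(300, 2))     # matched points, camera frame

# Ground-truth transform (unknown in practice): rotation + translation + noise.
theta = np.deg2rad(12.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
lidar = cam @ R.T + np.array([3.5, -1.2]) + rng.normal(0, 0.15, (300, 2))

# Learn camera -> LiDAR coordinates from 250 matched pairs; test on the rest.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(cam[:250], lidar[:250])

pred = mlp.predict(cam[250:])
rmse = np.sqrt(np.mean(np.sum((pred - lidar[250:]) ** 2, axis=1)))
print(f"test RMSE: {rmse:.2f} m")  # bounded below by the matching noise
```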
Figures:
Figure 1. Display of the point cloud in CloudCompare: (a) side view and (b) top view.
Figure 2. Tests for the expert method with a non-calibrated camera (a–d).
Figure 3. Measurement location and system: (a) the measurement area, the seaport docks of the city of Gdynia; (b) the mapping of data by the autonomous HydroDron-1 vessel, where red shows the data collected by the vessel-mounted LiDAR and yellow shows the camera's field of view; (c) close-up view of the sensors used, with the lens mounted on the left and the location of the LiDAR on the right.
Figure 4. Diagram of time and position synchronization for the data acquisition sensors.
Figure 5. (a–d) Frames from the camera with the selected points; (e) the coordinates of the points from the camera; (f) a plot of the identical points extracted from the LiDAR point cloud; (g) projection of the base LiDAR coordinates and the coordinates transformed from the camera.
Figure 6. Integration of camera and LiDAR data (a–c).
Figure 7. RMSE for non-standardized (raw) and standardized datasets.
Figure 8. RMSE for various neural networks for examples 1–6.
Figure 9. RMSE for various neural networks for examples 7–11.
Figure 10. RMSE for various neural networks for examples 12–16.
Figure 11. RMSE for selected methods for the training set in the comparative research.
Figure 12. RMSE for selected methods for the test set in the comparative research.
Figure 13. RMSE for selected methods for the training set in the comparative research.
Figure 14. Mean RMSE for selected methods for the training and test sets.
Figure 15. Mean RMSE for selected methods for the training and test sets in case groups: non-calibrated camera, long distance (cases 1–6); calibrated camera, short distance (cases 7–11); calibrated camera, long distance (cases 12–16).
Figure 16. Mean RMSE for selected methods for the training and test sets in case groups: non-calibrated camera, long distance (cases 1–6); calibrated camera, short distance (cases 7–11); calibrated camera, long distance (cases 12–16), with validation used during training.
31 pages, 6280 KiB  
Article
Proposing Optimized Random Forest Models for Predicting Compressive Strength of Geopolymer Composites
by Feng Bin, Shahab Hosseini, Jie Chen, Pijush Samui, Hadi Fattahi and Danial Jahed Armaghani
Infrastructures 2024, 9(10), 181; https://doi.org/10.3390/infrastructures9100181 - 9 Oct 2024
Cited by 4 | Viewed by 1704
Abstract
This paper explores advanced machine learning approaches to enhance the prediction accuracy of compressive strength (CoS) in geopolymer composites (GePC). Geopolymers, as sustainable alternatives to Ordinary Portland Cement (OPC), offer significant environmental benefits by utilizing industrial by-products such as fly ash and ground granulated blast furnace slag (GGBS). Accurate prediction of their compressive strength is crucial for optimizing their mix design and reducing experimental effort. We present a comparative analysis of two hybrid models, Harris Hawks Optimization with Random Forest (HHO-RF) and the Sine Cosine Algorithm with Random Forest (SCA-RF), against traditional regression methods and classical models such as the Extreme Learning Machine (ELM), General Regression Neural Network (GRNN), and Radial Basis Function (RBF) network. Using a comprehensive dataset derived from various scientific publications, we focus on key input variables including the fine aggregate, GGBS, fly ash, sodium hydroxide (NaOH) molarity, and others. Our results indicate that the SCA-RF model achieved superior performance, with a root mean square error (RMSE) of 1.562 and a coefficient of determination (R2) of 0.987, compared to the HHO-RF model, which obtained an RMSE of 1.742 and an R2 of 0.982. Both hybrid models significantly outperformed traditional methods, demonstrating their higher accuracy and reliability in predicting the compressive strength of GePC. This research underscores the potential of hybrid machine learning models to advance sustainable construction materials through precise predictive modeling, paving the way for more environmentally friendly and efficient construction practices.
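An SCA-RF hybrid of the kind compared above uses the sine cosine algorithm to move candidate hyperparameter vectors toward the best-found solution along sine- and cosine-shaped steps whose amplitude decays over iterations. Below is a toy sketch tuning two random forest hyperparameters against validation RMSE; the strength data are synthetic, and the population size, iteration count, and search ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.uniform(size=(400, 6))   # mix-design variables (hypothetical)
y = 40 * X[:, 0] + 25 * X[:, 1] * X[:, 2] + rng.normal(0, 1.5, 400)  # CoS (MPa)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

def cost(p):
    n_est, depth = int(round(p[0])), int(round(p[1]))
    rf = RandomForestRegressor(n_estimators=n_est, max_depth=depth,
                               random_state=0).fit(Xtr, ytr)
    return np.sqrt(np.mean((rf.predict(Xval) - yval) ** 2))

lo, hi = np.array([20, 2]), np.array([200, 20])  # (n_estimators, max_depth)
pop = rng.uniform(lo, hi, size=(6, 2))
costs = np.array([cost(p) for p in pop])
best, best_cost = pop[costs.argmin()].copy(), costs.min()

T = 8
for t in range(T):
    r1 = 2 * (1 - t / T)   # step amplitude: exploration -> exploitation
    for i in range(len(pop)):
        r2, r3, r4 = rng.uniform(0, 2 * np.pi), rng.uniform(0, 2), rng.uniform()
        step = np.sin(r2) if r4 < 0.5 else np.cos(r2)  # the "sine cosine" choice
        pop[i] = np.clip(pop[i] + r1 * step * np.abs(r3 * best - pop[i]), lo, hi)
    costs = np.array([cost(p) for p in pop])
    if costs.min() < best_cost:
        best, best_cost = pop[costs.argmin()].copy(), costs.min()

print("best (n_estimators, max_depth):", np.round(best).astype(int))
```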
Figures:
Figure 1. The various phases of the HHO algorithm.
Figure 2. Flowchart of the HHO algorithm.
Figure 3. The method of updating a solution towards or away from the optimal option.
Figure 4. The sine and cosine declining patterns.
Figure 5. Architecture of the RF algorithm.
Figure 6. Violin plots of the CoSGePC parameters: (a) Na2SiO3, (b) gravel 4/10, (c) GGBS, (d) gravel 10/20, (e) FAg, (f) FA, (g) WS, (h) NaOH, (i) NaOH molarity, and (j) CoSGePC.
Figure 7. Heatmap of the CoSGePC parameters.
Figure 8. Convergence plot of the HHO-RF model.
Figure 9. Convergence plot of the SCA-RF model.
Figure 10. Prediction error analysis of the models.
Figure 11. Comprehensive rankings of the CoSGePC estimation techniques.
Figure 12. Taylor diagrams showing the performance of the developed models in the training (above) and testing (below) phases.
Figure 13. The effective parameters and their importance.
24 pages, 14371 KiB  
Article
An Enhanced Transportation System for People of Determination
by Uma Perumal, Fathe Jeribi and Mohammed Hameed Alhameed
Sensors 2024, 24(19), 6411; https://doi.org/10.3390/s24196411 - 3 Oct 2024
Viewed by 970
Abstract
Visually Impaired Persons (VIPs) have difficulty recognizing vehicles used for navigation. Additionally, they may not be able to identify the bus to their desired destination. However, the bus bay in which the designated bus stops has not been analyzed in the existing literature. Thus, a guidance system for VIPs that identifies the correct bus for transportation is presented in this paper. Initially, speech data indicating the VIP's destination are pre-processed and converted to text. Next, utilizing the Arctan Gradient-activated Recurrent Neural Network (ArcGRNN) model, the number of bays at the location is detected with the help of a Global Positioning System (GPS), the input text, and bay location details. Then, the optimal bay is chosen from the detected bays by utilizing the Experienced Perturbed Bacteria Foraging Triangular Optimization Algorithm (EPBFTOA), and an image of the selected bay is captured and pre-processed. Next, the bus is identified utilizing a You Only Look Once (YOLO) series model. Utilizing the Sub-pixel Shuffling Convoluted Encoder–ArcGRNN Decoder (SSCEAD) framework, the text is detected and segmented for the buses identified in the image. From the segmented output, the text is extracted, based on the destination and route of the bus. Finally, regarding the similarity value with respect to the VIP's destination, a decision is made utilizing the Multi-characteristic Non-linear S-Curve-Fuzzy Rule (MNC-FR). This decision informs the bus conductor about the VIP, such that the bus can be stopped appropriately to pick them up. During testing, the proposed system selected the optimal bay in 247,891 ms, which led to deciding the bus stop for the VIP with a fuzzification time of 34,197 ms. Thus, the proposed model exhibits superior performance over those utilized in prevailing works.
(This article belongs to the Section Intelligent Sensors)
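The final decision step compares the text read off the bus with the VIP's spoken destination and triggers a notification when they are similar enough. Below is a much-simplified sketch of that matching logic, using a plain string similarity ratio and a fixed threshold in place of the paper's MNC-FR fuzzy rule; the destination strings and the threshold are illustrative.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] between the OCR-extracted text and the destination."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def decide(extracted_route: str, vip_destination: str, threshold: float = 0.8):
    score = similarity(extracted_route, vip_destination)
    if score >= threshold:
        return f"match ({score:.2f}): notify conductor to stop for the VIP"
    return f"no match ({score:.2f}): keep scanning arriving buses"

# OCR output is often noisy; the ratio tolerates small character errors.
print(decide("CENTRAL STATI0N", "central station"))
print(decide("AIRPORT EXPRESS", "central station"))
```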
Figures:
Figure 1. Framework of the proposed model.
Figure 2. Architecture of the ArcGRNN.
Figure 3. Graphical representation of the proposed decision generation approach.
Figure 4. Comparison of the proposed bus bay detection method with existing methods.
Figure 5. Graphical analysis of the FPR and FNR for bus bay detection.
Figure 6. Training time evaluation.
Figure 7. Graphical depiction of similarity scores obtained by the text detection approaches.
Figure 8. Performance evaluation of the proposed SSCEAD.
19 pages, 8388 KiB  
Article
Development of Machine Learning and Deep Learning Prediction Models for PM2.5 in Ho Chi Minh City, Vietnam
by Phuc Hieu Nguyen, Nguyen Khoi Dao and Ly Sy Phu Nguyen
Atmosphere 2024, 15(10), 1163; https://doi.org/10.3390/atmos15101163 - 29 Sep 2024
Cited by 2 | Viewed by 1750
Abstract
The application of machine learning and deep learning in air pollution management is becoming increasingly crucial, as these technologies enhance the accuracy of pollution prediction models, facilitating timely interventions and policy adjustments. They also enable the analysis of large datasets to identify pollution sources and trends, ultimately contributing to more effective and targeted environmental protection strategies. Ho Chi Minh City (HCMC), a major metropolitan area in southern Vietnam, has experienced a significant rise in air pollution levels, particularly PM2.5, in recent years, creating substantial risks to both public health and the environment. Given these air quality challenges, it is essential to develop robust methodologies for predicting PM2.5 concentrations in HCMC. This study develops and evaluates multiple machine learning and deep learning models for predicting PM2.5 concentrations in HCMC, utilizing PM2.5 and meteorological data over 911 days, from 1 January 2021 to 30 June 2023. Six algorithms were applied: random forest (RF), extreme gradient boosting (XGB), support vector regression (SVR), artificial neural network (ANN), generalized regression neural network (GRNN), and convolutional neural network (CNN). The results indicated that the ANN is the most effective algorithm for predicting PM2.5 concentrations, with an index of agreement (IOA) value of 0.736 and the lowest prediction errors during the testing phase. These findings imply that the ANN algorithm could serve as an effective tool for predicting PM2.5 concentrations in urban environments, particularly HCMC. This study provides valuable insights into the factors that affect PM2.5 concentrations in HCMC, emphasizes the capacity of AI methodologies to reduce atmospheric pollution, and offers policymakers and health officials guidance for implementing targeted interventions aimed at reducing air pollution and improving public health.
(This article belongs to the Special Issue Atmospheric Pollution in Highly Polluted Areas)
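The index of agreement (IOA) used to rank the six models is Willmott's d statistic, which rescales the model's squared error against deviations from the observed mean, giving 1.0 for perfect agreement. Below is a small sketch of the metric alongside RMSE, evaluated on hypothetical observed and predicted PM2.5 series:

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's d: 1 - SSE / potential error; 1.0 means perfect agreement."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

rng = np.random.default_rng(9)
obs = 25 + 10 * np.abs(np.sin(np.arange(100) / 9)) + rng.normal(0, 2, 100)
pred = obs + rng.normal(0, 3, 100)        # a hypothetical model's output

rmse = np.sqrt(np.mean((pred - obs) ** 2))
print(f"IOA = {index_of_agreement(obs, pred):.3f}, RMSE = {rmse:.2f} ug/m3")
```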
Figures:
Figure 1. Workflow for developing a PM2.5 prediction model.
Figure 2. Meteorological and PM2.5 data in HCMC from 1 January 2021 to 30 June 2023.
Figure 3. Distribution of meteorological and PM2.5 data in HCMC: (a) temperature, (b) humidity, (c) evaporation, (d) wind speed, (e) sunshine hours, (f) rainfall, and (g) PM2.5 concentration.
Figure 4. Training and testing results from the optimal random forest model: (a) training result and (b) testing result.
Figure 5. Training and testing results from the optimal XGB model: (a) training result and (b) testing result.
Figure 6. Training and testing results from the optimal SVR model: (a) training result and (b) testing result.
Figure 7. Training and testing results from the optimal ANN model: (a) training result and (b) testing result.
Figure 8. Training and testing results from the optimal GRNN model: (a) training result and (b) testing result.
Figure 9. Training and testing results from the optimal CNN model: (a) training result and (b) testing result.
13 pages, 9028 KiB  
Article
Rapid Real-Time Prediction Techniques for Ammonia and Nitrite in High-Density Shrimp Farming in Recirculating Aquaculture Systems
by Fudi Chen, Tianlong Qiu, Jianping Xu, Jiawei Zhang, Yishuai Du, Yan Duan, Yihao Zeng, Li Zhou, Jianming Sun and Ming Sun
Fishes 2024, 9(10), 386; https://doi.org/10.3390/fishes9100386 - 28 Sep 2024
Cited by 2 | Viewed by 1258
Abstract
Water quality early warning is a key aspect of industrial recirculating aquaculture systems for high-density shrimp farming. The concentrations of ammonia nitrogen and nitrite in the water significantly impact the cultured animals and are challenging to measure in real-time, posing a substantial challenge to water quality early warning technology. This study collects data samples using low-cost water quality sensors during industrial recirculating aquaculture and applies data-driven prediction techniques to estimate ammonia nitrogen and nitrite concentrations, which are difficult to obtain directly from sensors in the aquaculture environment. Various machine learning algorithms are employed, including General Regression Neural Network (GRNN), Deep Belief Network (DBN), Long Short-Term Memory (LSTM), and Support Vector Machine (SVM), to build predictive models for ammonia nitrogen and nitrite. Model accuracy is assessed by comparing predicted values with measured values, and performance is evaluated using the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE) metrics. Ultimately, the optimized GRNN-based predictive model for ammonia nitrogen concentration (MAE = 0.5915, MAPE = 28.95%, RMSE = 0.7765) and the nitrite concentration predictive model (MAE = 0.1191, MAPE = 29.65%, RMSE = 0.1904) were selected. The models can be integrated into an Internet of Things system to track changes in ammonia nitrogen and nitrite concentrations over time from aquaculture management records and routine water quality measurements, thereby realizing practical water environment early warning for recirculating aquaculture systems. Full article
(This article belongs to the Special Issue Advances in Recirculating and Sustainable Aquaculture Systems)
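The GRNN behind the reported ammonia-nitrogen and nitrite errors is, at its core, a Gaussian-kernel weighted average over stored training samples. Below is a minimal sketch under assumed inputs; the feature set (temperature, pH, dissolved oxygen) and the smoothing parameter sigma are illustrative assumptions, not values from the study.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized regression neural network (Specht, 1991): each prediction
    is a Gaussian-kernel weighted average of the training targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distance to every stored sample
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer activations
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))  # summation / output layer
    return np.array(preds)

# Hypothetical rows: [temperature degC, pH, dissolved oxygen mg/L]; target: TAN (mg/L).
X_train = np.array([[26.1, 7.8, 6.2], [27.0, 7.6, 5.9], [25.4, 8.0, 6.5]])
y_train = np.array([1.2, 1.8, 0.9])
print(grnn_predict(X_train, y_train, np.array([[26.5, 7.7, 6.0]]), sigma=0.8))
```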
Show Figures

Figure 1. The experimental RAS: (A) schematic of the image acquisition system; (B) the high-density shrimp RAS at Dalian Huixin Titanium Equipment Development Co., Ltd. (Dalian, China).
Figure 2. Artificial neural network structure diagrams: (A) classic artificial neural network structure; (B) LSTM structure diagram.
Figure 3. Results of the TAN prediction model on the training data.
Figure 4. Results of the TAN prediction model on the testing data.
Figure 5. Results of the nitrite nitrogen prediction model on the training data.
Figure 6. Results of the nitrite nitrogen prediction model on the testing data.
Figure 7. (a–h) Scatter plot distribution of TAN prediction data for the GRNN, LSTM, DBN, and SVM models.
Figure 8. (a–h) Scatter plot distribution of NO2-N prediction data for the GRNN, LSTM, DBN, and SVM models.
21 pages, 7121 KiB  
Article
Prediction and Online Control for Process Parameters of Vanadium Nitrogen Alloys Production Based on Digital Twin
by Zhe Wang, Zifeng Xu, Zenggui Gao, Keqi Zhang and Lilan Liu
Sustainability 2024, 16(17), 7545; https://doi.org/10.3390/su16177545 - 30 Aug 2024
Viewed by 1066
Abstract
The production of vanadium nitrogen alloys (VNs) is a chemical reaction process carried out in a closed pusher plate kiln, making real-time monitoring of key parameters challenging. Traditional methods for controlling process parameters are insufficient to meet the demands of production control, and the current production line depends heavily on workers’ experience while operating with a relatively low level of automation. To address these problems, this paper proposes a method for monitoring, predicting, and online controlling the production process parameters of VNs based on digital twins. First, the process parameter that most affects product quality is identified experimentally and selected as the target for prediction and control. Then, an ISSA-GRNN (Improved Sparrow Search Algorithm-Generalized Regression Neural Network) fusion prediction model is constructed to predict the optimal value and interval of the movement-interval process parameter. Finally, a digital twin system is developed that integrates the fusion prediction model to achieve real-time monitoring and online control of the production line. The superiority of the algorithm and the feasibility of online control are verified through experiments. This work achieves accurate prediction and online control of parameters in the VNs production process, reduces reliance on workers’ production experience, lowers energy consumption and failure rates, and facilitates the transition from traditional kiln production to intelligent production, thereby supporting sustainable development. Full article
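The ISSA-GRNN fusion model amounts to searching for the GRNN smoothing parameter sigma that minimizes validation error. The improved sparrow search algorithm itself is not detailed in this listing, so the sketch below substitutes a plain random search as a hedged stand-in; the search bounds, iteration count, and fitness definition are assumptions for illustration.

```python
import numpy as np

def grnn_predict(X_tr, y_tr, X_q, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    out = []
    for x in X_q:
        w = np.exp(-np.sum((X_tr - x) ** 2, axis=1) / (2.0 * sigma ** 2))
        out.append(np.dot(w, y_tr) / (w.sum() + 1e-12))
    return np.array(out)

def validation_rmse(sigma, X_tr, y_tr, X_val, y_val):
    """Fitness function: validation RMSE of a GRNN with smoothing parameter sigma."""
    pred = grnn_predict(X_tr, y_tr, X_val, sigma)
    return np.sqrt(np.mean((pred - y_val) ** 2))

def search_sigma(X_tr, y_tr, X_val, y_val, n_iter=200, lo=0.01, hi=2.0, seed=0):
    """Stand-in for the ISSA optimizer: uniform random search over sigma,
    keeping the candidate with the lowest validation RMSE."""
    rng = np.random.default_rng(seed)
    best_sigma, best_rmse = None, np.inf
    for sigma in rng.uniform(lo, hi, size=n_iter):
        r = validation_rmse(sigma, X_tr, y_tr, X_val, y_val)
        if r < best_rmse:
            best_sigma, best_rmse = sigma, r
    return best_sigma, best_rmse
```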
Show Figures

Figure 1. VNs production line structure: (a) the pusher plate kiln; (b) the propulsion system; (c) the electrical control system.
Figure 2. VN production process.
Figure 3. Nitrogen content of each group of products under the control of different process parameters.
Figure 4. Architecture of the digital twin online control.
Figure 5. Pusher plate kiln 3D model.
Figure 6. GRNN structure.
Figure 7. Distribution of initial solution dimensions before and after improvement.
Figure 8. ISSA-GRNN computing process.
Figure 9. System function modules.
Figure 10. Digital twin system.
Figure 11. Online control interface for process parameters.
Figure 12. Plot of the number of iterations against fitness value.
Figure 13. Regression plot of prediction error for test samples.
Figure 14. Comparison of four network model predictions with ideal values.
Figure 15. Box diagram of product nitrogen content before and after online control.
23 pages, 9849 KiB  
Article
Comparative Analysis of Machine Learning Models for Predicting the Mechanical Behavior of Bio-Based Cellular Composite Sandwich Structures
by Danial Sheini Dashtgoli, Seyedahmad Taghizadeh, Lorenzo Macconi and Franco Concli
Materials 2024, 17(14), 3493; https://doi.org/10.3390/ma17143493 - 15 Jul 2024
Cited by 5 | Viewed by 2510
Abstract
The growing demand for sustainable materials has significantly increased interest in biocomposites, which are made from renewable raw materials and have excellent mechanical properties. The use of machine learning (ML) can improve our understanding of their mechanical behavior while saving cost and time. In this study, the mechanical behavior of innovative biocomposite sandwich structures under quasi-static out-of-plane compression was investigated using ML algorithms to analyze the effects of geometric variations on load-bearing capacity. A comprehensive dataset of experimental compression tests was employed to evaluate three ML models: generalized regression neural networks (GRNN), extreme learning machines (ELM), and support vector regression (SVR). Performance indicators such as R-squared (R2), mean absolute error (MAE), and root mean square error (RMSE) were used to compare the models. The GRNN model achieved the highest predictive accuracy, with an RMSE of 0.0301, an MAE of 0.0177, and an R2 of 0.9999 on the training dataset, and an RMSE of 0.0874, an MAE of 0.0489, and an R2 of 0.9993 on the testing set. The ELM model showed moderate performance, while the SVR model had the lowest accuracy, with RMSE, MAE, and R2 values of 0.5769, 0.3782, and 0.9700 for training and 0.5980, 0.3976, and 0.9695 for testing, suggesting limited effectiveness in predicting the mechanical behavior of the biocomposite structures. The nonlinear load-displacement behavior, including critical peaks and fluctuations, was effectively captured by the GRNN model for both the training and test datasets. The progressive improvement in performance from SVR to ELM to GRNN illustrates the increasing complexity and capability of ML models in capturing detailed nonlinear relationships. The superior performance and generalization ability of the GRNN model were confirmed by the Taylor diagram and Williams plot, with the majority of testing samples falling within the applicability domain, indicating strong generalization to new, unseen data. The results demonstrate the potential of advanced ML models to accurately predict the mechanical behavior of biocomposites, enabling more efficient and cost-effective development and optimization in the field of sustainable materials. Full article
(This article belongs to the Special Issue Machine Learning Techniques in Materials Science and Engineering)
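Of the three compared models, the ELM is the simplest to express in code: a single hidden layer with fixed random weights and an output layer solved in closed form by least squares. The sketch below is a minimal NumPy version; the hidden-layer size and tanh activation are assumptions, not the study's settings.

```python
import numpy as np

class ELMRegressor:
    """Extreme learning machine: fixed random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # input weights, never trained
        self.b = self.rng.normal(size=self.n_hidden)                # hidden biases, never trained
        H = np.tanh(X @ self.W + self.b)                            # hidden-layer activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)           # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Hypothetical use: X rows could be geometric features of a sandwich panel,
# y the measured compressive load at a given displacement.
```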
Show Figures

Figure 1. Manufacturing and compressive testing of sandwich panels (data utilized in this ML investigation).
Figure 2. Workflow of utilizing ML to predict the mechanical behavior of bio-based sandwich structures.
Figure 3. Frequency distributions of various parameters.
Figure 4. GRNN architecture.
Figure 5. ELM architecture.
Figure 6. Load-displacement behavior for actual and GRNN predictions on the training dataset for sample groups (a) A, (b) B, and (c) C.
Figure 7. Load-displacement behavior for actual and GRNN predictions on the test dataset for sample groups (a) A, (b) B, and (c) C.
Figure 8. Load-displacement behavior for actual and ELM predictions on the training dataset: (a) Group A, (b) Group B, (c) Group C.
Figure 9. Load-displacement behavior for actual and ELM predictions on the test dataset: (a) Group A, (b) Group B, (c) Group C.
Figure 10. Load-displacement behavior for actual and SVR predictions on the training dataset: (a) Group A, (b) Group B, (c) Group C.
Figure 11. Load-displacement behavior for actual and SVR predictions on the test dataset: (a) Group A, (b) Group B, (c) Group C.
Figure 12. Cross-plots for (a) GRNN (train), (b) GRNN (test), (c) ELM (train), (d) ELM (test), (e) SVR (train), and (f) SVR (test).
Figure 13. Taylor diagram for (a) the training dataset and (b) the testing dataset.
Figure 14. AUC analysis for (a) the training dataset and (b) the testing dataset.
Figure 15. Williams plot illustrating the performance of GRNN.
Figure 16. Detailed error analysis for the GRNN model.
Figure 17. Relative error distribution for the GRNN model.