Search Results (546)

Search Parameters:
Keywords = contour mapping

21 pages, 6234 KiB  
Article
Data-Efficient Bone Segmentation Using Feature Pyramid-Based SegFormer
by Naohiro Masuda, Keiko Ono, Daisuke Tawara, Yusuke Matsuura and Kentaro Sakabe
Sensors 2025, 25(1), 81; https://doi.org/10.3390/s25010081 - 26 Dec 2024
Viewed by 251
Abstract
The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks. However, SegFormer’s effectiveness is typically limited by its need for extensive training data, which is particularly challenging in medical imaging, where obtaining labeled ground truths (GTs) is a costly and resource-intensive process. In this paper, we propose two models and their combination to enable accurate feature extraction from smaller datasets by improving SegFormer. Specifically, these include the data-efficient model, which deepens the hierarchical encoder by adding convolution layers to transformer blocks and increases feature map resolution within transformer blocks, and the FPN-based model, which enhances the decoder through a Feature Pyramid Network (FPN) and attention mechanisms. Testing our model on spine images from the Cancer Imaging Archive and our own hand and wrist dataset, ablation studies confirmed that our modifications outperform the original SegFormer, U-Net, and Mask2Former. These enhancements enable better image feature extraction and more precise object contour detection, which is particularly beneficial for medical imaging applications with limited training data. Full article
(This article belongs to the Section Biomedical Sensors)
Figures:
Figure 1. SegFormer architecture.
Figure 2. Proposed model architecture.
Figure 3. Data-efficient encoder architecture.
Figure 4. Model-wise IoU for spine images.
Figure 5. Model-wise IoU for hand and wrist images.
Figure 6. Model-wise IoU for femur images.
Figure A1. Datasets.
Figure A2. Spine segmentation.
Figure A3. Hand and wrist segmentation.
Figure A4. Femur segmentation.
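As a rough illustration of the FPN-based decoder idea summarized in the abstract above, the sketch below fuses four multi-scale encoder feature maps through lateral 1x1 convolutions and a top-down pathway. It is a generic PyTorch FPN decoder under assumed channel widths (32, 64, 160, 256), not the authors' implementation; the SegFormer encoder and the attention mechanisms of the paper are omitted.

# Illustrative only: a generic FPN-style decoder over four encoder feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNDecoder(nn.Module):
    def __init__(self, in_channels=(32, 64, 160, 256), fpn_dim=128, num_classes=2):
        super().__init__()
        # 1x1 lateral convolutions project each encoder stage to a common width
        self.lateral = nn.ModuleList(nn.Conv2d(c, fpn_dim, 1) for c in in_channels)
        # 3x3 smoothing convolutions applied after each top-down fusion
        self.smooth = nn.ModuleList(nn.Conv2d(fpn_dim, fpn_dim, 3, padding=1) for _ in in_channels)
        self.classifier = nn.Conv2d(fpn_dim, num_classes, 1)

    def forward(self, feats):  # feats: list of 4 maps, finest resolution first
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # top-down pathway: upsample the coarser map and add it to the finer one
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="bilinear", align_corners=False)
            laterals[i] = self.smooth[i](laterals[i])
        return self.classifier(laterals[0])  # logits at the finest feature resolution

# Example with dummy SegFormer-like feature maps (strides 4, 8, 16, 32 of a 256x256 input)
feats = [torch.randn(1, c, 256 // s, 256 // s) for c, s in zip((32, 64, 160, 256), (4, 8, 16, 32))]
print(FPNDecoder()(feats).shape)  # torch.Size([1, 2, 64, 64])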
19 pages, 4040 KiB  
Article
Fractional Solitons in Optical Twin-Core Couplers with Kerr Law Nonlinearity and Local M-Derivative Using Modified Extended Mapping Method
by Noorah Mshary, Hamdy M. Ahmed and Wafaa B. Rabie
Fractal Fract. 2024, 8(12), 755; https://doi.org/10.3390/fractalfract8120755 - 23 Dec 2024
Viewed by 505
Abstract
This study focuses on optical twin-core couplers, which facilitate light transmission between two closely aligned optical fibers. These couplers operate based on the principle of coupling, allowing signals in one core to interact with those in the other. The Kerr effect, which describes how a material’s refractive index changes in response to the intensity of light, induces the nonlinear behavior essential for generating solitons—self-sustaining wave packets that preserve their shape and speed. In our research, we employ fractional derivatives to investigate how fractional-order variations influence wave propagation and soliton dynamics. By utilizing the modified extended mapping method (MEMM), we derive solitary wave solutions for the equations governing the behavior of optical twin-core couplers under Kerr nonlinearity. This methodology produces novel fractional traveling wave solutions, including dark, bright, singular, and combined bright–dark solitons, as well as hyperbolic, Jacobi elliptic function (JEF), periodic, and singular periodic solutions. To enhance understanding, we present physical interpretations through contour plots and include both 2D and 3D graphical representations of the results. Full article
Figures:
Figure 1. Dark soliton solution of Equation (72).
Figure 2. Bright soliton solution of Equation (76).
Figure 3. Singular periodic solution of Equation (80).
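For orientation only: the dark and bright solitons named in the abstract are conventionally written as tanh- and sech-shaped envelopes modulating a carrier wave. The generic textbook forms below are not the paper's specific solutions (its Equations (72) and (76)), which additionally carry the coupler parameters and the local M-derivative fractional order; A, B, v, kappa and omega here are free amplitude, width, velocity, wavenumber and frequency parameters of the illustration.

\begin{align}
  q_{\text{bright}}(x,t) &= A\,\operatorname{sech}\!\big(B(x - vt)\big)\, e^{i(\kappa x - \omega t)}, \\
  q_{\text{dark}}(x,t)   &= A\,\tanh\!\big(B(x - vt)\big)\, e^{i(\kappa x - \omega t)}.
\end{align}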
18 pages, 5533 KiB  
Article
EGNet: 3D Semantic Segmentation Through Point–Voxel–Mesh Data for Euclidean–Geodesic Feature Fusion
by Qi Li, Yu Song, Xiaoqian Jin, Yan Wu, Hang Zhang and Di Zhao
Sensors 2024, 24(24), 8196; https://doi.org/10.3390/s24248196 - 22 Dec 2024
Viewed by 299
Abstract
With the advancement of service robot technology, the demand for higher boundary precision in indoor semantic segmentation has increased. Traditional methods of extracting Euclidean features using point cloud and voxel data often neglect geodesic information, reducing boundary accuracy for adjacent objects and consuming significant computational resources. This study proposes a novel network, the Euclidean–geodesic network (EGNet), which uses point cloud–voxel–mesh data to characterize detail, contour, and geodesic features, respectively. The EGNet performs feature fusion through Euclidean and geodesic branches. In the Euclidean branch, the features extracted from point cloud data compensate for the detail features lost by voxel data. In the geodesic branch, geodesic features from mesh data are extracted using inter-domain fusion and aggregation modules. These geodesic features are then combined with contextual features from the Euclidean branch, and the simplified trajectory map of the grid is used for up-sampling to produce the final semantic segmentation results. The Scannet and Matterport datasets were used to demonstrate the effectiveness of the EGNet through visual comparisons with other models. The results demonstrate the effectiveness of integrating Euclidean and geodesic features for improved semantic segmentation. This approach can inspire further research combining these feature types for enhanced segmentation accuracy. Full article
(This article belongs to the Section Sensor Networks)
Figures:
Figure 1. The yellow point on the curtain serves as the focal point for all color shades that represent the distance within the neighborhood. The shades of color represent the Euclidean distance between each point and the focal point in point cloud data (blue), shown in (a). The shades of color represent the path length between each point and the focal point (red) in 3D mesh data, shown in (b). The data structures of each are displayed in the yellow bounding boxes.
Figure 2. EGNet architecture. In the Euclidean branch, we use a feature extractor similar to U-Net structure for extracting Euclidean features from voxels to capture fine features. Inspired by PointNet++ structure, we incorporate a point-based MLP into the Euclidean branch. In the geodesic branch, the self-domain attention module is used to effectively aggregate the vertices of the original mesh. The features of the mesh vertices are fused with the features of sparse vertices from Euclidean branch using the cross-domain attention module.
Figure 3. Mesh simplification. Mesh_l0 to Mesh_l3 is part of the mesh simplification process, with the yellow label indicating the trajectory map of a point from Mesh_l0 to Mesh_l1.
Figure 4. Results of the ScanNet v2 validation. We have highlighted the main differences with yellow bounding boxes. Observing the segmentation results of the door in the first instance, our method demonstrates more accurate boundary segmentation. In the second instance, despite the poor quality of the environmental scan, although the MinkowskiNet method also identified the unannotated bed, a closer inspection reveals that our method provides a clearer segmentation boundary between the bed and the desk.
Figure 5. Visualization of Matterport3D. We have highlighted the main differences with yellow bounding boxes. In the first example, our method shows almost no errors compared to the ground-truth labels and achieves more accurate segmentation regions than other methods. Additionally, our method successfully identifies the flowerpot (other furniture) on the table. In the second example, despite errors in the ground-truth labels, our method achieves more accurate target classification compared to other methods.
Figure 6. Detailed local regions from the ScanNetV2 validation set, with key differences marked by yellow bounding boxes. The first example focuses on a kitchen corner, where EGNet produces smoother segmentation results compared to the ground-truth annotations. The second and third examples depict bedroom scenes, for which our method successfully distinguishes cabinets from sofas even in areas with blurred boundaries. Notably, in the third example, there was a significant improvement in door recognition, and our method correctly classifies a shelf that had been mislabeled as part of the floor.
Figure 7. Visualization results of the ablation study. The data in the third column show that the accuracy of edge segmentation is significantly improved with the introduction of geodesic branching. The effectiveness of our proposed cross-domain attention module is demonstrated by comparing the results of the data in the fourth and fifth columns.
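The contrast that motivates EGNet, Euclidean versus geodesic distance, can be made concrete in a few lines of NumPy/SciPy: straight-line distances come directly from vertex coordinates, while geodesic-like distances follow shortest paths along mesh edges. The four-vertex "mesh" below is invented purely for illustration and has nothing to do with the ScanNet or Matterport data.

# Euclidean vs. geodesic (shortest-path) distance on a tiny made-up mesh.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3)]  # a folded strip: vertices 0 and 3 are not directly connected

n = len(verts)
w = np.zeros((n, n))
for i, j in edges:
    w[i, j] = w[j, i] = np.linalg.norm(verts[i] - verts[j])  # edge lengths as graph weights

geodesic = dijkstra(csr_matrix(w), directed=False, indices=0)  # path length along mesh edges
euclidean = np.linalg.norm(verts - verts[0], axis=1)           # straight-line distance
print(euclidean[3], geodesic[3])  # ~1.41 vs ~3.41: close in space, far along the surface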
23 pages, 6972 KiB  
Article
A Multi-Source Circular Geodesic Voting Model for Image Segmentation
by Shuwang Zhou, Minglei Shu and Chong Di
Entropy 2024, 26(12), 1123; https://doi.org/10.3390/e26121123 - 22 Dec 2024
Viewed by 259
Abstract
Image segmentation is a crucial task in artificial intelligence fields such as computer vision and medical imaging. While convolutional neural networks (CNNs) have achieved notable success by learning representative features from large datasets, they often lack geometric priors and global object information, limiting their accuracy in complex scenarios. Variational methods like active contours provide geometric priors and theoretical interpretability but require manual initialization and are sensitive to hyper-parameters. To overcome these challenges, we propose a novel segmentation approach, named PolarVoting, which combines the minimal path encoding rich geometric features and CNNs which can provide efficient initialization. The introduced model involves two main steps: firstly, we leverage the PolarMask model to extract multiple source points for initialization, and secondly, we construct a voting score map which implicitly contains the segmentation mask via a modified circular geometric voting (CGV) scheme. This map embeds global geometric information for finding accurate segmentation. By integrating neural network representation with geometric priors, the PolarVoting model enhances segmentation accuracy and robustness. Extensive experiments on various datasets demonstrate that the proposed PolarVoting method outperforms both PolarMask and traditional single-source CGV models. It excels in challenging imaging scenarios characterized by intensity inhomogeneity, noise, and complex backgrounds, accurately delineating object boundaries and advancing the state of image segmentation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Figures:
Figure 1. Illustration for a polar representation of a mask. The light green dot denotes the center point (x_c, y_c) and the dark green dots are the sampled boundary points (x_i, y_i) for 1 ≤ i ≤ n.
Figure 2. Result in challenging scenarios.
Figure 3. Overview of the proposed PolarVoting framework. (a) The original image, where the red dots represent the sampled farthest points. (b) Updating of vertices: the red dots denote contour points obtained from PolarMask, while the green dots represent the updated points. (c) Construction of multiple adaptive cuts. (d) Visualization of minimal paths using different metrics. (e) Visualization of the voting score map. (f) The final segmentation contour, represented by the red line.
Figure 4. Qualitative comparison results on nature images, where the green lines denote the segmentation contours. Column 1 displays the original images with ground truth segmentation contours indicated by green lines. Columns 2–4 show the segmented results produced by the PolarMask, circular geodesic voting, and multi-source circular geodesic voting methods, respectively.
Figure 5. Qualitative comparison results on CT images. The green lines denote the segmentation contours. Column 1 displays the original images. Columns 2–4 show the segmented results produced by the PolarMask, circular geodesic voting, and multi-source circular geodesic voting methods, respectively.
Figure 6. Box plots of the Dice scores for the PolarMask, circular geodesic voting, and multi-source circular geodesic voting methods on the test set of CT images. The green triangles represent the mean Dice score.
Figure 7. Box plots of the Dice scores for different numbers of source points for multi-source circular geodesic voting methods on the test set of CT images. The green triangles represent the mean Dice score.
Figure 8. Box plots of the execution time for different numbers of source points for multi-source circular geodesic voting methods on the test set of CT images. The green triangles represent the mean execution time.
Figure 9. Variance distribution of Dice scores for different initial placements of source points in the PolarVoting model on the test set of CT images.
Figure 10. Performance of the PolarVoting model under noise conditions on CT images. The green lines represent the segmentation contours. Column 1 shows the original image alongside its segmentation result. Columns 2–4 present images affected by Gaussian noise with variances of 0.01, 0.02, and 0.05, respectively, along with the corresponding segmentation results.
Figure 11. Performance of the PolarVoting model under blur conditions on CT images. The green lines represent the segmentation contours. Column 1 shows the original image alongside its segmentation result. Columns 2–4 display images with Gaussian blur levels of σ = 1, σ = 2, and σ = 3, respectively, along with the corresponding segmentation results.
Figure 12. Performance of the PolarVoting model under brightness variations on CT images. The green lines represent the segmentation contours. Column 1 shows the original image alongside its segmentation result. Columns 2–4 display images with brightness adjustment factors of 0.5, 0.7, and 1.3, respectively, along with the corresponding segmentation results.
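Figure 1 of this entry describes the polar mask representation that PolarVoting starts from: a centre plus n ray lengths sampled at equally spaced angles. A minimal NumPy sketch of that decoding step is given below; the ray lengths are arbitrary demo values rather than PolarMask output, and the subsequent geodesic voting stage is not shown.

# Decode a PolarMask-style polar representation into contour points.
import numpy as np

def polar_to_contour(center, ray_lengths):
    """Convert a centre (x_c, y_c) plus n ray lengths into n boundary points (x_i, y_i)."""
    n = len(ray_lengths)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # equally spaced ray directions
    x = center[0] + ray_lengths * np.cos(angles)
    y = center[1] + ray_lengths * np.sin(angles)
    return np.stack([x, y], axis=1)

rays = 40 + 5 * np.sin(3 * np.linspace(0, 2 * np.pi, 36, endpoint=False))  # wavy demo object
contour = polar_to_contour((100.0, 80.0), rays)
print(contour.shape)  # (36, 2) boundary points around the centre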
21 pages, 5972 KiB  
Article
DCA-YOLOv8: A Novel Framework Combined with AICI Loss Function for Coronary Artery Stenosis Detection
by Hualin Duan, Sanli Yi and Yanyou Ren
Sensors 2024, 24(24), 8134; https://doi.org/10.3390/s24248134 - 20 Dec 2024
Viewed by 370
Abstract
Coronary artery stenosis detection remains a challenging task due to the complex vascular structure, poor quality of imaging pictures, poor vessel contouring caused by breathing artifacts and stenotic lesions that often appear in a small region of the image. In order to improve the accuracy and efficiency of detection, a new deep-learning technique based on a coronary artery stenosis detection framework (DCA-YOLOv8) is proposed in this paper. The framework consists of a histogram equalization and canny edge detection preprocessing (HEC) enhancement module, a double coordinate attention (DCA) feature extraction module and an output module that combines a newly designed loss function, named adaptive inner-CIoU (AICI). This new framework is called DCA-YOLOv8. The experimental results show that the DCA-YOLOv8 framework performs better than existing object detection algorithms in coronary artery stenosis detection, with precision, recall, F1-score and mean average precision (mAP) at 96.62%, 95.06%, 95.83% and 97.6%, respectively. In addition, the framework performs better in the classification task, with accuracy at 93.2%, precision at 92.94%, recall at 93.5% and F1-score at 93.22%. Despite the limitations of data volume and labeled data, the proposed framework is valuable in applications for assisting the cardiac team in making decisions by using coronary angiography results. Full article
(This article belongs to the Section Biomedical Sensors)
Figures:
Figure 1. Example of coronary artery stenosis detection.
Figure 2. DCA-YOLOv8 overall structural frame diagram.
Figure 3. Schematic diagram of YOLOv8 structure.
Figure 4. Schematic diagram of DCA module structure.
Figure 5. Preprocessing enhancement effect diagram: (i) Histogram equalized image, (ii) Canny edge extraction image, (iii) HEC-processed image.
Figure 6. CIoU loss function calculation.
Figure 7. Schematic diagram of inner-IoU structure.
Figure 8. Gamma transformation and CLAHE-processed visualization images. (i) Original image, (ii) Gamma transform image, (iii) CLAHE-processed image.
Figure 9. Box loss regression with three different loss functions.
Figure 10. Coronary angiography image detection results without DCA and HEC modules in (a1–a3), with DCA and HEC modules in (b1–b3). The green box represents the final result of the framework’s detection of stenosis.
Figure 11. Ablation experiments with or without the use of the AICI loss function in coronary stenosis detection. Results of assays without the AICI loss function in (a1–a3) and results of assays with the AICI loss function in (b1–b3). The green box represents the final result of the framework’s detection of stenosis.
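The HEC preprocessing step named in the abstract (histogram equalization followed by Canny edge detection) maps onto standard OpenCV calls. The sketch below is illustrative only; in particular, the weighted blend used to combine the equalized image with its edge map is an assumption, since the listing does not specify how the paper fuses the two outputs.

# Rough HEC-style preprocessing: histogram equalization + Canny edges (illustrative blend).
import cv2
import numpy as np

def hec_preprocess(gray: np.ndarray, canny_lo: int = 50, canny_hi: int = 150) -> np.ndarray:
    equalized = cv2.equalizeHist(gray)                 # (i) histogram-equalized image
    edges = cv2.Canny(equalized, canny_lo, canny_hi)   # (ii) Canny edge map
    # (iii) combine the contrast-enhanced image with its edge map (assumed blend weights)
    return cv2.addWeighted(equalized, 0.8, edges, 0.2, 0)

frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in for an angiography frame
print(hec_preprocess(frame).shape)  # (512, 512)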
21 pages, 7173 KiB  
Article
RGB-Guided Depth Feature Enhancement for RGB–Depth Salient Object Detection
by Zhihong Zeng, Jiahao He, Yue Zhan, Haijun Liu and Xiaoheng Tan
Electronics 2024, 13(24), 4915; https://doi.org/10.3390/electronics13244915 - 12 Dec 2024
Viewed by 500
Abstract
RGB-D (depth) Salient Object Detection (SOD) seeks to identify and segment the most visually compelling objects within a given scene. Depth data, known for their strong discriminative capability in spatial localization, provide an advantage in achieving accurate RGB-D SOD. However, recent research in this field has encountered significant challenges due to the poor visual qualities and disturbing cues in raw depth maps. This issue results in indistinct or ambiguous depth features, which consequently weaken the performance of RGB-D SOD. To address this problem, we propose a novel pseudo depth feature generation-based RGB-D SOD Network, named PDFNet, which can generate some new and more distinctive pseudo depth features as an extra supplement source to enhance the raw depth features. Specifically, we first introduce an RGB-guided pseudo depth feature generation subnet to synthesize more distinctive pseudo depth features for raw depth feature enhancement, since the discriminative power of depth features plays a pivotal role in providing effective contour and spatial cues. Then, we propose a cross-modal fusion mamba (CFM) to effectively merge RGB features, raw depth features, and generated pseudo depth features. We adopt a channel selection strategy within the CFM module to align the pseudo depth features with raw depth features, thereby enhancing the depth features. We test the proposed PDFNet on six commonly used RGB-D SOD benchmark datasets. Extensive experimental results validate that the proposed approach achieves superior performance. For example, compared to the previous cutting-edge method, AirSOD, our method improves the F-measure by 2%, 1.7%, 1.1%, and 2.2% on the STERE, DUTLF-D, NLPR, and NJU2K datasets, respectively. Full article
Figures:
Figure 1. Examples of raw and generated pseudo depth features (where “feat.” is the abbreviation for features). Some of the raw depth maps are of poor quality (as shown in row 2), which may lead to ineffective or suboptimal feature representation learning (as indicated in row 3). Our model can synthesize more distinctive pseudo depth features (as indicated in row 4), providing crucial spatial cues, such as the contour of the salient object.
Figure 2. The overall architecture of the proposed PDFNet. It comprises an RGB-guided pseudo depth feature generation subnet embedded in the encoder, a cross-modal fusion mamba, and a decoder.
Figure 3. Diagram of the PDF subnet. Center: Pipeline of the PDF subnet. “Em” denotes the embedding procedure. C&R denotes the Concatenate and Resample operation. t^l denotes the token from different transformer stage. D_l represents the generated pseudo depth features. Left: Details of the transformer block. Right: Details of the depth estimation block.
Figure 4. The diagram of cross-modal fusion mamba. F_i^r, F_i^d, and D_l denote the i-th (l = i when i ≤ 4) RGB features, raw depth features and pseudo depth features, respectively.
Figure 5. The visual analysis of generated pseudo depth in different scenarios. “GT”, “Raw depth”, and “Pseudo depth” mean the ground-truth label, raw depth map, and generated pseudo depth, respectively.
Figure 6. Comparison of PR curves for PDFNet and other methods on four datasets: (a) PR curve on the NJU2K dataset; (b) PR curve on the NLPR dataset; (c) PR curve on the STERE dataset; (d) PR curve on the DES dataset.
Figure 7. Visual comparison of the proposed method with different state-of-the-art methods. “Col.” and “GT” denote the abbreviation of column and ground truth, respectively. The 1st and 2nd columns denote scenes where the target occupies a large space. The 3rd and 4th columns indicate the targeted object with sharp boundaries. The 5th and 6th columns represent complex backgrounds. The 7th and 8th columns represent a small object and low contrast scene, respectively. The 9th and 10th columns indicate the poor visual quality of depth maps which may provide very limited effective depth cues.
Figure 8. Visual comparison of intermediate features with and without PDF subnet. “Raw feat.” and “Pseudo feat.” denote the raw and generated pseudo depth features, respectively. “Raw Only” represents the prediction results without the injection of generated pseudo depth features. “Raw + Pseudo.” represents the final prediction with the injection of generated pseudo depth features.
Figure 9. The necessity of fusing raw depth with generated pseudo depth information. “Fused feat.” denotes that the raw and pseudo depth features are merged together. “Pseudo only” denotes the detection result without the raw depth features.
Figure 10. Examples of failure cases (a–c). “GT” means the ground truth label. The comparative test maps are from SSF [50], DCMF [60], and D3Net [18], respectively.
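The channel selection strategy described in the abstract, where pseudo depth features are aligned with raw depth features before fusion, can be pictured with a generic gating module: the pseudo depth branch produces per-channel weights that re-scale the raw depth features before all three streams are merged. This PyTorch toy is not the paper's cross-modal fusion mamba (CFM); the channel width and layer choices are assumptions made for the example.

# Toy channel-selection fusion, loosely in the spirit of the described CFM input stage.
import torch
import torch.nn as nn

class ChannelSelectFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # per-channel gate derived from the pseudo depth features
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(3 * channels, channels, 1)  # fuse RGB + selected depth + pseudo depth

    def forward(self, rgb, raw_depth, pseudo_depth):
        selected = raw_depth * self.gate(pseudo_depth)      # channel-wise selection of raw depth
        return self.merge(torch.cat([rgb, selected, pseudo_depth], dim=1))

x = [torch.randn(1, 64, 56, 56) for _ in range(3)]
print(ChannelSelectFusion()(*x).shape)  # torch.Size([1, 64, 56, 56])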
20 pages, 13662 KiB  
Article
Unmanned Aerial Vehicle (UAV) Hyperspectral Imagery Mining to Identify New Spectral Indices for Predicting the Field-Scale Yield of Spring Maize
by Yue Zhang, Yansong Wang, Hang Hao, Ziqi Li, Yumei Long, Xingyu Zhang and Chenzhen Xia
Sustainability 2024, 16(24), 10916; https://doi.org/10.3390/su162410916 - 12 Dec 2024
Viewed by 772
Abstract
A nondestructive approach for accurate crop yield prediction at the field scale is vital for precision agriculture. Considerable progress has been made in the use of the spectral index (SI) derived from unmanned aerial vehicle (UAV) hyperspectral images to predict crop yields before harvest. However, few studies have explored the most sensitive wavelengths and SIs for crop yield prediction, especially for different nitrogen fertilization levels and soil types. This study aimed to investigate the appropriate wavelengths and their combinations to explore the ability of new SIs derived from UAV hyperspectral images to predict yields during the growing season of spring maize. In this study, the hyperspectral canopy reflectance measurement method, a field-based high-throughput method, was evaluated in three field experiments (Wang-Jia-Qiao (WJQ), San-Ke-Shu (SKS), and Fu-Jia-Jie (FJJ)) since 2009 with different soil types (alluvial soil, black soil, and aeolian sandy soil) and various nitrogen (N) fertilization levels (0, 168, 240, 270, and 312 kg/ha) in Lishu County, Northeast China. The measurements of canopy spectral reflectance and maize yield were conducted at critical growth stages of spring maize, including the jointing, silking, and maturity stages, in 2019 and 2020. The best wavelengths and new SIs, including the difference spectral index, ratio spectral index, and normalized difference spectral index forms, were obtained from the contour maps constructed by the coefficient of determination (R2) from the linear regression models between the yield and all possible SIs screened from the 450 to 950 nm wavelengths. The new SIs and eight selected published SIs were subsequently used to predict maize yield via linear regression models. The results showed that (1) the most sensitive wavelengths were 640–714 nm at WJQ, 450–650 nm and 750–950 nm at SKS, and 450–700 nm and 750–950 nm at FJJ; (2) the new SIs established here were different across the three experimental fields, and their performance in maize yield prediction was generally better than that of the published SIs; and (3) the new SIs presented different responses to various N fertilization levels. This study demonstrates the potential of exploring new spectral characteristics from remote sensing technology for predicting the field-scale crop yield in spring maize cropping systems before harvest. Full article
Figures:
Figure 1. Location of the study area (a), UAV hyperspectral images (b–d) and the nitrogen application rates (e) of three experimental fields (WJQ, SKS, and FJJ).
Figure 2. Mean canopy reflectance spectra curves of spring maize under different N treatments across three growth stages in the three experimental fields. (a): WJQ, (b): SKS, (c): FJJ.
Figure 3. Contour maps for the linear model between the difference spectral index (DSI), ratio spectral index (RSI), normalized difference spectral index (NDSI), and maize yield for the WJQ experimental field. (a–c): DSI, RSI, and NDSI forms at the jointing stage; (d–f): DSI, RSI, and NDSI forms at the silking stage; (g–i): DSI, RSI, and NDSI forms at the maturity stage.
Figure 4. Contour maps for the linear model between the difference spectral index (DSI), ratio spectral index (RSI), normalized difference spectral index (NDSI), and maize yield for the SKS experimental field. (a–c): DSI, RSI, and NDSI forms at the jointing stage; (d–f): DSI, RSI, and NDSI forms at the silking stage; (g–i): DSI, RSI, and NDSI forms at the maturity stage.
Figure 5. Contour maps for the linear model between the difference spectral index (DSI), ratio spectral index (RSI), normalized difference spectral index (NDSI), and maize yield for the FJJ experimental field. (a–c): DSI, RSI, and NDSI forms at the jointing stage; (d–f): DSI, RSI, and NDSI forms at the silking stage; (g–i): DSI, RSI, and NDSI forms at the maturity stage.
Figure 6. Scatter plots of the measured yield (kg/ha) versus the yield (kg/ha) predicted by the new SIs: (a) NDSI (690, 710) at WJQ, (b) RSI (906, 546) at SKS, and (c) DSI (698, 922) at FJJ.
Figure 7. The response of the maize yield to different N application rates on the three experimental fields.
Figure 8. The response of the new SIs to different N treatments on the three experimental fields. (a–c): DSI, RSI, and NDSI forms for WJQ; (d–f): DSI, RSI, and NDSI forms for SKS; and (g–i): DSI, RSI, and NDSI forms for FJJ, respectively.
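The three index forms screened in this study have standard definitions: DSI = R1 − R2, RSI = R1 / R2, and NDSI = (R1 − R2) / (R1 + R2), where R1 and R2 are reflectances at two wavelengths. A compact sketch of the band-pair screening behind the contour maps in Figures 3–5 is given below, scoring every wavelength pair by the R² of a linear fit against yield; the reflectance and yield arrays are random placeholders, not the Lishu County measurements.

# Screen NDSI wavelength pairs by R^2 of a linear fit against yield (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(450, 951, 10)              # nm, roughly the 450-950 nm range used above
reflectance = rng.random((60, wavelengths.size))   # 60 plots x bands (placeholder)
yield_kg_ha = rng.normal(10000, 1500, 60)          # placeholder yields

def r2_of_linear_fit(x, y):
    # for simple linear regression, R^2 equals the squared Pearson correlation
    r = np.corrcoef(x, y)[0, 1]
    return r * r

best = (0.0, None)
for i in range(wavelengths.size):
    for j in range(wavelengths.size):
        if i == j:
            continue
        r1, r2 = reflectance[:, i], reflectance[:, j]
        ndsi = (r1 - r2) / (r1 + r2)
        score = r2_of_linear_fit(ndsi, yield_kg_ha)
        if score > best[0]:
            best = (score, (wavelengths[i], wavelengths[j]))
print("best NDSI pair:", best[1], "R^2 =", round(best[0], 3))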
15 pages, 3524 KiB  
Article
Effective Detection of Cloud Masks in Remote Sensing Images
by Yichen Cui, Hong Shen and Chan-Tong Lam
Sensors 2024, 24(23), 7730; https://doi.org/10.3390/s24237730 - 3 Dec 2024
Viewed by 487
Abstract
Effective detection of the contours of cloud masks and estimation of their distribution can be of practical help in studying weather changes and natural disasters. Existing deep learning methods are unable to extract the edges of clouds and backgrounds in a refined manner when detecting cloud masks (shadows) due to their unpredictable patterns, and they are also unable to accurately identify small targets such as thin and broken clouds. For these problems, we propose MDU-Net, a multiscale dual up-sampling segmentation network based on an encoder–decoder–decoder. The model uses an improved residual module to capture the multi-scale features of clouds more effectively. MDU-Net first extracts the feature maps using four residual modules at different scales, and then sends them to the context information full flow module for the first up-sampling. This operation refines the edges of clouds and shadows, enhancing the detection performance. Subsequently, the second up-sampling module concatenates feature map channels to fuse contextual spatial information, which effectively reduces the false detection rate of unpredictable targets hidden in cloud shadows. On a self-made cloud and cloud shadow dataset based on the Landsat8 satellite, MDU-Net achieves scores of 95.61% in PA and 84.97% in MIOU, outperforming other models in both metrics and result images. Additionally, we conduct experiments to test the model’s generalization capability on the landcover.ai dataset to show that it also achieves excellent performance in the visualization results. Full article
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1. The structure of MDU-Net.
Figure 2. The structure of the residual module. (a) Downsampling Residual Module. (b) Standard Residual Module.
Figure 3. The structure of dual up-sampling module.
Figure 4. The backgrounds of self-made dataset. (a) Water areas. (b) Cities. (c) Vegetation. (d) Deserts.
Figure 5. Visualization results of different models in rural and vegetated regions. (a) Original image, (b) Label image, (c) FCN, (d) UNet, (e) MultiResUNet, (f) PSPNet, (g) RSAGUNet, (h) AFMUNet, (i) MDU-Net.
Figure 6. Prediction pictures of different algorithms in saline and snow-covered areas. (a) Original image, (b) Label image, (c) FCN, (d) UNet, (e) MultiResUNet, (f) PSPNet, (g) RSAGUNet, (h) AFMUNet, (i) MDU-Net.
Figure 7. Visualization results of different models on landcover.ai dataset: (a) Original image, (b) FCN, (c) MultiResUNet, (d) ResUNet, (e) UNet, (f) PSPNet, (g) RSAGUNet, (h) AFMUNet, (i) MDU-Net.
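PA (pixel accuracy) and MIoU (mean intersection-over-union), the two scores reported for MDU-Net above, have standard confusion-matrix definitions. The sketch below computes them on synthetic label maps; it is the textbook computation, not the authors' evaluation code.

# Pixel accuracy and mean IoU from a confusion matrix (synthetic label maps).
import numpy as np

def pa_and_miou(pred, gt, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        cm[g, p] += 1                                   # rows: ground truth, columns: prediction
    pa = np.diag(cm).sum() / cm.sum()                   # overall pixel accuracy
    iou = np.diag(cm) / (cm.sum(0) + cm.sum(1) - np.diag(cm))  # per-class IoU
    return pa, np.nanmean(iou)

gt = np.random.randint(0, 3, (64, 64))   # 0 = background, 1 = cloud, 2 = shadow (toy labels)
pred = gt.copy()
pred[:8] = 0                             # corrupt a strip to simulate prediction errors
print(pa_and_miou(pred, gt, 3))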
27 pages, 22290 KiB  
Article
Real-Time Environmental Contour Construction Using 3D LiDAR and Image Recognition with Object Removal
by Tzu-Jung Wu, Rong He and Chao-Chung Peng
Remote Sens. 2024, 16(23), 4513; https://doi.org/10.3390/rs16234513 - 1 Dec 2024
Viewed by 870
Abstract
In recent years, due to the significant advancements in hardware sensors and software technologies, 3D environmental point cloud modeling has gradually been applied in the automation industry, autonomous vehicles, and construction engineering. With the high-precision measurements of 3D LiDAR, its point clouds can clearly reflect the geometric structure and features of the environment, thus enabling the creation of high-density 3D environmental point cloud models. However, due to the enormous quantity of high-density 3D point clouds, storing and processing these 3D data requires a considerable amount of memory and computing time. In light of this, this paper proposes a real-time 3D point cloud environmental contour modeling technique. The study uses the point cloud distribution from the 3D LiDAR body frame point cloud to establish structured edge features, thereby creating a 3D environmental contour point cloud map. Additionally, unstable objects such as vehicles will appear during the mapping process; these specific objects will be regarded as not part of the stable environmental model in this study. To address this issue, the study will further remove these objects from the 3D point cloud through image recognition and LiDAR heterogeneous matching, resulting in a higher quality 3D environmental contour point cloud map. This 3D environmental contour point cloud not only retains the recognizability of the environmental structure but also solves the problems of massive data storage and processing. Moreover, the method proposed in this study can achieve real-time realization without requiring the 3D point cloud to be organized in a structured order, making it applicable to unorganized 3D point cloud LiDAR sensors. Finally, the feasibility of the proposed method in practical applications is also verified through actual experimental data. Full article
(This article belongs to the Special Issue Remote Sensing in Environmental Modelling)
Figures:
Figure 1. The flow diagram of the proposed process. This workflow results in a 3D environmental contour generated from a single LiDAR frame.
Figure 2. The process of mapping the bounding box to the point cloud. (a) The image with car detection bounding boxes by YOLO v8; (b) The projection of point cloud on the image plane; (c) The result of defining the region of the bounding boxes in 3D point cloud. The blue points indicate the points projected on by the bounding boxes and the red points represent the point cloud in camera’s FOV but not within the bounding boxes detected by YOLO.
Figure 3. The result of defining the region of the bounding boxes in 3D point cloud. The blue points indicate the points projected on by the bounding boxes and the red points represent the point cloud in camera’s FOV but not within the bounding boxes.
Figure 4. Schematic of DBSCAN algorithm.
Figure 5. The demonstration of the number of detected points varies with distance. (a) Simulation on car width; (b) Simulation on car height. The red star represents the LiDAR sensor’s position, red lines denote the width and height lines at different distances, respectively, and the blue points along the lines mark the detected points.
Figure 6. The result of the curve fitting. (a) The curve fitting parameter of the points on the width side; (b) The curve fitting parameter of the points on the height side.
Figure 7. Number of points detected on 2D car rear with different distances.
Figure 8. The Schematic of the ratio of the unusable area.
Figure 9. The results of the Adaptive DBSCAN algorithm for vehicle detection compared to the image recognition results. Grey points represent areas outside the detected bounding boxes, while black points indicate those within the bounding boxes but are classified as outliers by the DBSCAN algorithm. Color points highlight the different vehicles successfully identified by the algorithm. The results are displayed for Frame 1 in (a) point cloud and (b) color image, Frame 80 in (c) point cloud and (d) color image, and Frame 200 in (e) point cloud and (f) color image, respectively.
Figure 10. The demonstration of the impact of the difference between the LiDAR’s FOV and the camera’s FOV on object removal. (a) Points removed within the camera’s FOV; (b) Points mapped within the LiDAR’s FOV.
Figure 11. The demonstration of the mapping process without and with object bounding box tracking. (a,b) Mapping result of the process without object bounding box tracking; (c,d) Mapping result of process with object bounding box tracking. The pink points indicate points within the bounding boxes that are not eliminated in the first stage due to the difference in the FOV of the sensors.
Figure 12. Schematic of the merging bounding box strategy.
Figure 13. The Implementation of the algorithm to the car in (a) Top view and (b) 3-dimensional view. The red stars indicate the eight corner points of the merged bounding box, which are used in the MVEE algorithm to build the ellipsoid.
Figure 14. Schematic of the principal components.
Figure 15. Demonstration of the experiment for testing the curvature of the angle at different distances.
Figure 16. Global map of high dense point cloud in (a) Top view; (b) 3-dimensional view.
Figure 17. Bounding box based on image recognition and DBSCAN algorithm in the global map in (a) Top view and (b) 3-dimensional view.
Figure 18. The results of DBSCAN and image recognition. (a) Point cloud and (b) Image of Frame 32.
Figure 19. The result of applying MVEE for detected cars in (a) Top view and (b) 3-dimensional view. These ellipsoids represent the functionalized occupied region.
Figure 20. The comparison of the (a) KITTI tracking dataset with vehicle ground truth annotation (represented by green boxes), (b) the vehicle recognition results by the applied YOLO v8 model (with pink circles representing vehicles annotated by KITTI dataset but not detected by YOLO v8 model), and (c) the projection of the LiDAR point cloud onto the corresponding image (with pink circles indicating vehicles annotated by KITTI dataset but not projected on by the LiDAR point cloud).
Figure 21. Global map after removing the point clouds located in the ellipsoids: (a) Top view and (b) 3-dimensional view.
Figure 22. Global contour map before global object removal in (a) Top view; (b) 3-dimensional view.
Figure 23. Global contour map after global object removal in (a) Top view; (b) 3-dimensional view.
Figure 24. Detailed view of specific environmental contour features, where (a) and (b) are captured from different angle of view.
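A condensed sketch of the box-to-point-cloud association shown in Figures 2 and 9: LiDAR points are projected into the image with a pinhole model, points falling inside a detected 2D box are kept, and DBSCAN isolates the vehicle cluster. The intrinsic matrix, the box coordinates, and the point cloud below are placeholders, and the fixed eps stands in for the paper's adaptive parameter selection.

# Project LiDAR points into the image, keep those inside a YOLO box, cluster with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])  # intrinsics (placeholder)
points = np.random.uniform([-5, -2, 4], [5, 2, 40], size=(2000, 3))        # points in camera frame, z forward

uvw = (K @ points.T).T
uv = uvw[:, :2] / uvw[:, 2:3]                      # pixel coordinates of each point
x1, y1, x2, y2 = 500, 250, 780, 470                # one detected 2D bounding box (placeholder)
in_box = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)

labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(points[in_box])  # -1 marks outliers
print("points in box:", int(in_box.sum()), "clusters:", len(set(labels) - {-1}))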
18 pages, 6063 KiB  
Article
Development of Artificial Intelligent-Based Methodology to Prepare Input for Estimating Vehicle Emissions
by Elif Yavuz, Alihan Öztürk, Nedime Gaye Nur Balkanlı, Şeref Naci Engin and S. Levent Kuzu
Appl. Sci. 2024, 14(23), 11175; https://doi.org/10.3390/app142311175 - 29 Nov 2024
Viewed by 462
Abstract
Machine learning has significantly advanced traffic surveillance and management, with YOLO (You Only Look Once) being a prominent Convolutional Neural Network (CNN) algorithm for vehicle detection. This study utilizes YOLO version 7 (YOLOv7) combined with the Kalman-based SORT (Simple Online and Real-time Tracking) algorithm as one of the models used in our experiments for real-time vehicle identification. We developed the “ISTraffic” dataset. We have also included an overview of existing datasets in the domain of vehicle detection, highlighting their shortcomings: existing vehicle detection datasets often have incomplete annotations and limited diversity, but our “ISTraffic” dataset addresses these issues with detailed and extensive annotations for higher accuracy and robustness. The ISTraffic dataset is meticulously annotated, ensuring high-quality labels for every visible object, including those that are truncated, obscured, or extremely small. With 36,841 annotated examples and an average of 32.7 annotations per image, it offers extensive coverage and dense annotations, making it highly valuable for various object detection and tracking applications. The detailed annotations enhance detection capabilities, enabling the development of more accurate and reliable models for complex environments. This comprehensive dataset is versatile, suitable for applications ranging from autonomous driving to surveillance, and has significantly improved object detection performance, resulting in higher accuracy and robustness in challenging scenarios. Using this dataset, our study achieved significant results with the YOLOv7 model. The model demonstrated high accuracy in detecting various vehicle types, even under challenging conditions. The results highlight the effectiveness of the dataset in training robust vehicle detection models and underscore its potential for future research and development in this field. Our comparative analysis evaluated YOLOv7 against its variants, YOLOv7x and YOLOv7-tiny, using both the “ISTraffic” dataset and the COCO (Common Objects in Context) benchmark. YOLOv7x outperformed others with a mAP@0.5 of 0.87, precision of 0.89, and recall of 0.84, showing a 35% performance improvement over COCO. Performance varied under different conditions, with daytime yielding higher accuracy compared to night-time and rainy weather, where vehicle headlights affected object contours. Despite effective vehicle detection and counting, tracking high-speed vehicles remains a challenge. Additionally, the algorithm’s deep learning estimates of emissions (CO, NO, NO2, NOx, PM2.5, and PM10) were 7.7% to 10.1% lower than ground-truth. Full article
Figures:
Figure 1. Overall architecture reflecting the collaborative relationships between different methods.
Figure 2. Examples of the training samples from the labelled dataset.
Figure 3. Distribution of vehicles within the validation and training datasets.
Figure 4. The mAP@0.5, mAP@0.5:0.95, precision, and recall performance metrics of the models.
Figure 5. A visual representation of the classes, assigned names, and tracks of the vehicles in the video streams.
Figure 6. The ground-truth detected vehicles and the accuracy percentages (%) for each vehicle class in the traffic surveillance video streams.
Figure 7. Precision-Recall curve.
Figure 8. Calculated pollutant emissions.
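Downstream of detection and tracking, the emission estimate reduces to per-class bookkeeping: counted vehicles multiplied by per-class emission factors and the length of the monitored segment. The sketch below shows only that bookkeeping; every count and factor value is a placeholder, not the ground-truth factors or the deep-learning estimator used in the study.

# Toy emission bookkeeping from per-class vehicle counts (all numbers are placeholders).
vehicle_counts = {"car": 412, "bus": 18, "truck": 36, "motorcycle": 25}   # e.g., from YOLOv7 + SORT
segment_km = 0.35                                                         # monitored road length
ef_nox_g_per_km = {"car": 0.06, "bus": 2.1, "truck": 1.8, "motorcycle": 0.09}  # placeholder NOx factors

nox_grams = sum(vehicle_counts[v] * ef_nox_g_per_km[v] * segment_km for v in vehicle_counts)
print(f"estimated NOx over the interval: {nox_grams:.1f} g")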
27 pages, 28012 KiB  
Article
A Model Development Approach Based on Point Cloud Reconstruction and Mapping Texture Enhancement
by Boyang You and Barmak Honarvar Shakibaei Asli
Big Data Cogn. Comput. 2024, 8(11), 164; https://doi.org/10.3390/bdcc8110164 - 20 Nov 2024
Viewed by 669
Abstract
To address the challenge of rapid geometric model development in the digital twin industry, this paper presents a comprehensive pipeline for constructing 3D models from images using monocular vision imaging principles. Firstly, a structure-from-motion (SFM) algorithm generates a 3D point cloud from photographs. The feature detection methods scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and KAZE are compared across six datasets, with SIFT proving the most effective (matching rate higher than 0.12). Using K-nearest-neighbor matching and random sample consensus (RANSAC), refined feature point matching and 3D spatial representation are achieved via antipodal geometry. Then, the Poisson surface reconstruction algorithm converts the point cloud into a mesh model. Additionally, texture images are enhanced by leveraging a visual geometry group (VGG) network-based deep learning approach. Content images from a dataset provide geometric contours via higher-level VGG layers, while textures from style images are extracted using the lower-level layers. These are fused to create texture-transferred images, where the image quality assessment (IQA) metrics SSIM and PSNR are used to evaluate texture-enhanced images. Finally, texture mapping integrates the enhanced textures with the mesh model, improving the scene representation with enhanced texture. The method presented in this paper surpassed a LiDAR-based reconstruction approach by 20% in terms of point cloud density and number of model facets, while the hardware cost was only 1% of that associated with LiDAR. Full article
Figures:
Figure 1. Samples from Dataset 1 (Source: https://github.com/Abhishek-Aditya-bs/MultiView-3D-Reconstruction/tree/main/Datasets, accessed on 18 November 2024) and samples from Dataset 2.
Figure 2. Demonstration of Dataset 3.
Figure 3. Diagram of SFM algorithm.
Figure 4. Camera imaging model.
Figure 5. Coplanarity condition of photogrammetry.
Figure 6. Process of surface reconstruction.
Figure 7. Demonstration of isosurface.
Figure 8. Demonstration of VGG network.
Figure 9. Demonstration of Gram matrix.
Figure 10. Style transformation architecture.
Figure 11. Texture mapping process.
Figure 12. Demonstration of the three kinds of feature descriptors used on Dataset 1 and Dataset 2.
Figure 13. Matching rate fitting of three kinds of image descriptors.
Figure 14. SIFT point matching for CNC1 object under different thresholds.
Figure 15. SIFT point matching for Fountain object under different thresholds.
Figure 16. Matching result of Dataset 2 using RANSAC method.
Figure 17. Triangulation presentation of feature points obtained from objects in Dataset 1.
Figure 18. Triangulation presentation of feature points obtained from objects in Dataset 2.
Figure 19. Point cloud data of objects in Dataset 1.
Figure 20. Point cloud data of objects in Dataset 2.
Figure 21. Normal vector presentation of the points set obtained from objects in Dataset 1.
Figure 22. Normal vector of the points set obtained from objects in Dataset 2.
Figure 23. Poisson surface reconstruction results of objects in Dataset 1.
Figure 24. Poisson surface reconstruction results of objects in Dataset 2.
Figure 25. Style transfer result of Statue object.
Figure 26. Style transfer result of Fountain object.
Figure 27. Style transfer result of Castle object.
Figure 28. Style transfer result of CNC1 object.
Figure 29. Style transfer result of CNC2 object.
Figure 30. Style transfer result of Robot object.
Figure 31. Training loss in style transfer for CNC1 object.
Figure 32. IQA assessment for CNC1 images after style transfer.
Figure 33. Results of texture mapping for Dataset 1.
Figure 34. Results of texture mapping for Dataset 2.
Figure A1. Results of camera calibration.
16 pages, 4286 KiB  
Article
Risk Assessment of Water Inrush from Coal Seam Floor with a PCA–RST Algorithm in Chenmanzhuang Coal Mine, China
by Weifu Gao, Yining Cao and Xufeng Dong
Water 2024, 16(22), 3269; https://doi.org/10.3390/w16223269 - 14 Nov 2024
Viewed by 618
Abstract
During coal mining, sudden inrushes of water from the floor pose significant risks, seriously affecting mine safety. This study utilizes the 3602 working face of the Chenmanzhuang coal mine as a case study, and the original influencing factors were downscaled using principal component [...] Read more.
During coal mining, sudden inrushes of water from the floor pose significant risks and seriously affect mine safety. This study takes the 3602 working face of the Chenmanzhuang coal mine as a case study. The original influencing factors were reduced using principal component analysis (PCA) to obtain four key evaluation factors: water inflow, aquiclude thickness, water pressure, and exposed limestone thickness. Rough set theory (RST) was then applied to determine the weights of these four factors as 0.2, 0.24, 0.36, and 0.2; 19 groups of comprehensive values were calculated with the weighting method, and a water inrush risk assessment was conducted for blocks within the working face. The results are presented as a contour map highlighting the risk levels and identifying the water inrush danger zone on the coal seam floor. The study concludes that water inrush poses a threat in the western part of the working face, while the eastern area remains relatively safe. The accuracy and reliability of the model are demonstrated, providing a solid basis and guidance for predicting water inrush. Full article
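As a loose illustration of the PCA-plus-weighting workflow summarized above (not the authors' implementation, and with the rough-set weight derivation omitted), the sketch below reduces a synthetic factor matrix to four components and combines them with the weights 0.2, 0.24, 0.36, and 0.2 reported in the abstract; the 19x8 input matrix, the use of PCA components as stand-ins for the four named factors, and the min-max normalization step are assumptions.

```python
# Sketch of a PCA-reduced, weight-combined risk score in the spirit of the abstract.
# The RST-derived weights are taken from the abstract; the data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
raw_factors = rng.random((19, 8))        # hypothetical: 19 blocks x 8 original influencing factors

# Step 1: PCA reduces the original factors to four retained components
# (in the paper these correspond to four named physical factors).
reduced = PCA(n_components=4).fit_transform(raw_factors)

# Step 2: normalize each retained factor to [0, 1] so the weighted sum is comparable.
norm = MinMaxScaler().fit_transform(reduced)

# Step 3: weighted combination with the RST weights reported in the abstract
# (water inflow, aquiclude thickness, water pressure, exposed limestone thickness).
weights = np.array([0.20, 0.24, 0.36, 0.20])
comprehensive = norm @ weights

print(comprehensive.round(3))            # one comprehensive evaluation value per block
```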
Show Figures

Figure 1: The location and geological structure of the Chenmanzhuang coal mine.
Figure 2: Composite histogram of coal mine floor.
Figure 3: Technology roadmap.
Figure 4: Schematic figure.
Figure 5: Water pressure contour map.
Figure 6: Aquiclude thickness contour map.
Figure 7: Comprehensive evaluation value contour map.
Figure 8: Water inrush coefficient contour map.
12 pages, 4726 KiB  
Article
Effect of Nozzle Type on Combustion Characteristics of Ammonium Dinitramide-Based Energetic Propellant
by Jianhui Han, Luyun Jiang, Jifei Ye, Junling Song, Haichao Cui, Baosheng Du and Gaoping Feng
Aerospace 2024, 11(11), 935; https://doi.org/10.3390/aerospace11110935 - 11 Nov 2024
Viewed by 520
Abstract
The present study explores the influence of diverse nozzle geometries on the combustion characteristics of ADN-based energetic propellants. The pressure contour maps reveal a rapid initial increase in the average pressure of ADN-based propellants across the three different nozzles. Subsequently, the pressure tapers [...] Read more.
This study explores the influence of diverse nozzle geometries on the combustion characteristics of ADN-based energetic propellants. The pressure contour maps reveal a rapid initial increase in the average pressure of ADN-based propellants across the three nozzles, after which the pressure tapers off gradually over time. Notably, during the crucial initial period of 0–5 μs, the straight nozzle exhibited the largest pressure surge at 30.2%, substantially exceeding the divergent (6.67%) and combined nozzles (15.5%). The combustion product curves indicate that the reactants ADN and CH3OH declined steeply, whereas the product N2O displayed biphasic behavior, initially rising and subsequently declining. In contrast, the CO2 concentration rose steadily throughout the combustion process, which concluded within 10 μs. These findings suggest that the straight nozzle generated high-temperature, high-pressure combustion gases more quickly for ADN-based propellants, accelerating the reaction kinetics and enhancing combustion efficiency. This is attributed to reduced intermittent interactions between the nozzle wall and shock waves, which occur in the divergent and combined nozzles. In conclusion, the superior combustion characteristics of ADN-based propellants in the straight nozzle, compared with the divergent and combined nozzles, underscore its potential to inform the design of advanced propulsion systems and guide the development of innovative energetic propellants. Full article
(This article belongs to the Section Astronautics & Space Science)
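As a trivial illustration of how the quoted pressure-surge percentages compare (not taken from the simulation output itself), the sketch below computes relative pressure rises over 0–5 μs from hypothetical start and end values chosen only to mirror the quoted ratios.

```python
# Relative pressure rise over an interval, in percent; inputs are hypothetical averaged
# pressures (arbitrary units) chosen only to reproduce the percentages quoted above.
def pressure_rise_percent(p_start: float, p_end: float) -> float:
    return 100.0 * (p_end - p_start) / p_start

nozzles = {"straight": (1.00, 1.302), "divergent": (1.00, 1.0667), "combined": (1.00, 1.155)}
for name, (p0, p5) in nozzles.items():
    print(f"{name:9s}: {pressure_rise_percent(p0, p5):5.1f}% rise over 0-5 us")
```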
Show Figures

Figure 1: Simplified schematic diagrams of three different nozzle models.
Figure 2: Distribution contour maps of temperature (T), pressure (p), and mass fractions y_ADN and y_N2O during the ignition process of ADN at t = 0, 1, 2, 5, and 10 μs in the (a) straight nozzle; (b) divergent nozzle; and (c) combined nozzle.
Figure 3: Curves of pressure (a) and adiabatic temperature (b) over time within the three types of nozzles. (Black: straight nozzle; orange: divergent nozzle; red: combined nozzle.)
Figure 4: Pressure and adiabatic temperature curves along the central axis of three types of nozzles: (a) pressure in the straight nozzle; (b) adiabatic temperature in the straight nozzle; (c) pressure in the divergent nozzle; (d) adiabatic temperature in the divergent nozzle; (e) pressure in the combined nozzle; (f) adiabatic temperature in the combined nozzle.
Figure 5: Curves illustrating the changes in mass fraction of reactants and products in the ADN combustion for different nozzle types: (a) ADN; (b) CH3OH; (c) N2O; (d) OH; (e) CO2; (f) CO. (Red: straight nozzle; blue: divergent nozzle; black: combined nozzle.)
24 pages, 9726 KiB  
Article
The Kernel Density Estimation Technique for Spatio-Temporal Distribution and Mapping of Rain Heights over South Africa: The Effects on Rain-Induced Attenuation
by Yusuf Babatunde Lawal, Pius Adewale Owolawi, Chunling Tu, Etienne Van Wyk and Joseph Sunday Ojo
Atmosphere 2024, 15(11), 1354; https://doi.org/10.3390/atmos15111354 - 11 Nov 2024
Viewed by 838
Abstract
The devastating effects of rain-induced attenuation on communication links operating above 10 GHz during rainy events can significantly degrade signal quality, leading to interruptions in service and reduced data throughput. Understanding the spatial and seasonal distribution of rain heights is crucial for predicting [...] Read more.
Rain-induced attenuation on communication links operating above 10 GHz can severely degrade signal quality during rain events, leading to service interruptions and reduced data throughput. Understanding the spatial and seasonal distribution of rain heights is crucial for predicting these attenuation effects and for network performance optimization. This study utilized ten years of atmospheric temperature and geopotential height data at seven pressure levels (1000, 850, 700, 500, 300, 200, and 100 hPa) obtained from the Copernicus Climate Data Store (CDS) to deduce rain heights across nine stations in South Africa. The kernel density estimation (KDE) method was applied to estimate the temporal variation of rain height. A comparison of the measured and estimated rain heights shows a correlation coefficient of 0.997 with a maximum percentage difference of 5.3%. The results show that rain height ranges from a minimum of 3.5 km during winter in Cape Town to a maximum of about 5.27 km during summer in Polokwane. The spatial variation shows a location-dependent seasonal trend, with peak rain heights prevailing at the low-latitude stations. The seasonal variability indicates that higher rain heights dominate in the regions (Polokwane, Pretoria, Nelspruit, Mahikeng) where rainfall occurs frequently during the winter season, and vice versa. Contour maps of rain heights over the four seasons (autumn, spring, winter, and summer) were also developed for South Africa. The estimated seasonal rain heights show that rain-induced attenuation was grossly underestimated by the International Telecommunication Union (ITU) recommended rain heights at most of the stations during autumn, spring, and summer but fairly overestimated during winter. Durban had a peak attenuation of 15.9 dB during summer, while Upington recorded the smallest attenuation of about 7.7 dB during winter at a 0.01% time exceedance. Future system planning and adjustment of existing infrastructure at the study stations could be improved by integrating these localized, seasonal radio propagation data into link budget design. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
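As a minimal illustration of the KDE step described above (not the authors' code), the sketch below fits a Gaussian kernel density estimate to synthetic rain-height samples and reads off the most probable value; the sample distribution, the grid resolution, and the use of SciPy's default bandwidth rule are assumptions.

```python
# Sketch of kernel density estimation applied to rain-height samples, assuming numpy/scipy.
# The samples are synthetic stand-ins for the CDS-derived seasonal values used in the paper.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
rain_heights_km = rng.normal(loc=4.8, scale=0.3, size=900)   # hypothetical samples for one station

kde = gaussian_kde(rain_heights_km)          # Gaussian KDE with SciPy's default bandwidth rule

# Evaluate the density on a grid and take the mode as the representative rain height.
grid = np.linspace(rain_heights_km.min(), rain_heights_km.max(), 500)
density = kde(grid)
estimated_rain_height = grid[np.argmax(density)]

print(f"KDE-estimated rain height: {estimated_rain_height:.2f} km")
```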
Show Figures

Figure 1: A map of South Africa showing the nine study stations.
Figure 2: (a) Daily vertical profile of atmospheric temperature and (b) monthly mean temperature for March 2015 at Durban.
Figure 3: (a) Daily vertical profile of atmospheric temperature and (b) monthly mean temperature for July 2015 at Durban.
Figure 4: Probability density functions of rain height during summer for (a) Polokwane, (b) Nelspruit, (c) Pretoria, (d) Mahikeng, (e) Upington, (f) Bloemfontein, (g) Durban, (h) Port Elizabeth, and (i) Cape Town.
Figure 5: Probability density functions of rain heights during autumn for (a) Polokwane, (b) Nelspruit, (c) Pretoria, (d) Mahikeng, (e) Upington, (f) Bloemfontein, (g) Durban, (h) Port Elizabeth, and (i) Cape Town.
Figure 6: Probability density functions of rain heights during winter for (a) Polokwane, (b) Nelspruit, (c) Pretoria, (d) Mahikeng, (e) Upington, (f) Bloemfontein, (g) Durban, (h) Port Elizabeth, and (i) Cape Town.
Figure 7: Probability density functions of rain heights during spring for (a) Polokwane, (b) Nelspruit, (c) Pretoria, (d) Mahikeng, (e) Upington, (f) Bloemfontein, (g) Durban, (h) Port Elizabeth, and (i) Cape Town.
Figure 8: The developed South African rain height contour map for the summer season.
Figure 9: The developed South African rain height contour map for the autumn season.
Figure 10: The developed South African rain height contour map for the winter season.
Figure 11: The developed South African rain height contour map for the spring season.
Figure 12: (a–f) Comparison of rain height effects on rain-induced attenuation at a Ku-band frequency for (a) Polokwane, (b) Pretoria, (c) Nelspruit, (d) Upington, (e) Durban, and (f) Cape Town.
14 pages, 1145 KiB  
Article
Superposition and Interaction Dynamics of Complexitons, Breathers, and Rogue Waves in a Landau–Ginzburg–Higgs Model for Drift Cyclotron Waves in Superconductors
by Hicham Saber, Muntasir Suhail, Amer Alsulami, Khaled Aldwoah, Alaa Mustafa and Mohammed Hassan
Axioms 2024, 13(11), 763; https://doi.org/10.3390/axioms13110763 - 4 Nov 2024
Viewed by 723
Abstract
This article implements the Hirota bilinear (HB) transformation technique to the Landau–Ginzburg–Higgs (LGH) model to explore the nonlinear evolution behavior of the equation, which describes drift cyclotron waves in superconductivity. Utilizing the Cole–Hopf transform, the HB equation is derived, and symbolic manipulation combined [...] Read more.
This article applies the Hirota bilinear (HB) transformation technique to the Landau–Ginzburg–Higgs (LGH) model to explore the nonlinear evolution behavior of the equation, which describes drift cyclotron waves in superconductors. The HB equation is derived via the Cole–Hopf transform, and symbolic manipulation combined with various auxiliary functions (AFs) is employed to uncover a diverse set of analytical solutions. The study reveals novel results, including multi-wave complexitons, breather waves, rogue waves, periodic lump solutions, and their interaction phenomena. Additionally, a range of traveling wave solutions, such as dark, bright, and periodic waves and kink soliton solutions, are developed using an efficient expansion technique. The nonlinear dynamics of these solutions are illustrated through 3D and contour maps, accompanied by detailed explanations of their physical characteristics. Full article
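For readers unfamiliar with the 3D-plus-contour presentation used in the paper, the sketch below plots a generic tanh-type kink profile u(x, t) = tanh(k(x - c t)); this is only an illustrative traveling-wave shape with arbitrary parameters k and c, not one of the paper's specific LGH solutions.

```python
# Illustrative sketch only: a generic tanh-type kink traveling wave rendered as the kind of
# 3D surface and 2D contour map used in the paper. Parameters k and c are arbitrary.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection on older Matplotlib)

k, c = 1.0, 0.5                                   # assumed wave number and speed
x = np.linspace(-10, 10, 300)
t = np.linspace(-10, 10, 300)
X, T = np.meshgrid(x, t)
U = np.tanh(k * (X - c * T))                      # kink-shaped traveling wave

fig = plt.figure(figsize=(10, 4))

ax3d = fig.add_subplot(1, 2, 1, projection="3d")  # 3D behavior in spatial and temporal coordinates
ax3d.plot_surface(X, T, U, cmap="viridis")
ax3d.set_xlabel("x"); ax3d.set_ylabel("t"); ax3d.set_zlabel("u")

ax2d = fig.add_subplot(1, 2, 2)                   # 2D contour map of the same solution
cs = ax2d.contourf(X, T, U, levels=30, cmap="viridis")
fig.colorbar(cs, ax=ax2d)
ax2d.set_xlabel("x"); ax2d.set_ylabel("t")

plt.tight_layout()
plt.show()
```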
Show Figures

Figure 1: The visualization of solution (6) with assumed parameters m2 = 2, m3 = -2, m4 = -2, s2 = 3, s3 = 2. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 2: The visualization of solution (9) with assumed parameters m2 = 4, m3 = 1, s1 = -1, s2 = 1, s3 = 3, s4 = 1, s5 = -1, s6 = 1. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 3: The visualization of solution (11) with parameters m1 = 0.2, m2 = -1, s1 = -1, s2 = √3, s4 = √2, n0 = 2, a1 = 1.1, a2 = 5.18. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 4: The visualization of solution (14) with parameters s5 = 1, s6 = 1, s7 = 1, s8 = 1, s9 = 1. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 5: The visualization of solution (17) with parameters a1 = 3, a2 = 2, A3 = 2. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 6: The visualization of solution (25) with parameters a0 = 0.3, h1 = 0.2, l1 = -2, l2 = 1, a1 = 0.3, a2 = 3. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 7: The visualization of solution (26) with parameters a0 = -6.5, h1 = 2, l1 = -2, l2 = 1, a1 = 3, a2 = 0.3. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 8: The visualization of solution (29) with parameters a0 = 4.3, h1 = 2, l1 = -2, l2 = 1, a1 = 0.3, a2 = 3. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.
Figure 9: The visualization of solution (31) with parameters a0 = 6.5, h1 = 2, l1 = 0.5, l2 = √3, a1 = 0.1, a2 = 0.1. (a) 3D behavior in spatial and temporal coordinates. (b) Contour plot in 2D.